Oxford Handbook of Online Intermediary Liability (ISBN 9780198837138, 0198837135)

This book provides a comprehensive, authoritative, and state-of-the-art discussion of fundamental legal issues in intermediary liability online.


English, 801 pages, 2020


Table of contents :
Cover
The Oxford Handbook of ONLINE INTERMEDIARY LIABILITY
Copyright
Contents
Editor’s Note: A Dialogue on the Role of Online Intermediaries
Notes on Contributors
Part I: INTRODUCTION
Chapter 1: Mapping Online Intermediary Liability
1. Mapping Fundamental Notions
2. Mapping International Fragmentation: From Safe Harbours to Liability
3. Mapping Subject-Specific Regulation
4. Mapping Intermediary Liability Enforcement
5. Mapping Private Ordering and Intermediary Responsibility
6. Mapping Internet Jurisdiction, Extraterritoriality, and Intermediary Liability
Part II: MAPPING FUNDAMENTAL NOTIONS
Chapter 2: Who Are Internet Intermediaries?
1. Definitions of ‘Internet Intermediaries’
2. Alternative Terms
3. A Functional Taxonomy
4. Typological Considerations
4.1 Intermediaries in Distinct Fields
4.2 Status as a Measure of Legal Applicability
4.3 The Directive on Copyright in the Digital Single Market
5. Conclusions
Chapter 3: A Theoretical Taxonomy of Intermediary Liability
1. What is ‘Liability’?
1.1 Moral Agency and Individual Responsibility
1.2 Monetary Liability
1.2.1 Strict Liability
1.2.2 Negligence-Based Standards
1.2.3 Knowledge-Based Standards
1.2.4 Immunity
1.3 Non-Monetary Liability
2. Classifying Liability
2.1 Primary Liability
2.2 Secondary Liability
2.2.1 Causative Secondary Liability
2.2.1.1 Procurement
2.2.1.2 Common Design
2.2.1.3 Criminal Accessory Liability
2.2.2 Relational Secondary Liability
3. Justifying Intermediary Liability
3.1 Normative Justifications
3.1.1 Holding Causes of Harm Accountable
3.1.2 Fictional Attribution to Secondary Wrongdoers
3.1.3 Upholding Primary Duties
3.1.4 Upholding Duties Voluntarily Assumed
3.2 Practical Functions
3.2.1 Reducing Claimants’ Enforcement Costs
3.2.2 Encouraging Innovation
3.2.3 Regulating Communications Policy
4. Types of Wrongdoing
4.1 Copyright Infringement
4.2 Trade Mark Infringement
4.3 Defamation
4.4 Hate Speech, Disinformation, and Harassment
4.5 Breach of Regulatory Obligations
4.6 Disclosure Obligations
5. Conclusions
Chapter 4: Remedies First, Liability Second: Or Why We Fail to Agree on Optimal Design of Intermediary Liability
1. Three Legal Pillars
2. Distinguishing Reasons from Consequences
3. Typology of Consequences
3.1 Scope of Damages
3.2 Aggregation of Damages
3.3 Scope and Goals of Injunctions
3.4 Costs of Injunctions
4. Putting the Cart before the Horse
5. Conclusions
Chapter 5: Empirical Approaches to Intermediary Liability
1. Volume of Notices
2. Accuracy of Notices
3. Over-Enforcement and Abuse
4. Due Process and Transparency
5. Balancing of Responsibilities and Costs
6. Conclusion: Limitations, Gaps, and Future Research
Chapter 6: The Civic Role of OSPs in Mature Information Societies
1. Managing Access to Information
2. Human Rights: Harmful Content and Internet Censorship
3. The Civic Role of OSPs in Mature Information Societies
4. Conclusion: The Duty of Ethical Governance
Chapter 7: Intermediary Liability and Fundamental Rights
1. Users’ Rights
1.1 Freedom of Information and Internet Access
1.2 Freedom of Expression
1.3 Right to Privacy and Data Protection
2. OSPs, Freedom of Business, and Innovation
3. IP Owners and Property Rights
4. Conclusions
Part III: SAFE HARBOURS, LIABILITY, AND FRAGMENTATION
Chapter 8: An Overview of the United States’ Section 230 Internet Immunity
1. Pre-Section 230 Law
1.1 The Moderator’s Dilemma
2. Section 230’s Protections for Defendants
2.1 Section 230’s Statutory Exclusions
3. Section 230’s Implications
4. Comparative Analysis
4.1 EU’s ‘Right to Be Forgotten’
4.2 EU Electronic Commerce Directive
4.3 The UK Defamation Law
4.4 Germany’s Network Enforcement Law (NetzDG)
4.5 Brazil’s Internet Bill of Rights
4.6 Section 230 and Foreign Judgments
5. What’s Next for Section 230?
Chapter 9: The Impact of Free Trade Agreements on Internet Intermediary Liability in Latin America
1. Background: Notice and Takedown in the DMCA
2. Free Trade Agreements and Notice-and-Takedown Provisions
3. Implementation of FTA Intermediary Liability Provisions in Latin America
3.1 A Comparison of Provisions on Effective Notice in FTAs with Latin American Countries
3.2 Completed Implementation
3.2.1 Chile
3.2.2 Costa Rica
3.3 Pending Implementation
3.4 The Current Promotion of the DMCA Model in FTAs
4. The Convenience of the DMCA Approach for Notice and Takedown in Latin America
5. Conclusions
Chapter 10: The Marco Civil da Internet and Digital Constitutionalism
1. The Civil Rights Framework for the Internet
1.1 The Path Towards the MCI’s Enactment
1.2 The MCI Legislative Process
2. The ‘MCI on the Books’
2.1 General Provisions of the MCI
2.2 Intermediary Liability Rules
2.3 Intermediary Liability Beyond the MCI
3. The MCI in Practice
3.1 The Brazilian Justice System
3.2 Relevant Decisions in High Courts
3.3 Cases Pending Before the Federal Supreme Court
4. The MCI and Digital Constitutionalism
5. Concluding Remarks
Chapter 11: Intermediary Liability in Africa: Looking Back, Moving Forward?
1. The Slow Rise of the Intermediary Liability Discourse in Africa
2. First-generation Liability Limitations
2.1 South Africa
2.2 Ghana
2.3 Zambia
2.4 Uganda
3. Interstate Cooperation in the African Region: Implications for Intermediary Liability
4. Second-generation Liability Limitations: the Rise of Hybrid Instruments
4.1 Malawi
4.2 Ethiopia
4.3 Kenya
4.4 South Africa
4.4.1 Liability limitations when you are also ‘cyber-police’: a complex terrain
5. Conclusion: The Role of the African Union and the Promise of the South African Model
Chapter 12: The Liability of Australian Online Intermediaries
1. Liability: Active Intermediaries and Recalcitrant Wrongdoers
1.1 Consumer Protection Law
1.2 Defamation
1.3 Vilification
1.4 Copyright
2. Limiting Devices and their Flaws
3. Conclusions
Chapter 13: From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia
1. China
1.1 Basic Laws and Regulations
1.2 Liability-Imposing v Liability-Exempting
1.3 Conclusions
2. India
2.1 Basic Laws and Regulations
2.2 Liability-Imposing v Liability-Exempting
2.3 Dialectical Turn of Singhal
2.4 Conclusions
3. Japan
3.1 Basic Laws and Regulations
3.2 Comparison With Other Safe Harbours
3.3 Conclusions
4. Indonesia
5. Malaysia
6. South Korea
6.1 Introduction: Basic Laws and Regulations
6.2 Proof: Intermediary Behaviour and Courts’ Expansionist Interpretation
6.3 Origins: Syntactical Error in Adopting Section 512 of the DMCA?
6.4 Conclusions
7. Epilogue
Chapter 14: China’s IP Regulation and Omniscient Intermediaries: Oscillating from Safe Harbour to Liability
1. Intermediary Liability for Trade Mark Infringement
1.1 Establish Internal Procedures and Enforce Accordingly
1.2 Appropriate Reasonable Measures in the Case of Repeat Infringement
1.3 Inferences
2. Intermediary Liability in the Case of Copyright Infringement
2.1 Non-Interference
2.2 Removal after a Notice-and-Takedown Request
2.3 Necessary Measures
2.4 No Financial Benefits
2.5 Inducement and Contributory Liability
2.6 Non-Interference
3. Conclusions
Chapter 15: A New Liability Regime for Illegal Content in the Digital Single Market Strategy
1. The Issue of Illegal Content within the DSM Strategy
2. Copyright-Infringing Content and the Copyright in the DSM Directive
3. Harmful Content within the Reformed Audiovisual Media Services Directive
4. Misleading Content and the Unfair Commercial Practices Directive
5. Final Remarks
Part IV: A SUBJECT MATTER-SPECIFIC OVERVIEW
Chapter 16: Harmonizing Intermediary Copyright Liability in the EU: A Summary
1. The Current Incomplete EU Framework for Intermediary Liability in Copyright
2. The National Regimes on Intermediary Liability in Copyright
2.1 Intra-Copyright Solutions
2.2 Tort-Based Solutions
2.3 Injunction-Based Solutions
3. Intermediary Liability and Tort Law
4. Building a Complete Framework for European Intermediary Liability in Copyright
5. Closing Remarks
Chapter 17: The Direct Liability of Intermediaries
1. The Right of Communication to the Public as Construed Through Case Law
2. Liability of Platform Operators for the Making of Acts of Communication to the Public: The Pirate Bay Case
3. Applicability of C-610/15 Stichting Brein to Less Egregious Scenarios
4. Other Implications: Primary/Secondary Liability and Safe Harbours
5. Conclusion
Chapter 18: Secondary Copyright Infringement Liability and User-Generated Content in the United States
1. Secondary Copyright Infringement in Common Law
2. The Digital Millennium Copyright Act
2.1 Actual and ‘Red Flag’ Knowledge
2.2 Wilful Blindness
2.3 Right and Ability to Control
3. User-Generated Content in The Shadow of The DMCA and Case Law
3.1 Technological Measures
3.2 Government Enforcement Efforts
4. Policy Activity
4.1 Stop Online Piracy Act and Companion Bills
4.2 US Department of Commerce Internet Policy Task Force
4.3 US House Judiciary Committee Copyright Review
4.4 US Copyright Office Study
5. Secondary Copyright Infringement Liability for Online Intermediaries Outside the United States
5.1 International Treaties and Trade Agreements
5.2 European Union
6. Conclusions
Chapter 19: Intermediary Liability and Online Trade Mark Infringement: Emerging International Common Approaches
1. The ‘Ratio’ Principles of Intermediary Responsibility—International Common Approaches
1.1 Injunctions for Blocking Websites by ISPs
2. The Ius Gentium of Voluntary Measures—International Common Approaches
2.1 Freedom of Expression, Competition, and Data Protection
3. Conclusions
Chapter 20: Intermediary Liability and Trade Mark Infringement: Proliferation of Filter Obligations in Civil Law Jurisdictions?
1. Trade Mark Rights
1.1 Inherent Limits
1.2 Context-Specific Infringement Analysis
2. Limitations of Trade Mark Rights
2.1 Commercial Freedom of Expression
2.2 Artistic and Political Freedom of Expression
2.3 Context-Specific Limitations
3. Developments in Case Law
3.1 Guidelines at EU Level
3.2 Application in Civil Law Jurisdictions
3.3 Need for Balanced, Proportionality-Based Approach
4. Conclusions
Chapter 21: Intermediary Liability and Trade Mark Infringement: A Common Law Perspective
1. Who are Online Intermediaries?
2. Primary Liability of Intermediaries for Trade Mark Infringement
2.1 Use of the Sign Complained of by the Intermediary
2.2 Use of the Sign in the Relevant Territory
2.3 Counterfeit Goods and Grey Goods
2.4 Articles 12 to 14 of the e-Commerce Directive
3. Accessory Liability of Intermediaries for Trade Mark Infringement
3.1 Accessory Liability of Intermediaries for Trade Mark Infringement under English Law
3.2 Article 14 of the e-Commerce Directive
4. Injunctions Against Intermediaries Whose Services are Used to Infringe Trade Marks
4.1 Jurisdiction of the Courts of England and Wales to Grant Injunctions Against Intermediaries in Trade Mark Cases
4.2 Website-Blocking Injunctions: Threshold Conditions
4.3 Website-Blocking Injunctions: Applicable Principles
4.5 Other Kinds of Injunctions Against Intermediaries in Trade Mark Cases
Chapter 22: Digital Markets, Rules of Conduct, and Liability of Online Intermediaries—Analysis of Two Case Studies: Unfair Commercial Practices and Trade Secrets Infringement
1. Problem Definition
2. Online Intermediaries: Walking a Tightrope Between Immunity from Liability and Remedies
2.1 Safe Harbours
2.2 Injunctions Against Online Intermediaries
3. Unfair Commercial Practices via Online Intermediaries
3.1 Unfair Commercial Practices Definition
3.2 Is the Online Intermediary a ‘Trader’ that Performs ‘Commercial Practices’?
3.3 The Interplay Between the UCPs Directive and e-Commerce Directive
3.4 Protection of Consumers’ Interests vs IPRs’ Protection
4. Trade Secrets Infringement via OIs
4.1 Protection of Undisclosed Know-How and Business Information (Trade Secrets) Within the EU: Overview
4.2 Unlawful Acquisition, Use, and Disclosure of Trade Secrets by Third Parties
4.3 Remedies Against Third Parties
4.4 The Interplay Between Trade Secrets Directive and the e-Commerce Directive
4.5 Trade Secrets and IPRs’ Enforcement Against Third Parties
5. Assessment
Chapter 23: Notice-and-Notice-Plus: A Canadian Perspective Beyond the Liability and Immunity Divide
1. Legal Context
1.1 Common Law
1.2 Statutory Approaches
2. Proposal for Reform
2.1 Common Law
2.2 Notice-and-Notice-Plus
3. Notice-and-Notice-Plus Beyond Defamation Law
3.1 NN+ Requires the Speech to Be Unlawful but Other Forms of Speech Are Harmful Too
3.2 Speech Regulation for Which NN+ is Clearly Unsuitable
3.3 Case Study: Terrorist Content
3.4 Case Study: Hate Speech
4. Conclusions
Chapter 24: Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation
1. The European Regulatory Framework
1.1 The Council of Europe
1.2 The European Union
2. Geometrical Shifts
3. Conclusions
Chapter 25: The Right to Be Forgotten in the European Union
1. The Right to be Forgotten Vis-à-Vis Search Engines
1.1 Google Spain
1.2 Delisting in Numbers
1.3 Balancing Rights
1.4 Geographical Scope
1.5 Sensitive Data
2. The Right to be Forgotten Vis-à-Vis Primary Publishers
3. Conclusions
Chapter 26: Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules
1. The Belén Rodriguez Case: Something New Under the Sun?
1.1 Previous and Actual Knowledge
1.2 Explicit Illegality of the Content
1.3 Negligent Response
2. Trends in Latin America: What do they Follow?
2.1 Chile
2.2 Colombia
2.3 Mexico
2.4 Peru
2.5 Uruguay
3. The Right to Privacy and the Right to Freedom of Expression Under the Inter American System of Human Rights and its Impact on the Right to be Forgotten
4. Conclusions
Part V: INTERMEDIARY LIABILITY AND ONLINE ENFORCEMENT
Chapter 27: From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression
1. Impact on Freedom of Expression
2. Notice and Takedown
2.1 General
2.2 Variations of the Mechanism
2.3 Risks and Safeguards for Freedom of Expression
2.3.1 Foreseeability
2.3.2 Abusive requests
2.3.3 Notification and counter-notification
3. Notice and Notice
3.1 General
3.2 Variations of the Mechanism
3.3 Risks and Safeguards for Freedom of Expression
3.3.1 Foreseeability
3.3.2 Decision-making bodies
3.3.3 Severity of the response
4. Notice and Stay Down
4.1 General
4.2 Judicial Construct
4.3 Risks and Safeguards for Freedom of Expression
4.3.1 General v specific monitoring
4.3.2 Clear and precise notifications
4.3.3 Appeal procedure
5. Conclusions
Chapter 28: Monitoring and Filtering: European Reform or Global Trend?
1. OSP as a ‘Mere Conduit’: ‘No Monitoring’ Obligations
2. From ‘Mere Conduits’ to ‘Gate-Keepers’? The Global Shift in Intermediary Liability
2.1 Case Law
2.1.1 The European experience
2.2 Private Ordering: Emerging Industry Practice
3. The EU Copyright Directive in the Digital Single Market: Legitimation through Legislation?
3.1 Definition of an OCSSP
3.2 A General Monitoring Obligation?
4. Effect on Fundamental Rights
5. Conclusions
Chapter 29: Blocking Orders: Assessing Tensions with Human Rights
1. A Freedom of Expression Perspective on Website Blocking: The Emergence of User Rights
1.1 User Rights
1.2 Collateral Effects of Blocking: The Risk of Overblocking
1.3 The ‘Value’ of Content
1.4 Alternative Means of Accessing Information
2. A Freedom to Conduct a Business Perspective on Website Blocking: The (Rising) Role of ISPs in Digital Copyright Enforcement
2.1 Costs and Complexity of Blocking
2.2 Availability of Reasonable Alternatives (Subsidiarity)
3. A Right to Property Perspective on Website Blocking: Effectiveness of the Blocking
4. Recent EU Copyright Reform and its Effects on Website Blocking and Fundamental Rights
5. Conclusions
Chapter 30: Administrative Enforcement of Copyright Infringement in Europe
1. The European Landscape: Spain, Italy, and Greece
2. The Legal Context
2.1 International and EU Legislative Provisions
2.1.1 TRIPs
2.1.2 The EU
2.1.3 The EU Charter of Fundamental Rights
2.2 Domestic Legal Basis
3. The Essential Features
3.1 The Independence of the Public Bodies Entrusted with the Task
3.2 Protected Subject Matter and Violations
3.3 Parties
3.4 Procedure
3.5 Abbreviated Proceedings and Protective Orders
3.6 Costs
3.7 Remedies
3.8 Transparency
3.9 Double Track
3.10 Review
3.11 Safeguards against Abuse
4. The AGCOM Regulation in Practice: A Case Study
4.1 Transparency
4.2 Protected Subject Matter
4.3 Scope of Violations
4.4 Remedies
4.5 Relevance of the Principle of Proportionality
Part VI: INTERMEDIARY RESPONSIBILITY, ACCOUNTABILITY, AND PRIVATE ORDERING
Chapter 31: Accountability and Responsibility of Online Intermediaries
1. Tools for Increasing Responsibility
1.1 Graduated Response
1.2 Changes to Online Search Results
1.3 Payment Blockades and Follow the Money
1.4 Private DNS Content Regulation
1.5 Standardization
1.6 Codes of Conduct
1.7 Filtering
1.8 Website-Blocking
2. Mechanisms and Legal Challenges
2.1 Market and Private Ordering
2.2 Corporate Social Responsibility
2.3 Involuntary Cooperation in IP Rights Enforcement
2.4 Public Deal-Making
2.5 Circulation of Solutions
3. Conclusions
Chapter 32: Addressing Infringement: Developments in Content Regulation in the US and the DNS
1. ICANN, the DNS, and DNS Intermediaries
2. The History of Intellectual Property Enforcement in the DNS
2.1 The UDRP
3. ICANN’s New gTLD Programme and IP Stakeholder Demands
4. Expanding IP Enforcement in the DNS: Within and Without ICANN
4.1 Present Arrangements: ‘Trusted Notifier’ Agreements
4.1.1 The trusted notifier model and the UDRP compared
4.2 Future Plans: A Copyright-Specific UDRP?
4.2.1 The DNA’s copyright ADRP
4.2.2 PIR’s SCDRP
5. Conclusions
Chapter 33: Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet
1. Russian Internet
2. The Evolution of Internet Regulations and ISP Liability
3. How the Government Relies on Intermediaries
3.1 Content
3.2 Surveillance
4. Compliance Dilemma
5. Transparency and Compliance with Human Rights
6. Conclusions
Chapter 34: Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law
1. Content Moderation by Platforms and The Rule of law
2. Barriers to Accountability
3. Enhancing Intermediaries’ Oversight
4. Future Challenges
Chapter 35: Algorithmic Accountability: Towards Accountable Systems
1. Accountable to whom?
2. Accountability for what?
3. Challenges with Algorithmic Accountability
3.1 Access to the Algorithmic System
3.2 Verification
3.3 Aggregation
3.4 Measuring the Effect of a System on User Behaviour
3.5 Users’ Interpretation of the Capabilities of the Systems They are Using
3.6 Improving the Quality of Technical Systems
3.7 Accountability of the Socio-Technical System
4. What does Algorithmic Accountability Mean in the Context of Intermediary Liability Online?
5. Conclusions
Part VII: INTERNET JURISDICTION, EXTRATERRITORIALITY, AND LIABILITY
Chapter 36: Internet Jurisdiction and Intermediary Liability
1. Internet Intermediaries and Jurisdiction
2. Terms of Service, Jurisdiction, and Choice of Law
3. Access to Evidence and Jurisdiction
4. Scope of Jurisdiction of Content Blocking
5. Concluding Remarks
Chapter 37: The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet
1. Where it all Began: The Yahoo France Case
2. Equustek Solutions v Google: Internet Jurisdiction Hits Canada’s Highest Court
3. Supreme Court of Canada Hearing
4. The Supreme Court of Canada Decision
5. After Equustek: The Risks of Global Takedown Orders From National Courts
5.1 Conflicting Court Orders
5.2 Expanding Equustek
5.3 Expanding Intermediary Power
6. Conclusions
Chapter 38: Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation
1. National Jurisdictions and Cross-Border Data Flows and Services
1.1 Conflicting Territorialities
1.2 A Challenge for All Stakeholders
1.3 A Core Issue of Internet Governance
2. A Legal Arms Race in Cyberspace?
2.1 Extraterritoriality
2.2 Digital Sovereignty
2.3 Paradoxes of Sovereignty
3. Limits to International Cooperation
3.1 Obstacles to Multilateral Efforts
3.2 MLATs: The Switched Network of International Cooperation
4. A Dangerous Path
4.1 Economic Impacts
4.2 Human Rights Impacts
4.3 Technical Infrastructure Impacts
4.4 Security Impacts
5. Filling the Institutional Gap in Internet Governance
5.1 Lessons from the Technical Governance ‘of ’ the Internet
5.2 Evolution of the Ecosystem: Governance ‘on’ the Internet
5.3 Enabling Issue-Based Multistakeholder Cooperation
6. Towards Transnational Frameworks
6.1 Procedural Interoperability
6.2 Governance through Policy Standards
7. Conclusions
Index

Citation preview

OUP CORRECTED PROOF – FINAL, 03/28/2020, SPi


The Oxford Handbook of ONLINE INTERMEDIARY LIABILITY

Edited by Giancarlo Frosio

Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© Giancarlo Frosio 2020

The moral rights of the authors have been asserted

First Edition published in 2020
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2020931359

ISBN 978-0-19-883713-8

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


Contents

Editor’s Note: A Dialogue on the Role of Online Intermediaries
Notes on Contributors

Part I: INTRODUCTION
1. Mapping Online Intermediary Liability (Giancarlo Frosio)

Part II: MAPPING FUNDAMENTAL NOTIONS
2. Who Are Internet Intermediaries? (Graeme Dinwoodie)
3. A Theoretical Taxonomy of Intermediary Liability (Jaani Riordan)
4. Remedies First, Liability Second: Or Why We Fail to Agree on Optimal Design of Intermediary Liability (Martin Husovec)
5. Empirical Approaches to Intermediary Liability (Kristofer Erickson and Martin Kretschmer)
6. The Civic Role of OSPs in Mature Information Societies (Mariarosaria Taddeo)
7. Intermediary Liability and Fundamental Rights (Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko)

Part III: SAFE HARBOURS, LIABILITY, AND FRAGMENTATION
8. An Overview of the United States’ Section 230 Internet Immunity (Eric Goldman)
9. The Impact of Free Trade Agreements on Internet Intermediary Liability in Latin America (Juan Carlos Lara Gálvez and Alan M. Sears)
10. The Marco Civil da Internet and Digital Constitutionalism (Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes)
11. Intermediary Liability in Africa: Looking Back, Moving Forward? (Nicolo Zingales)
12. The Liability of Australian Online Intermediaries (Kylie Pappalardo and Nicolas Suzor)
13. From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia (Kyung-Sin Park)
14. China’s IP Regulation and Omniscient Intermediaries: Oscillating from Safe Harbour to Liability (Danny Friedmann)
15. A New Liability Regime for Illegal Content in the Digital Single Market Strategy (Maria Lillà Montagnani)

Part IV: A SUBJECT MATTER-SPECIFIC OVERVIEW
16. Harmonizing Intermediary Copyright Liability in the EU: A Summary (Christina Angelopoulos)
17. The Direct Liability of Intermediaries (Eleonora Rosati)
18. Secondary Copyright Infringement Liability and User-Generated Content in the United States (Jack Lerner)
19. Intermediary Liability and Online Trade Mark Infringement: Emerging International Common Approaches (Frederick Mostert)
20. Intermediary Liability and Trade Mark Infringement: Proliferation of Filter Obligations in Civil Law Jurisdictions? (Martin Senftleben)
21. Intermediary Liability and Trade Mark Infringement: A Common Law Perspective (Richard Arnold)
22. Digital Markets, Rules of Conduct, and Liability of Online Intermediaries—Analysis of Two Case Studies: Unfair Commercial Practices and Trade Secrets Infringement (Reto M. Hilty and Valentina Moscon)
23. Notice-and-Notice-Plus: A Canadian Perspective Beyond the Liability and Immunity Divide (Emily Laidlaw)
24. Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation (Tarlach McGonagle)
25. The Right to be Forgotten in the European Union (Miquel Peguera)
26. Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules (Eduardo Bertoni)

Part V: INTERMEDIARY LIABILITY AND ONLINE ENFORCEMENT
27. From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression (Aleksandra Kuczerawy)
28. Monitoring and Filtering: European Reform or Global Trend? (Giancarlo Frosio and Sunimal Mendis)
29. Blocking Orders: Assessing Tensions with Human Rights (Christophe Geiger and Elena Izyumenko)
30. Administrative Enforcement of Copyright Infringement in Europe (Alessandro Cogo and Marco Ricolfi)

Part VI: INTERMEDIARY RESPONSIBILITY, ACCOUNTABILITY, AND PRIVATE ORDERING
31. Accountability and Responsibility of Online Intermediaries (Giancarlo Frosio and Martin Husovec)
32. Addressing Infringement: Developments in Content Regulation in the US and the DNS (Annemarie Bridy)
33. Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet (Sergei Hovyadinov)
34. Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law (Niva Elkin-Koren and Maayan Perel)
35. Algorithmic Accountability: Towards Accountable Systems (Ben Wagner)

Part VII: INTERNET JURISDICTION, EXTRATERRITORIALITY, AND LIABILITY
36. Internet Jurisdiction and Intermediary Liability (Dan Jerker B. Svantesson)
37. The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet (Michael Geist)
38. Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation (Bertrand de La Chapelle and Paul Fehlinger)

Index


Editor’s Note: A Dialogue on the Role of Online Intermediaries
Giancarlo Frosio

In pursuit of an answer to the online intermediary liability conundrum, The Oxford Handbook of Online Intermediary Liability has gathered together multiple voices that have contributed greatly to this emerging debate in the last few years. I have encountered or came to know about most of my co-authors a few years ago at the time of my residence at the Stanford Center for Internet and Society where I served as the first Intermediary Liability Fellow. The idea for this Handbook was the result of other projects that we have run together, including the World Intermediary Liability Map, and the need to crystallize an emerging field of research. This field of research has grown exponentially since this Handbook was first envisioned. Online intermediary liability has now become a pervasive issue on the agenda of governments, courts, civil society, and academia and in the last few years legislation, case law, and initiatives dealing with the liability of online intermediaries have followed at an astounding pace. The role of online  service providers (OSPs) is unprecedented in relation to their capacity to influence users’ interactions in the infosphere.1 Online intermediaries mediate human life in a virtual brave new world that reflects and augments our physical realm. ‘Internet intermediaries are crucial for how most people use the Internet,’ Dan Jerker Svantesson noted.2 Ubiquitous platforms dictate our daily routine: searching for information on Google, getting a taxi on Uber, shopping on Amazon Fresh, making payments via PayPal, collaborating on Google Docs, storing documents on Dropbox, taking up employment through Upwork, discussing trendy topics on Twitter, sharing videos on YouTube, or posting pictures on Instagram. In particular, most creative expression today takes place over communications networks owned by private companies. 
The decentralized, global nature of the internet means that almost anyone can present an 1 See Luciano Floridi, The Fourth Revolution—How the Infosphere is Reshaping Human Reality (OUP 2014) (arguing that after the Copernican, Darwinian, and Freudian revolutions, humans are once again forced to rethink their role as they must now interact with virtual entities and agent in a wholly new medium, the infosphere). 2  The following considerations are the result of reflections collected via email exchange with my ­co-authors in the last few months. The quoted passages are taken from this exchange, which is on file with the author.

OUP CORRECTED PROOF – FINAL, 03/28/2020, SPi

x   editor’s note: a dialogue on the role of online intermediaries idea, make an assertion, post a photograph, or push to the world numerous other types of content. Billions of  people possess multiple connected portable devices. People communicate their experiences on the go through multimedia social networking services (e.g. Facebook, Instagram), online file repositories (e.g. Flickr, Dropbox, Google Photos), and various kinds of video-sharing platforms (e.g. YouTube, Vimeo, Dailymotion). According to Danny Friedmann, ‘in China, en route to become the biggest economy in the world, at the end of 2017, it was estimated that more than half of the population (772 million people) had access to the internet. Goods are traded between businesses and consumers, between consumers, and between businesses, at enormous online market platforms, such as those of the Alibaba Group’. The ‘information society’ with its 2.5 quintillion bytes of data created each day3 has slowly morphed into the ‘platform society’. But there is more to it. Perhaps unnoticed, our society has transformed into an ‘intermediated society’. Platforms and OSPs filter our networked life, shape our experiences, define our memories—and even remind us of a (selected) few of them. ‘They contribute to inform the space of opportunities in which individuals and societies can flourish and evolve,’ Mariarosaria Taddeo stresses, ‘and eventually impact how we understand reality and how we interact with each other and with the environment’. The decisions made by these platforms increasingly shape contemporary life. ‘As leading designers of online environments, OSPs make decisions that impact private and public lives, social welfare and individual wellbeing.’ For Christina Angelopoulos, ‘the prevalence of intermediation in modern electronic communications makes [intermediary liability] an increasingly significant aspect of an increasing wide array of different areas of information law’. 
Eleonora Rosati adds that ‘in this sense Recital 59 of the Information Society Directive, which stresses how, in a context of this kind, intermediaries may be indeed best placed to bring infringements to an end, helps understand the need for intermediaries’ involvement in the enforcement process’. Richard Arnold also develops this point by noting that ‘online exploitation makes infringement of IPRs easy, but makes enforcement of IPRs against the sources of such infringements difficult (eg because of jurisdictional issues)’; therefore, ‘IPR owners increasingly find that it is more practical and effective to target online intermediaries both with claims for direct and accessory liability and with claims for intermediary liability’. Frederick Mostert recognizes that ‘intermediary liability for online service providers has become relevant because of an attempt . . . to deal with the volume and velocity of illegal online activities by Bad Actors’. But, Mostert adds, this strategy is suboptimal: ‘culpa is rarely present at the platform end-point and intermediaries are virtually in all cases not the source of the problem’. However, ‘dealing with the source of the problem is exacerbated by the anonymity issue in the online world, making it very difficult to track and trace the identity of the Bad Actors themselves’. Hence, ‘the last resort efforts to try and stop the illegal activities at the end-point—the platform level’.
3  See Domo, ‘Data Never Sleeps 5.0’ . Note that all hyperlinks in the Handbook were last accessed on 10 June 2019.

Therefore, whether and when access providers and communications platforms like Google, Twitter, and Facebook are liable for their users’ online activities is a key factor that affects innovation and fundamental rights. Transaction costs deriving from their liability shape platforms’ decisions and policies, algorithms’ development, and, finally, the architecture of the infosphere. Kylie Pappalardo and Nicolas Suzor clarify that ‘intermediary liability matters because the rules of intermediary liability structure the internet. What internet users can and cannot see and do is largely dictated by the laws that apply to the online service providers that mediate their experience’. According to Miquel Peguera, ‘the way business models are conceived and developed closely hinges on the legal framework that governs their duties and that determines the conditions under which they may or may not be sheltered from liability’, so that ‘intermediary liability ends up shaping the availability of new digital services and affects citizens’ daily lives in multiple ways’. Jaani Riordan reminds us of the tension inherent in setting up liability against online intermediaries: If intermediary liability rules are too strict, they risk stifling new and useful services and restricting market participation to the largest platforms—those best able to afford lawyers, compliance costs, and political lobbying. At the other extreme, rules which immunise intermediaries against any need to act could make it impossible or impracticable for claimants to enforce their rights, encouraging the spread of disinformation and other harmful material.

On one side, as Marco Ricolfi suggests, ‘the adoption of intermediary liability exacerbates the difficulties in preserving freedom of innovation [and], in terms of competition policy, it risks creating barriers to newcomers’. On the other side, Eduardo Bertoni stresses that ‘depending on the regulation that could be enacted, the impact on fundamental rights might be positive or negative’. For Peguera, the way in which we arrange liability has ‘an immediate effect on the exercise of fundamental rights on the Internet, particularly in terms of freedom to access and impart information and in terms of privacy and data protection’. According to Kristofer Erickson and Martin Kretschmer, ‘the legal regime of notice-and-takedown which has been in place for 20 years was conceived as a means of balancing legitimate interests of media rightholders, platform innovators and online users’. However, Christophe Geiger and Elena Izyumenko add that: once intermediaries are asked to do more in their new role of active . . . enforcers, impartial enforcement can be at risk, raising the questions with regards to compliance, among others, with the fundamental rights of both Internet users and intermediaries themselves. This situation entails a serious risk of intermediary being ‘overzealous’ in order to avoid liability and to block access to content that is made available in a perfectly legal manner.

OSPs are subjected to increasing pressure by governments and interest groups which are seeking to control online content by making use of their technical capacities. In this

regard, Niva Elkin-Koren and Maayan Perel highlight that ‘they offer a natural point of control for monitoring, filtering, blocking, and disabling access to content, which makes them ideal partners for performing civil and criminal enforcement’. As Emily Laidlaw develops: Intermediaries occupy a critical regulatory role, because they have the capacity to control the flow of information online in a way that others, including states, cannot. Thus, significant energy is devoted by lawmakers to strategizing how to create laws that capitalize on intermediaries’ capacity to regulate to solve many problems posed by internet use.

However, ‘intermediary obligations should not be confused with those of states’. This is hardly the case nowadays. For example, as Nicolo Zingales notes, ‘intermediary liability of OSPs is particularly relevant in the African continent because of the increasing pressure on intermediaries to fulfil broad and open-ended public policy mandates, and potentially affecting both technological progress and the creation of local content’. Multiple jurisdictions follow the same path. As Bertrand de La Chapelle and Paul Fehlinger argue, the intermediary liability debates are exacerbated by a looming risk of ‘a legal arms race, in which states resort to an extensive interpretation of territoriality criteria over cross-border data flows and services’. In an ‘invisible handshake’ between public and private parties, faced with semiotic regulation on an unprecedented scale, enforcement looks once again for an ‘answer to the machine in the machine’.4 Sophisticated algorithms and company policies enable and constrain our actions. These algorithms take decisions reflecting policy’s assumptions and interests that have very significant consequences for society at large, yet there is limited understanding of these processes. This is critical, Ben Wagner notes, as ‘understanding the mechanism of implementation is key to understanding the nature of speech that is enabled by it’. Sergei Hovyadinov notes how ‘the framework of intermediary liability established in the EU and the US 15–20 years ago is not sufficient anymore to fight illegal content, especially terrorist, which because of the volume and the speed of its dissemination calls for a more proactive approach, with elements of automation and proactive monitoring’. 
‘Intermediary liability’, Hovyadinov continues, ‘is thus shifting from a reactive “notice-and-take down” regime to “intermediary accountability” [as Martin Husovec termed it] based on new expectations from governments and civil society as to the role online platforms should proactively perform’. Christophe Geiger and Elena Izyumenko highlight the ‘danger that intermediaries have recourse to automated systems, leading to a situation where machines and algorithms would become the decision-makers of what is available or not on the Internet’. Of course, ‘this privatisation of justice is highly problematic in a democratic society, in particular with regard to the constitutional legal framework’. Therefore, Niva Elkin-Koren and Maayan Perel conclude that
4  Charles Clark, ‘The Answer to the Machine is in the Machine’ in Bernt Hugenholtz (ed.), The Future of Copyright in a Digital Environment (Kluwer Law Int’l 1999) 139 (discussing the application of digital right management systems to enforce copyright infringement online).

‘with the rise of algorithmic enforcement, data driven economy, centralized architecture and market concentration, the new regulatory powers of online platforms are challenging the rule of law’. To continue this dialogue on the role of online intermediaries in modern society, this Handbook will present multiple scholarly perspectives on the major themes in intermediary liability and platform governance. The Handbook discusses fundamental legal issues in online intermediary liability, while also describing advances in intermediary liability theory and identifying recent policy trends. Part I features an introductory chapter that serves as a blueprint for the consistent development of the other chapters, as it sets out in advance the most relevant trends according to which the structure of the book has been generated. Part II provides a taxonomy of internet platforms, a general discussion of a possible basis for liability, and a review of remedies. In addition, Part II introduces the discussion of the fundamental rights implications of intermediary liability and considers the ethical ramifications of the role of online intermediaries. Part III presents a jurisdictional overview of intermediary liability safe harbours and highlights systemic fragmentation. In this respect, Part III will also focus on enhanced responsibilities that multiple jurisdictions increasingly impose on online intermediaries. Part IV provides an overview of domain-specific solutions, including intermediary liability for copyright, trade mark, unfair competition, and privacy infringement, together with internet platforms’ speech-related obligations and liabilities. Part V reviews intermediary liability enforcement strategies by focusing on emerging trends, including proactive monitoring obligations, blocking orders, and the emergence of administrative enforcement of online infringement. 
Part VI discusses an additional core emerging trend in intermediary liability enforcement: voluntary measures and private enforcement of allegedly illegal content online are shifting the discourse from intermediary liability to intermediary responsibility or accountability. International private law issues are addressed in Part VII with special emphasis on extraterritorial enforcement of intermediaries’ obligations. Finally, I would like to express my gratitude to my co-authors for completing this truly ‘monumental’ study that contributes fundamentally to research in the field. Special thanks also go to the team at Oxford University Press for giving us the opportunity to publish this volume, helping to define the trajectory that it has finally taken, and for all the editorial support. I would also like to thank my wife, Hong, and my parents, Nuccia and Clemente, for their continuous support. Nothing would be possible without the happiness that my family gives to me.

June 2019

Notes on Contributors

Christina Angelopoulos is a Lecturer in Intellectual Property Law at the University of Cambridge and a member of the Centre for Intellectual Property and Information Law (CIPIL). Email: [email protected].

Richard Arnold is a Judge of the Court of Appeal of England and Wales.

Eduardo Bertoni is the Director of the Access to Public Information Agency in Argentina, Director of the Postgraduate Program on Data Protection at Buenos Aires University School of Law, Argentina, and Global Clinical Professor at New York University School of Law, New York. Email: [email protected] and [email protected].

Annemarie Bridy is the Allan G. Shepard Professor of Law at the University of Idaho College of Law, an Affiliate Scholar at the Stanford Law School Center for Internet and Society (CIS), and an Affiliated Fellow at the Yale Law School Information Society Project (ISP). Email: [email protected].

Bertrand de La Chapelle is the Executive Director and Co-founder of the global multistakeholder organization Internet & Jurisdiction Policy Network. Email: [email protected].

Alessandro Cogo is Associate Professor at the University of Turin Law School and Director of the Master of Laws in Intellectual Property jointly organized by the World Intellectual Property Organization and the Turin University. Email: alessandroenrico.[email protected].

Graeme Dinwoodie is the Global Professor of Intellectual Property Law at Chicago-Kent College of Law. Email: [email protected].

Niva Elkin-Koren is a Professor of Law at the University of Haifa, Faculty of Law and a Faculty Associate at the Berkman Klein Center at Harvard University. She is the Founding Director of the Haifa Center for Law & Technology (HCLT), and a Co-director of the Center for Cyber, Law and Policy. Email: [email protected].

Kristofer Erickson is Associate Professor in Media and Communication at the University of Leeds. Email: [email protected]. 
Paul Fehlinger is the Deputy Executive Director and Co-founder of the multistakeholder organization Internet & Jurisdiction Policy Network. Email: [email protected].

Danny Friedmann is Visiting Assistant Professor of Law, Peking University School of Transnational Law in Shenzhen. Email: [email protected].

Giancarlo Frosio is an Associate Professor at the Center for International Intellectual Property Studies at Strasbourg University, a Fellow at Stanford Law School Center for Internet and Society, and Faculty Associate of the NEXA Center in Turin. Email: [email protected].

Christophe Geiger is Professor of Law and Director of the Research Department of the Centre for International Intellectual Property Studies (CEIPI) at Strasbourg University. Email: [email protected].

Michael Geist is a Professor of Law at the University of Ottawa where he holds the Canada Research Chair in Internet and E-commerce Law and is a member of the Centre for Law, Technology and Society. Email: [email protected].

Eric Goldman is a Professor of Law at Santa Clara University School of Law, where he is also Director of the school’s High Tech Law Institute. Email: [email protected].

Reto M. Hilty is Managing Director at the Max Planck Institute for Innovation and Competition in Munich and Full Professor (ad personam) at the University of Zurich. Email: [email protected].

Sergei Hovyadinov is a JSD candidate at Stanford Law School. Email: [email protected].

Martin Husovec is Assistant Professor at the University of Tilburg (Tilburg Institute for Law, Technology and Society & Tilburg Law and Economics Center) and Affiliate Scholar at Stanford Law School Center for Internet and Society. Email: [email protected].

Elena Izyumenko is a Researcher and a PhD Candidate at the Center for International Intellectual Property Studies (CEIPI), University of Strasbourg. Email: elena.[email protected].

Martin Kretschmer is Professor of Intellectual Property Law at the School of Law, University of Glasgow and Director of CREATe, the UK Copyright and Creative Economy Centre. Email: [email protected]. 
Aleksandra Kuczerawy is a Postdoctoral Researcher at the Katholieke Universiteit (KU) Leuven’s Centre for IT & IP Law. Email: [email protected].

Emily Laidlaw is Associate Professor, Faculty of Law, University of Calgary. Email: [email protected].

Juan Carlos Lara Gálvez is the Research and Public Policy Director at Derechos Digitales—América Latina, based in Santiago de Chile. Email: [email protected].

Jack Lerner is a Clinical Professor of Law at the University of California, Irvine School of Law and Director of the UCI Intellectual Property, Arts, and Technology Clinic. Email: [email protected].

Tarlach McGonagle is a Senior Lecturer/Researcher at IViR, University of Amsterdam and Professor of Media Law & Information Society at Leiden Law School. Email: [email protected].

Luiz Fernando Marrey Moncau is a Non-Residential Fellow at the Stanford Center for Internet and Society and holds a PhD from Pontifícia Universidade Católica of Rio de Janeiro. Email: [email protected].

Sunimal Mendis is a Postdoctoral Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Email: [email protected].

Maria Lillà Montagnani is an Associate Professor of Commercial Law at Bocconi University in Milan. Email: [email protected].

Valentina Moscon is Senior Research Fellow in Intellectual Property and Competition Law at the Max Planck Institute for Innovation and Competition. Email: valentina.[email protected].

Frederick Mostert is Professor of Intellectual Property at the Dickson Poon School of Law, King’s College and Research Fellow at the Oxford Intellectual Property Research Centre. Email: [email protected].

Kylie Pappalardo is a Lecturer in the Law School at the Queensland University of Technology (QUT) in Brisbane. Email: [email protected].

Kyung-Sin Park is a Professor at Korea University Law School. Email: [email protected].

Miquel Peguera is an Associate Professor of Law at the Universitat Oberta de Catalunya (UOC) in Barcelona and Affiliate Scholar, Stanford Center for Internet and Society. Email: [email protected].

Maayan Perel is an Assistant Professor in Intellectual Property Law at the Netanya Academic College in Israel and a Senior Research Fellow at the Cyber Center for Law & Policy, University of Haifa. Email: [email protected]. 
Marco Ricolfi is Professor of Intellectual Property at the Turin Law School, Partner at the law firm Tosetto, Weigmann e Associati, and Co-director of the Nexa Center on Internet and Society of the Turin Polytechnic. Email: [email protected].

Jaani Riordan is a barrister at 8 New Square, London. Email: [email protected].

Eleonora Rosati is an Associate Professor in Intellectual Property Law at Stockholm University and an Of Counsel at Bird & Bird. Email: [email protected].

Alan M. Sears is a Researcher and Lecturer at Leiden University’s eLaw Centre for Law and Digital Technologies. Email: [email protected].

Martin Senftleben is a Professor of Intellectual Property Law, Institute for Information Law, University of Amsterdam and a Visiting Professor, Intellectual Property Research Institute, University of Xiamen. Email: [email protected].

Nicolas Suzor is a Professor in the Law School at Queensland University of Technology in Brisbane. Email: [email protected].

Dan Jerker B. Svantesson is a Professor at the Faculty of Law at Bond University, a Visiting Professor at the Faculty of Law, Masaryk University, and a Researcher at the Swedish Law & Informatics Research Institute, Stockholm University. Email: [email protected].

Mariarosaria Taddeo is a Research Fellow at the Oxford Internet Institute and Deputy Director of the Digital Ethics Lab. Email: [email protected].

Ben Wagner is an Assistant Professor and Director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business and a Senior Researcher of the Centre of Internet & Human Rights (CIHR). Email: [email protected].

Diego Werneck Arguelhes is an Associate Professor of Law at Insper Institute for Education and Research, São Paulo, Brazil. Email: [email protected].

Nicolo Zingales is an Associate Professor of Law at the University of Leeds, an Affiliate Scholar at Stanford Center for Internet and Society, and a Research Associate at the Tilburg Institute for Law, Technology and Society and the Tilburg Law and Economics Centre. Email: [email protected].

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

PART I

INTRODUCTION

Chapter 1

Mapping Online Intermediary Liability

Giancarlo Frosio

Intermediary liability has emerged as a defining governance issue of our time. However, modern legal theory and policy struggle to define an adequate framework for the liability and responsibility of online service providers (OSPs). In addition, market conditions, against which the initial regulation was developed, have changed considerably since the first appearance of online intermediaries almost two decades ago. These changes started to be reflected in new policy approaches. Tinkering with this matter, which is in constant flux, brought The Oxford Handbook of Online Intermediary Liability into being. The Handbook will crystallize the present theoretical understanding of the intermediary liability conundrum, map emerging regulatory trends, and qualify political and economic factors that might explain them. In doing so, the Handbook will provide a comprehensive, authoritative, and ‘state-of-the-art’ discussion of intermediary liability by bringing together multiple scholarly perspectives and promoting a global discourse through cross-jurisdictional parallels. The Handbook thus serves as a privileged venue for observing emerging trends in internet jurisdiction and innovation regulation, with special emphasis on enforcement strategies dealing with intermediary liability for copyright, trade mark, and privacy infringement, and the role of online platforms in moderating the speech they carry for users, including obligations and liabilities for defamation, hate, and dangerous speech. With globalized OSPs operating across the world in an interdependent digital environment, inconsistencies across different regimes generate legal uncertainties that undermine both users’ rights and business opportunities. 
To better understand the heterogeneity of the international online intermediary liability regime and contribute to this important policy debate, the Handbook enlisted leading authorities with the goal of mapping the field of online intermediary liability studies. This effort builds on the work of a predecessor, the World Intermediary Liability Map (WILMap), a repository for information

© Giancarlo Frosio 2020.

on international intermediary liability regimes hosted at the Stanford Center for Internet and Society (CIS), which I developed and launched with contributions from many of the co-authors of this Handbook.1 The Handbook’s attempt to study intermediary liability and come to terms with a fragmented legal framework builds on a vast array of other efforts. Besides the WILMap mentioned earlier, mapping and comparative analysis exercises have been undertaken by the Network of Centers,2 the World Intellectual Property Organization (WIPO),3 and other academic initiatives.4 Institutional efforts at the international level are on the rise. The Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial) worked towards the establishment of global provisions on intermediary liability within a charter of internet governance principles.5 The final text of the NETmundial Statement included the principle that ‘[i]ntermediary liability limitations should be implemented in a way that respects and promotes economic growth, innovation, creativity, and free flow of information’.6 The Organisation for Economic Co-operation and Development (OECD) issued recommendations on Principles for Internet Policy Making stating that, in developing or revising their policies for the internet economy, the state members should consider the limitation of intermediary liability as a high-level principle.7 Also, the 2011 Joint Declaration of the three Special Rapporteurs for Freedom of Expression contains statements suggesting an ongoing search for a global regime for intermediary liability.8 The Representative on Freedom of the Media of the
1  World Intermediary Liability Map (WILMap) (a project designed and developed by Giancarlo Frosio and hosted at Stanford CIS) (WILMap).
2  See Berkman Klein Center for Internet and Society, ‘Liability of Online Intermediaries: New Study by the Global Network of Internet and Society Centers’ (18 February 2015) . 
3  See Daniel Sang, ‘Comparative Analysis of National Approaches of the Liability of the Internet Intermediaries’ (WIPO Study); Ignacio Garrote Fernández-Díez, ‘Comparative Analysis on National Approaches to the Liability of Internet Intermediaries for Infringement of Copyright and Related Rights’ (WIPO study).
4  See e.g. for other mapping and comparative exercises, Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017); Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But not Liable? (CUP 2017); Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Wolters Kluwer 2016); Christopher Heath and Anselm Kamperman Sanders (eds), Intellectual Property Liability of Consumers, Facilitators, and Intermediaries (Wolters Kluwer 2012).
5  See NETmundial Multistakeholder Statement (São Paulo, Brazil, 24 April 2014) .
6  ibid. 5.
7  See OECD, ‘Recommendation of the Council on Principles for Internet Policy Making’ C(2011)154 . See also OECD, ‘The Economic and Social Role of Internet Intermediaries’ (April 2010) .
8  See Organization for Security and Co-operation in Europe (OSCE), ‘International Mechanism for Promoting Freedom of Expression: Joint Declaration on Freedom of Expression and the Internet by the United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization

OSCE issued a Communiqué on Open Journalism recognizing that ‘intermediaries have become one of the main platforms facilitating access to media content as well as enhancing the interactive and participatory nature of Open Journalism’.9 Efforts to produce guidelines and general principles for intermediaries also emerged in civil society. In particular, the Manila Principles on Intermediary Liability set out safeguards for content restriction on the internet with the goal of protecting users’ rights, including ‘freedom of expression, freedom of association and the right to privacy’.10 Other projects developed best practices that can be implemented by intermediaries in their conditions of service with special emphasis on protecting fundamental rights.11 For example, under the aegis of the Internet Governance Forum, the Dynamic Coalition for Platform Responsibility aims to delineate a set of model contractual provisions.12 The provisions should be compliant with the UN ‘Protect, Respect and Remedy’ Framework as endorsed by the UN Human Rights Council together with the UN Guiding Principles on Business and Human Rights.13 Ranking Digital Rights is an additional initiative that promotes best practice and transparency among online intermediaries.14 The project ranks internet and telecommunications companies according to their moral behaviour in respecting users’ rights, including privacy and freedom of speech. Several initiatives have looked into notice-and-takedown procedures in order to highlight possible chilling effects and propose solutions. 
Lumen—formerly Chilling Effects—archives takedown notices to promote transparency and facilitate research into the takedown ecology.15 The Takedown Project is a collaborative effort housed at the University of California, Berkeley School of Law and the American Assembly to study notice-and-takedown procedures.16 Again, the Internet and Jurisdiction project has been developing a due process framework to deal more efficiently with transnational notice-and-takedown requests, seizures, mutual legal assistance treaties (MLATs), and law enforcement cooperation requests.17

of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information’ (2011) .
9 ibid.
10  See Manila Principles on Intermediary Liability, Intro .
11  See e.g. Jamila Venturini and others, Terms of Service and Human Rights: Analysing Contracts of Online Platforms (Editora Revan 2016).
12  See Dynamic Coalition on Platform Responsibility: a Structural Element of the United Nations Internet Governance Forum .
13  See United Nations, Human Rights, Office of the High Commissioner, ‘Guiding Principles on Business Human Rights: Implementing the United Nations “Protect, Respect, and Remedy” Framework’ (2011) A/HRC/RES/17/4.
14  See Ranking Digital Rights .
15  See Lumen .
16  See The Takedown Project .
17  See Bertrand de La Chapelle and Paul Fehlinger, ‘Towards a Multi-Stakeholder Framework for Transnational Due Process’, Internet & Jurisdiction White Paper (2014) .


1.  Mapping Fundamental Notions

Following this chapter, Part II of the Handbook maps out fundamental notions and issues in online intermediary liability and platforms’ regulation that will serve as a basis for further analysis in later Parts. It provides a taxonomy of internet platforms, an analysis of evidence-based research in the field, and a general discussion of a possible basis for liability and remedies. In addition, it puts into context intermediary liability regulation with fundamental rights—a theme that will resurface many times throughout the Handbook—and considers the ethical implications of the online intermediaries’ role. Graeme Dinwoodie sets the stage by defining a taxonomy of online intermediaries in Chapter 2. Dinwoodie asks the question Who are Internet Intermediaries? or, as multiple alternatives go, ‘online service providers’ or ‘internet service providers’. The first findings, disappointing as they may be, are that there is little consensus, a condition that is common to all things related to online intermediary liability. Fragmentation and multiple approaches abound. Reconstruction of the notion comes from scattered references in statutes and case law, obviously a suboptimal approach for promoting consistency and legal certainty. However, Dinwoodie does a stellar job of finding patterns within the confusion and bringing together the systematization of other contributors to the Handbook, such as Jaani Riordan and Martin Husovec.18 Within the broad view of online intermediaries endorsed by Dinwoodie, we are left with the option of pursuing an essentialist approach that responds to technology or going beyond it in search of a more dynamic approach unbounded by ephemeral technological references. 
The present debate on the overhaul of the online intermediary liability regime exposes the limitations of the former approach as it is striking how in just twenty years—when the first intermediary liability regulations were enacted—the online market and its technological solutions have changed. A well-known definition from the OECD bears out this point: ‘Internet intermediaries bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.’19 Traditional distinctions between access, hosting, and caching providers and search engines have increasingly become obsolete. More granularity is needed and account should be taken of actors performing multiple functions and services. Jaani Riordan follows up on Dinwoodie’s taxonomy by describing the typology of liability that may attach to online intermediaries’ conduct. Liability rules give rise to two main classes of obligation: monetary and non-monetary. Monetary liability may be further divided along a spectrum comprising four main standards: strict liability; negligence or fault-based liability; knowledge-based liability; and partial or total immunity. Non-monetary liability may be further divided into prohibitory and mandatory
18  See Husovec (n. 4) and Riordan (n. 4).
19  See OECD, The Economic and Social Role of Internet Intermediaries (2010) 9.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Mapping Online Intermediary Liability   7 obligations. As Riordan explains, these rules can be then situated within concepts of primary and secondary liability. The review of the types of liability and intermediary obligations also leads to identifying the main justifications given for imposing liability on inter­medi­ar­ies. On one side, liability rules may attribute blame to secondary actors who have assumed responsibility for primary wrongdoers and intermediary liability rules reflect the secondary actor’s own culpability. Alternatively, secondary liability rules can be justified by reducing enforcement costs by secondary actors who are likely to be least-cost avoiders. Finally, Chapter 3 provides an overview of the types of wrongdoing which may be relevant to intermediaries’ liability, including copyright infringement, trade mark infringement, defamation, hate speech, breach of regulatory obligations, and disclosure obligations. Further chapters will pick up several of these wrongdoings for more detailed discussion. In Chapter  4, Martin Husovec completes this preliminary mapping exercise with an overview of remedies for intermediary liability. Husovec describes damages and injunctions, their scope and goals, while also analysing the costs of those remedies. In Husovec’s view, there are three legal pillars of liability that lead to remedies: primary li­abil­ity, secondary liability, and injunctions against innocent third parties or inter­medi­ ar­ies, which are a more recent development. Applying any of these pillars to inter­medi­ ar­ies has consequences by changing the scope of damages, their aggregation, scope, and goal of injunctions, and their associated costs. This, according to Husovec, launches forms of ‘remedial competition’, where plaintiffs may have an incentive to bring lawsuits against parties that never acted wrongfully themselves, rather than against known tortfeasors that might better redress the wrongdoing. 
In this respect, distinguishing the cost structure of intermediaries as infringers from that of innocent third parties—and therefore exempting innocent third parties from all or part of the burden of compliance and procedural costs—would limit both the incentives to act against innocent third parties and the negative externalities for technological innovation. As Husovec discusses, some jurisdictions address this issue by putting the necessary balancing in place, while others draw no such distinctions. More generally, mapping remedies for intermediary liability leads to the conclusion that much still needs to be done to bring about consistency between different jurisdictions. In search of that consistency, legal systems should look for a common vocabulary and a common set of consequences associated with regulatory modalities.

Chapter 5 wraps up the legal framework that has been in place for almost two decades by looking at it through the lens of empirical evidence. Kristofer Erickson and Martin Kretschmer provide a thoughtful study on the implications of empirical evidence for intermediary liability policy, which builds on previous empirical projects such as the Copyright Evidence Wiki, an open-access repository of findings relating to the effects of copyright.20 As 'we appear to be in the midst of a paradigm shift in intermediary liability, moving from an obligation to act once knowledge is obtained to an obligation to prevent harmful content appearing'—Erickson and Kretschmer note—changes in the legal framework should be based on hard empirical evidence that links those changes to positive externalities for society, while addressing negative externalities that emerged under the previous regimes. Empirical data—which Erickson and Kretschmer identify and discuss—such as the volume of takedown requests, the accuracy of notices, the potential for over-enforcement or abuse, the transparency of the takedown process, and the costs of enforcement borne by different parties, should be assessed in advance of policymaking. Legislative and regulatory changes should then follow from an empirically based policymaking process. All in all, according to Erickson and Kretschmer, the evidence suggests that the notice-and-takedown regime works and that its shortcomings should be 'addressed through tweaking, rather than overhauling, the safe harbour regime'. In addition, data gathering and the transparency of algorithmic decision-making become critical demands for the platform society, which increasingly becomes a 'black box society' where users' lives are daily affected by unaccountable privately-run algorithms.21 Algorithmic accountability will be further discussed by Ben Wagner in Chapter 35.

After mapping fundamental categorizations in the field, Part II also highlights tensions within the system, especially concerning the ethical implications of intermediary liability regulation and frictions with fundamental rights. In Chapter 6, Mariarosaria Taddeo investigates the ethical implications of intermediary liability by describing the moral responsibilities of OSPs with respect to managing access to information and human rights. As designers of online environments, OSPs play a civic role in mature information societies.

20  See 'The Copyright Evidence Wiki: Empirical Evidence for Copyright Policy', CREATe Centre, University of Glasgow.
This role brings with it a responsibility to design OSPs' services according to what is acceptable and socially preferable from a global perspective that can reconcile different ethical views and stakeholders' interests. Applying Floridi's soft ethics to ask what responsibilities the civic role of OSPs entails, Taddeo concludes that OSPs need to develop ethical foresight analyses to assess the impact of their practices and technologies step by step and, if necessary, identify alternatives and risk-mitigating strategies.

In Chapter 7, Christophe Geiger, Elena Izyumenko, and I consider the tension between intermediary liability and fundamental rights, with special emphasis on the European legal framework. Competing fundamental rights, such as freedom of expression, privacy, freedom of business, and the right to property, are entangled in the intermediary liability conundrum. Policymakers are still in search of a balanced and proportionate fine-tuning of online intermediaries' regulation that can address the miscellaneous interests of all stakeholders involved, with special emphasis on users' rights. In this context, the increasing reliance on automated enforcement technologies, which will be the topic of further review in several chapters of the Handbook, might set in motion dystopian scenarios in which users' fundamental rights are heavily undermined.

21 See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015).



2.  Mapping International Fragmentation: From Safe Harbours to Liability

Part III sets out to chart the fragmented field of online intermediary liability by mapping international developments, which have oscillated between safe harbours and liability. This Part presents a jurisdictional overview discussing intermediary liability safe harbour arrangements and highlighting systemic fragmentation and miscellaneous inconsistent approaches. Each chapter in this Part focuses on regional trends, then cherry-picks an important topic for detailed discussion. In this respect, Part III exposes an increasing number of cracks appearing in safe harbour arrangements—where they are in place—and the enhanced responsibilities that multiple jurisdictions cast on online intermediaries and platforms.

Since the enactment of the first safe harbours and liability exemptions for online intermediaries, market conditions have radically changed. Originally, intermediary liability exemptions were introduced to promote an emerging internet market. Do safe harbours for online intermediaries still serve innovation? Should they be limited or expanded? These critical questions—often tainted by protectionist concerns—define the present intermediary liability conundrum. In the mid-1990s, after an initial brief hesitation,22 legislators decided that online intermediaries, both access and hosting providers, should enjoy exemptions from liability for wrongful activities committed by users through their services. The United States first introduced these safe harbours.
In 1996, the Communications Decency Act exempted intermediaries from liability for the speech they carry.23 In 1998, the Digital Millennium Copyright Act introduced specific intermediary liability safe harbours for copyright infringement under more stringent requirements.24 Shortly after, the e-Commerce Directive imposed on EU Member States the obligation to enact similar legal arrangements to protect a range of online intermediaries from liability.25 Other jurisdictions have more recently followed suit.26 In most cases, safe harbour legislation provides mere conduit, caching, and hosting exemptions for intermediaries, together with the exclusion of any general obligation on online providers to monitor the information which they transmit or store, or actively to seek facts or circumstances indicating illegal activity.27

According to Eric Goldman, who provides an overview of the state of intermediaries' immunities in the United States in Chapter 8, even the traditionally strong enforcement of online intermediaries' safe harbours under section 230 of the Communications Decency Act (CDA) shows signs of decay, with critics claiming that the functional life of section 230 is nearing its end and that more regulation is necessary.28 Similarly, the safe harbours for copyright infringement provided for by the US DMCA have also been questioned.29 On the other hand, however, trade agreements like the United States–Mexico–Canada Agreement (USMCA), or NAFTA 2.0, have exported the section 230 arrangement to other countries—Canada and Mexico—both making it a North American standard and limiting the power of Congress to significantly undermine section 230.

In Chapter 9, Juan Carlos Lara Gálvez and Alan Sears follow up by discussing the impact of free trade agreements (FTAs) on internet intermediary liability in Latin America. They note that even where FTAs have been adopted between the United States and Latin American countries, the implementation of the related intermediary liability provisions lags behind. So far, only Chile and Costa Rica have implemented these provisions in their national laws. Notably, after the withdrawal of the United States from the Trans-Pacific Partnership (TPP), the remaining parties reached an agreement on an amended version of the TPP, which actually suspended the provisions pertaining to online intermediaries' safe harbours. In sum, Lara and Sears question whether adopting the DMCA model is ideal for Latin American countries. In any event, they note, there is no consensus in Latin America on whether this would be the best model for balancing the rights of copyright holders and the general public, which explains the resistance to implementing this regime nationally.

Similar resistance to providing immunities for intellectual property infringement is also shown in the Brazilian Marco Civil da Internet (MCI)—or Internet Bill of Rights—which introduces a civil liability exemption for internet access providers and other internet providers.30 This broad civil—and not criminal—liability exemption, however, does not apply to copyright infringement.31 Luiz Moncau and Diego Arguelhes describe the process leading to the enactment of the MCI and its main achievements in Chapter 10.

22  See Bruce Lehman, Intellectual Property and the National Information Infrastructure: The Report of the Working Group on Intellectual Property Rights (DIANE Publishing 1995) 114–24 (noting 'the best policy is to hold the service provider liable . . . Service providers reap rewards for infringing activity. It is difficult to argue that they should not bear the responsibilities').
23  See Communications Decency Act of 1996, 47 USC § 230.
24  See the Digital Millennium Copyright Act of 1998, 17 USC § 512 (DMCA).
25  See Directive 2000/31/EC of the European Parliament and of the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.
26  See e.g. Copyright Legislation Amendment Act 2004 (Cth) no. 154, Sch. 1 (Aus.); Copyright Modernization Act, SC 2012, c.20, s. 31.1 (Can.); Judicial Interpretation no. 20 of 17 December 2012 of the Supreme People's Court on Several Issues concerning the Application of Law in Hearing Civil Dispute Cases Involving Infringement of the Right of Dissemination on Information Networks (Ch.); Federal Law no. 149-FZ of 27 July 2006 on Information, Information Technologies and Protection of Information (Rus.) and Federal Law no. 187-FZ of 2 July 2013 amending Russian Civil Code, s. 1253.1. A repository including most of the safe harbour legislation enacted worldwide can be found at the WILMap (n. 1).
27  See e.g. Directive 2000/31/EC (n. 25) Arts 12–15; DMCA (n. 24) s. 512(c)(1)(A)–(C).
28  See e.g. David Ardia, 'Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act' (2010) 43 Loyola L. Rev. 373.
29  The United States Copyright Office is undertaking a public study to evaluate the impact and effectiveness of the safe harbour provisions. In particular, notice-and-stay-down arrangements—rather than takedown—are under review in the United States as well as elsewhere. See United States Copyright Office, Section 512 Study.
30  See Marco Civil da Internet, Federal Law no. 12.965 (23 April 2014) Art. 18 (Bra.) ('the Internet connection [access] provider shall not be subject to civil liability for content generated by third party').
31  ibid. Art. 19(2).


Although the MCI is simply an ordinary federal statute, it arguably stands out as a manifestation of digital constitutionalism, as Moncau and Arguelhes argue. Following in the footsteps of an emerging global movement,32 the MCI updates and translates the traditional concerns of constitutionalism—protecting rights and limiting power—into a realm where the enjoyment of rights is very much threatened by private companies that hold the resources and tools to shape our online experience.

African countries have been discussing the introduction of a safe harbour regime for online intermediaries for quite some time. In Chapter 11, Nicolo Zingales reviews attempts in several African jurisdictions to adjust the legal framework to the challenges posed by the platform economy. In Intermediary Liability in Africa: Looking Back, Moving Forward?, Zingales highlights how the African Union (AU) has attempted to drive harmonization on the basis of the South African Electronic Communications and Transactions Act—the first and most sophisticated intermediary liability legislation in the area.33 Ghana, Zambia, and Uganda have enacted legislation heavily inspired by the South African model. Meanwhile, AU-led interstate cooperation has raised awareness of online human rights protection, cybersecurity, and personal data protection, in particular through a dedicated AU Convention.34 However, very little intermediary liability legislation has been adopted in the region. In this fragmented legal framework, cyber-policing obligations become entangled with the immunities and the self- and co-regulation schemes that characterize the South African Act, posing challenges to due process and other fundamental rights, such as freedom of expression. According to Zingales, only the AU could promote a shared notion of intermediary liability exemptions in the region.
An inconsistent approach that breeds legal uncertainty also characterizes the Australian law governing the liability of online intermediaries, according to Kylie Pappalardo and Nicolas Suzor, who provide a comprehensive review of the current state of Australian online intermediary liability law across different doctrines, such as the laws of defamation, racial vilification, misleading and deceptive conduct, contempt of court, and copyright. In Chapter 12, Pappalardo and Suzor show that the basis on which third parties are liable for the actions of individuals online is confusing and, viewed as a whole, largely incoherent. Australian law fails to articulate a clear distinction marking the circumstances in which intermediaries will not be held liable, which results in a great deal of uncertainty. These tensions within the Australian doctrines that have been applied to online intermediary liability have led to a push for greater online enforcement and intermediary regulation based on the capacity to do something to prevent harm rather than on responsibility. Pappalardo and Suzor posit that this confusion between capacity and responsibility accounts for much of the uncertainty in Australian intermediary liability regulation. One solution would be for Australian courts to apply responsibility theory in tort law more strictly and ascribe liability only after examining the intermediaries' causal role in committing the wrong, therefore establishing fault first and liability later.

In Chapter 13, Kyung-Sin Park compares the intermediary liability rules of six major Asian countries (China, India, Japan, Korea, Indonesia, and Malaysia) to demonstrate that under the label of safe harbours lies, in fact, a liability trap.35 China and South Korea adopted a rule requiring an intermediary to remove known unlawful content on penalty of liability, and thereby set out to specify when intermediaries will be held liable rather than when they will not, inadvertently creating a liability-imposing rule instead of a liability-exempting one. India's regulation, namely the 2011 Intermediary Guidelines, generated a raft of obligations on intermediaries that threatened to convert the whole system into one imposing, rather than exempting from, liability. However, that threat may have had an impact on the jurisprudence which, by way of the 2013 Shreya Singhal decision,36 made the Indian system an extremely 'safe harbour' by requiring judicial review before infringing content is taken down. Further, Indonesia's draft safe harbour regulation, announced in December 2016, seems to move towards the model of China and South Korea, while Malaysia's copyright notice and takedown seems to follow the US model closely but has a structure that permits the same misunderstanding made by the Korean regulators. All in all, Park shows how, all over Asia, online intermediary liability is on the rise, while the scope of safe harbours narrows proportionally.

32  See Dennis Redeker, Lex Gill, and Urs Gasser, 'Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights' (2018) 80 Int'l Communication Gazette 311.
33  See Electronic Communications and Transactions Act (ECTA) (2001) XI.
34  See African Union Convention on Cyber Security and Personal Data Protection (2014).
In Chapter 14, Danny Friedmann expands on China and explains how the interplay of multiple laws, regulations, and judicial interpretations has produced a system in which weak safe harbours for online intermediaries oscillate heavily towards enhanced liability, given a very broad notion of 'knowledge' and the fact that an OSP without the ability to control copyright or trade mark infringement can still be found liable. This liability-imposing rule ends up putting pressure on intermediaries to take down unlawful—and lawful—content. This seems to imply that filtering standards for OSPs in China will be continuously on the rise, as will their obligations to sanitize their networks against allegedly infringing content. This regulatory approach will be increasingly coupled with predictive artificial intelligence (AI) analytics and deep learning that will allow massive data processors, like Alibaba and Baidu, to become omniscient tools against alleged infringement, both intellectual property (IP) and speech related. However, in a move that is surprisingly similar to that of other jurisdictions, especially the European Union, the Chinese regulatory framework seems to push self-regulation and pressure on OSPs to take on more responsibility, rather than a legislatively mandated duty of care.

In Europe, Asia, South America, Africa, and Australia, the recent international policy debate has focused on the recalibration of safe harbours towards more liability for online intermediaries. As part of its Digital Single Market (DSM) Strategy, the European Commission has narrowed the e-Commerce Directive's horizontal liability limitations for internet intermediaries and put in place a 'fit for purpose'—or vertical—regulatory environment for platforms and online intermediaries.37 In Chapter 15, Maria Lillà Montagnani discusses this development by looking into the emergence of A New Liability Regime for Illegal Content in the Digital Single Market Strategy. The DSM Strategy deploys enhanced obligations that websites and other internet intermediaries should have for dealing with unlawful third party content.38 Legislative developments, including the Copyright in the DSM Directive,39 the amendments to the Audiovisual Media Service Directive,40 and the Guidance on Unfair Commercial Practices,41 have vertically 'enhanced responsibility'42 among online platforms in Europe. These developments aim both to achieve a fairer allocation of the value generated by the distribution of copyright-protected content on online platforms—to close the so-called value gap43—and to lower the transaction costs of online enforcement by shifting the burden of sanitizing allegedly illegal speech onto online intermediaries rather than law enforcement agencies. In this context, Montagnani highlights obvious inconsistencies between the new vertical regimes and the horizontal safeguards for intermediary liability provided by the e-Commerce Directive.44 Fragmentation, enhanced responsibility, and statutory liability thus emerge in tight connection with the expansion of private ordering and voluntary measures, in Europe as much as in other jurisdictions, as earlier noted.

Perhaps, within a fragmented international framework, a common design can be identified as consistently emerging in recent developments of intermediary liability regulation across multiple jurisdictions.

35  The Hong Kong government introduced a Copyright Bill establishing a statutory safe harbour for OSPs for copyright infringement, provided that they meet certain prescribed conditions, including taking reasonable steps to limit or stop copyright infringement after being notified. See Copyright Amendment Bill 2014, C2957, cl. 50 (HK).
36  See Shreya Singhal [2013] 12 SCC 73 (Ind.).
Standards for liability are lowered, safe harbours constricted, and incentives for responsible behaviour, which should mainly be carried out through automated algorithmic monitoring and filtering, increased. Horizontally, online intermediaries have become increasingly involved with semiotic regulation online, which is perceived as a major threat to global stability due to the ubiquity of allegedly infringing behaviours and the enormous transaction costs that enforcement entails. However, standards differ greatly from jurisdiction to jurisdiction, with the strongest protection given to online intermediaries in the United States, while the rest of the world slowly drifts away from the US approach. The legal uncertainty deriving from these miscellaneous approaches in constant flux obviously affects online businesses. This is further exacerbated by a steady increase in vertical regulation, often launched without pondering potential inconsistencies with the horizontal regulation that has now been in place for a few decades.

37  See European Commission Communication, 'A Digital Single Market Strategy for Europe' (2015) COM(2015) 192 final, s. 3.3 (DSM Strategy).
38  ibid. s. 3.3.2 (noting that '[r]ecent events have added to the public debate on whether to enhance the overall level of protection from illegal material on the Internet').
39  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92.
40  See European Commission, 'Proposal for a Directive amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audio-visual media services in view of changing market realities' (25 May 2016) COM(2016) 287 final.
41  See European Commission, 'Commission Staff Working Document—Guidance on the implementation/application of Directive 2005/29/EC on Unfair Commercial Practices' (25 May 2016) SWD(2016) 163 final.
42  European Commission Communication, 'Tackling Illegal Content Online—Towards an enhanced responsibility of online platforms' COM(2017) 555 final.
43  See European Commission Communication, 'Online platforms and the Digital Single Market—Opportunities and Challenges for Europe' COM(2016) 288/2, 8.
44  cf. Directive 2000/31/EC (n. 25) recital 48 (previously establishing that '[t]his Directive does not affect the possibility for Member States of requiring service providers, who host information provided by recipients of their service, to apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities') (emphasis added).

3.  Mapping Subject-Specific Regulation

Mapping online intermediary liability worldwide entails the review of a wide-ranging topic, stretching into many different areas of law and domain-specific solutions. The purpose of Part IV is to consider online intermediary liability for subject-matter-specific infringements. This Part provides an overview of intermediary liability for copyright, trade mark, unfair competition, and privacy infringement, together with internet platforms' obligations and liabilities for defamation, hate, and dangerous speech.

Secondary liability for copyright infringement has increasingly moved centre stage and is the driving issue on the agenda of recent reform proposals. In particular, recent EU copyright reform has struggled to find consensus on new obligations for OSPs.45 The European debate has revolved around the harmonization of national traditions of secondary liability and the alternative of construing online intermediaries as primarily liable for violation of the right of communication to the public. The latter option has finally been endorsed by the European Parliament,46 setting Europe apart from the dominant approach in other jurisdictions. In Chapter 16, Christina Angelopoulos builds on her long-standing research in this domain and reviews the lessons of European tort law for intermediary liability in copyright in order to plot a path for Harmonizing Intermediary Copyright Liability in the EU: A Summary. Traditionally, Member States have relied on home-grown solutions in the absence of a complete EU framework for intermediary accessory copyright liability. Angelopoulos examines the approaches taken in three of the major tort law traditions of Europe: the UK, France, and Germany. This examination shows the emergence of three cross-jurisdictional approaches to intermediary liability: intra-copyright solutions, tort-based solutions, and injunction-based solutions. Existing projects on the harmonization of European tort law, such as the Principles of European Tort Law (PETL),47 may serve as a basis for building a framework for European intermediary liability in copyright. Angelopoulos proposes a negligence-based approach, informed by existing EU and national copyright law and tort law, in line with the emerging doctrine of the Court of Justice of the European Union (CJEU) of fair balance among fundamental rights.48

However, harmonization in this field is also taking place through an expansive notion of communication to the public. Both CJEU case law49 and EU legislation50 have construed some online intermediaries as directly liable for communicating infringing content to the public. In Chapter 17, Eleonora Rosati looks into the Direct Liability of Intermediaries for copyright infringement and disentangles the complexities of the recent CJEU case law on the matter. In the light of recent legislation,51 direct liability reaches beyond platforms that induce infringement by users—where the core business is piracy—as concluded by the CJEU in The Pirate Bay case.52 More broadly, it reaches user-generated content platforms that organize and promote user-uploaded content for profit.53 In addition, no safe harbours will be available to platforms that communicate to the public. Both these points settled, Rosati wonders whether a distinction between primary harmonized liability and secondary unharmonized liability still makes sense. In sum, this expansive construction of the notion of online platforms' direct liability for communication to the public reflects an ongoing move towards their enhanced accountability and liability, especially in the EU.

45  See Giancarlo Frosio, 'To Filter or Not to Filter? That is the Question in EU Copyright Reform' (2018) 36(2) Cardozo Arts & Entertainment L.J. 101–38.
46  See Directive 2019/790/EU (n. 39) Art. 17(1).
In a highly volatile policy environment like that of the United States, where market forces constantly lobby the legislature to obtain better conditions, the recent European developments might lead to a reconsideration of the traditional balance of interests and power embedded in section 512 of the DMCA in favour of stricter regulation of online intermediaries. Presently, although reform activity and litigation have slowed down in the United States, Jack Lerner notes that there are considerable challenges that new entrants to the user-generated content (UGC) market must face to comply with the requirements of the Digital Millennium Copyright Act.54 In discussing Secondary Copyright Infringement Liability and User-Generated Content in the United States, Lerner highlights that online intermediaries and UGC platforms are exposed to uncertainty by the construction of the notions of inducement and wilful blindness, which leaves room for litigation even if they respond to takedown notices expeditiously and actively seek to remove infringing content.

47  See European Group on Tort Law, Principles of European Tort Law.
48  See e.g. C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644, para. 31.
49  See e.g. C-466/12 Nils Svensson et al. v Retriever Sverige AB [2014] ECLI:EU:C:2014:76; C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300; C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.
50  See Directive 2019/790/EU (n. 39) Art. 17(1).
51  ibid.
52  C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.
53  See Directive 2019/790/EU (n. 39) Art. 2(6) and recital 62.
54  See 17 USC § 512.


16   Giancarlo Frosio

Next to rampant global copyright infringement, the internet—and the emergence of the platform economy within it—has seen the widespread availability of counterfeit goods that can be purchased and easily delivered anywhere in the world. Even more than copyright infringement, trade mark infringement can become a sensitive public order concern, especially when it involves drugs or food whose commercialization might put public health at risk. According to Frederick Mostert, even though ‘the lack of uniform international guidelines has made tackling counterfeits in a borderless digital environment even more challenging’, common approaches towards intermediary liability for online trade mark infringement are emerging at the international level. In Chapter 19, Mostert outlines three common tenets that can be distilled into a transnational principle of intermediary liability: ‘[1] upon notice of a specific infringement, [2] an ISP is required to take all proportionate and reasonable measures which a reasonable ISP would take in the same circumstances [3] to address the specific instance of infringement brought to their attention’. This emerging common international principle is coupled with a ius gentium of voluntary measures that results from voluntary cooperation between online intermediaries and rightholders to curb infringement. In this context, Mostert highlights the consistent deployment of voluntary removals, monitoring, algorithmic filtering, ‘follow the money’ approaches, registry systems, advertising codes of practice preventing advertisements on counterfeit websites, and educational campaigns. As Friedmann also explained earlier in Chapter 14, with special regard to Chinese online conglomerates such as Alibaba, the development of this ius gentium of voluntary measures is tightly connected to advances in technological innovation such as AI and machine learning.
However, current technology can barely cope with trade mark infringement, which is potentially even more challenging than copyright infringement. The issue here is twofold. On one side, technology can be easily circumvented by sophisticated infringers. On the other side, a ‘fair balance’ between trade mark protection and other fundamental rights, such as freedom of competition, freedom of expression and information, and the right to privacy, can be hard to achieve through automated enforcement. Chapter 20 magnifies the issue of balancing trade mark protection against social and cultural values from a civil law perspective. Martin Senftleben reviews the CJEU case law on point as well as European national jurisprudence, concluding that there is growing recognition of the need for limitations in trade mark protection that provide breathing space for commercial, artistic, and political freedom of expression. However, Senftleben also warns against a Proliferation of Filter Obligations in Civil Law Jurisdictions, reinforcing the concerns already expressed in other chapters. Overblocking through filtering technologies might easily defy all the safeguards for free expression that recent jurisprudential developments have carved into trade mark law. Although copyright filtering has recently been under the spotlight, over-enforcement of trade marks online should not be underplayed—Senftleben noted—especially in the light of the heavily context-specific nature of trade mark exclusive rights, which makes file-matching technologies inefficient in trade mark enforcement online.55 In this regard, at least in European civil law jurisdictions, responses are fragmented. Senftleben uses the example of Dutch courts imposing a far-reaching filtering obligation only if the intermediary systematically and structurally facilitates the infringing activities. In contrast, German jurisprudence has been less cautious in this domain, using the open-ended Störerhaftung doctrine to develop quite substantial specific monitoring and filtering duties for online intermediaries, as in the eBay and Rapidshare cases.56 This jurisprudence will also be reassessed in Chapter 28 by Sunimal Mendis and me when considering the global emergence of judicially imposed filtering and monitoring obligations.

The peculiarities of the common law perspective on intermediary liability and trade mark infringement are discussed by Richard Arnold in Chapter 21. Arnold crystallizes the teachings of UK case law in this domain, while situating this common law perspective within EU trade mark law,57 the e-Commerce Directive,58 and the Enforcement Directive.59 Arnold, as well as other contributors earlier in this Handbook,60 makes a fundamental distinction between liability stemming from legal principles which are not particular to intermediaries, including primary and accessory liability, and liability depending on the application of principles which are specific to intermediaries, intermediary liability proper. This second type of liability includes injunctions against intermediaries whose services are used to infringe trade marks, made available in national jurisdictions by the implementation of Article 11 of the Enforcement Directive. Although other types of injunctions against intermediaries are available, Arnold focuses on the increasing popularity of website-blocking injunctions, which have recently been ported from the copyright domain, where they have more traditionally been deployed, to the trade mark domain.

55  cf. Graeme Dinwoodie, ‘Secondary Liability for Online Trademark Infringement: The International Landscape’ (2014) 37 Columbia J. of L. & the Arts 463, 498–9.
The Cartier case was the first—and so far the only—European case applying a website-blocking injunction to trade mark infringement.61

Online intermediary liability can also arise as a consequence of infringements of rights and legal interests other than IP rights. In Chapter 22, Valentina Moscon and Reto Hilty explore intermediary liability for unfair commercial practices (UCPs) and trade secret infringement under European law.62 Moscon and Hilty investigate whether and

56  See e.g. Bundesgerichtshof [Supreme Court] (BGH) Rolex v eBay (aka Internetversteigerung II) [19 April 2007] I ZR 35/04 (Ger.); BGH Rolex v Ricardo (aka Internetversteigerung III) [30 April 2008] I ZR 73/05 (Ger.); BGH GEMA v RapidShare [15 August 2013] I ZR 80/12 (Ger.).
57  See Directive 2015/2436/EU of the European Parliament and of the Council of 16 December 2015 to approximate the laws of the Member States relating to trade marks (recast) [2015] OJ L336/1; Regulation 2017/1001/EU of 14 June 2017 on the European Union trade mark [2017] OJ L154/1.
58  See Directive 2000/31/EC (n. 25) Arts 12–15.
59  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/16, Art. 11.
60  See, for an identical or close categorization, Riordan, Husovec, Angelopoulos, and Mostert.
61  See Cartier International AG v British Telecommunications plc [2018] UKSC 28 (UK).
62  See Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No. 2006/2004 of the European Parliament and of the Council (Unfair Commercial Practices Directive) [2005] OJ L149/22; Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure [2016] OJ L157/1.

under which conditions online intermediaries are liable under rules of conduct governing the functioning of the market, and the upcoming DSM in particular. Platforms like TripAdvisor might be considered traders as well as hosting providers, with the potential cumulative applicability of the e-Commerce Directive safe harbours and the obligations governing traders under the UCP Directive. Most likely, Moscon and Hilty conclude, the e-Commerce Directive, as a lex specialis, will prevail over competing legal instruments and provide exemption from liability. However, there is uncertainty regarding the standard of knowledge required for triggering takedown obligations or the obligation to prevent future infringements, or, again, whether injunctive relief similar to that of Article 11 of the Enforcement Directive can apply to UCPs. Where exemptions do not apply, then, liability is a matter of fragmented and unharmonized national tort laws that implement very differently the general prohibition of UCPs contained in EU law. Given the globalized nature of digital markets and their steady growth, which multiplies potential violations by online intermediaries, Moscon and Hilty conclude that the status quo is unsatisfactory and that specific alternatives for UCPs and trade secret violations in digital markets must be developed.

Sanitization of allegedly infringing speech is yet another area where online intermediaries play a primary role. The ‘datafication’ and ‘platformization’ of society has reached every sphere of life, with platforms controlling a flow of information that is more abundant but also more fragmented than that of the traditional mass media.63 Therefore, their long-standing liability exemptions have been challenged almost everywhere, even in the United States, where section 230 of the CDA, however, still holds as a comprehensive protection for online intermediaries against speech-related infringements.
Elsewhere, the messenger can now be more freely shot unless it acts responsibly enough and supports wronged parties and law enforcement agencies in fighting illegal speech online. In Chapter 23, Emily Laidlaw tackles this side of the intermediary liability conundrum, providing a common law perspective on intermediary liability for defamation and dangerous and hate speech. Laidlaw focuses on the Canadian system and other common law jurisdictions. After an introduction to the common law and statutory legal context, Laidlaw puts forward a reform proposal for online defamation that goes under the name of Notice-and-Notice-Plus (NN+). The discussion of this rather subject-specific proposal becomes an opportunity for exploring optimal intermediary liability models for the regulation of other kinds of harmful speech, including fake news, terrorist content, and hate speech.

In Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation, Tarlach McGonagle stresses how the geometry of European regulation has moved towards online intermediaries’ increased liability and enhanced responsibility for illegal third party speech. This change seems to sideline the well-established understanding that freedom of the media must be safeguarded and that regulation should not curb the development of information and communication technologies. McGonagle reviews

63  See José van Dijck, Thomas Poell, and Martijn de Waal, The Platform Society: Public Values in a Connective World (OUP 2018) 46 as cited in Chapter 24.

the case law of the European Court of Human Rights (ECtHR), such as Delfi,64 as well as EU Communications, Recommendations, and Codes of Conduct, and concludes that the greater the seriousness of the perceived harms of certain categories of expression, such as hate speech, the more responsibly online intermediaries are supposed to act. In particular, as noted elsewhere, there is an emergent preference for self-regulatory codes of conduct as a regulatory technique in Europe. As McGonagle notes, however, all the relevant European codes of conduct are less voluntary than they seem: rather than trusting the market with appropriate self-regulatory choices, they put forward a rather coercive approach.

In the information society, the role of private sector entities in gathering information for and about users has long been a most critical issue. Therefore, intermediaries have become a main focus of privacy regulation, especially in jurisdictions with a strong tradition of privacy protection such as Europe.65 In Chapter 25, Miquel Peguera discusses The Right to be Forgotten in the European Union. As Peguera recounts, in a landmark case the CJEU ruled that an internet search engine operator is responsible for processing personal data which appear on web pages published by third parties.66 The Google Spain ruling once again expands the obligations of online intermediaries, of which search engines are a subset. It brings about enhanced responsibility—and transaction costs—for online intermediaries, while entrusting them with an adjudication role that entails a delicate balance between fundamental rights.
That balance has been addressed quite satisfactorily by European institutions and national courts, which have set precise guidance that preserves freedom of expression and the public interest.67 However, in this field too, concerns remain about whether any adjudication role should be entrusted to online intermediaries at all, although the scale of semiotic governance online leaves room for very few sustainable alternatives. At the same time, additional obligations of uncertain applicability, such as the prohibition on processing sensitive data, which should theoretically apply to all data controllers, including those online intermediaries that qualify as such, might be so invasive as to disrupt the business of online intermediaries. Peguera also discusses the hotly debated issue of the geographical scope of the right to be forgotten; that is, its possible extraterritorial global application to .com domains rather than European domains only. Extraterritorial application of intermediary liability obligations is a critical issue that goes beyond enforcement of the right to be forgotten and will be further discussed in Part VI.

64  Delfi AS v Estonia [GC] App. no. 64569/09 (ECtHR, 16 June 2015).
65  See Bart van der Sloot, ‘Welcome to the Jungle: The Liability of Internet Intermediaries for Privacy Violations in Europe’ (2015) 6 JIPITEC 211.
66  See C-131/12 Google Spain SL v Agencia Española de Protección de Datos [2014] ECLI:EU:C:2014:317.
67  See e.g. Art. 29 Working Party (WP29), ‘Guidelines on the implementation of the Court of Justice of the European Union judgment on “Google Spain and Inc v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González” C-131/12’ (2014) 14/EN WP 225. See also Giancarlo Frosio, ‘Right to be Forgotten: Much Ado about Nothing’ (2017) 15(2) Colorado Tech. L.J. 307.

Multiple jurisdictions are trying to cope with ‘right to be forgotten’ demands following the Google Spain ruling.68 The emergence of the right to be forgotten—and its extraterritorial application—follows in the footsteps of a global move towards data protectionism against the de facto market dominance of US internet conglomerates.69 There are plenty of recent examples, including the CJEU’s Schrems decision and Russian Federal Law No. 242-FZ. In Schrems, the CJEU ruled that the transatlantic Safe Harbour agreement—which let US companies use a single standard for consumer privacy and data storage in both the United States and Europe—is invalid;70 whereas Russia introduced legislation requiring that the processing of the personal data of Russian citizens be conducted with the use of servers located in Russia.71 Eduardo Bertoni tackles the global impact of enhanced privacy obligations for online intermediaries in Chapter 26, where he discusses the Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules. Several Latin American countries, including Argentina, Chile, Colombia, Peru, and Uruguay, have been heavily debating the obligations of online intermediaries in connection with the protection of users’ data and personal information. The debate has further involved delisting obligations and proactive filtering generally. In the aftermath of the CJEU Google Spain decision, freedom of expression and privacy advocates have confronted each other in Latin America over proposals to introduce the right to be forgotten or other delisting obligations, which have often been met with strong civil society opposition.
The Belén Rodriguez case in Argentina, for example, endorsed the quite extreme view that no delisting obligation should be imposed on OSPs unless ordered by a court or in a few specific cases of obviously infringing content.72 Bertoni, in particular, warns that de-indexing obligations imposed on search engines would amount to prior restraint of speech and would not comply with the American Convention on Human Rights.73 Again, as Bertoni reports, the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights has been quite straightforward in rejecting delisting obligations à la Google Spain.74

Tensions between fundamental rights and intermediary liability obligations define the essence of the intermediary liability conundrum online. In this context, fragmentation and inconsistencies abound. They are actually steadily growing rather than receding. Obviously, fragmentation

68  ibid. 307–12.
69  See e.g. Maria Farrel, ‘How the Rest of the World Feels About U.S. Dominance of the Internet’ (Slate, 18 November 2016) .
70  See C-362/14.
71  See Federal Law No. 242-FZ of 21 July 2014 ‘on amending certain legislative acts of the Russian Federation as to the clarification of the processing of personal data in information and telecommunications networks’ .
72  See Corte Suprema de Justicia de la Nación [National Supreme Court] Rodríguez, María Belén v Google Inc. /daños y perjuicios [2014] CSJN Case no. 337:1174 (Arg.).
73  See Organization of American States (OAS), American Convention on Human Rights (‘Pact of San Jose’), Costa Rica, 22 November 1969 (entered into force 18 July 1978) Art. 13.
74  See Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, ‘Annual Report’ (15 March 2017) OEA/Ser.L/V/I Doc. 22/17 .

is multiplied vertically across different subject matter and, if common solutions are not rapidly agreed, inconsistencies may soon become so irreconcilable that they will drive a process of Balkanization that breaks down the internet, as discussed in more detail in Part VI of the Handbook.

4.  Mapping Intermediary Liability Enforcement

As should now be quite obvious, the intense debate concerning OSPs’ intermediary liability primarily concerns the involvement of OSPs in online enforcement, to aid law enforcement agencies and wronged parties in curbing widespread illegal behaviour online. Given the scale of the phenomenon, intermediaries have been increasingly identified as the most suitable option for minimizing transaction costs and enhancing the efficacy of enforcement. Part V reviews intermediary liability enforcement strategies by focusing on emerging trends, including notice and action, proactive monitoring obligations across the entire spectrum of intermediary liability subject matter, blocking orders against innocent third parties, and the emergence of administrative enforcement of online infringement. Later, Part VI discusses private ordering and voluntary measures, an additional emerging trend in intermediary liability enforcement. The focus of the review in Part V inevitably magnifies the tensions between enforcement strategies and miscellaneous fundamental rights, including freedom of expression, privacy, and freedom of business. In Chapter 27, Aleksandra Kuczerawy discusses From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression. Kuczerawy describes enforcement mechanisms that are provided to allegedly wronged parties in multiple jurisdictions to seek a remedy directly from an OSP for an infringement that may have occurred through its networks. These are generally known as ‘notice-and-action’ (N&A) mechanisms and—Kuczerawy explains—take many nuanced forms, including most commonly ‘notice-and-take-down’, ‘notice-and-notice’, and ‘notice-and-stay-down’. By providing for the removal or blocking of content, all these mechanisms can interfere with the right to freedom of expression.
Therefore, Kuczerawy examines ‘how different types of N&A mechanisms amplify the risks to free expression and what safeguards they include to prevent such risks from manifesting themselves’. From notice-and-take-down to notice-and-notice and notice-and-stay-down, fundamental rights find themselves increasingly under pressure. The recent move away from the e-Commerce Directive’s notice-and-take-down model to endorse notice-and-stay-down and other filtering obligations in the new Directive on Copyright in the DSM attests to this intensification of the tension between online intermediary regulation and fundamental rights.75

75  See Directive 2019/790/EU (n. 39) Art. 17(4)(b)–(c).

In Monitoring and Filtering: European Reform or Global Trend?, Sunimal Mendis and I develop this very point, taking up from Kuczerawy’s more general discussion by focusing on the widespread deployment of monitoring and filtering obligations through voluntary, judicial, and legislative means. The recent EU copyright reform would de facto force hosting providers to develop and deploy filtering systems, and therefore to monitor their networks.76 As we argue, the solution adopted by the new Directive follows in the footsteps of a well-established path in recent intermediary liability policy: the demise of the principle of ‘no monitoring obligations’. In the same vein, recent case law has imposed proactive monitoring obligations on intermediaries for copyright infringement—such as Allostreaming in France, Dafra in Brazil, RapidShare in Germany, or Baidu in China.77 Actually, the emerging enforcement of proactive filtering and monitoring obligations has spanned the entire spectrum of intermediary liability subject matter, including other IP rights,78 privacy,79 defamation, and hate/dangerous speech.80 In that context, notable exceptions—such as the landmark Belén Rodriguez case that is discussed in detail in Chapter 26—highlight again the fragmented international response to intermediary liability.81

Next, blocking orders against innocent third parties are an additional relevant trend in intermediary liability.
Blocking orders have become increasingly popular in Europe, especially to counter online copyright—and recently also trade mark—infringement.82 Their validity under EU law was recently confirmed by the CJEU in the Telekabel decision.83 Outside the EU, website blocking of copyright-infringing sites has been authorized in countries including Argentina, India, Indonesia, Malaysia, Mexico, South Korea, and Turkey.84 In December 2014, Singapore effected an amendment to its Copyright Act to enable rightholders to obtain website-blocking orders,85 and in 2015 Australia introduced ‘website blocking’ provisions to the Copyright Act.86 These

76  ibid.
77  See Cour de cassation [French Supreme Court] SFR, Orange, Free, Bouygues télécom et al. v Union des producteurs de cinéma et al. [6 July 2017] no. 909 (Fra.) (Allostreaming); Superior Court of Justice Google Brazil v Dafra (24 March 2014) Special Appeal 1306157/SP (Bra.); BGH GEMA v RapidShare (n. 56); Beijing Higher People’s Court Zhong Qin Wen v Baidu [2014] Gao Min Zhong Zi 2045 (Ch.).
78  See BGH Rolex v eBay (n. 56); BGH Rolex v Ricardo (n. 56).
79  See Tribunal de grande instance [High Court] (TGI) Paris Google v Mosley [6 November 2013] (Fra.); Landgericht [District Court] (LG) Hamburg Max Mosley v Google Inc. [24 January 2014] 324 O 264/11 (Ger.); Mosley v Google [2015] EWHC 59 (QB) (UK).
80  See Delfi (n. 64).
81  See Belén (n. 72).
82  See Directive 2004/48/EC (n. 59) Art. 11; Directive 2001/29/EC of the European Parliament and the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10, Art. 8(3).
83  See C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192.
84  See Swiss Institute of Comparative Law, Comparative Study on Filtering, Blocking and Take-down of Illegal Content of the Internet (a study commissioned by the Council of Europe, 20 December 2015) .
85  See Copyright (Amendment) Act 2014, An Act to Amend the Copyright Act (Ch. 63 of the 2006 revised edn).
86  See Copyright Amendment (Online Infringement) Act 2015 (Cth).

measures have been enacted with the aim of curbing IP infringement online, although their negative effects on human rights have been widely highlighted. In Chapter 29, Christophe Geiger and Elena Izyumenko discuss Blocking Orders: Assessing Tensions with Human Rights and consider the standards developed by the jurisprudence of the CJEU and the ECtHR to sustain the difficult coexistence between blocking orders and fundamental rights, including freedom of expression and freedom to conduct a business. Although CJEU case law has recognized users’ rights as enforceable against injunctions that might curb users’ freedom of expression online,87 according to Geiger and Izyumenko, it also ‘shifted a considerable part of the human-rights-sensitive enforcement choices on the intermediaries’. This is a suboptimal solution from a human rights perspective. It is forced on international courts, such as the CJEU, by the need to find a proportional equilibrium between competing rights—given the scale of the transaction costs of online enforcement—according to the doctrine of ‘fair balance’ among fundamental rights, which is also discussed in Chapters 16 and 17. In reviewing the proper balance between freedom to conduct business and blocking orders, Geiger and Izyumenko discuss the allocation of the costs of enforcement; that is, whether the rightholders or the online intermediaries should sustain the costs of blocking. The issue is also discussed in Chapter 20. The UK Supreme Court in Cartier,88 a trade mark infringement case, allocated the costs of blocking and delisting to rightholders, taking the opposite view to the French Cour de cassation in the Allostreaming case,89 a copyright infringement case.

87  C-314/12 (n. 83) para. 57.
88  Cartier (n. 61) para. 31 (‘the ordinary principle is that unless there are good reasons for a different order an innocent intermediary is entitled to be indemnified by the rights-holder against the costs of complying with a website-blocking order’).
89  See Allostreaming (n. 77) (noting that EU provisions ‘do not prevent the costs of the measures strictly necessary for the safeguarding of the rights in question . . . from being borne by the technical intermediaries, even when such measures may present significant costs for the intermediaries. The aforementioned Directives 2000/31 and 2001/29, . . . foresee that notwithstanding the principle of non-responsibility of the intermediaries, the ISPs and hosting providers are required to contribute to the fight against the illegal content and, in particular, against the infringement of copyright and related rights, when they are best positioned to put an end to such violations’, as translated in Oleksandr Bulayenko, Cour de cassation, Urt. v. 6.7.2017 (SFR, Orange, Free et al. / Union des producteurs de cinéma et al.) (translation into English) (2018) 1 GRUR Int 51–53).
Both the UK and French Courts of Appeal had instead decided that the costs of enforcement should be divided equally between the two parties.90 Other courts in Europe, such as the Irish Court of Appeal in the Sony Music case,91 although deciding on the costs of setting up a graduated response scheme rather than website blocking, arrived at a different allocation, imposing 80 per cent of the costs on the online intermediaries and 20 per cent on the rightholders. EU law and CJEU jurisprudence say little in this regard and leave the decision to the national courts on the basis of their

90  See e.g. Cartier Int’l AG and others v British Telecommunications Plc and another [2017] Bus L.R. 1 [100]–[128] (CA) (UK).
91  See Sony Music Entertainment Ireland Ltd v UPC Communications Ireland Ltd [2016] IECA 231 (Ire.) (‘[b]ecause the defendant is the company which profits—albeit indirectly—because it derives revenue from its subscribers who are engaged in this practice, it is the defendant who should, in my view, be primarily liable for the costs’).

national law.92 Of course, this is telling of considerable fragmentation in approaches to intermediary liability, which is especially relevant as it occurs in a matter as sensitive as the allocation of the costs of enforcement. Fragmentation in this context brings about legal uncertainty and higher transaction costs that reflect on the sustainability of the business models of online intermediaries in Europe.

However, blocking orders have also been widely used in many jurisdictions—in particular by administrative authorities—in connection with amorphous notions of public order, defamation, and morality. In this respect, the emergence of administrative enforcement of online intermediary liability appears to be another well-marked trend in recent internet governance. Multiple administrative bodies have been put in charge of enforcing a miscellaneous array of online infringements—primarily against intermediaries and often absent any judicial supervision. Some administrative bodies—such as the Italian Communication Authority (AGCOM), the Second Section of the Spanish Copyright Commission (CPI), and the Greek Committee on Internet Violations of Intellectual Property (CIPIV)—have been given powers to police copyright infringement online and issue blocking orders and other decisions to selectively remove infringing digital works.93 In Chapter 30, Alessandro Cogo and Marco Ricolfi dig deep into the legal and regulatory framework empowering these administrative bodies by studying the Administrative Enforcement of Copyright Infringement Online in Europe. Although administrative procedures are available under both international and EU law for the protection and enforcement of IP rights,94 they must conform to the same principles and safeguards as those for judicial review.
Cogo and Ricolfi find—especially by analysing data resulting from the practical implementation of the Italian administrative enforcement system—that transparency and due process rank very low in these administrative enforcement systems. Worldwide, many other administrative agencies enjoy broader powers of sanitization of the internet. The Russian Roskomnadzor is an administrative body competent to request telecoms operators to block access to websites featuring content that violates miscellaneous pieces of legislation, and to keep a special registry or ‘blacklist’ of websites that violate the law.95 Chapter 33 provides more insight on the functioning of this Russian agency. In South Korea, the Korea Communications Commission

92  See C-314/12 (n. 83) para. 57 (noting only that ‘[an injunction] constrains its addressee in a manner which restricts the free use of the resources at his disposal because it obliges him to take measures which may represent a significant cost for him’); Directive 2001/29/EC (n. 82) recital 59.
93  See AGCOM Regulations regarding Online Copyright Enforcement, 12 December 2013 680/13/CONS (It.); Royal Legislative Decree No. 1/1996, enacting the consolidated text of the Copyright Act, 12 April 1996 (as amended by Law No. 21/2014, 4 November 2014) (Sp.).
94  See Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) (15 April 1994) Marrakesh Agreement Establishing the World Trade Organization, Annex 1C, 1869 UNTS 299, 33 ILM 1197, Art. 41(2); Directive 2000/31/EC (n. 25) Arts 12–14 (all stating that these provisions do ‘not affect the possibility for a court or administrative authority . . . of requiring the service provider to terminate or prevent an infringement’).
95  See Federal Law no. 139-FZ of 28 July 2012 on the Protection of Children from Information Harmful to Their Health and Development and Other Legislative Acts of the Russian Federation (aka ‘Blacklist law’) (Rus.).

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

implements deletion or blocking orders according to the request and standards of the Korea Communications Standards Commission ‘as necessary for nurturing sound communications ethics’.96 In Turkey, the law empowers the Presidency of Telecommunications (TIB) to block a website or web page within four hours without any judicial decision for the violation of a new category of crimes labelled as ‘violation of private life’ or privacy.97 In India, section 69(A)(1) of the IT Act provides the government with the ‘power to issue directions for blocking for public access of any information through any computer resource’,98 which is dealt with by a special committee examining within seven days all requests received for blocking access to online information.99 Many other national administrative authorities—such as the Supreme Council of Cyberspace in Iran or CONATEL in Venezuela—do issue orders against ISPs regarding the legality, blocking, and removal of online content, which do not involve—or involve very limited—judicial review.100 Concerned views have been voiced against administratively issued blocking orders, which could undermine basic due process guarantees.101

5.  Mapping Private Ordering and Intermediary Responsibility

As anticipated, private ordering has been emerging powerfully as a privileged tool for online enforcement. Part VI considers whether policymakers—and interested third parties such as IP rightholders—try to coerce online intermediaries into implementing enforcement strategies through voluntary measures and self-regulation, in addition to legally mandated obligations. As Martin Husovec argued, EU law, for example, increasingly forces internet intermediaries to work for the rightholders by making them accountable even if they are not tortiously liable for the actions of their users.102 Bringing pressure on innocent third parties that may enable or encourage violations by others is a 96 See Act on the Establishment and Operation of Korea Communications Commission, last amended by Act no. 11711 of 23 March 2013 (Kor.). 97  See Omnibus Bill no. 524 of 26 June 2013, amending provisions in various laws and decrees including Law no. 5651 on regulation of publications on the internet and suppression of crimes committed by means of such publications, Law no. 809 ‘Electronic Communications Law’ and others (Tur.). 98  See Information Technology Act 2000, as amended by the Information Technology (Amendment) Act 2008, Art. 69(A)(1). 99  See Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules 2009 (to be read with s. 69A of the IT Act), rule 7 (Ind.). 100  See Executive Order of the Supreme Leader Establishing the Supreme Council of Cyberspace, March 2012 (Iran); Ley de Responsabilidad Social en Radio Televisión y Medios Electrónicos (ResorteME) [Law of Social Responsibility in Radio-Television and Electronic Media], Official Gazette no. 39.579 of 22 December 2012 (Ven.). 101  See e.g. Manila Principles (n. 10) Principle no. 2 (stating that content must not be required to be restricted without an order by a judicial authority). 102  See Husovec (n. 4).


well-established strategy to curb infringement. As also discussed by Riordan in Chapter 3, intermediaries’ secondary liability has been based on different theories ranging from moral to utilitarian approaches. A moral approach would argue that encouraging infringement is widely seen as immoral.103 The second approach is associated with utilitarian and welfare theories.104 Welfare theory approaches have been dominant in intermediary liability policy until recently. They have been based on the notion that liability should be imposed only as a result of a cost–benefit analysis. However, there is an ongoing revival of moral approaches to intermediary liability with justification for policy intervention based on responsibility for the actions of users as opposed to efficiency or the balancing of innovation against harm. The discourse is increasingly shifting from liability to ‘enhanced responsibility’105 of online intermediaries under the assumption that OSPs’ role is unprecedented for their capacity to influence the informational environment and users’ interactions within it. The European Commission stressed that ‘the responsibility of online platforms is a key and cross-cutting issue’.106 This policy development puts special focus on intermediaries’ corporate social responsibilities and their role in implementing and fostering human rights.107 However, emphasis on a responsible role for intermediaries in fostering human rights has a flip side when multiple competing rights are at stake. Online intermediaries are unequipped—and lack constitutional standing—for making decisions involving a proportional balancing of rights. As Calabresi–Coase’s ‘least-cost avoiders’,108 online intermediaries will inherently try to lower the transaction costs of adjudication and liability and, in order to do so, might functionally err on the side of overblocking, in particular by deploying automated algorithmic enforcement tools.
Again, this policy trend leads to incremental fragmentation as enforcement is handled directly by miscellaneous private entities from multiple jurisdictions through proprietary automated means applying corporate visions and disparate terms of service. Of course, there are also 103  See Richard Spinello, ‘Intellectual Property: Legal and Moral Challenges of Online File Sharing’ in Ronald Sandler (ed.), Ethics and Emerging Technologies (Palgrave Macmillan 2013) 300; Mohsen Manesh, ‘Immorality of Theft, the Amorality of Infringement’ (2006) 2006 Stan. Tech. L. Rev. 5; Richard A. Spinello, ‘Secondary Liability in the Post Napster Era: Ethical Observations on MGM v. Grokster’ (2005) 3(3) J. of Information, Communication and Ethics in Society 121; Geraldine Szott Moohr, ‘Crime of Copyright Infringement: An Inquiry Based on Morality, Harm, and Criminal Theory’ (2003) 83 BU L. Rev. 731. 104  See Reinier Kraakman, ‘Gatekeepers: the Anatomy of a Third-Party Enforcement Strategy’ (1986) 2(1) J. of Law, Economics and Organization 53. 105  European Commission Communication, ‘Tackling Illegal Content Online. Towards an enhanced responsibility of online platforms’ COM(2017) 555 final. 106 European Commission, ‘Communication on online platforms and the digital single market: opportunities and challenges for Europe’ COM(2016) 288 final, 9. 107  See e.g. United Nations Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (20 June 2014) A/HRC/RES/26/13 (addressing inter alia a legally binding instrument on corporations’ responsibility to ensure human rights). See also Emily Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (CUP 2015); Mariarosaria Taddeo and Luciano Floridi, ‘The Debate on the Moral Responsibility of Online Service Providers’ (2015) 22(6) Sci. Eng. Ethics 1575, 1575–603. 108 See Guido Calabresi, ‘Some Thoughts on Risk Distribution and the Law of Torts’ (1961) 70 Yale L.J. 499; Ronald Coase, ‘The Problem of Social Cost’ (1960) 3 J. L. & Econ. 1.


counter-posing forces at work in the present internet governance struggle. As seen in Chapter 10, a centripetal move towards digital constitutionalism for internet governance alleviates the effects of the centrifugal platform responsibility discourse.109 This move from intermediary liability to platform responsibility has been occurring on several levels—being apparent from the deployment of miscellaneous enforcement strategies that will be detailed in Part VI. Governments everywhere—and the European Commission in particular—push coordinated self-regulatory efforts by major online hosting providers—including Facebook, Twitter, YouTube, Microsoft, Instagram, Snapchat, and Dailymotion—to promote codes of conduct endorsing commitments to combat the spread of illegal hate speech online,110 fight incitement to terrorism,111 prevent cyberbullying,112 or curb IP infringement online.113 Martin Husovec and I introduce this discussion in Chapter 31, where we describe several emerging legal trends reflecting this change in perspectives, such as, most obviously, voluntary agreements and codes of conduct, but also other legal arrangements, including three-strikes schemes, online search manipulation, follow-the-money strategies, voluntary filtering and website blocking, and private Domain Name System (DNS) content regulation. Under these agreements, schemes, and enforcement strategies, access and hosting providers would be called on to remove illegal materials actively and swiftly, instead of reacting to complaints. Of course, some of these enforcement tools are also discussed in many other chapters of the Handbook as stand-alone items and from a legal liability rule perspective.
However, as Husovec and I note, ‘legal rules are often only basic expectations which are further developed through market transactions, business decisions, and political pressure’; therefore, the practical responsibility landscape is wide-spanning, ‘ranging from legal entitlements to request assistance in enforcement to entirely voluntary private-ordering schemes’. Equally, the typology of OSPs enlisted in voluntary online enforcement strategies broadens steadily beyond the traditional access and hosting providers. On the IP enforcement side, payment blockades—notice-and-termination agreements between major rightholders and online payment processors—and ‘voluntary best practices agreements’ have been applied widely.114 Both the European Commission and the US government have endorsed a ‘follow-the-money’ approach seeking to ‘deprive those engaging in commercial infringements of the revenue streams (for example from consumer payments and advertising) emanating from their illegal activities, and therefore act as a

109 See Lex Gill, Dennis Redeker, and Urs Gasser, ‘Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights’ (2018) 80(4) Int’l Communication Gazette 302, 302–19. 110  See European Commission, ‘The EU Code of conduct on countering illegal hate speech online’ . 111  See European Commission, ‘Recommendation on measures to effectively tackle illegal content online’ C(2018) 1177 final. 112  See European Commission, ‘Communication on online platforms’ (n. 106) 10. 113  See Directive 2019/790/EU (n. 39) Art. 17(10). 114  See Annemarie Bridy, ‘Internet Payment Blockades’ (2015) 67 Florida L. Rev. 1523. See also Derek Bambauer, ‘Against Jawboning’ (2015) 100 Minnesota L. Rev. 51.


deterrent’.115 Payment processors like MasterCard and Visa have been pressured to act as IP enforcers as well as enforcers of speech-related infringements. Law enforcement agencies have tried to coerce payment providers to stop providing service to websites like Backpage, for the adult section it runs,116 or Wikileaks. Annemarie Bridy focuses on one additional emerging enforcement strategy involving non-conventional intermediaries in Addressing Infringement: Developments in Content Regulation in the United States and the DNS. In Chapter 32, Bridy describes how the reach of privately ordered online content regulation is deepening by migrating downwards from the application layer into the network’s technical infrastructure, specifically the Domain Name System (DNS). Private agreements between DNS intermediaries and IP rightholders are based on a ‘pass-along’ provision in the ICANN-Registry Agreement stating that a domain name can be suspended if the registrant is found to have engaged in copyright or trade mark infringement. On the basis of this clause in the ICANN-Registry Agreement, registry operators and rightholders have set up ‘trusted notifier’ agreements to fast-track domain name suspensions—and hence site blocking—when a ‘trusted notifier’ sends a complaint to the registry operator. Bridy highlights the lack of transparency and due process in this privately-ordered form of enforcement, which is also heavily biased in favour of complainants. Bridy also notes that the notice-and-action procedures institutionalized by these IP-focused agreements are readily adaptable for use in censoring all types of content. Increased intermediary accountability has become a globalized trend that has been emerging in numerous jurisdictions. In this regard, online intermediaries are not only held liable for IP, privacy, or defamation infringements, but are also held responsible for state security.
Several countries enlist private business in the enforcement of state controls over the internet. In Chapter 33, Sergei Hovyadinov looks exactly at this expanded scope of online intermediaries’ responsibility by presenting Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet. According to Hovyadinov, since 2011–12, the Russian government drastically changed its stance on internet regulation. The rapid expansion of the internet in Russia—and its potential for triggering social unrest—has led to the adoption of significant regulatory restrictions on online content and anonymity. Regulation increasingly restricted the type of information available online and allowed the state to collect user data and online activity. As part of this development, telecom operators, web-hosting providers, and social media platforms have become an integral part of the state’s internet control apparatus. Hovyadinov reports—also thanks to interviews with sector operators in Russia—that, ‘[f]aced with new technical challenges, the Kremlin has enlisted competent and technically capable internet actors . . . to help implement these restrictions and control the flow of information.’ Actually, at approximately the same time, similar developments 115 See European Commission Communication, ‘Towards a Modern More European Copyright Framework’ COM(2015) 260 final, 11. See also Office of the Intellectual Property Enforcement Coordinator, Supporting Innovation, Creativity & Enterprise: Charting a Path Ahead (US Joint Strategic Plan for Intellectual Property Enforcement FY 2017–2019) (2017) 61ff. 116 See Backpage v Dart, no. 15–3047 (7th Cir. 2015) (US) (upholding an injunction against Sheriff Dart for his informal efforts to coerce credit card companies into closing their accounts with Backpage).


occurred in many jurisdictions besides Russia. Chapters 13 and 14 describe similar dynamics occurring in China and other Asian countries. Chapter 11 stresses the enlisting of OSPs as cyber-police in African countries. Several chapters also report similar trends in the European regulatory framework. Although Hovyadinov notes that transparency and public accountability of private involvement in internet regulation in Russia is especially low, numerous governments, including western governments, rely on OSPs as their ‘online proxy agents’ to assist with the enforcement of IP rights, data protection, speech-related infringements, and state security. Privately-ordered content moderation defines the contours of the infosphere where social interaction occurs for billions of people daily.117 The impact of online content moderation on modern society is tremendous. Governments, along the lines of what Hovyadinov has described in Chapter 33, rightholders, and miscellaneous user groups would like to shape the gatekeeping functions of OSPs according to their agendas. In Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law, Niva Elkin-Koren and Maayan Perel continue the discourse undertaken in this Part by highlighting challenges posed by content moderation to the rule of law. Elkin-Koren and Perel reinforce the point that private ordering ‘circumvents the constitutional safeguard of the separation of powers’ and blurs the distinction between private interest and public responsibility. Chapter 34, then, introduces a critical point in the current debate on intermediary liability online: socially relevant choices are delegated to automated enforcement run through opaque algorithms.
Algorithms’ transparency and accountability remain an issue challenging semiotic regulation online.118 In addition, Elkin-Koren and Perel stress that machine learning and data analytics allow OSPs to proactively predict and prevent illicit use of content, bringing to life the omniscient platforms that Friedmann recalled in Chapter 14. Omniscient platforms that give a not-so-invisible handshake119 to government for cybersecurity, surveillance, censorship, and general law enforcement tasks through opaque algorithms evoke threatening dystopian scenarios. Elkin-Koren and Perel suggest that the solution to black box content moderation can be found in grassroots oversight through ‘tinkering’,120 which would allow people to ‘systematically test and record how online intermediaries respond to representative, like-real content’ submitted to the platforms. Apparently, there is an emerging strategy for regulation of online platforms leaning towards a globalized, ongoing move in the direction of privatization of law enforcement 117  See Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale U. Press 2018). 118  See Joshua Kroll, Joanna Huey, Solon Barocas, Edward Felten, Joel Reidenberg, David Robinson, and Harlan Yu, ‘Accountable Algorithms’ (2017) 165 U. Pa. L. Rev. 633; Nicholas Diakopoulos, ‘Accountability in Algorithmic Decision Making’ (2016) 59(2) Communications of the ACM 56, 56–62; Nicholas Diakopoulos and Michael Koliska, ‘Algorithmic Transparency in the News Media’ (2016) Digital Journalism. 119  See Michael Birnhack and Niva Elkin-Koren, ‘The Invisible Handshake: The Reemergence of the State in the Digital Environment’ (2003) 8 Va. J. L. & Tech. 6. 120 See also Maayan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Fla. L. Rev. 181, 193.


online through algorithmic tools. Algorithmic enforcement makes this shift even more unpredictable in terms of fair balancing between private and public interests and human rights. In Chapter 35, Ben Wagner tries to shed some light on this murky issue by discussing Algorithmic Accountability: Towards Accountable Systems. Given the early stage of human engagement with AI, the essential basics—and the precise nature—of the notion of algorithmic accountability are still under review. Basically, according to Wagner, there is still a high level of uncertainty regarding ‘to whom’ and ‘for what’ algorithms should be accountable. An initial basic finding, which fits within Elkin-Koren and Perel’s conclusions, is that algorithms should be at least accountable to users. In this respect, access to the source code might provide some accountability but users should be enabled to understand what the algorithm is actually doing. In order to do so, Wagner lists a number of technical, organizational, and regulatory challenges to ensuring access to data. Considering intermediary liability and algorithmic accountability more closely, Wagner believes that ‘a proposal in which adherence to algorithmic accountability would lower liability of intermediaries could contribute to more effectively ensuring compliance with human rights’. Specific provisions for ensuring algorithmic accountability might include transparency of automated decisions, external auditing of the algorithmic accountability mechanism, an independent external body addressing complaints, ‘radical transparency’ of the system, irrevocable storage of the decisions, user-friendly explanation of the automated decisions, and availability of human review of the decisions.

6.  Mapping Internet Jurisdiction, Extraterritoriality, and Intermediary Liability

Finally, international private law issues are addressed in Part VII. The purpose of this Part is to examine the interface between intermediary liability online and internet jurisdiction, with special emphasis on extraterritorial enforcement of intermediaries’ obligations, which has been emerging as a consistent trend in intermediary liability policy. The term ‘fragmentation’ has surfaced several times in the last few pages. Rules governing online intermediaries are complex and diverging in numerous jurisdictions. As is apparent from the preliminary mapping in this chapter, divergence has recently been expanding rather than becoming normalized. This phenomenon is perhaps closely tied to the protectionist impulses that characterize present international relationships and internet governance. In Chapter 36, Dan Jerker Svantesson rates this ‘as the most important, and perhaps most urgent, underlying issue facing the internet—there is a fundamental clash between the global, largely borderless, internet on the one hand, and the practice of lawmaking and jurisdiction anchored in a territorial thinking’. In discussing Internet Jurisdiction and Intermediary Liability, Svantesson points to several issues, including the validity of OSPs’ terms of service and law enforcement requests to


access data stored abroad, but the key development that threatens the global internet relates to the geographical scope of online intermediaries’ liability and obligations. In particular, recent case law and policymaking is faced with answering the riddle of the extraterritorial application of such obligations. Extraterritorial enforcement recently made the headlines for the worldwide enforcement of the ‘right to be forgotten’. Some European institutions endorse the view that delisting should have an extraterritorial reach. On the territorial effect of delisting decisions, the WP29 Guidelines noted that limiting delisting to EU domains cannot be considered a sufficient means to satisfactorily guarantee the rights of data subjects according to the ruling. In practice, ‘this means that in any case de-listing should also be effective on all relevant .com domains’.121 Recently—in accordance with the WP29 Guidelines—the Commission Nationale de l’informatique et des Libertés (CNiL), the French data protection authority, ordered Google to apply the right to be forgotten on all domain names of Google’s search engine, including the .com domain.122 The question raised by CNiL was finally decided by two recent CJEU cases—with a second case dealing with global delisting of a defamatory post on Facebook. The CJEU concluded that EU law does not impose or preclude worldwide measures.123 Instead, it is up to national courts to decide whether extraterritorial delisting should be imposed according to their own balancing of fundamental rights and application of international norms.124 The CJEU’s final stance partially departed from Advocate General Szpunar’s Opinion warning against the unqualified application of worldwide delisting and blocking orders.125 Svantesson shares similar critical views against worldwide measures. However, in a time of ‘sovranism’, a stronger stance on digital sovereignty is to be expected.
Of course, this approach is set to create conflicts between the law of the affected party and the law of the speaker. However, national courts can hardly endorse a different approach—or EU law impose it on national courts. If international law allows it, a national court finding that the fundamental rights of its nationals have been infringed must impose measures that provide global redress. Meanwhile, decisions imposing extraterritorial effects on intermediaries have also appeared elsewhere. Svantesson discusses additional ones from Australia and the United States, such as X v Twitter126 and Garcia v Google respectively.127 Notably, the 121 Article 29 Data Protection Working Party, ‘Guidelines on the Implementation of the CJEU Judgment on Google Spain v. Costeja’ (26 November 2014) 14/EN WP 225, 3. 122  See CNiL Restricted Committee Decision no. 2016–054 of 10 March 2016 . See also ‘CNiL Orders Google to Apply Delisting on All Domain Names of the Search Engine’ (CNiL, 12 June 2015) . 123  See C-507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:772, para. 72; C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd [2019] ECLI:EU:C:2019:821, paras 50–2. 124 ibid. 125 C-507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:15, Opinion of AG Szpunar, para. 36. 126  [2017] NSWSC 1300. 127 See Cindy Lee Garcia v Google Inc. and others, 786 F.3d 733 (9th Cir. 2015) (US).


Supreme Court of Canada issued an order requiring Google to remove websites from its worldwide index. The court order is unprecedented for Canada as it forces Google to remove links anywhere in the world, rather than only from the search results available through Google.ca.128 In Chapter 37, Michael Geist discusses The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet. Equustek stands as a quintessential example of the disruptive effect of extraterritorial enforcement and the stalemate that it might bring about. After the Canadian Supreme Court issued its final order for global delisting, a district court in San Jose, California issued a diametrically opposed decision stating that Google would be infringing US law if it enforced the Canadian order. Twenty years later, the catch-22 scenario of Licra v Yahoo! resurfaces in a far more distributed fashion and might potentially break the internet.129 The 2000 hate speech and Nazi memorabilia case showed the world that internet jurisdiction could become an unsolvable puzzle. For a couple of decades, though, courts shied away from the controversy so as not to awaken the dormant kraken. The kraken that can destroy the internet—or at least Balkanize it—has been reawakened. According to Geist, only two outcomes seem plausible as well as precarious: ‘local courts deciding what others can access online or companies such as Google selectively deciding which rules they wish to follow’.
Bertrand de La Chapelle and Paul Fehlinger further develop this precarious state of internet affairs by noting that ‘[e]xtraterritorial extension of national jurisdiction is becoming the realpolitik of internet regulation.’ In Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation, de La Chapelle and Fehlinger argue from a global internet governance perspective that conflicts between jurisdictions online increase steadily, challenging the Westphalian international system. Possibly, as the case law reviewed by Svantesson and Geist has already shown, a legal arms race will escalate with countries exerting digital sovereignty through an extensive interpretation of territoriality criteria over cross-border data flows and services. For de La Chapelle and Fehlinger, traditional legal cooperation cannot cope with internet jurisdictional tensions, opening an uncertain path for the future of the global digital economy, human rights, cybersecurity, and the technical internet infrastructure. The route to pursue would be that of an international agreement on the matter. Unfortunately, there is no consensus in sight and this is not going to change any time soon. This institutional gap in internet governance—de La Chapelle and Fehlinger stress—may be solved by launching innovative cooperation mechanisms as transnational as the internet itself through the development of issue-based multistakeholder policy networks for developing ‘scalable solutions for cross-border legal challenges with regard to data flows, online content or domains’. Given the role of online intermediaries in the digital interconnected society, their liability for the speech and content they carry has become a primary policy concern. Much has changed since the inception of the first online intermediaries—and their early regulation. New challenges have renewed discussion of the scope of intermediaries’ 128  Equustek Solutions Inc. v Google [2017] SCC 34 9 (Can.).
129  TGI Paris LICRA & UEJF v Yahoo! Inc [20 November 2000] (Fra.).


duties and obligations. The Oxford Handbook of Online Intermediary Liability seeks to understand a confused international legal framework. The uncertainty that this confusion brings about can hurt users by potentially scaring companies away from providing innovative new services in certain markets. Additionally, companies may unnecessarily limit what users can do online, or engage in censorship by proxy to avoid uncertain retribution under unfamiliar laws. National courts and authorities, on the other hand, may seek extraterritorial enforcement to prevent any access to infringing materials in their jurisdiction. This is telling of a disconnection between physical and digital governance of information and content that will hardly go away, at least for some time. As a result, in an apparently confused legal and theoretical landscape, there is a growing tendency towards internet fragmentation, which is made even more obvious by unconcealed national tendencies towards data protectionism.



PART II

MAPPING FUNDAMENTAL NOTIONS



Chapter 2

Who Are Internet Intermediaries?

Graeme Dinwoodie

As this Handbook plainly attests, much has been written in recent years on the liability of ‘internet intermediaries’.1 Yet, it is not altogether clear who is an ‘internet intermediary’ (or ‘online intermediary’), the object of this intense scholarly attention. Tarleton Gillespie’s 2018 article ‘Platforms Are Not Intermediaries’ clearly invoked an understanding of the term (probably meaning they are not passive neutral conduits)2 quite different from that of most legal scholars.3 But even among legal scholars (and courts and legislatures) there is little consensus or clarity. As Jaani Riordan has explained: The term ‘internet intermediary’ is [an] unhappy abstraction. These words must be used to describe many entities which seem to share little in common, other than activity that uses electronic computer networks . . . They are a genus whose many members’ common features have never been systematically identified. These difficulties partly explain their tendency to elude precise definition.4

This chapter considers several different ways in which we can better understand the category of online intermediaries. It does so by acknowledging important scholarly efforts 1 See e.g. Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But not Liable? (CUP 2017). 2  See Tarleton Gillespie, ‘Platforms are Not Intermediaries’ (2018) 2 Geo. L. Tech. Rev. 198; see also Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale U. Press 2018). 3  Cf. David Llewelyn, ‘Intellectual Property Liability of Consumers, Facilitators and Intermediaries: Concepts Under Common Law’ in Christopher Heath and Anselm Kamperman Sanders (eds), Intellectual Property Liability of Consumers, Facilitators and Intermediaries (Wolters Kluwer 2012) s. 2.02, at 18 (‘This paper regards “intermediaries” as those who have played an active role in the transaction, providing an essential element for the act of infringement; “facilitators”, on the other hand, do not play this role at all, they are mere conduits and their role is a passive one’). 4  See Riordan (n. 1) s. 2.12, at 29.

© Graeme Dinwoodie 2020.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

to produce a detailed taxonomy of online intermediaries. Riordan himself has produced a remarkably useful taxonomy of internet services—developed by reference to the way in which a party’s services fit within the internet’s architecture—on which it would be nigh impossible to improve.5 An analysis of positive law in a number of countries reveals some lessons about the meaning of the term, as well as alternative legal categories of online actors that might be seen as subsets of, or overlapping with, ‘online intermediaries’. Most notably, the concepts of ‘online service provider’ or ‘internet service provider’ serve as rough, but not exact, proxies for internet intermediaries.6 And more recent legislative innovations in Europe have carved out further subsets, such as the ‘online content-sharing service providers’ which will be subject to additional obligations under the 2019 Directive on Copyright in the Digital Single Market.7 By and large, these are technical statutory definitions adopted for very particular legal purposes, but they are considered here because they introduce a range of distinctions among internet intermediaries. This chapter also pursues a more typological approach to the classification question, while expounding on the important role that Riordan’s functional taxonomy (or those of others) might play.8 Clearly, law must have regard to the empirical reality of how actors behave when constructing categories. But those legal categories are often driven by other considerations, as are our policy debates. By adopting a somewhat more conceptual approach, I hope to develop understandings that will allow us to debate the liability of online intermediaries across borders and across time, given that the technological features and social role of online intermediaries are constantly evolving.

1.  Definitions of ‘Internet Intermediaries’

The term ‘internet intermediary’ has almost no formal definition in statutes or international treaties.9 This is not to say that the term ‘intermediary’ is not found in such instruments. For example, Article 11 of the EU Enforcement Directive and Article 8(3) of the EU Information Society Directive each require that Member States ensure that ‘right holders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an intellectual property right’.10 For the purpose of such injunctions—which might be called ‘accountability orders’11—neither Directive defines the term ‘intermediary’. But the Court of Justice of the European Union has been faced with cases in which the defendant against whom such an order was sought has contested its status as an ‘intermediary’. These challenges to intermediary status have largely failed. Thus, an internet access provider is an intermediary.12 So, too, is an online marketplace, a social network, and the operator of an open wireless network.13 Drawing on recital 59 of the Information Society Directive, the Court of Justice suggested in Telekabel that an intermediary must ‘carry a third party’s infringement in a network’.14 But this is implicit in the ordinary concept of ‘intermediation’, as reflected in Jaani Riordan’s observation that ‘internet intermediaries are united by the attribute that they are necessary but insufficient causes when online wrongdoers utilize their services or status to cause harm to others’.15 And the language of the Court’s quite conclusory reasoning appears to impose few limits. The suggestion in Tommy Hilfiger Licensing LLC v Delta Center that ‘[f]or an economic operator to fall within the classification of “intermediary” within the meaning of those provisions, it must be established that it provides a service capable of being used by one or more other persons in order to infringe one or more intellectual property rights’ brings within the concept a wide array of actors.16 In Hilfiger, the Court did not decide the hypothetical question whether suppliers of electricity were intermediaries, but the language admits of this possibility.17 Scholars have suggested that ‘[f]urther intermediaries likely to be covered [by Art. 8(3) of the Information Society Directive and Art. 11 of the Enforcement Directive] include providers of payment or anonymizing services, domain name authorities, search engines, VPN providers, or providers of other technical infrastructure.’18 In short, the Court of Justice has embraced a capacious understanding of the term ‘intermediary’. As Martin Husovec notes in his foundational work on accountability orders, ‘although the Directive uses the term “intermediary” [as the potential subject of an accountability order] it is becoming increasingly clear that the underlying concept has very little to do with any intermediary role. The generous protective hand of injunctions basically extends to everyone who engages in an economic activity, in course of which he or she is in a position to prevent third party wrongdoing. Any definitional exercises seem to be by now a mere windowdressing for the purpose of satisfying black letter law.’19

5  ibid. ss. 2.40–2.86, at 36–46.
6  See text accompanying nn. 104–6.
7  See ibid.
8  Different disciplines comprehend taxonomies and typologies in different ways. Without engaging with that abstract philosophical question, I use ‘taxonomy’ to suggest more empirically observable features of different online intermediaries and ‘typology’ to suggest more conceptual groupings along a number of variables that should inform how we treat internet intermediaries.
9  But see Comprehensive Economic and Trade Agreement Between Canada of the One Part, and the European Union and its Member States, of the Other Part, Can-EU, 30 October 2016 [CETA], OJ L/11 (14 January 2017) 23, Art. 20.11 (mandating safe harbours for ‘intermediaries’ including conduit, hosting, and caching, but also permitting safe harbours for other functions, ‘including providing an information location tool, by making reproductions of copyright material in an automated manner, and communicating the reproductions’). Art. 18.82 of the Trans-Pacific Partnership Agreement noted that ‘the Parties recognise the importance of facilitating the continued development of legitimate online services operating as intermediaries and, in a manner consistent with Article 41 of the TRIPS Agreement, providing enforcement procedures that permit effective action by right holders against copyright infringement covered under this Chapter that occurs in the online environment’. However, the safe harbours expressed immunity in terms of ‘internet service providers’. Art. 18.82 was one of those provisions suspended when the parties renegotiated the agreement upon US withdrawal. See Comprehensive and Progressive Agreement for Trans-Pacific Partnership, 8 March 2018.
10  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/22, Art. 11; Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society [2001] OJ L167/10, Art. 8(3).
11  In using this phrase, I am adapting Martin Husovec’s insightful characterization of this type of remedy. See Husovec (n. 1). Sir Richard Arnold has labelled this type of liability as ‘intermediary liability’, see Richard Arnold in Chapter 21, which makes some sense given the precondition to its availability. Cf. Riordan (n. 1) ss. 1.49–1.60, at 12–14 (using terminology of ‘injunctive’ liability, and describing these mechanisms as ‘injunctions without wrongdoing’). But as this chapter is intended to interrogate the concept of internet ‘intermediary’ I will avoid using that description of the orders here.
12  See C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192; C-557/07 LSG-Gesellschaft zur Wahrnehmung von Leistungsschutzrechten GmbH v Tele2 Telecommunication GmbH [2009] ECLI:EU:C:2009:107, para. 43 (‘Access providers who merely enable clients to access the Internet, even without offering other services or exercising any control, whether de iure or de facto, over the services which users make use of, provide a service capable of being used by a third party to infringe a copyright or related right, inasmuch as those access providers supply the user with the connection enabling him to infringe such rights’).
13  See C-324/09 L’Oréal SA v eBay Int’l AG [2011] ECLI:EU:C:2011:474, para. 113; C-360/10 SABAM v Netlog NV [2012] ECLI:EU:C:2012:85; C-484/14 Tobias McFadden v Sony Music [2016] ECLI:EU:C:2016:689 (discussing free public wi-fi offered by small business).
14  See C-314/12 (n. 12) para. 30 (emphasis added). The Court further elaborated that the access provider in that case was an intermediary because it was ‘an inevitable actor in any transmission of an infringement over the internet between one of its customers and a third party, since, in granting access to the network, it makes that transmission possible’, ibid. para. 32. This seems to suggest (no doubt correctly) that there will only be an order against such an intermediary when the order relates to the provision of services with which its customer would potentially access an infringing site. Cf. Jaani Riordan, ‘Website Blocking Injunctions Under United Kingdom and European Law’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017) 275, 278 (noting that this ‘appears to reflect causation-based justification for the grant of blocking remedies’ rather than serving as any definitional limit).

15  See Riordan (n. 1) s. 2.03, at 27.
16  See C-494/15 Tommy Hilfiger Licensing LLC v Delta Center [2016] ECLI:EU:C:2016:528, para. 23. If Art. 11 of the Enforcement Directive is read as encompassing a number of conditions before an order is issued, one of which being that the defendant is an ‘intermediary’, see Cartier Int’l v British Sky Broadcasting [2014] EWHC 3354 (Ch) (UK), then that suggests that one could be an ‘intermediary’ in circumstances where the other conditions do not pertain. The UK courts have listed the other conditions as including that (1) the users/operators of the website are infringing the plaintiff’s rights, and (2) the users/operators of the website use the intermediary’s service to do so. See ibid. So, logically, these cannot be definitional elements of the concept of an ‘intermediary’; ‘intermediary’ must be a concept that stands apart from these considerations.
17  See C-494/15 (n. 16) para. 28. Sir Richard Arnold has suggested that certain types of search engine might be an intermediary within the meaning of Art. 11 but others would not. See Arnold in Chapter 21. If this were true, it would highlight the important point that for the purposes of particular provisions where the concept of ‘intermediary’ is a threshold for application, ‘intermediary’ might be defined by particular conduct in a particular context rather than status in the online ecosystem.
18  See Husovec (n. 1) 90. This understanding of the legal term ‘internet intermediary’ for the purposes of accountability orders is not all that different than the functionally-derived taxonomy created by Riordan. See Riordan (n. 1) ss. 2.40–2.86 at 36–46. But, of course, Riordan’s taxonomic exercise is designed not only to show the range of actors whom we might think of as intermediaries but also to highlight the important differences among them, which should help in determining whether accountability (or any other) orders should be issued against them, and what the nature of any relief should be. That is, the scope of the overall concept is a different question from the question of sub-classification.
19  See Husovec (n. 1) 90. The EU is not unique. See Graeme Austin, ‘Common Law Pragmatism: New Zealand’s Approach to Secondary Liability of Internet Service Providers’ in Dinwoodie (n. 14) 213 (commenting that the definition of a ‘hosting’ service provider for the purposes of the copyright safe harbour ‘would extend the concept of “Internet service providers” to “Web 2.0” platforms, bulletin boards, blogs, or even websites operated by firms, public entities, and private parties’).
20  See Information Society Directive, recital 59 (‘In the digital environment, in particular, the services of intermediaries may increasingly be used by third parties for infringing activities. In many cases such intermediaries are best placed to bring such infringing activities to an end . . .’).
21  See C-494/15 (n. 16) para. 29 (‘The fact that the provision of sales points concerns an online marketplace or a physical marketplace such as market halls is irrelevant in that connection. It is not apparent from Directive 2004/48 that the scope of the directive is limited to electronic commerce’).
22  See ibid.
23  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Legal Aspects of Information Society Services, in Particular Electronic Commerce, in the Internal Market [2000] OJ L178/1, recitals 14, 40, and 50.
24  In implementing Art. 8(3) of the Information Society Directive, the UK government also used the term ‘service provider’, which it then defined in terms consistent with regulations issued under the e-Commerce Directive. Cf. Directive 2001/29/EC (n. 10) Art. 8(3); Copyright, Designs and Patents Act 1988, s. 97A; Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013), reg. 2.
25  See Directive 2000/31/EC (n. 23) recital 17.
26  See Riordan (n. 1) s. 12.63, at 390.
27  Cf. Riordan (n. 1) s. 2.01, 26 (‘Not all providers of internet services are properly described as “intermediaries” as such, and are not necessarily to be treated comparably’).

Indeed, although the provision in the Information Society Directive was clearly motivated in large part by the context of online exploitation,20 the parallel provision in the Enforcement Directive has not been limited to the online context.21 Thus, in Tommy Hilfiger Licensing, the tenant of market halls who sublet the various sales points situated in those halls to market traders, some of whom used the locations in order to sell counterfeit branded products, was held to fall within the concept of ‘an intermediary whose services are being used by a third party to infringe an intellectual property right’ within the meaning of Article 11 of the Enforcement Directive.22 The Court of Justice declined to draw any distinction between an online and physical marketplace. Likewise, the e-Commerce Directive references the term ‘intermediary’ several times in its recitals.23 But in the corresponding Articles that implement those recitals (Arts 12–15), safe harbours are created for providers of ‘information society services’ (rather than for ‘internet intermediaries’).24 In the context of the EU e-Commerce Directive, a ‘service provider’ is a person providing an information society service (as that term is defined in an earlier Directive). This basically means ‘any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services’.25 This is a broad definition but does exclude some commercial actors such as internet cafés (because the service is not provided remotely) and broadcasters (who, rather than the user, determine when and what transmissions occur). And although the safe harbours to which such providers can have recourse apply only to economic operators rather than non-commercial services, English courts have held unequivocally that personal websites, such as blogs and discussion fora, which have no profit motive or revenue model, may qualify for protection.26 Insofar as the defined term ‘information society service’ provider is serving as a proxy for intermediaries that the e-Commerce Directive has in its sights, this too is a broad conception of ‘internet intermediary’.27 And in the context of the e-Commerce Directive, unlike the Enforcement Directive, we are talking about internet intermediaries.

The e-Commerce Directive not only encompasses a wide variety of internet intermediaries, but also classifies them in a way that might suggest a taxonomy; at the very least, these classifications have legal consequences. Thus, the e-Commerce Directive creates immunity for service providers: (1) who are mere neutral and transient conduits for tortious or unlawful material authored and initiated by others (Art. 12); (2) for caching local copies of third parties’ tortious or unlawful data (Art. 13);28 and (3) who store third parties’ tortious or unlawful material while having neither actual knowledge of the ‘unlawful activity or information’ nor an awareness of facts or circumstances from which that ‘would have been apparent’ (Art. 14).29 As we shall see later, EU legislation picks up three of the four safe harbours found in US copyright law, as enacted by the Digital Millennium Copyright Act (DMCA).30 More generally, the types of intermediary activity protected by the safe harbours vary slightly among jurisdictions.31 But there is a core commonality that identifies, inter alia, access providers that serve as conduits and host providers that host content as distinct classes of intermediary. Article 14 of the e-Commerce Directive, which provides a hosting safe harbour not unlike that found in section 512(c) of the US Copyright Act, has received the greatest attention in litigation and is similar to immunity provisions in many jurisdictions. Article 14 provides that:



1. Where an information society service is provided that consists of the storage of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the information stored at the request of a recipient of the service, on condition that:
(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
(b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.
2. Paragraph 1 shall not apply when the recipient of the service is acting under the authority or the control of the provider.

The Court of Justice interpreted this provision in Google France v Louis Vuitton Malletier and L’Oréal v eBay.32 The Court held that whether the intermediaries in question—Google or eBay—came within the safe harbour would depend on two considerations. First, as a threshold matter, the intermediary could not have been ‘active’ in the allegedly illegal activity; the safe harbour protects conduct of a ‘mere technical, automatic and passive nature’.33 To take advantage of the safe harbour, an intermediary must be a ‘neutral’ actor. The Advocate General in eBay had questioned the application of the neutrality condition to immunity under Article 14; the concept is referenced in a recital arguably addressing another provision (the conduit safe harbour).34 But the Court of Justice adopted the requirement in both Google France and eBay.35 Thus, Google’s immunity would depend on the role it played in the selection of keywords.36 Google’s Keyword Suggestion Tool might render its activities ‘non-neutral’ and make it vulnerable to loss of immunity under Article 14. Likewise, in eBay L’Oréal argued that Article 14(1) could not apply because the activities of eBay went ‘far beyond the mere passive storage of information provided by third parties’.37 On the contrary, L’Oréal alleged, eBay ‘actively organized and participated in the processing and use of the information to effect the advertising, offering for sale, exposing for sale and sale of goods (including infringing goods)’.38 The Court of Justice found first that eBay was potentially entitled to the benefit of Article 14 as an information society service provider. Whether eBay was such a provider within the protection afforded by the safe harbour would as a threshold matter depend on how active it was in the allegedly illegal activity.39 If it had been involved in ‘optimising the presentation of the offers for sale in question or promoting those offers, it [would] be considered not to have taken a neutral position . . . [and thus unable] to rely, in the case of those [offers]’, on the Article 14 exemption.40 Determination of that question was, however, left to national courts.41 Secondly, by the terms of Article 14, even if an intermediary was sufficiently inactive to qualify as an information society service provider, it would lose immunity if it was put on notice of a wrong and did not act expeditiously. Thus, for example, eBay would lose immunity if it ‘was aware of facts or circumstances on the basis of which a diligent economic operator should have realised that the offers for sale in question were unlawful and, in the event of it being so aware, failed to act expeditiously’.42 Google could invoke Article 14 only if it disabled ads upon receiving notice.43 The Court of Justice’s reading of the second condition imposes ‘a hybrid standard that assesses what the defendant actually knew according to the standards of a reasonable service provider in the defendant’s position’.44

The Court of Justice has not finally decided on the availability of the EU safe harbour for either search engines or auction sites, leaving that to be fought out among the lower national courts. In decisions handed down on similar facts to eBay v L’Oréal, the French Supreme Court held that eBay could not avail itself of the protection under Article 14 because it had played too active a role by assisting sellers in the promotion and fostering of sales.45 In contrast, the Madrid Court of Appeals has held that Google (as the owner of YouTube) was acting as a host under Article 14 in the context of a copyright infringement case.46 The Italian courts have reached similar outcomes regarding YouTube.47 Variations clearly remain at the national level within Europe.48 The issue is now before the Court of Justice.49

28  To qualify for protection, caching must be ‘automatic, intermediate and temporary’, and for the sole purpose of making onward transmission more efficient. Service providers will lose protection if they do not act expeditiously to remove cached information upon obtaining actual knowledge that the original copy has been removed or its removal ordered by a competent authority. See Directive 2000/31/EC (n. 23) Art. 13.
29  ibid. Arts 12–14. See also C-484/14 (n. 13).
30  Unlike the DMCA safe harbours, however, those required by the e-Commerce Directive apply horizontally to tort claims under national law as well as trade mark or unfair competition law claims. Indeed, they apply horizontally across all forms of civil and criminal wrongdoing, subject to a number of exclusions set out in Art. 3.
31  For example, in New Zealand there are storage and caching safe harbours, along with a provision (which arguably serves as immunity for a mere conduit) affirming that service provider liability cannot be based on mere use of its facilities by a primary infringer. See Austin (n. 19).
32  See C-236–238/08 Google France SARL v Louis Vuitton Malletier SA [2010] ECLI:EU:C:2010:159, para. 20; C-324/09 (n. 13) para. 113.
33  ibid.; ibid. para. 115 (‘[T]he mere fact that the operator of an online marketplace stores offers for sale on its server, sets the terms of its service, is remunerated for that service and provides general information to its customers cannot have the effect of denying it the exemptions from liability provided for by [the e-Commerce Directive]’).
34  See C-324/09 (n. 13) [2010] ECLI:EU:C:2010:757, AG’s Opinion 139−46.
35  See C-236–238/08 (n. 32) paras 113–16.
36  See C-324/09 (n. 13) para. 119.
37  L’Oréal SA v eBay International AG [2009] EWHC 1094 (Ch) [437] (UK).
38  ibid.
39  See C-324/09 (n. 13) para. 113; ibid. para. 115. The fact that certain services are automated should not always mean that the provider of those services is of itself not ‘active’. Algorithmic development is surely relevant behaviour in assessing degrees of activity. Cf. Lenz v Universal, 815 F.3d 1145 (9th Cir. 2016) (US) (amending opinion to reflect algorithmic review). But see Annette Kur, ‘Secondary Liability for Trademark Infringement on the Internet: The Situation in Germany and Throughout the EU’ (2014) 37 Colum. J. of L. & Arts 525, 532 (noting the majority position in German case law that has treated suggestions proffered by operation of algorithms as being within the protection of Art. 14).
40  C-324/09 (n. 13) para. 116.
41  ibid. para. 117.
42  ibid. para. 124 (quoting Art. 14 of the e-Commerce Directive).
43  See ibid. para. 120.
44  See Graeme Dinwoodie, ‘A Comparative Analysis of the Secondary Liability of Online Service Providers’ in Dinwoodie (n. 14) 37 (quoting Riordan).
45  See Cour de cassation [French Supreme Court] eBay Inc v LVMH, Parfums Christian Dior [3 May 2012] Bull. civ. IV, no. 89 (Fra.). See also Beatrice Martinet Farano, ‘French Supreme Court Denies eBay Hosting Protection’ (2012) 3 TTLF Newsletter on Transatlantic Antitrust & IPR Developments (discussing the French Supreme Court’s decision, in which the court specifically relied on the ‘active role’ standard from the Court of Justice’s L’Oréal and Google France decisions).
46  See Audiencia Provincial Sentencia [APS] [Provincial Appellate Court Sentence] Madrid Gestevision Telecinco, SA v YouTube, LLC [14 January 2014] no. 11/2014 (ES) translated in ‘Decision no. 11/2014 on YouTube v Telecinco’ (Hogan Lovells Global Media & Comm Watch, 14 February 2014).
47  See Martini Manna and Elena Martini, ‘The Court of Turin on the Liability of Internet Service Providers’ (Lexology, 10 June 2014) (reporting on similar outcomes in Italy on similar grounds in Tribunale Torino [Ordinary Court of First Instance of Turin] Delta TV Programs srl v Google Inc [23 June 2014] Docket no. 38113/2013 (It.)).
48  See Kur (n. 39) 531–2 (noting different approaches in Germany and France).
49  See C-682/18 LF v Google LLC, YouTube Inc., not yet decided.



2.  Alternative Terms

The literature discussing online intermediaries frequently uses alternative terms that are often assumed to be synonymous both with each other and with ‘online intermediaries’. The most frequent alternative would be ‘online service provider’ (OSP) or ‘internet service provider’ (ISP). These terms are used largely because they are commonly defined terms in statutes; indeed, if they were not defined by the statute, the concept would be almost without limit.50 As a result, those terms have no consistent meaning across national borders. To be sure, the term ‘online intermediary’ also lacks a single, common, and consistent usage. But ‘online intermediary’ is less frequently used as a defined statutory term; the variation in that case arises more from conceptual ambiguities than from statutory definitions. Each of these terms has in recent years received some legislative definition (along with yet other synonyms) in provisions creating safe harbours, or immunity from liability, for such actors.51 Just before the EU adopted the e-Commerce Directive, the United States enacted the DMCA (and, two years before that, the Communications Decency Act). Section 230 of the Communications Decency Act provides the strongest and most unconditional form of immunity from liability for providers and users of an ‘interactive computer service’ who publish information provided by others.52 However, that provision does not apply to federal intellectual property claims. Instead, the DMCA introduced a detailed set of provisions into section 512 of the Copyright Act establishing a series of immunities from damages for copyright infringement for intermediaries engaged in certain types of behaviour: providing internet access as conduits, hosting websites, or offering ‘information location tools’.53 It also provided a safe harbour for caching, from which many intermediaries benefit.
The parallels with the e-Commerce Directive are obvious, even though the DMCA is formally restricted to copyright law.54

50  See Riordan (n. 1) s. 2.04, at 27 (noting that the term ‘[service providers] is so general that it described almost any commercial entity’).
51  The terms make more fleeting appearances outside this context, primarily in defining which actors are subject to certain disclosure obligations vis-à-vis customers and enforcement authorities. For example, the UK’s Digital Economy Act 2010 required an ‘internet service provider’ to participate in a subscriber notification regime, and for that purpose defined an ‘internet service provider’. See Digital Economy Act 2010, s. 16, amending Communications Act 2003, s. 124N (defined as ‘a person who provides an “internet access service”, which in turn means an electronic communications service consisting wholly or mainly of access to the internet, where an IP address is allocated to each subscriber to enable access’). See also Digital Economy Act 2010, ss. 3–4.
52  See Communications Decency Act of 1996, 47 USCA s. 230(c)(1) (‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’). The protected intermediaries—interactive computer service providers—constitute a broad category.
53  17 USC § 512 (2000).
54  It is on occasion improperly used to further different claims.


Each safe harbour under the DMCA (e.g. access vs hosting vs caching) has its own specific requirements that vary slightly.55 Immunity as an access provider requires, to oversimplify, a basic passivity. More formally, the storage of material must be ‘intermediate and transient . . . in the course of transmitting, routing, or providing connections’. Moreover:
• the event must be initiated by a third party;
• the event must be carried out through an automatic technical process without selection of the material by the service provider;
• the service provider must not select recipients of the material except as an automatic response to the request of another person;
• no copy of the material made by the service provider may be maintained on the system in a manner ordinarily accessible to anyone other than anticipated recipients or for a longer period than is reasonably necessary for the transmission, routing, or provision of connections; and
• the material must be transmitted through the system or network without modification of its content.

The crucial hosting immunity in section 512(c) of the Copyright Act is based, inter alia, on the operation of a notice-and-takedown system by the intermediary. Section 512(c) is an elaborate provision that (like each safe harbour in the statute) imposes a series of specific conditions—in addition to the general conditions noted above—that are quite detailed. Thus, ‘the (hosting) provider must not have actual knowledge that the material or an activity using the material is infringing, is not aware of facts or circumstances from which infringing activity is apparent, and upon obtaining such knowledge or awareness, it acts expeditiously to remove or disable access to the material’.
In addition, where the provider has the right and ability to control such activity, it must not receive a financial benefit directly attributable to the infringing activity.56 Most importantly, where an ISP receives a takedown notice alleging infringement that complies with the detailed provisions of the statute, it must respond expeditiously to remove or disable access to the material alleged to be infringing. And the statute then orchestrates in precise terms the actions that the ISP must take to preserve its immunity, including how it must respond to any counter-notice that it receives from any person in response to its good-faith disabling of access to, or removal of, material.57 The counter-notice procedure of the DMCA is intended to preserve some balance between the rights of the customers of the ISP, who might have valid grounds for believing that their conduct is not infringing, and those of the copyright owner (with whose notice the ISP is

55  To avail oneself of any of the different immunities under the DMCA, the statute created two general conditions. Thus, OSPs must 'adopt and reasonably implement, and inform its subscribers and account holders, of a policy for termination of repeat infringers'; and they must 'accommodate and not interfere with standard technical measures used by copyright owners to identify or protect copyrighted works'. See 17 USC § 512(i)–(j). 56  ibid. § 512(c). 57  ibid. § 512(g).


likely to comply in order to maintain immunity). These provisions take the form of conditions which an ISP must satisfy to protect itself from liability for hosting infringing content or direct liability to a customer for removing what turns out to be lawful material. But the benefits of compliance are so strong that the conditions in the statute function as a form of business regulation.

3. A Functional Taxonomy

The Organisation for Economic Co-operation and Development (OECD) has described (but not defined) internet intermediaries in these terms: 'Internet intermediaries bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.'58 As with the statutory definitions noted earlier, this is hardly limited. And it risks treating quite diverse actors as one monolith.

In The Liability of Internet Intermediaries, Jaani Riordan offered a compelling taxonomy of internet services, which 'classifies their technical activities into the physical, network, and application layers' of the internet.59 Physical layer services provide the basic infrastructure necessary for communication; network layer services are responsible for data being routed from one IP address to another;60 and application layer services are those where 'content is transacted'.61 Riordan consciously sought to offer a more nuanced classification than one could find in the statutory schemes noted earlier. One of the criticisms of the schemes found in, for example, the e-Commerce Directive or the DMCA is that they reflect a static twentieth-century view of internet intermediaries. The separation between access providers and hosts, and our expectation of what each category of intermediaries does or should do, has moved on. Many intermediaries now straddle these classifications. And those classifications also group together actors who might differ dramatically from each other in sometimes important ways. More nuance can only help.

Without rehashing Riordan's model, it is worth noting two features. First, within each category there is a remarkable range of different actors performing different roles.
The legal regimes operate with a far smaller number of categories, which is not surprising (and surely appropriate for reasons of enforcement costs).62 Secondly, one actor can perform 58  See OECD, 'The Economic and Social Role of Internet Intermediaries' (2010) 9. 59  See Riordan (n. 1) s. 2.02, at 26. 60  In this category, Riordan creates sub-classes of ISPs (akin to access providers); hosts; cloud services; domain name controllers; and certificate authorities. See Riordan (n. 1) s. 2.46, at 38. 61  See Riordan (n. 1) s. 2.57, at 40. Riordan splits this last extremely diverse category into platforms, gateways, and marketplaces. See ibid. 62  See Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Wolters Kluwer 2016) s. 1.3.2, at 18 (noting that the oversimplification of categories might have been appropriate in the early days of the internet, but might now be questioned).


more than one type of service; indeed, this has become evident even in working with the smaller number of legal categories, as defendants have often asserted different safe harbours to immunize different parts of their operation.

It must be understood that perfecting the taxonomy is an essentially empirical task. Given the fast-changing and diverse nature of online intermediaries, there might appear to be a risk in placing too much analytical weight on a (2016) technical description of the role and capacity of current actors. And it is an open question whether that empirical reality should be hardwired into legal doctrine, or even given dispositive weight at the moment it is assessed. The dynamic between technological form and legal liability has ebbed and flowed over the years. For example, the US Supreme Court has recognized the relevance of design choice to determinations of copyright inducement liability when combined with other evidence.63 Yet, it has also refused to allow technology clearly designed around doctrine to secure a clean bill of health.64 As long as this potential for evolution is understood and the functional features of today do not become embedded in fixed legal doctrine, this effort at taxonomy is valuable.65 There is a need for greater granularity in this debate.
This sensitivity would be valuable, because the legitimacy of the behaviour of intermediaries occupies a spectrum that requires greater flexibility (and room for more subtle calibration) than formal secondary liability doctrine might seem to allow.66 Indeed, such flexibility is also necessary to account for the different demands that might be imposed on smaller entities without the capital or sophistication of eBay.67 To give just one example of where a factual taxonomy might inform legal analysis, there remains disagreement among national courts in Europe whether web-blocking orders must as a matter of proportionality be sought first from host providers before actions are brought against access providers. The Court in Telekabel did not address 63 See MGM Studios Inc. v Grokster Ltd, 545 US 913, 939 (2005) (US) (‘[T]his evidence of unlawful objective is given added significance by MGM’s showing that neither company attempted to develop filtering tools or other mechanisms to diminish the infringing activity using their software’). 64  See Joseph Fishman, ‘Creating Around Copyright’ (2015) 128 Harv. L. Rev. 1333; Rebecca Giblin and Jane Ginsburg, ‘We Need to Talk About Aereo: Copyright-Avoiding Business Models, Cloud Storage and a Principled Reading of the “Transmit” Clause’, Columbia Law and Economics Working Paper no. 480 (29 May 2014) . 65  Riordan recognizes this risk, but believes that being organized around network architecture, his taxonomy is ‘flexible and capable of adaptation’. See Riordan (n. 1) s. 2.42, at 37. 66  See generally Jane Ginsburg, ‘Separating the Sony Sheep from the Grokster Goats: Reckoning the Future Business Plans of Copyright-Dependent Technology Entrepreneurs’ (2008) 50 Ariz. L. Rev. 577 (noting the need for copyright law to have tools by which to recognize the different culpability of a range of intermediaries even though all operate dual-purpose technologies). 67  cf. 
Frederick Mostert and Martin Schwimmer, ‘Notice and Takedown for Trademarks’ (2011) 101 Trademark Rep. 249, 264 (speculating as to some of the misjudgments that might have prompted the lack of responsiveness of the defendant in Louis Vuitton Malletier SA v Akanoc Solutions Inc., 658 F.3d 936 (9th Cir. 2011) (US)). In Louis Vuitton v Akanoc, the Ninth Circuit affirmed a jury verdict of over $10 million against a web-hosting business that leased, inter alia, server space to customers trafficking in counterfeit goods. In that case, the defendant (a small company) failed to respond expeditiously to takedown requests, and thus fell afoul of Tiffany because it had received actual notice. See ibid.


Who Are Internet Intermediaries?   49 the suggestion by the Advocate General that the principle of proportionality might, however, warrant the infringers or the host ISP being pursued first.68 But if such a form of analysis were pursued then courts would need to be informed about the different roles played by different intermediaries (and potential defendants). That is to say, while resisting subservience to technical function as determinant of how to classify a commercial actor, it is important that technical realities inform assessment of liability.69 This is particularly true if the responsibility of an intermediary to engage in particular conduct turns on its technological capacity to do so.70 Definition by exemplar might also be useful, so it is worth briefly setting out the types of defendants that have been considered intermediaries in litigation. Claims against intermediaries have arisen not only when claimants are seeking accountability orders as discussed previously. In particular, claims for primary or secondary liability have been asserted—and characterized as being in essence against intermediaries—under copyright law, trade mark law, defamation law, and privacy law, to name but the most prom­ in­ent examples.71 The principal cases have revolved around relatively similar fact patterns in different jurisdictions. 
For example, copyright owners have sued manufacturers of copying technologies for infringements caused by those who use their equipment,72 purveyors of peer-to-peer file-sharing software for the activities of those who download material without rightholders' permissions,73 and social media sites (such as YouTube) that host allegedly infringing clips from copyrighted audiovisual works.74 In the context of trade mark law, the leading modern exemplars of such claims are actions brought against online auction sites, each essentially alleging that the auction site could have done more to stop the sale of counterfeits or other allegedly infringing items by third parties on its website; and claims brought against search engines alleging 68  See C-324/09, AG's Opinion (n. 34) para. 107 ('[I]t should be noted that the ISP is not in a contractual relationship with the operator of the copyright-infringing website. As a consequence . . . , a claim against the ISP is, admittedly, not completely out of the question, but the originator must, as a matter of priority, so far as is possible, claim directly against the operators of the illegal website or their ISP'); see also '"Disturber Liability of an Access Provider" (Störerhaftung des Access-Providers), Decision of the Federal Supreme Court (Bundesgerichtshof) 26 November 2015, Case No. I ZR 174/14' (2016) 47(4) IIC 481 (para. 83). 69  See Riordan (n. 1) s. 2.02, at 26 (suggesting that the accurate description of 'what each service does' will 'enable more nuanced conceptions of their causative relationship to harm and the circumstances in which they may face primary or secondary liability'). 70  See Directive 2001/29/EC (n. 10) recital 59 ('In many cases, intermediaries are best placed to bring . . . infringing activities [of third parties using their services] to an end'). 71  cf. Sea Shepherd v Fish & Fish [2015] UKSC 10 [40] (Lord Sumption dissenting) (UK). 72  See e.g. Sony Corp.
of Am. v Universal City Studios Inc., 464 US 417 (1984) (US). 73  See e.g. Metro-Goldwyn-Mayer Studios Inc. v Grokster Ltd, 545 US 913 (2005) (US); A&M Records Inc. v Napster Inc., 239 F.3d 1004 (9th Cir. 2001) (US); see generally Jane Ginsburg, ‘Separating The Sony Sheep from the Grokster Goats: Reckoning The Future Business Plans of Copyright-Dependent Technology Entrepreneurs’ (2008) 50 Ariz. L. Rev. 577. 74 See Viacom International Inc. v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) on remand 940 F.Supp.2d 110 (SDNY 2013) (US).


50   Graeme Dinwoodie that the sale of keyword advertising consisting of the trade marks of parties other than the mark owner resulted in infringement (normally, by causing actionable confusion).75 In defamation or libel law, websites (e.g. the retailer Amazon.co.uk) have been sued where a third party posted an allegedly defamatory book review on the claimant’s book product page.76 And search engines such as Google have been sued for allegedly ‘publishing’ defamatory material that appeared within ‘snippets’ summarizing search results for the claimant,77 or ‘processing’ personal data the publication of which within snippets violated the privacy of individuals to whom the personal data related (even if the data is not removed from the actual publisher’s website).78 This illustrates the wide variety of internet intermediaries pursued as liable for en­ab­ ling wrongs perpetrated by others, but core ISPs, such as companies providing access to the internet or web-hosting services, are also potential defendants in any of these scen­ arios.79 And as rightholders—and potentially policymakers—adopt ‘follow the money’ or ‘least cost avoider’ strategies to identify defendants of first resort, the list of relevant online intermediaries may grow further. Thus, companies that process credit card payments have also been sued for facilitating unlawful transactions,80 and companies merely providing customers with access to the internet have been required to block websites in other countries where allegedly infringing content resides.81

4. Typological Considerations

4.1 Intermediaries in Distinct Fields

Some of the definitions of intermediary noted earlier are drawn from regimes regulating intermediaries horizontally. Others are taken from particular legal regimes where provision has been made for the treatment of intermediaries. Insofar as the concept is defined for a particular field of law, it will almost inevitably come to be shaped by the


Who Are Internet Intermediaries?   51 purpose of that field. The capacity of an intermediary to avoid confusion (and thus further the goals of trade mark law) might differ from its capacity to prevent reproduction (and thus assist in the prevention of copyright infringement). Likewise, establishing an understanding of intermediary that extends further than the norm might bring costs that undermine the purpose of the regime in question in ways that vary among regimes. This consideration has to inform some of the debate surrounding whether to develop schemes of liability for intermediaries that are tied to particular legal claims or regimes. There may be pragmatic reasons for doing so. Overlapping and inconsistent regulation can increase costs and uncertainties and undermine the proper functioning of a legal regime (especially where the regime, such as to provide safe harbours) is aimed at providing the necessary certainty for electronic commerce.82 In some countries, immunity provisions are restricted to particular types of claims. In the United States, there is a remarkably complex matrix of immunity.83 Most claims are covered by section 230 of the Communications Decency Act.84 Copyright claims are covered by the DMCA. But trade mark claims are encompassed within neither US safe harbour regime;85 liability of intermediaries for trade mark infringement is thus determined by the positive standard articulated in Inwood v Ives86 and applied in eBay v Tiffany.87 Thus, one choice to bear in mind as we classify intermediaries is whether status as an ‘intermediary’ is a free-standing assessment about the technological or social function 82  See Mark Lemley, ‘Rationalizing Internet Safe Harbors’ (2007) 6 J. Telecomm’s & High Tech. L. 101 (arguing that at least some aspects of liability—e.g. safe harbours—be standardized to afford the requisite certainty to IPSs and to prevent opportunistic distortion of the claims asserted against them). 83  ibid. 
(calling for rationalization of the matrix). 84  See text accompanying nn. 52–3. 85  Some have argued for a DMCA-like system of notice and takedown, tied to immunity, to be extended to trade marks. See Susan Rector, 'An Idea Whose Time Has Come: Use of Takedown Notices for Trademark Infringement' (2016) 106 Trademark Rep. 699; Mostert and Schwimmer (n. 67) 265. The ISP community is split. See the comments of Etsy, Foursquare, Kickstarter, Meetup, Shapeways, In the Matter of Joint Strategic Plan on Intellectual Property Enforcement (16 October 2015) (asking for consideration of trade mark safe harbour). 86  See Inwood Labs Inc. v Ives Labs Inc., 456 US 844 (1982) (US). 87  See Tiffany (N.J.) Inc. v eBay Inc., 600 F.3d 93 (2d Cir. 2010) (US). In the United States, the Lanham Act does contain provisions that immunize certain intermediary conduct from liability. See e.g. 15 USC § 1114(2)(B)–(C) (protecting the innocent publisher or distributor of a newspaper, magazine, or electronic communication from liability for damages for trade mark infringement based on use in paid advertising matter); see also 15 USC § 1125(c)(3) (defence against dilution liability based on facilitating fair use; given that the use facilitated is itself fair, it is hard to see much need for this provision). Some scholars have seen s. 32(2) as possessing the potential for operating as a general trade mark immunity. See e.g. Lemley (n. 82) 105–6. Indeed, Mark Lemley has argued that that immunity provision should provide the model for a uniform safe harbour for intermediaries across all causes of action, in preference to that found in the Communications Decency Act or the DMCA. See ibid. 115–18. However, the courts have rarely found reason to apply s. 32(2). Cf. Hendrickson v eBay Inc., 165 F.Supp.2d 1082 (CD Cal. 2001) (US). Domain name registrars also benefit from provisions drafted in ways that confer limited immunity from liability for trade mark infringement under US law.
See 15 USC § 1114(2)(D)(iii) (‘A domain name registrar, a domain name registry, or other domain name registration authority shall not be liable for damages under this section for the registration or maintenance of a domain name for another absent a showing of bad faith intent to profit from such registration or maintenance of the domain name’). See also Lockheed Martin Corp. v Network Solutions, Inc., 141 F.Supp.2d 648 (ND Tex. 2001) (US).


52   Graeme Dinwoodie of the actor, or whether we denominate an actor as an intermediary for a particular type of legal claim in respect of which that status results in legal consequences.88 It might be appropriate to consider an actor an ‘intermediary’—insofar as legal consequences attached to this characterization—for one type of legal claim but not another.89 Martin Husovec has noted that ‘in its case law, the CJEU doesn’t synchronize [the usage of the term “intermediary”] across the InfoSoc Directive, the Enforcement Directive, and the E-Commerce Directive’.90 Husovec suggests that the Court has perhaps resisted harmonizing the concept because it wanted the provisions in Articles 8(3) and 11 to apply to as broad a category of providers, and that applying the understandings developed for host provider immunity under Article 14 of the e-Commerce Directive (e.g. ‘passive’ behaviour) would unduly restrict those orders.91 But, as a descriptive matter, one can read L’Oreal not as saying that active behaviour renders a provider something other than an intermediary; rather, it would be one not entitled to immunity under Article 14, just as an access provider against whom an order is not made under Article 8(3) is no less an intermediary. The Court found first that eBay was potentially entitled to the benefit of Article 14 as an information society service provider.92 Whether eBay was such a provider within the protection afforded by the safe ­harbour would as a threshold matter depend on how active it was in the allegedly illegal activity.93 If it had been involved in ‘optimising the presentation of the offers for sale in question or promoting those offers, it [would] be considered not to have taken a neutral position . . . 
[and thus unable] to rely, in the case of those [offers]’, on the Article 14 exemption.94 Moreover, the range of intermediaries encompassed by Articles 8(3) and 11 might appropriately be broader than those addressed in the safe harbour immunities, because the safe harbours shield intermediaries from monetary liability. Under prevailing standards for liability, it is hard to imagine plausible liability (as opposed to accountability) claims against many of the actors potentially susceptible to orders under Article 8(3) or 11. But perhaps there is merit in trying to articulate the concept apart from the susceptibility of a particular actor in a particular setting to be the subject of an accountability order or the beneficiary of safe harbour immunity?

88  The scope of the immunity provision in a particular country will not necessarily map to the scope of the secondary liability standard. Nor logically must it do so. 89  cf. Martin Senftleben, ‘An Uneasy Case for Notice and Takedown: Context Specific Trademark Rights’, SSRN Research Paper no. 2025075 (16 March 2012) . 90  See Husovec (n. 1) 88. 91 ibid. 92  See C-324/09 (n. 13) para. 109 (‘an internet service consisting in facilitating relations between sellers and buyers of goods is, in principle, a service for the purposes of Directive 2000/31. That directive concerns, as its title suggests, ‘information society services, in particular electronic commerce’. It is apparent from the definition of ‘information society service’ . . . that that concept encompasses services provided at a distance by means of electronic equipment for the processing and storage of data, at the individual request of a recipient of services and, normally, for remuneration’). 93  ibid. paras 113 and 115. 94  ibid. para. 116.


Who Are Internet Intermediaries?   53

4.2 Status as a Measure of Legal Applicability

As this last comment suggests, to some extent the apparent variation in decisions as to whether particular actors are intermediaries under the e-Commerce Directive may be because the definition of those actors who are potentially immune may implicitly contain some of the conditions for availing oneself of immunity.95 Strictly, these conditions do not define who is a service provider but rather whether the service provider is acting in a way that will allow them to take advantage of the immunities conferred.96 Thus, for example, the Italian courts have developed a distinction between passive and active intermediaries that has its roots in the conditions under which the protections of the e-Commerce Directive will be available (and given greater weight than anticipated by the Court of Justice).97 Often the term 'intermediary' is used in ways that are an attempt to link the actor with a particular legal consequence or immunity.98 One could read this case law as refining the notion of service provider (for these purposes) or simply imposing conditions on when immunity will be available. The latter is probably the better reading because a service provider may be active in one scenario but passive in another.

Likewise, and relatedly, statutes providing immunity for different kinds of service provider performing different online roles will frequently define the term 'service provider' in varying ways to accommodate those differences. For example, an 'access provider' will inevitably be defined differently from a host provider at a certain level of detail.99 Both are intermediaries, but what we expect them to do might differ. Should we define intermediaries by reference to what they do or by what we want them to do (or by the legal exposure to which we wish to subject them)?
One sees an inversion of this, in fact, in the way that the conduct of actors who might as a matter of functional taxonomy be regarded as intermediaries has been treated as implicating primary liability. For example, search engines such as Google have been formally found not liable for defamatory material that appeared within 'snippets' summarizing search results for the claimant because they were not a publisher of the material.100 95  Even within a single jurisdiction, different safe harbours may also contain different specific conditions. Thus, a service provider may be protected under one safe harbour but not another. See Viacom (n. 74) (discussing immunity under the different safe harbours of the DMCA). 96  The same approach can be found in provisions defining service provider for the purpose of imposing regulatory obligations. Thus, the UK Digital Economy Act only required 'qualifying ISPs' to participate in its subscriber notification regime. 97  See Dinwoodie (n. 44). 98  See Llewelyn (n. 3) 18 ('It is impossible to draw a precise line to categorize . . . virtual platforms into either "facilitators" or "intermediaries", as it will depend upon what these platforms actually do and provide, and their exact involvement in, or relationship with, the act of infringement of IP . . . This paper regards "intermediaries" as those who have played an active role in the transaction, providing an essential element for the act of infringement; "facilitators", on the other hand, do not play this role at all, they are mere conduits and their role is a passive one'). 99  See 17 USC § 512(k)(1)(A)–(B) (providing narrower definition of providers able to come within the scope of the copyright infringement immunity conferred by § 512(a) on access providers). 100  See e.g. Metropolitan Schools v DesignTechnica [2009] EWHC 1765 (QB) (UK); A v Google New Zealand Ltd [2012] NZHC 2352 (NZ).


They were merely intermediaries. Similarly, the analysis of the US Court of Appeals for the Second Circuit in Rescuecom v Google turned on the question of the extent to which Google's Keyword Suggestion Tool effectively induced the primary infringing conduct or was itself trade mark use.101 Here, the classification is being used to trigger a particular form of analysis, rather than analysis resulting in a classification. To the extent that these cases expand the elements of primary liability, they are distorting what we might think of as outside the category of 'internet intermediary'. And in some cases this has been a stretch, prompting Stacey Dogan to label the Rescuecom-endorsed cause of action as a 'curious branch of direct trade mark infringement designed to distinguish between the innocent intermediary, and one whose technology and business model deliberately seeks to confuse'.102

But these legal assessments do not (and perhaps should not) define the outer boundaries of the essentialist concept. Thus, while one of the reasons for an 'intermediary' being held accountable under the Enforcement Directive or Information Society Directive—and thus obliged to assist in preventing infringement—is that they are 'best placed' to do so, the concept of 'intermediary' cannot be limited to least-cost avoiders. (Indeed, textually, the recital from which that justification is gleaned suggests that there will be some intermediaries who are not best placed.) One could simply relegate the concept of 'intermediary' to denote a party against whom an 'accountability order' issues.
But limiting the concept to mere recognition of the legal consequence prevents the concept itself doing much useful broader legal work; an actor might be an ‘intermediary’ in one instance but not in another despite performing precisely the same role.103 And it would obscure an important policy concern, namely that intermediaries might—but might not—be required to assist in preventing infringement under the two provisions in question.

4.3 The Directive on Copyright in the Digital Single Market

The question posed earlier—should we define intermediaries by reference to what they do or by what we want them to do (or by the legal exposure to which we wish to


subject them)—is raised by the 2019 Directive on Copyright in the Digital Single Market. This Directive legislated on the obligations of certain intermediaries to filter content uploaded by users, or more formally to make 'in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works . . . for which the right-holders have provided the service providers with the relevant and necessary information'.104 These obligations will, however, be imposed only on a certain subset of intermediaries—'online content-sharing service providers' (OCSSPs).105 Article 2(6) defines an online content-sharing service provider as 'a provider of an information society service of which the main or one of the main purposes is to store and give the public access to a large amount of copyright-protected works or other protected subject matter uploaded by its users, which it organises and promotes for profit-making purposes'.106 As part of the legislative compromise, it was also agreed that 'providers of services, such as not-for-profit online encyclopedias, not-for-profit educational and scientific repositories, open source software-developing and -sharing platforms, [access providers], online marketplaces, business-to-business cloud services and cloud services that allow users to upload content for their own use, are not "online content-sharing service providers" within the meaning of this Directive'. Moreover, Article 17(6) provided exemption from some of the obligations imposed on OCSSPs. Thus: Member States shall provide that, in respect of new online content-sharing service providers the services of which have been available to the public in the Union for less than three years and which have an annual turnover below EUR 10 million . . ., the conditions under the liability regime set out in paragraph 4 [of Art.
17] are limited to compliance with point (a) of paragraph 4 and to acting expeditiously, upon receiving a sufficiently substantiated notice, to disable access to the notified works 104  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/ EC [2019] OJ L130/92, Art. 17(4)(b). 105  Art. 2(6) defines ‘online content-sharing service provider’ as ‘a provider of an information society service of which the main or one of the main purposes is to store and give the public access to a large amount of copyright-protected works or other protected subject matter uploaded by its users, which it organises and promotes for profit-making purposes’. It also excludes specifically ‘Providers of services, such as not-for-profit online encyclopedias, not-for-profit educational and scientific repositories, open source software-developing and -sharing platforms, [access providers], online marketplaces, businessto-business cloud services and cloud services that allow users to upload content for their own use.’ 106  The definition is made more complex still by the recitals. Recitals 62–3 provide: (62) The definition of an online content-sharing service provider laid down in this Directive should target only online services that play an important role on the online content market by competing with other online content services, such as online audio and video streaming services, for the same audiences . . . (63) The assessment of whether an online content-sharing service provider stores and gives access to a large amount of copyright-protected content should be made on a case-by-case basis and should take account of a combination of elements, such as the audience of the service and the number of files of copyright-protected content uploaded by the users of the service.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

56   Graeme Dinwoodie or other subject matter or to remove those works or other subject matter from their websites. Where the average number of monthly unique visitors of such service providers exceeds 5 million, calculated on the basis of the previous calendar year, they shall also demonstrate that they have made best efforts to prevent further uploads of the notified works and other subject matter for which the rightholders have provided relevant and necessary information.
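The tiered regime created by Articles 17(4) and 17(6) can be read as a decision procedure. The following sketch is illustrative only: the field names and duty labels are this chapter's shorthand, not statutory terms, and it necessarily abstracts from the case-by-case assessment that Recital 63 requires.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """Hypothetical OCSSP profile; attribute names are illustrative."""
    years_available_in_union: float
    annual_turnover_eur: float
    avg_monthly_unique_visitors: int  # based on the previous calendar year

def art17_obligations(p: Provider) -> set:
    """Sketch of the tiered duties under Art. 17(4) and 17(6) DSM Directive."""
    # All OCSSPs owe the Art. 17(4)(a) duty (best efforts to obtain an
    # authorization) and must act expeditiously on a substantiated notice.
    duties = {"best_efforts_to_license", "notice_and_takedown"}
    # Art. 17(6): 'new' providers (< 3 years in the Union) with turnover
    # below EUR 10 million enjoy the reduced regime.
    is_new_small = (p.years_available_in_union < 3
                    and p.annual_turnover_eur < 10_000_000)
    if not is_new_small:
        # Full regime: Art. 17(4)(b) filtering and Art. 17(4)(c) stay-down.
        duties |= {"best_efforts_filtering", "stay_down"}
    elif p.avg_monthly_unique_visitors > 5_000_000:
        # New/small but high-audience providers must also demonstrate best
        # efforts to prevent further uploads of notified works.
        duties.add("stay_down")
    return duties
```

On this reading, the thresholds operate cumulatively: age and turnover select the reduced regime, and the audience figure then adds back the stay-down duty alone, never the general filtering duty.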

5. Conclusions

Who are 'internet intermediaries'? Even after analysis of a range of legal instruments, we find little consensus. It is perhaps thus appropriate to adopt as broad a view as possible so as to admit of the greatest possibility of legal intervention. But the breadth of the term should not obscure—indeed, it demands—an attempt to classify and differentiate among the different actors who are encompassed by the term. Essentialist taxonomies may well assist in supplying us with important facts that feed into the development of legal policy or the resolution of legal disputes. But we must not become hostage to that essentialism, both because it is static and because law need not be purely responsive to technology. But likewise, we need not allow technical legal definitions of 'internet intermediary' devised for particular purposes to obscure a search for a broader understanding of the phenomenon that we seek to regulate.


Chapter 3

A Theoretical Taxonomy of Intermediary Liability

Jaani Riordan

Given the rapid emergence of internet technologies in modern life and commerce, it is unsurprising that a growing array of regulatory schemes, statutory rules, judicial doctrines, procedures, and remedies has something to say about internet intermediaries and their legal duties. No discussion of intermediaries' liability is complete without an understanding of the many forms it may take, the policy levers it may serve, and the areas in which liability may arise. This is an essentially cartographic exercise, as it involves mapping the legislative, judicial, and regulatory landscape to identify and classify legal norms of relevance to internet wrongdoing.

It is now well recognized that, far from being a lawless wasteland or ungovernable dominion, the internet is, or should be, governed by the rule of law. This gives rise to new challenges, many of which are discussed elsewhere in this Handbook: most acutely, how to develop liability rules which properly encourage intermediaries to avoid harmful uses of their technologies without creating disproportionate or chilling effects; and how to craft effective remedies that are scalable to meet swifter and more widespread forms of online wrongdoing. Meeting these challenges also requires a degree of sensitivity on the part of national courts and legislators in shaping and applying legal norms which may have extraterritorial effects or may profoundly shape the development and use of future technologies and nascent industries, often in unpredictable ways.

In all of this, it is beneficial to have a conceptual architecture within which to analyse and apply national and supranational liability rules to intermediaries, and to consider the policy objectives that may be served by imposing (or excluding) liability of different kinds. This is far from a simple task, both because of the myriad forms that liability rules may take and because it requires careful consideration of what 'liability' actually means in different contexts.
© Jaani Riordan 2020.


This chapter therefore aims to provide a taxonomy of the different types of liability which may be imposed upon internet intermediaries, and a theoretical framework for talking about problems of liability. This taxonomy is necessarily modest: it is not intended to provide an exhaustive description of the diverse areas of civil and criminal wrongdoing which may give rise to liability, but rather to situate each of these areas within an overall framework for analysing legal responsibility and the different ways in which it may be imposed upon facilitators of harm. Nor can it hope to address this question wholly unconstrained by the liability structure of the English common law system, though it will endeavour to propose a theoretical account which is of wider application.

First, this chapter begins by considering what is meant by 'liability' and identifying the different forms that it may take, consistently with widespread assumptions about moral agency and individual responsibility. Secondly, this chapter analyses how liability may be attributed or imputed for the acts and omissions of others, noting a distinction between two models of responsibility which may be termed 'primary' and 'secondary' (or accessory) liability. Thirdly, this chapter considers the functions and policy justifications for imposing liability on intermediaries for the acts or omissions of others. Finally, this chapter provides an overview of the main kinds of wrongdoing for which intermediaries may, in principle, be liable, by reference to English and EU law.

1.  What is 'Liability'?

Many usages of the word 'liability' can be identified, making it necessary to consider what this term is commonly assumed to mean. In its most conventional sense, liability describes the consequence of a person being held legally responsible for an event characterized as civil or criminal wrongdoing. Thus, the pithy phrase 'D is liable' is shorthand for a legal formula which refers to the obligation imposed (or recognized) by a court or administrative authority of competent jurisdiction to supply a prescribed remedy, or take (or cease taking) a prescribed action, in response to an event.1 That event is usually, but need not always be,2 characterized as a legal or equitable wrong, or breach of some other legal duty owed by D. The consequence of holding D liable is that C can go to court and obtain an order for a remedy against D.

This section begins by considering the principles of moral agency and individual responsibility upon which most instances of liability are founded. This section then analyses the two main forms of obligations which may be imposed on an intermediary who is found liable for civil wrongdoing: monetary and non-monetary liability.

1  Peter Birks, ‘Rights, Wrongs, and Remedies’ (2000) 20 OJLS 1, 23. 2  Robert Stevens, Torts and Rights (OUP 2007) 58 (giving the example of an interim injunction which prohibits lawful conduct).



1.1  Moral Agency and Individual Responsibility

In common law systems, the traditional function of tort law was to determine which events generated remedial obligations and which did not.3 Liability rules could be used to make a defendant answerable to the claimant 'under the rules to be blamed, punished, or made to pay'.4 In these and most other western legal systems, liability is traditionally premised on the fundamental assumption that a natural or legal person is responsible for her (and only her) own voluntary acts and omissions, subject to limited exceptions. The traditional principle was articulated by Lord Sumner in Weld-Blundell v Stephens:

In general (apart from special contracts and relations and the maxim respondeat superior), even though A is at fault, he is not responsible for injury to C which B, a stranger to him, deliberately chooses to do.5

This principle reflects the intuitive claim of moral philosophers that a person is responsible for 'all and only his intentional actions'.6 Actions (or, it might be added, inactions) by others are not ordinarily our responsibility; they are theirs to bear alone. This may be thought of as the basic principle of moral agency on which liability is ordinarily founded. Within each individual's area of 'personal moral sovereignty', we treat that person as a moral agent whose conduct may be assessed against the applicable liability rules.7 Because of this, an individual is normally responsible 'only for conduct (and, within some bounds, its consequences) that he has directed for himself'.8 Thus, we regard it as intuitively unjust to impose liability upon an innocent person for the wrongful acts or omissions of others, absent something more.

The liability flowing from an individual's own moral agency may be augmented by concepts of collective or secondary fault or responsibility, where 'by applying facsimiles of our principles about individual fault and responsibility' an individual may be held liable for the activities of another moral agent for whom they share fault or responsibility.9 It is this latter basis for personal responsibility which is most relevant in discussions about intermediary liability. However, this frequently gives rise to difficult questions in demarcating the sphere within which an intermediary can legitimately be said to bear responsibility for the acts and omissions of others.

Part of the difficulty underlying many discussions of intermediary liability is that they presuppose a particular model of liability or treat the term 'liability' as a form of

3  Oliver Wendell Holmes, The Common Law (first published 1881, 1963 Belknap Press edn) 64. Now, of course, the constellation of statute often displaces, codifies, or abrogates earlier common law rules.
4  H.L.A. Hart and Tony Honoré, Causation in the Law (OUP 1985) 65.
5  [1920] AC 956, 986 (Lord Sumner). 6  John Mackie, Ethics: Inventing Right and Wrong (Penguin Books 1977) 208. 7  Ronald Dworkin, Law’s Empire (HUP 1986) 174. 8  Lloyd Weinreb, Natural Law and Justice (HUP 1987) 200. This assumption appears to be common to many philosophical discussions of individual responsibility: see e.g. Isaiah Berlin, Liberty (Cohen 1995) 6; Emmanuel Kant, The Metaphysics of Morals (first published 1785, 1996 CUP edn, Gregor trans.) ss. 6:223, 6:389–6:390, 152–3. 9  Dworkin (n 7) 170.


metonymy denoting a much wider spectrum of liability rules. Thus, it is typical to speak of 'liability' as a binary and monolithic concept: either an intermediary is liable or it is not. To avoid falling into this trap, it is helpful to disambiguate liability into its constituent forms. In this section, we identify two main branches of liability: monetary and non-monetary. Monetary liability may be further divided along a spectrum comprising four main standards: strict liability; negligence or fault-based liability; knowledge-based liability; and partial or total immunity. Non-monetary liability may be further divided into prohibitory and mandatory obligations.

1.2  Monetary Liability

First, remedies for liability can be monetary, as in the case of orders to pay compensatory damages or to disgorge profits.10 Such orders enforce secondary duties to correct losses or gains resulting from the breach of a primary duty. These obligations to pay are backed by the threat of executive enforcement and asset seizure. Almost all information torts recognize an obligation on the legally responsible party to pay money.11 The preconditions for obtaining a monetary remedy vary between types of wrongdoing and of course between different legal systems; however, in broad terms they may be divided on a spectrum from absolute liability to total immunity, and grouped under four headings.

1.2.1  Strict Liability

Strict liability requires intermediaries to internalize the cost of user misconduct without proof of fault. By requiring intermediaries to pay for the social harms of third party wrongdoing, a liability rule may increase the expected penalty—and thereby the deterrent effect—of facilitating that wrongdoing, with the effect that intermediaries adjust their activities to reduce wrongdoing to an optimal level.12 Strict liability has the advantage of being simple for courts, intermediaries, and claimants to assess, thereby allowing efficient ex ante pricing decisions. However, although strict primary liability rules are common, strict secondary liability rules are rare, mainly because it is unfeasibly costly for intermediaries to monitor the lawfulness of all their users' activities. This may also be because strict secondary liability would pose a more direct challenge to the principles of moral agency and individual responsibility considered earlier.

1.2.2  Negligence-Based Standards

Liability may alternatively be conditioned upon a finding of fault or limited to circumstances in which an intermediary is said to owe a legal duty of care. For example, such a

10  See Attorney General v Blake [2001] 1 AC 268, 278–81 (Lord Nicholls) (UK).
11  See John v MGN Ltd [1997] QB 586, 608–9 (Sir Thomas Bingham MR) (UK) (defamation); Copyright, Designs and Patents Act 1988, ss. 96(1), 97–100, 103, 184(2), 191I (UK) (copyright and performers' right).
12  See Jennifer Arlen and Reinier Kraakman, 'Controlling Corporate Misconduct: An Analysis of Corporate Liability Regimes' (1997) 72 New York U. L. Rev. 687; Gary Becker, 'Crime and Punishment: An Economic Approach' (1968) 76 J. of Political Economy 169, 178–80, 184.


duty may require intermediaries to act reasonably to prevent, deter, or respond to primary wrongdoing. This would ordinarily represent a lower level of monitoring than a strict liability standard, and thereby reduces the risk of over-deterrence by holding intermediaries to an objectively determined but imperfect standard of conduct—for example, a rule which requires a website or platform operator to remove defamatory postings within a reasonable period.13 Fault-based duties which are fixed by reference to external standards such as industry practices can operate more stringently than knowledge-based duties; for example, by imputing constructive knowledge of tortious material where an intermediary is under a duty to seek it out or prevent its reappearance, or imposing liability for conduct of which an intermediary is wholly unaware on the basis of a duty to control the wrongdoer.

1.2.3  Knowledge-Based Standards

By contrast, knowledge-based standards impose obligations upon intermediaries to respond to wrongdoing only once they receive sufficient information to reach a threshold mental state; for example, that they know or reasonably infer that wrongdoing has occurred. This type of liability rule furnishes the dominant mechanism for European internet content regulation: notice and takedown. Less wrongdoing must be internalized, which encourages optimal ex post enforcement. To prevent wilful blindness, knowledge usually incorporates an objective measure, by which an intermediary is taken to know facts which would have been discovered with reasonable diligence by a competent operator standing in its shoes.14 In this way, liability rules which are premised on a standard of knowledge being attained delimit liability according to whether the defendant's mental state was objectively culpable.

1.2.4  Immunity

At the other end of the liability rule spectrum, intermediaries can be partially or wholly exempted from monetary liability. Immunity has the advantages of certainty, subsidizing nascent technology industries and promoting 'market-based self-help',15 but has been heavily criticized by some scholars as removing any incentives for least-cost avoiders to intervene in enforcement, even where that might be the most efficient way to prevent wrongdoing or bring it to an end,16 or is simply necessary to uphold claimants' rights. Conversely, immunity may serve to uphold countervailing public policy objectives, such

13 Cf. Emmens v Pottle (1885) QBD 354 (UK) (common law defence of innocent dissemination). 14  See e.g. Directive 2000/31/EC of the European Parliament and of the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, Arts 13(1)(e), 14(1)(b). 15 Doug Lichtman and Eric Posner, ‘Holding Internet Service Providers Accountable’ (2006) 14 Supreme Court Economic Rev. 221, 226. 16  See e.g. Directive 2001/29/EC of the European Parliament and the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10, recital 59; Michael Rustad and Thomas Koenig, ‘Rebooting Cybertort Law’ (2005) 80 Washington L. Rev. 335, 390–1.


as constitutional protections for freedom of expression.17 In Europe, partial immunity in respect of monetary liability is conferred within three classes of passive, neutral, and technical activities (hosting, caching, and transmission).18

1.3  Non-Monetary Liability

A second set of liability outcomes imposes non-monetary obligations upon intermediaries, most commonly an injunction to do, or refrain from doing, certain acts (mandatory and prohibitory injunctions, respectively). Such obligations may arise in circumstances where the intermediary is itself liable as a primary wrongdoer, or where it is not so liable but instead must discharge a more limited duty to prevent or cease facilitating wrongdoing, assist a victim of wrongdoing, or uphold the administration of justice. Most commonly, this latter class of non-monetary duties requires intermediaries to take reasonable and proportionate steps to prevent third parties' wrongful activities from being facilitated by the use of their services, when requested to do so.19 As has been observed on a number of occasions, the underlying duty is 'of a somewhat notional kind':

The duty of a person who had become involved in another's wrongdoing was held . . . to be to 'assist the person who has been wronged by giving him full information and disclosing the identity of the wrongdoers'. . . . It is, however, clear that this duty was of a somewhat notional kind. It was not a legal duty in the ordinary sense of the term. Failure to supply the information would not give rise to an action for damages. The concept of duty was simply a way of saying that the court would require disclosure.20

Although both monetary and non-monetary forms of liability require some kind of recognized legal wrongdoing to have occurred, to impose non-monetary liability only requires wrongdoing on the part of a third party, 'and not that of the person against whom the proceedings are brought'.21 For this reason, non-monetary remedies are described as imposing accountability without liability,22 though of course both monetary and non-monetary liability may flow from the same wrongful activity.

Liability in this second, non-monetary sense is both broader and narrower than monetary liability: as noted earlier, it can be imposed without proof of wrongdoing on

Although both monetary and non-monetary forms of liability require some kind of recognized legal wrongdoing to have occurred, to impose non-monetary liability only requires wrongdoing on the part of a third party, ‘and not that of the person against whom the proceedings are brought’.21 For this reason, non-monetary remedies are described as imposing accountability without liability,22 though of course both mon­et­ary and nonmonetary liability may flow from the same wrongful activity. Liability in this second, non-monetary sense is both broader and narrower than mon­et­ary liability: as noted earlier, it can be imposed without proof of wrongdoing on 17  See, inter alia, Chapters 7, 27, and 29. 18  See, inter alia, Chapters 15 and 16. 19  See e.g. Cartier International AG v British Sky Broadcasting Ltd [2014] EWHC 3354 (Ch) [106] (Arnold J) (UK); aff ’d [2016] EWCA Civ 658 [52] (Kitchin LJ) (UK). 20  Singularis Holdings Ltd v PricewaterhouseCoopers [2015] AC 1675 [22] (Lord Sumption JSC) (UK) (citations omitted). 21  Ashworth General Hospital Ltd v MGN Ltd [2002] 1 WLR 2033 [26] (Lord Woolf CJ) (UK). 22  See Martin Husovec, ‘Accountable, Not Liable: Injunctions against Intermediaries’, Tilburg Law & Economics Research Paper Series, Discussion Paper No. 2016-012 (2016).


the part of the intermediary, but it only protects limited categories of interests and enforces limited types of duties. Further, at least in Europe, injunctive remedies are impervious to safe harbours, which 'do not affect the possibility of injunctions of different kinds', whereas monetary remedies may be unavailable in relation to protected activities of an intermediary.23 Finally, while they may not impose a direct obligation to compensate a claimant or disgorge profits, injunctions are enforced, ultimately, by the criminal law of contempt and the associated machinery of incarceration and, in some jurisdictions, monetary penalties.

2.  Classifying Liability

So far, we have seen that two main forms of obligations may be imposed upon intermediaries held to be 'liable', each falling within a spectrum of liability rules sharing certain common features and underlying assumptions. A further distinction lies between liability rules which impose 'primary' (or direct) liability, and those which impose 'secondary' (or accessory) liability.24 Primary liability arises where all elements of wrongdoing are fulfilled by the intermediary's own acts or omissions; conversely, secondary liability is liability which is at least partly conditioned upon proof of prima facie wrongdoing by a third party. Principles of primary and secondary liability reflect a consistent policy of holding intermediaries accountable for harms that are caused or contributed to by third parties when the intermediary has a normatively and causally significant relationship with primary wrongdoing, typically constituted by an assumption of responsibility for the primary wrongdoer's actions. This policy may partly explain the development of common law liability rules involving internet intermediaries, but—as is demonstrated by the English principles of joint tortfeasorship considered later—seems unlikely to offer a sufficiently granular or responsive means of regulating their obligations and business models.

2.1  Primary Liability

A distinction lies between two ways in which the law classifies wrongdoing. First, a person may engage in wrongful activity by his or her own acts or omissions. Such conduct is intended and directed by that person, who carries it out personally. This type

23  Directive 2000/31/EC (n. 14) recital 45. See, in the UK, Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013), reg. 20(1)(b), (2).
24  Some judges and scholars argue that 'accessory' or 'indirect' are better labels to describe this type of liability: see e.g. Paul Davies, 'Accessory Liability: Protecting Intellectual Property Rights' [2011] 4 Intellectual Property Quarterly 390, 396. 'Secondary' is preferred here for its neutrality and ability to capture the range of standards according to which intermediaries may be held liable.


of wrong is described as 'primary wrongdoing' and its originating agent is the 'primary wrongdoer' or tortfeasor. The consequences of such conduct are caused by the person who engages in it; in other words, there is strict identity between actor and acts. This form of liability typically poses little in the way of challenge to conventional understandings of moral agency or personal responsibility.

Breaches of primary duties owed by intermediaries are properly treated as primary wrongs. Common to these instances is that the definition of primary liability is sufficiently wide to accommodate acts or omissions caused by the relevant use of the intermediary's services. For example, an intermediary may face primary liability in its capacity as a contracting party (perhaps under its terms of service or a distance contract entered into with a consumer),25 as a party that has assumed responsibility for the safety or security of its users or their data,26 to prevent harm which is likely to occur,27 for injury caused by something or someone that the intermediary has a duty to control,28 or under a statutory data protection scheme.29

Other attempts to impose liability raise more difficult questions concerning the scope of primary wrongdoing; for example, whether an act of reproducing a copyright work which occurs when a user uploads an infringing video to a video-sharing platform's server is to be treated as performed by the intermediary, its user, or both.30 In some cases, the boundary between primary and secondary liability is not wholly distinct. This is most apparent in areas such as copyright, where the right of communication to the public has progressively been expanded to encompass activities which are traditionally thought of as the province of secondary liability.31 The dividing line will often depend on how widely the relevant primary wrong is defined, and may become blurred at the margins.
However, despite (or perhaps because of) erosion in certain areas, it is suggested that it remains important to think of primary and secondary responsibility as distinct concepts.

25  See e.g. Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights [2011] OJ L304/64, Arts 8–9 (regulating distance contracts); Directive 2013/11/EU on alternative dispute resolution for consumer disputes, Art. 2(a) (online dispute resolution).
26  See e.g. William Perrin and Lorna Woods, 'Reducing Harm in Social Media through a Duty of Care' (LSE Media Policy Project Blog, 8 May 2018).
27  See by analogy Dorset Yacht Co. Ltd v Home Office [1970] AC 1004, 1030 (Lord Reid) (UK) (recognizing a duty to prevent harm which was 'very likely' to occur).
28  See by analogy Haynes v Harwood [1935] 1 KB 146 (UK) (driver for bolting horse); Newton v Edgerley [1959] 1 WLR 1031 (UK) (father for child's use of weapon).
29  As in the case of an intermediary who is a data controller vis-à-vis personal data. See Regulation 2016/679/EU of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L119/1 ('GDPR'), Arts 5–6.
30  See e.g. National Rugby League Investments Pty Ltd v Singtel Optus Pty Ltd [2012] FCAFC 59 [75]–[78] (Aus.) (concluding that copies were made jointly by users and the platform).
31  See e.g. C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644 (provision of hyperlinks to infringing content); C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300 (sale of pre-programmed IPTV set-top boxes).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi


2.2  Secondary Liability

A second form of liability is 'secondary' in the sense that it requires proof of at least prima facie wrongdoing by a person other than the claimant or defendant. As Lord Hoffmann explained in OBG Ltd v Allan, secondary liability is concerned with 'principles of liability for the act of another'.32 Lord Nicholls described it as 'civil liability which is secondary in the sense that it is secondary, or supplemental, to that of the third party who committed [the primary tort]'.33 A more precise definition may be that secondary liability is liability having as one of its conditions a finding of at least prima facie wrongdoing by a third party. For example, liability for authorizing copyright infringement requires proof of actual infringement by the party so authorized.34 By contrast, liability for breaching a contract is primary, as it does not matter whether any third party has also breached it.

Doctrines of secondary liability determine the threshold at which intermediaries may become legally responsible for primary wrongs perpetrated by others, even though they do not independently satisfy the definition of primary wrongdoing. Their operation begins at the penumbra of primary liability and ends at the limits of the connecting factors through which secondary liability may attach. Secondary liability is thus closely related to the definition of a primary wrong, whose boundaries can be adjusted to encompass a wider or narrower range of conduct within it. Secondary liability is not harmonized within the EU and national approaches tend to evolve alongside national doctrines of civil procedure, non-contractual and criminal responsibility.35 Even within national legal systems, there is often little to unite the disparate instances of secondary liability doctrines, except that they express common patterns of attribution in private law and reflect shared policies about the proper limits of personal responsibility.
As Birks has observed, secondary liability is an ‘obscure and under-theorised’ part of private law,36 and is one frequently characterized by the use of undefined, inconsistent, or misleading terminology. This significantly complicates the task of discerning common structural principles. In the most general terms, secondary liability may be thought of as attaching to acts or omissions by A, the secondary actor, which (1) are not independently a primary wrong, but either: (2) cause B, a primary wrongdoer, to engage in primary wrongdoing against C in a recognized way (‘causative secondary wrongdoing’); or (3) establish a recognized relationship between A and B within the scope of which B engages in primary wrongdoing against C (‘relational secondary wrongdoing’). Here the key questions for national legal systems are: (i) when will harm caused by A to C fall within a recognized category 32  [2008] 1 AC 1 [27] (Lord Hoffmann) (UK). 33  ibid. [59] (Lord Nicholls). 34  As there is no liability for attempted or inchoate copyright infringement, it would plainly be insufficient for an authorization to be given which fails to lead to any infringement. 35  See e.g. French Civil Code (2016 French Ministry of Justice edn, Cartwright et al. trans.) arts 1240–2; German Civil Code (1 October 2013) Arts 823, 1004. 36  Peter Birks, ‘Civil Wrongs: A New World’ in Butterworths Lectures 1990–1991 (Butterworths 1992) 55, 100.


of duty or otherwise be sufficiently culpable to justify imposing secondary liability; and (ii) what relationships are sufficiently proximate to justify treating A as responsible for the actions of B? Different legal systems understandably formulate distinct answers to these questions, and draw the line in different places.

In English law, these principles are reflected primarily in doctrines of joint tortfeasorship in tort, equitable accessory liability, criminal accessory liability, and vicarious liability. Although these doctrines developed in radically different institutional, doctrinal, and historical settings, some scholars have attempted to derive 'general principles of accessory liability' from these disparate instances.37 However, with limited exceptions, English courts have consistently rejected those attempts, and the area is instead characterized by 'systematic failure' to explain the basis of principles which are frequently 'unstructured, unprincipled and incoherent'.38 Partly this reflects terminological confusion,39 and partly the diverse policies, fact-specific circumstances, and remedial values that these principles uphold in different areas of law. If any unifying principle can be identified, it is that an intermediary may be made to answer for the wrongs of others where it has by its own conduct become so involved as to 'make those wrongful acts [its] own'.40 In other words, these parallel criteria determine whether a secondary actor has voluntarily assumed responsibility for the primary wrongdoer's conduct.41 Collectively, these doctrines operate as limited exceptions to the general principle that a claimant's rights extend only to those who have done him wrong, and—the obverse proposition—that a defendant's liabilities extend only to his own wrongdoing.

2.2.1  Causative Secondary Liability

Under English law, establishing secondary liability in tort requires the claimant to show two things. First, reflecting its ‘parasitic’ nature, there must be some primary wrongdoing, without which it is ‘self-evident’ that no liability can attach to other parties.42 For example, an online platform could not plausibly face secondary liability for trade mark infringement without a finding that there has been some primary infringement: ‘No secondary liability without primary liability’, as Lord Hoffmann summarised it in OBG.43

37  See e.g. Philip Sales, ‘The Tort of Conspiracy and Civil Secondary Liability’ (1990) 49 Cambridge L.J. 491, 502.
38  Claire McIvor, Third Party Liability in Tort (Hart 2006) 1.
39  Confusingly, secondary wrongdoing can often lead to primary liability. For example, tortious secondary liability is primary in the sense that all wrongdoers are jointly liable for the same tort, subject to rights of contribution. Joint tortfeasors are therefore ‘principals’ rather than ‘accessories’ in the strict sense. See Pey-Woan Lee, ‘Inducing Breach of Contract, Conversion and Contract as Property’ (2009) 29 OJLS 511, 521.
40  Sabaf SpA v Meneghetti SpA [2003] RPC 264, 284 (Peter Gibson LJ) (UK).
41  Cf. Caparo Industries plc v Dickman [1990] 2 AC 605, 628–9 (Lord Roskill) (UK) (describing a duty arising from assumptions of responsibility for the performance of an activity).
42  Revenue and Customs Commissioners v Total Network SL [2008] 1 AC 1174, 1255 (Lord Walker) (UK).
43  OBG (n. 32) [31] (Lord Hoffmann).


Secondly, the secondary wrongdoer’s conduct must fall within a recognized connecting factor. This specifies a threshold of causative participation and knowledge which are, in combination, normatively sufficient for ‘concurrent fault’.44 The two most common categories are procurement and participation in a common design. However, these are non-exhaustive and it would, as Bankes LJ observed in The Koursk, ‘be unwise to attempt to define the necessary amount of connection’ in the abstract.45 Despite some confusion,46 these connecting factors are alternatives.47 Together, they identify the situations when a sufficient nexus exists between secondary and primary wrongdoers to justify extending liability to an intermediary or other third party. They are also routinely supplemented by statutory forms of secondary liability, most notably in the case of copyright.48

2.2.1.1  Procurement

Secondary liability for procuring arises where A intentionally causes B ‘by inducement, incitement or persuasion’ to engage in particular acts infringing C’s rights.49 Procurement of a tort is not a separate tort. Instead, it makes the secondary wrongdoer liable for the primary wrong as a joint tortfeasor. It is necessary but insufficient that A’s conduct cause the primary wrong, in the sense that, ‘but for his persuasion, [the primary wrong] would or might never have been committed’.50 However, merely ‘aiding’ wrongdoing is not procurement: ‘[f]acilitating the doing of an act is obviously different from procuring the doing of the act.’51 On the basis of this case law, in L’Oréal SA v eBay International AG, Arnold J held that eBay had not procured infringements by sellers who offered for sale and sold counterfeit goods on its platform.52 Procurement also has a mental element, as the procurer must ‘wilfully’ have sought to induce the primary wrongdoer to act wrongfully. Ordinarily, A must intend B to engage in wrongful conduct in a particular way, which requires knowledge of at least the existence of the primary right to be interfered with and the acts to be performed, while possessing any mental element necessary for primary liability.53 This requirement distinguishes fault-based procurement liability from primary liability, which may be strict (as in the case of most infringements of intellectual property rights and torts such as defamation). However, it also means that procurement has little role to play in cases where an intermediary is passive and neutral, because it is unlikely to have induced, still less intended, the commission of the primary wrong.

44  Glanville Williams, Joint Torts and Contributory Negligence: A Study of Concurrent Fault in Great Britain, Ireland and the Common Law Dominions (Stevens and Sons 1951) 2.
45  The Koursk [1924] P 140, 151 (Bankes LJ) (UK). For example, some scholars argue that authorization or ratification of a wrong is a further basis for imposing secondary liability in tort: see further Patrick Atiyah, Vicarious Liability in the Law of Torts (Butterworths 1967) 292–4.
46  See e.g. CBS Songs Ltd v Amstrad Consumer Electronics plc [1988] 1 AC 1013, 1058 (Lord Templeman) (UK) (D liable if ‘he intends and procures and shares a common design that infringement shall take place’) (emphasis added); cf. MCA Records Inc. v Charly Records Ltd [2002] FSR 26 [424] (Chadwick LJ) (UK) (treating the test as disjunctive).
47  Unilever plc v Gillette (UK) Ltd [1989] RPC 584, 595 (Mustill LJ) (UK).
48  For example, in the UK statutory authorization liability extends the scope of primary liability for copyright infringement, and impliedly abrogates common law authorization as a connecting factor. See Copyright, Designs and Patents Act 1988 (n. 11) s. 16(2).
49  CBS (n. 46) 1057–8 (Lord Templeman).
50  Allen v Flood [1898] AC 1, 106–7 (Lord Watson) (UK).
51  Belegging-en Exploitatie Maatschappij Lavender BV v Witten Industrial Diamonds Ltd [1979] FSR 59, 65–6 (Buckley LJ) (UK).
52  L’Oréal SA v eBay International AG [2009] RPC 21, 770–1 (Arnold J) (UK). See Chapter 20.

2.2.1.2  Common Design

Where an intermediary combines with others ‘to do or secure the doing of acts which constituted a tort’, it will be liable as a joint tortfeasor.54 This requires consensus between an intermediary and another party to cause wrongdoing, and participation, in the sense that both ‘are active in the furtherance of the wrong’.55 In other words, secondary liability arises where parties ‘agree on common action, in the course of, and to further which, one of them commits a tort.’56 Common design is a broader category than procurement, since consensus is more easily demonstrated than inducement.57 However, simply lending assistance to a primary wrongdoer, without more, is insufficient to impose secondary liability.58 Like procurement, common design comprises physical and mental elements. The required causal link is ‘concerted action to a common end’,59 rather than independent but cumulative or coinciding acts.
In other words, there must actually be agreement, whether express or implicit, which includes within its scope the tortious act or omission.60 However, mere sale of goods does not entail such an agreement without more, as the House of Lords held in CBS v Amstrad.61 There the seller ‘did not ask anyone’ to infringe copyright (which would have been procurement), and there was no common design to infringe, because Amstrad did not decide the purpose for which its cassette recorders should be used; purchasers did, without any agreement between them and the vendor. Secondly, there must be action: ‘some act in furtherance of the common design—not merely an agreement’.62 This requires that the secondary party actually take part in the plan to some more than ‘de minimis or trivial’ extent.63 The required mental element is that each secondary party intended that the events constituting the primary wrong occur,64 and additionally meets any state of mind required of a primary tortfeasor.65 As Davies has argued, this sets a high bar, and courts have not abandoned ‘the shackles of CBS’ in subsequent decisions.66 Although intent includes wilful blindness, it does not extend to reckless or negligent failures to know.67 By analogy, only a specific subjective intention to bring about the acts constituting the wrong will suffice.

53  See Atiyah (n. 45) 290–1.
54  Fish & Fish Ltd v Sea Shepherd UK [2014] AC 1229 [21] (Lord Toulson JSC) (UK).
55  Glanville Williams, Joint Torts and Contributory Negligence (Stevens and Sons 1951) 10.
56  The Koursk (n. 45) 155 (Scrutton LJ).
57  eBay (n. 52) 766–7 (Arnold J).
58  See Credit Lyonnais Bank Nederland NV v Export Credits Guarantee Department [2000] 1 AC 486, 500 (UK).
59  The Koursk (n. 45) 156 (Scrutton LJ); Credit Lyonnais (n. 58) 493, 499 (Lord Woolf MR).
60  Unilever v Gillette (n. 47) 608 (Mustill LJ).
61  CBS (n. 46) 1055–7 (Lord Templeman).
62  Unilever plc v Chefaro Properties Ltd [1994] FSR 135, 138, 141 (Glidewell LJ) (UK).
63  Sea Shepherd (n. 54) [57] (Lord Neuberger PSC).
64  See CBS (n. 46) 1058 (Lord Templeman).
65  C. Evans & Son Ltd v Spritebrand Ltd [1985] 1 WLR 317, 329 (Slade LJ) (UK).

2.2.1.3  Criminal Accessory Liability

It is also possible to impose criminal liability upon intermediaries as accessories where they participate in criminal wrongdoing (subject to the effect of safe harbour protection).68 In the UK, criminal accessory liability is defined by both statutory and common law rules. Any accessory who ‘shall aid, abet, counsel or procure the commission of any indictable offence . . . shall be liable to be tried, indicted and punished as a principal offender’.69 The exact boundaries of secondary participation in crime are defined judicially. These connecting factors are in some ways broader than those of the civil law; for example, they extend to some forms of deliberate assistance. However, they are also narrower, since the accessory must know or believe that the primary acts will occur. Collectively they may be thought of as another example of causative secondary liability, since they define ways in which an intermediary may contribute to the commission of criminal wrongdoing.

2.2.2  Relational Secondary Liability

Secondary liability may also be imposed upon intermediaries who stand in some relationship with a primary wrongdoer. For example, vicarious liability can be used to hold employers and principals liable for wrongful acts and omissions of their employees and agents that are carried out within the scope of their employment or agency.70 In this setting, it is the status of the secondary actor and the proximity of its relationship with the primary wrongdoer which justifies the imposition of liability, rather than the materiality of her causal contribution to primary wrongdoing. Examples of non-causative relational secondary conduct include: wrongdoing by B which occurs within the scope of her employment or agency to A; unauthorized wrongdoing carried out by B but subsequently ratified by A;71 and primary wrongdoing done on premises controlled by A.72 Relational attribution encompasses all tortious conduct that occurs within the scope of the relationship. It is not restricted by medium and could in theory apply to internet intermediaries where primary wrongdoing is carried out by an employee or agent. This makes the boundaries of the employment relationship of considerable importance, particularly in the context of online platforms which function as marketplaces for services supplied by workers to consumers, most notably transportation, delivery, and accommodation services.73

66  See Davies (n. 24) 403.
67  OBG (n. 32) [29]–[30] (Lord Hoffmann).
68  See Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013), regs 17–19, 21 (UK) (which immunize against ‘any criminal sanction’ otherwise resulting from conduct falling within the mere conduit, caching, and hosting safe harbours).
69  Accessories and Abettors Act 1861, s. 8 (UK). See also Serious Crime Act 2007, ss. 44–6 (UK).
70  See generally Lister v Hesley Hall Ltd [2002] 1 AC 215 (UK).
71  See Eastern Construction Co. v National Trust Co. [1914] AC 197 (UK).
72  See e.g. Famous Music Corp. v Bay State Harness Racing and Breeding Association Inc., 554 F.2d 1213 (1st Cir., 1977) (US) (imposing liability for infringing performances on property controlled by the defendant). These cases can be viewed as breaches of a primary duty to take reasonable steps to prevent land from causing harm to others: see Leakey v National Trust for Places of Historic Interest or National Beauty [1980] QB 485, 517–19 (Megaw LJ) (UK).

3.  Justifying Intermediary Liability

Intermediary liability rules often sit uneasily within the normative and conceptual structure of private law. This is partly because they can constitute exceptions to the deeply ingrained principle of individual moral agency—that a person should normally be responsible only for her own voluntary behaviour—a problem felt most acutely when intermediaries are made liable for unlawful material uploaded or transmitted by third parties over whom they otherwise lack direct control. It is also partly because the imposition of liability upon intermediaries frequently pushes at the boundaries of established legal categories. This section provides an overview of the two main sets of justifications for imposing secondary liability upon intermediaries, which are commonly invoked to overcome the problem of moral agency (or at least to defend its curtailment). The first understands these liability rules as methods of attributing blame to secondary actors who have in some way assumed responsibility for primary wrongdoers or their actions. This account is entirely consistent with conventional principles of tortious responsibility, because it treats intermediary liability rules as reflecting the secondary actor’s own culpability. The second, operating at the level of consequentialist analysis, justifies secondary liability rules as reducing enforcement costs and encouraging optimal policing by secondary actors who are likely to be least-cost avoiders.

3.1  Normative Justifications

Modern accounts of tort law describe a system of relational directives which impose responsibility for acts or omissions which interfere with the rights of others in prescribed ways.74 Primary tortious liability reflects the defendant’s violation of an obligation not to do wrong to the claimant; remedies are therefore normally available only against ‘the rights violator’, for wrongdoing ‘at the hands of the defendant’.75 Thus, as Goldberg and Zipursky argue, the power to exact a remedy is available against wrongdoers ‘only if they have violated the victim’s right’.76 The availability of remedies against non-violators such as intermediaries poses a challenge to an account premised on rights or civil recourse. Either victims of wrongdoing have an entitlement to relief against parties who have not themselves infringed their rights, or—perhaps more plausibly—tort law must embed additional rights against secondary wrongdoers. Theoretical responses to this challenge fall under four main headings. These are not mutually exclusive categories; instead, they supply related but distinct explanations for extending responsibility.

73  See e.g. Uber BV v Aslam [2018] EWCA Civ 2748 [95]–[97] (Etherton MR and Bean LJ) (UK) (concluding that the relationship between Uber and its drivers was that of transportation business and workers).
74  See e.g. John Goldberg and Benjamin Zipursky, ‘Rights and Responsibility in the Law of Torts’ in Donal Nolan and Andrew Robertson (eds), Rights and Private Law (Hart 2012) 251, 263.
75  ibid. 268.

3.1.1  Holding Causes of Harm Accountable

The first category points to the secondary wrongdoer’s causally significant conduct—be that inducement, conspiracy, or some other recognized form of enablement—as justifying personal responsibility for the consequences. As Hart and Honoré argue, to instigate or supply the means or other assistance ‘may in a broad sense be said to give rise to a causal relationship’.77 Gardner identifies causality—that is, actually ‘making a difference’ to the primary wrong—as the defining attribute of secondary responsibility and the essential difference between primary and secondary wrongdoers: while both contribute to wrongdoing, only secondary wrongdoers make their contribution through primary wrongdoers.78 This explains why primary wrongdoing must be a sine qua non of secondary wrongdoing.79 Superfluous, ineffectual, or inchoate contributions are ignored. Similarly, contributions which might have been effective, but which do not ultimately eventuate in wrongdoing, are forgotten. Causation thus offers a normative justification for imposing tortious liability upon a secondary party: if we are morally responsible for our voluntary conduct, then we ought also to be held responsible for wrongful consequences that conduct causes.80 Causation supplies a rich vocabulary with which to analyse the ‘substitutional visiting of sins’ upon those who set others in motion.81 However, the romanticization of wrongs as billiard balls, which follow deterministic paths of cause and effect, hides a great deal of complexity, and fails to supply ready answers to problems involving intermediaries. First, causation does not always appear necessary for civil secondary liability: ratification may occur after the tortious conduct and have no effect on its occurrence; relational doctrines may impose liability regardless of the principal’s causative role. Stevens goes further and argues that only procuring requires a causal link82—though this ignores the causal element of authorization and common design, which may also ‘bring about’ harm by clothing the primary wrongdoer in authority or giving a plan the legitimacy of consensus. Secondly, causation is an incomplete explanation, since merely causing or contributing to primary harm is never sufficient for secondary liability. Instead, as Hall observes, further principles of culpability—‘a body of value-judgments formulated in terms of personal responsibility’—are needed to determine which consequences individuals should be accountable for causing.83 These principles (reflected in the secondary liability rules examined earlier) ultimately rest on normative claims about justice, personal responsibility, and the allocation of losses which cannot be defended using causation alone.

76  ibid. 273 (emphasis added).
77  Hart and Honoré (n. 4) 388. See also John Wigmore, ‘A General Analysis of Tort-Relations’ (1895) 8 Harv. L. Rev. 377, 386–7.
78  See e.g. John Gardner, Offences and Defences: Selected Essays in the Philosophy of Criminal Law (OUP 2006) 58, 71–4.
79  See e.g. K.J.M. Smith, A Modern Treatise on the Law of Criminal Complicity (OUP 1991) 6–7, 66, 82.
80  See Jerome Hall, ‘Interrelations of Criminal Law and Torts: I’ (1943) 43 Colum. L. Rev. 753, 775–6.
81  See Philip James and David Brown, General Principles of the Law of Torts (4th edn, Butterworths 1978) 356.
82  See Stevens (n. 2) 254.

3.1.2  Fictional Attribution to Secondary Wrongdoers

Some scholars argue that secondary liability rules attribute actions to the secondary wrongdoer, as expressed by the maxim qui facit per alium facit per se.84 Under this fiction, secondary wrongdoers are held responsible for conduct they are deemed to carry out which infringes the claimant’s rights. Older cases lend some support to the view that the acts of any participant in a common design are to be imputed to all other participants, or of employee to employer.85 Some theorists have embraced this fiction to explain joint tortfeasorship: Atiyah argues that the secondary wrongdoer has ‘effectively committed the tort himself, and the liability is not truly vicarious’; while Stevens argues that all secondary liability involves attributing actions, leading to liability ‘for the same tort’.86 This amounts to an agency-based explanation: it treats primary wrongdoers as implied agents of secondary wrongdoers, where the ‘physical acts and state of mind of the agent are in law ascribed to the principal.’87 This would rest on an implied manifestation of assent that the primary wrongdoer should act on behalf of the secondary actor insofar as he unlawfully causes loss to others.88 However, not all secondary liability is relational: consider a website that procures infringement undertaken by users solely for their own benefit. The agency account is directly contradicted by more modern authorities, which impute liability for the wrong of the primary wrongdoer.89 Further, it cannot be that acts constituting primary wrongdoing are literally attributed to joint tortfeasors; otherwise there would be two sets of tortious acts and two torts. Instead, ‘if one party procures another to commit a tort . . . both are the principal wrongdoers of the same tort’.90 Given that there is a single tort, it must be that joint tortfeasors are liable separately and together for the same act of wrongdoing, rather than liable for the notional acts of two people. This explains the requirement that the secondary actor must ‘make the wrong his own’. If the acts were already his own, this addition would be superfluous. The better answer is that a claimant’s rights in tort against one wrongdoer extend to any secondary actors who adopt the primary wrongdoer’s acts as their own. Secondary liability rules merely recognize that we are all under sub-duties—to avoid inducing, granting authorization, or conspiring with others to commit wrongs—as elements inherent in primary duties.91

83  See Hall (n. 80) 775–6.
84  He who employs another to do it does it himself.
85  See e.g. Launchbury v Morgans [1973] AC 127, 135 (Lord Wilberforce) (UK).
86  Stevens (n. 2) 245.
87  Tesco Supermarkets Ltd v Nattrass [1972] AC 153, 198–9 (Lord Diplock) (UK).
88  See Gerard McMeel, ‘Philosophical Foundations of the Law of Agency’ (2000) 116 LQR 387, 389–90, 410–11.
89  See Majrowski v Guy’s and St Thomas’ NHS Trust [2007] 1 AC 224, 229–30 (Lord Nicholls), 245 (Baroness Hale), 248 (Lord Brown) (UK).
90  Credit Lyonnais (n. 58) 549 (Lord Woolf MR) (emphasis added).

3.1.3  Upholding Primary Duties

A third set of justifications argues that secondary liability rules are necessary to protect the integrity of an underlying primary right, such as a promise, fiduciary relationship, or property. Such rules prevent secondary actors from devaluing primary rights by removing pre-emptive reasons for compliance. This ensures that moral lacunae do not arise where morally culpable parties interpose ‘innocent’ intermediaries. Secondary liability is said to ‘strengthen’,92 ‘extend’,93 or ‘reinforce’ duties owed by primary actors, or to protect ‘species of property which deserve special protection’,94 thereby protecting the claimant’s primary interest in performance. The problem with this account is that doctrines of secondary liability operate throughout private law, so it cannot easily be said that a single species of right is singled out for ‘special protection’. Moreover, the added protection afforded by secondary remedies is incomplete; for example, it would not make conceptual sense to require the secondary party to disgorge profits retained only by the primary wrongdoer. This set of justifications lends itself more naturally to non-monetary liability. There, secondary duties may be seen to arise which require the intermediary to take reasonable steps to disable or prevent wrongdoing by others. Alternatively, there may be circumstances in which the claimant’s fundamental rights are engaged by a third party’s wrongdoing and an injunction against an intermediary is a proportionate remedy to protect them. In these circumstances, one may think of the resulting duty to assist claimants to halt wrongdoing as a vehicle for protecting the claimant’s interest in ensuring protection of his or her rights.

3.1.4  Upholding Duties Voluntarily Assumed

Finally, secondary liability may be understood in terms of the responsibility which intermediaries and other secondary wrongdoers assume for the actions of primary wrongdoers: for example, by helping, requesting, authorizing, or ratifying them—or, more pertinently to large internet platforms, by putting themselves in a position to regulate, and in practice regulating, such activities. To view secondary liability as premised upon an assumption of responsibility overcomes the basic objection that secondary liability rules interfere with a person’s liberty by holding them accountable for conduct which is not theirs. As Bagshaw argues, there must be ‘special reasons’ for holding someone responsible for third parties’ conduct.95 Attribution is justified where responsibility stems from a person voluntarily undertaking an obligation which can properly be upheld.96 Secondary liability may actually promote the concept of individual responsibility and the purposes of tort law since it enforces secondary wrongdoers’ duties to control primary wrongdoers with whom they share a nexus of causation and responsibility. (In this regard, voluntary assumption of responsibility shares considerable overlap with causation-based justifications.) This account must be clarified in two ways. First, it will often be the case that a secondary wrongdoer wishes to avoid rather than assume responsibility for the primary wrongdoing; accordingly, the responsibility assumed is here notional—it reflects an expectation imposed by tort law having regard to the secondary wrongdoer’s conduct, knowledge, and control. The further riposte, that this simply involves ‘a policy of conscripting “controllers” into the ranks of [tort] prevention authorities’,97 can be met by observing that those who facilitate harm play a part in violating the claimant’s rights. While this may in itself be insufficient for monetary liability, it justifies some level of blame. As Cane argues, wilful disregard for primary rights justifies restricting secondary actors’ choices by imposing liability:98 the choice to be involved in others’ wrongful conduct forfeits any initial right of moral autonomy they once enjoyed. Second is the charge of circularity: to say that the secondary actor is liable because she owes (or has assumed) a duty of care for the primary wrongdoer’s actions begs the question, since whether such a duty exists is the very issue to be determined.

91  Agency-based explanations may be more useful in cases where authority is specifically delegated to a primary wrongdoer—something which internet intermediaries rarely do.
92  Davies (n. 24) 404, 409.
93  Hazel Carty, ‘Joint Tortfeasance and Assistance Liability’ (1999) 19 Legal Studies 489, 668.
94  OBG (n. 32) [27] (Lord Hoffmann).
Ultimately, the answer is a function of tort law more generally: duties may be assumed expressly—for example, by conducting risk-taking activity,99 giving advice,100 or exercising control101—or by satisfying a connecting factor sufficient for secondary liability to arise.

3.2  Practical Functions Consequentialist justifications of intermediary liability argue that it promotes efficient internalization of wrongdoing, thereby deterring wrongs and lowering both individual and overall enforcement costs. Without expressing a view on whether these distributive arguments are valid normative justifications for imposing liability in particular cases, at 95 Roderick Bagshaw, ‘Inducing Breach of Contract’ in Jeremy Horder (ed.), Oxford Essays in Jurisprudence (OUP 2000) 131, 148. 96  See J.C. Smith and Peter Burns, ‘Donoghue v Stevenson—The Not So Golden Anniversary’ (1983) 46 Modern L. Rev. 147, 157; Stovin v Wise [1996] AC 923, 935 (Lord Nicholls) (UK). 97  K.J.M. Smith (n. 79) 44–5. 98  Peter Cane, ‘Mens Rea in Tort Law’ (2000) 20 OJLS 533, 546. 99  See Harrison Moore, ‘Misfeasance and Non-Feasance in the Liability of Public Authorities’ (1914) 30 LQR 276, 278. 100  See e.g. Hedley Byrne & Co. Ltd v Heller & Partners Ltd [1964] AC 465, 494–5 (Lord Morris), 487 (Lord Reid) (UK). 101  See e.g. Dorset Yacht (n. 27) 1030 (Lord Reid).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

A Theoretical Taxonomy of Intermediary Liability   75 least three plausible accounts may be identified: first, reducing claimants’ enforcement costs by conscripting least-cost avoiders; secondly, encouraging innovation; and, thirdly, regulating communications policy. These parallel streams inform and are shaped by the considerations of fault and personal responsibility considered previously.

3.2.1  Reducing Claimants’ Enforcement Costs

Enforcement against secondary parties is cheaper than suing primary wrongdoers if the aggregate costs of identifying each primary wrongdoer, proving liability, and recovering judgment outweigh the total costs of recovery against enabling intermediaries. Without a way to target facilitators, inducers, and conspirators, claimants face the Sisyphean task of suing every tortfeasor. Moreover, without the cooperation of secondary actors, claimants may lack the information necessary even to identify them. To solve this problem, doctrines of secondary liability create ‘gatekeeper’ regimes, allowing claimants to exploit natural enforcement bottlenecks and reduce overall costs.102 The function of secondary liability is to set default rules where high transaction costs would otherwise prevent optimal private ordering between claimants and wrongdoers. Such rules encourage intermediaries to internalize the cost of negative externalities their services create—for example, by using contractual mechanisms to allocate liability to primary wrongdoers, increasing service prices, or policing wrongdoing.103 This has two consequences: first, it increases the net value of primary rights, with corresponding increases in any social benefits which those rights were designed to incentivize (for example, the creation of beneficial works); and secondly, it deters primary wrongdoing, which may reduce related social harms (for example, by increasing the quality of speech). The threat of liability incentivizes secondary actors to act in ways that reduce total costs and increase net savings to society—in other words, to act as least-cost avoiders—just as the tort of negligence conditions liability for accidents upon a failure to take optimal precautions.104 In aggregate, this is said to reduce the cost of preventing and correcting breaches of primary obligations.
Consequentialists regard secondary liability as appropriate only when these benefits outweigh its costs, such as lost positive externalities caused by restricting non-tortious conduct.105 However, those costs can elude quantification where they produce indirect harms to ‘soft’ interests such as freedom of expression, innovation, and privacy. Insofar as it offers a descriptive account, this view is difficult to reconcile with the fault-based requirements for causative secondary wrongdoing in English law. As Mann and Belzley argue, true efficiency-based secondary liability ‘should have nothing to do with a normative assessment of the . . . intermediary’—whether the secondary actor behaved ignorantly, dishonestly, or blamelessly—since its sole criteria are the relative costs and effectiveness of enforcement.106 Yet even least-cost avoiders are not liable to pay a monetary remedy unless they have intentionally induced, dishonestly assisted, authorized, or conspired in wrongdoing. This reflects the reasonable assumption that ignorant intermediaries are often unable to prevent wrongdoing without high social costs; fault is, in other words, a good heuristic for efficient loss avoidance. However, it means that English and European law are not solely concerned with efficient detection and prevention of primary wrongdoing; indeed, in eBay, secondary liability was refused notwithstanding the Court’s conclusion that eBay could feasibly do more to prevent wrongdoing. Conversely, courts have all but ignored the liability of payment and advertising intermediaries, despite their capacity to reduce the expected profit from online wrongdoing. Efficiency cannot account for these additional normative thresholds and therefore offers only a partial explanation of these doctrines’ aims.

102  See Reinier Kraakman, ‘Gatekeepers: The Anatomy of a Third-Party Enforcement Strategy’ (1986) 2 J. of Law, Economics, and Organization 53, 56.
103  Lichtman and Posner (n. 15) 229–30.
104  See Guido Calabresi, The Costs of Accidents: A Legal & Economic Analysis (Yale U. Press 1970); Stephen Gilles, ‘Negligence, Strict Liability and the Cheapest Cost-Avoider’ (1992) 78 Va. L. Rev. 1291.
105  See William Landes and Richard Posner, The Economic Structure of Intellectual Property Law (HUP 2003) 118–19; Lichtman and Posner (n. 15) 238–9.

3.2.2  Encouraging Innovation

Consequentialists explain safe harbours as mechanisms for ensuring that secondary liability is imposed upon intermediaries only when they are least-cost avoiders. They help efficiently apportion liability between primary and secondary wrongdoers by creating incentives for intermediaries to adopt low-cost procedures to remove material for which litigation is disproportionately costly, while recognizing that intermediaries are unlikely to be least-cost avoiders unless they are actually aware of wrongdoing.107 In other words, although ex ante monitoring may carry excessive social costs, intermediaries are usually more efficient at ex post removal than primary wrongdoers.108 When Parliament or courts intervene to impose or limit secondary liability, they use a retrospective mechanism to shift innovation and dissemination entitlements between incumbent industries and technological innovators. These interventions reflect an assessment of net social welfare that seeks to induce the inefficient party to internalize the cost of wrongdoing and so avoid future inefficient investments.109

Safe harbours also provide bright lines and clear zones of activity within which intermediaries may act without fear of potential liability.110 In supplying clear guarantees of immunity, they reduce uncertainty and (at least in theory) facilitate investment in new technologies. Further, they reduce the need for secondary actors to make decisions about primary wrongdoing, which reduces the risk of pre-emptive over-enforcement.111 Intermediaries might otherwise do so because they do not internalize the benefits of tortious activity or incur the social costs of excessive enforcement.112 For courts, these limits function as liability heuristics, reducing decision costs and ultimately the cost of supplying internet services to consumers—all of which encourages investment and innovation. However, safe harbours may not go far enough, since the marginal utility derived from servicing primary wrongdoers may lead intermediaries to abandon 'risky subscribers'.113 Innovation-based accounts therefore acknowledge that the limits of secondary liability embody a compromise between strict and fault-based responsibility that reflects wider considerations of social policy and market forces.

106  Ronald Mann and Seth Belzley, 'The Promise of Internet Intermediary Liability' (2005) 47 William and Mary L. Rev. 239, 265.
107  See Rustad and Koenig (n. 16) 391.
108  See European Commission, Report on the Application of Directive 2004/48/EC (22 December 2010) 9.
109  See Dotan Oliar, 'The Copyright–Innovation Tradeoff: Property Rules, Liability Rules, and Intentional Infliction of Harm' (2012) 64 Stan. L. Rev. 951, 1001.
110  See Douglas Lichtman and William Landes, 'Indirect Liability for Copyright Infringement: An Economic Perspective' (2003) 16 Harv. J. of L. and Tech. 395, 406.

3.2.3  Regulating Communications Policy

Finally, legal realists identify the wider role of secondary liability rules in regulating access to information. Secondary actors are natural targets for propagating communications policy and enforcing rights in and against information, since they have always been gatekeepers crucial for its reproduction and dissemination.114 Those policies serve numerous purposes, from preserving existing business models and protecting incumbent industries, to minimizing consumer search costs.115 Secondary liability rules are one method of regulating the interface between each generation of disseminating industries and those with an interest in what is being disseminated. They appoint judges as technological gatekeepers who assess the likely harms and benefits of new entrants' technologies, deciding whether, on balance, they should be immunized or face extended liability.116 Following this assessment, Parliament may intervene to reverse or codify an emergent policy.

That tort law specifies high thresholds for secondary liability reflects an underlying policy of entrusting regulation to market forces unless the harms of a new technology clearly outweigh its benefits. Safe harbours partially codify these policies. If they are pragmatic compromises, this reflects the contested nature of modern communications policies.117 This approach views the limits of secondary liability as an evolving battleground of regulation which corrective theory cannot wholly explain; although principles of tortious responsibility inform doctrines of secondary liability, they are subservient to a Kronosian cycle of innovation, market disruption, and regulation in which courts and Parliament periodically rebalance wider interests of competition and economic policy, human rights, innovation, regional and international trade policy,118 and the complex incentive structures underlying primary legal norms.

111  See Mann and Belzley (n. 106) 274.
112  See Assaf Hamdani, 'Gatekeeper Liability' (2003) 77 Southern California L. Rev. 53, 73. Cf. Lichtman and Posner (n. 15) 225–6.
113  Neal Katyal, 'Criminal Law in Cyberspace' (2001) 149 U. Pa. L. Rev. 1003, 1007–8.
114  See Tim Wu, 'When Code Isn't Law' (2003) 89 Va. L. Rev. 679, 712–13.
115  See e.g. Stacey Dogan and Mark Lemley, 'Trademarks and Consumer Search Costs on the Internet' (2004) 41 Houston L. Rev. 777, 795–7, 831.
116  See Tim Wu, 'Copyright's Communications Policy' (2004) 103 Michigan L. Rev. 278, 348–9, 364.
117  ibid. 356.

4.  Types of Wrongdoing

The preceding sections have provided a theoretical account of the distinct forms that liability rules may take, the relationship between primary and secondary wrongdoing, and the justifications for extending liability to intermediaries. Having done so, we can now complete the cartography of intermediary liability by identifying the main areas in which national legal systems have sought to impose duties upon intermediaries. Although this section is, by necessity, neither comprehensive nor detailed, it aims to provide a bird's-eye view of the territory which will be explored elsewhere in this book, and to situate each area within the conceptual taxonomy of primary and secondary liability developed earlier.

4.1  Copyright Infringement

Traditional business models of copyright industries have been challenged by new forms of online distribution enabled by peer-to-peer protocols, user-generated content platforms, search engines, and content distribution networks. Confronted with this challenge, content creators and publishers have sought to use copyright norms to regulate the activities of intermediaries and to force them to internalize the costs of policing and preventing unlicensed exploitation of copyright works. In so doing, copyright has proven to be one of the hardest fought battlegrounds for intermediary liability.

The predominant focus of copyright owners has been the imposition of monetary liability upon intermediaries who were obviously complicit in, or directly responsible for, the most egregious infringements. Into this category may be grouped cases seeking to impose liability upon website directories of infringing content,119 platforms structured around infringing content,120 and transmission protocols that induce infringement by their users and are overwhelmingly used to transmit infringing content.121

In most such cases, both primary and secondary liability are alleged, which reflects their increasingly indistinct boundary in copyright law. For example, in Twentieth Century Fox Film Corp. v Newzbin Ltd,122 the operator of a Usenet binary storage service was liable both for itself communicating to the public copies of the claimants' films that had been uploaded by third parties to Usenet newsgroups, and also for procuring and engaging in a common design with its paying subscribers to copy the films. It made little difference to the outcome in this case whether liability was classified as primary or secondary.

A second strand of cases has sought to impose monetary liability upon intermediaries whose business models are not structured around infringement, but whose services nevertheless facilitate or enable infringing transmissions to occur: chiefly, internet service providers (ISPs) and platforms. In these cases, the difference between primary and secondary liability matters a great deal. Courts have generally rejected attempts to pin secondary liability (typically under the guise of authorization liability) upon ISPs and other mere conduits, on the basis that they lack knowledge or control over the infringing transmissions of their subscribers.123 Similarly, in The Newspaper Licensing Agency Ltd v Public Relations Consultants Association Ltd,124 the liability of an online news aggregator service was to be decided solely by reference to a question of primary liability: namely, whether the service engaged in acts of reproduction in respect of news headlines, and whether those acts satisfied the temporary copying defence.

Claims against platforms have produced more equivocal outcomes. In the United States, litigation against YouTube was settled out of court after two first instance judgments in which summary judgment was granted in favour of YouTube on the basis of statutory safe harbours,125 a successful appeal against summary dismissal, and findings upon remission that YouTube had no actual knowledge of specific infringements or any ability to control what content was uploaded by users.126 In another decision, the online video platform Veoh was held not to have sufficient influence over user-uploaded video content to fix it with liability for infringement:127 'something more than the ability to remove or block access to materials posted on a service provider's website' was needed.128 Both these claims appear to have been focused solely on secondary liability standards.

By contrast, European claims against platforms have led to divergent and inconsistent results, which appear to stem from confusion concerning the proper boundaries of primary liability rules. In the United Kingdom, an app that allowed users to upload clips taken from the claimants' broadcasts of cricket matches infringed copyright by reproducing and communicating to the public a substantial part of the broadcast works, and could not rely on the hosting or mere conduit safe harbours insofar as the clips were subject to editorial review.129 In Germany, the Bundesgerichtshof did not consider it acte clair whether a platform such as YouTube performs an act of reproduction or communication to the public where infringing videos are uploaded by a user automatically and without any prior editorial review by the platform operator, so referred several questions to the CJEU.130 Previously, the Oberlandesgericht Hamburg had held that, as a host, YouTube could avail itself of safe harbour protection irrespective of the answer to that question.131 Meanwhile, in Austria, a television broadcaster has reportedly succeeded in a claim for infringement against YouTube on the basis that YouTube could not rely on the hosting safe harbour.132 Such divergent and irreconcilable outcomes are the unfortunate by-products of using primary liability concepts to conceal differing value assessments of these platforms' secondary responsibility.

More recently, copyright owners and their licensees have shifted their focus towards non-monetary remedies to block or disable access to infringing material. In the United Kingdom, sections 97A and 191JA of the Copyright, Designs and Patents Act 1988 create statutory blocking remedies consequent upon a finding that a third party has infringed the claimant's copyright or performers' rights, respectively. These are in substance final injunctions which give effect to the obligation recognized by Article 8(3) of Directive 2001/29/EC.133 These remedies are discussed in more detail later in this book.134 Their growing use reflects a perception that injunctions of this kind can be significantly more valuable for copyright owners than traditional forms of monetary relief, despite the absence of a monetary remedy. This is well illustrated by the fact that the first blocking order made against a British ISP, Twentieth Century Fox Film Corp. v British Telecommunications plc, related to the Newzbin platform, which (despite the liability judgment in Newzbin) continued in operation as 'Newzbin2'. Indeed, rather tellingly, it was only after being blocked that, finally starved of visitor traffic and advertising revenue, the platform eventually shut down in late 2012.135

118  See Graeme Dinwoodie, 'The WIPO Copyright Treaty: A Transition to the Future of International Copyright Lawmaking' (2007) 57 Case Western Reserve L. Rev. 751, 757–8 (it might be added that intermediaries now represent a 'fourth vector' of balance).
119  See e.g. Cooper v Universal Music Australia Pty Ltd [2006] FCAFC 187 [42] (Branson J) (French J agreeing) (Aus.).
120  See e.g. Twentieth Century Fox Film Corp. v Newzbin Ltd [2010] EWHC 608 (Ch) [125] (Kitchin J) (UK); [2010] FSR 21.
121  See e.g. Metro-Goldwyn-Mayer Studios Inc. v Grokster Ltd, 545 US 913, 937 (2005) (US).
122  [2010] EWHC 608 (Ch) [108]–[112] (Kitchin J) (secondary liability), [125] (primary liability) (UK).
123  See e.g. Roadshow Films Pty Ltd v iiNet Ltd (No. 3) (2012) 248 CLR 42 (Aus.).
124  [2013] RPC 19. See also C-360/13 Public Relations Consultants Association Ltd v Newspaper Licensing Agency Ltd and Others [2014] ECLI:EU:C:2014:1195.
125  See Digital Millennium Copyright Act 1998, s. 512(c) (US).
126  See Viacom International Inc. v YouTube Inc., 676 F.3d 42 (2d Cir., 2012) (US); Viacom International Inc. v YouTube Inc., 07 Civ 2103 (LLC), 1:07-cv-02103-LLS, 20–3 (SDNY 2013) (Stanton J) (US).
127  See UMG Recordings v Shelter Capital Partners LLC, No. 10-55732, 2013 WL 1092793, 12, 19 (9th Cir., 2013) (US).
128  Capital Records Inc. v MP3Tunes LLC, 821 F.Supp.2d 627, 635 (SDNY 2011) (US).
129  See England and Wales Cricket Board Ltd v Tixdaq Ltd [2016] EWHC 575 (Ch) [169]–[171] (Arnold J) (UK), though it was common ground that the app maker was jointly liable for infringing acts committed by users: see ibid. [94] (Arnold J).
130  See Bundesgerichtshof [Supreme Court] (BGH) LF v Google LLC & YouTube Inc. [13 September 2018] case no. I ZR 140/15 (Ger.); C-682/18 LF v Google LLC, YouTube Inc., YouTube LLC, Google Germany GmbH [2019] Request for a preliminary ruling from the Bundesgerichtshof (Ger.) lodged on 6 November 2018, OJ/C 82, 2–3.
131  See Oberlandesgericht [Higher Regional Court] (OLG) Hamburg, GEMA v YouTube II [1 October 2015] case no. 5 U 87/12 (Ger.).
132  See Handelsgericht [Commercial Court] Vienna, ProSiebenSat.1 PULS 4 GmbH v YouTube LLC [5 June 2018] (Aust.); Press release, 'PULS 4 wins lawsuit against YouTube' (APA-OTS, 6 June 2018).
133  See e.g. Twentieth Century Fox Film Corp. v British Telecommunications plc (No. 1) [2011] EWHC 1981 (Ch) [146] (Arnold J) (UK) ('Newzbin2'). See also Football Association Premier League Ltd v British Telecommunications plc (No. 1) [2017] EWHC 480 (Ch) (UK), in the context of live sports broadcasts.
134  See Chapters 4 and 29.
135  See 'Piracy Site Newzbin2 Gives up and Closes 15 Months after Block' (BBC, 29 November 2012).

4.2  Trade Mark Infringement

Doctrines of trade mark 'use' present almost insurmountable difficulties to claims seeking to impose primary liability upon intermediaries such as search engines and online advertising networks for third parties' trade mark infringements. As the CJEU explained in Google France SARL v Louis Vuitton Malletier SA, permitting third parties to select keywords which may be trade marks does not mean that a service provider itself 'uses' those signs for trade mark purposes.136 In this regard, merely 'creating the technical conditions necessary for the use of a sign' by a third party will not ordinarily be sufficient to lead to primary liability: the intermediary's role must instead 'be examined from the angle of rules of law other than [primary liability]'.137 In that case, the CJEU rejected claims that Google had used the claimants' trade marks in keyword advertisements placed by third parties on its search engine. However, the Court did advert to the possibility of liability attaching under domestic secondary liability rules.138 In England, at least, this seems unlikely as a result of the combined effect of eBay and CBS, subject to the possibility of non-monetary liability under Article 11 of Directive 2004/48/EC.

The relatively high thresholds of participation attaching to secondary liability under English tort law principles mean that it is difficult to impose monetary secondary liability upon marketplaces and platforms for third parties' trade mark infringements. In eBay, the claimants had argued that eBay was jointly liable for trade mark infringement pursuant to a common design with registered users who sold counterfeit and parallel-imported versions of the claimants' perfume and cosmetic goods. Arnold J rejected this argument, despite expressing 'considerable sympathy' for the suggestion that eBay could and should do more to prevent infringement.139 The starting position was that tort law imposed 'no legal duty or obligation to prevent infringement'.140 Liability as a joint tortfeasor is the consequence of failing to discharge a duty (not to procure or participate in a tortious design), and not the source of such a duty. It followed that if eBay was under no duty to act, then whether or not it failed to take reasonable steps was irrelevant. The claimants' argument was therefore circular: it assumed the duty it set out to prove. All that could be said was that eBay's platform facilitated acts of infringement by sellers, but mere facilitation—even with knowledge of the existence of infringements and intent to profit from them—was not enough to create a common design. However, the Court expressly left open the possibility that eBay might be susceptible to an injunction under Article 11.141

As Aldous LJ observed in British Telecommunications plc v One in a Million Ltd, secondary liability is 'evolving to meet changes in methods of trade and communication as it had in the past'.142 However, despite the inherent flexibility of such rules, cases like eBay suggest that secondary liability rules embed within them a default policy choice to exonerate passive or neutral intermediaries who play no intentional role in wrongdoing, despite knowingly facilitating it for profit, or making it a necessary or inevitable part of their business models.

As with copyright, English courts have recognized the availability of blocking injunctions to prevent trade mark infringement by third parties.143 While these remedies are potentially effective to prevent sales of counterfeit goods from websites that are dedicated to advertising and selling such goods, they are less obviously available against general-purpose platforms such as eBay (on which a vast array of lawful goods is also advertised) due to proportionality concerns and the technical difficulty of preventing access only to infringing material.

136  C-236–238/08 Google France SARL v Louis Vuitton Malletier SA [2010] ECLI:EU:C:2010:159, paras 55–6.
137  ibid. para. 57. The CJEU clearly had in mind national rules of secondary liability, as indicated by the cross-reference to 'the situations in which intermediary service providers may be held liable pursuant to the applicable national law'. See ibid. para. 107.
138  ibid. para. 57.
139  eBay (n. 52) [370] (Arnold J).
140  ibid. [375] (Arnold J).
141  ibid. [454], [464]–[465] (Arnold J).

4.3  Defamation

The tort of defamation proved one of the earliest battlegrounds for intermediary liability, as claimants sought to impose monetary liability upon hosts of Usenet newsgroups,144 online news publishers,145 social networks,146 bloggers,147 and ISPs148—with varying degrees of success. Defamation provides a good example of primary liability being remoulded to encompass certain categories of secondary publishers whose contribution to the publication of defamatory material is considered sufficiently blameworthy to merit liability. This process shares many parallels with earlier developments in the law of defamation which enabled it to accommodate previous technologies of reproduction and dissemination within established patterns of primary liability (for example, telegraphy and radio).

Defamation claims involving intermediaries tend to have three focal points: prima facie liability, which asks whether the intermediary is to be considered a publisher at common law; defences, such as innocent dissemination, which exempt innocent secondary publishers from liability; and safe harbours, which immunize passive and neutral publishers who would otherwise be liable. The common law approach to prima facie liability has evolved rapidly in a series of judicial decisions involving intermediaries. By analogy with cases involving offline intermediaries, English courts developed a test premised upon knowledge of defamatory material coupled with a failure to remove it within a reasonable period, from which an inference of publication could be drawn. Early cases reasoned that an intermediary is to be treated as a publisher (and therefore within the scope of primary liability for defamation) where tortious material is uploaded by a third party to facilities under its control, the intermediary has been put on notice of the unlawfulness of the material, and it fails to remove the material within a reasonable period despite having the capacity to do so.149 This reasoning relied on an analogy with older cases involving offline intermediaries in which consent to publication was inferred from an occupier's physical control over premises and its failure to remove statements displayed there.150

This approach was further developed in Tamiz v Google Inc., which concerned defamatory comments posted to a blog hosted by Google's 'Blogger' service. The Court of Appeal laid down a test which asks whether the platform could 'be inferred to have associated itself with, or to have made itself responsible for, the continued presence of [the defamatory] material on the blog', after receiving notice of the material.151 The Court emphasized Google's own voluntary conduct—in particular, providing a platform for publishing blogs and comments, setting the terms and policies for the platform, and determining whether or not to remove any posting made to it—as a potential basis for inferring participation in publication. This suggests an approach which is consistent with principles of moral agency and individual responsibility, albeit one founded upon a failure to act after notification.152

Conversely, in Metropolitan Schools v DesignTechnica, the claimant alleged that Google Search was a publisher of defamatory material that appeared within automated 'snippets' summarizing search results for the claimant's name. Google relied on the statutory defence of innocent dissemination and safe harbours, though ultimately it needed neither, since it was held not to be a publisher of the material at common law and therefore did not face even prima facie liability.153 Similarly, the Supreme Court of Canada has concluded that a mere hyperlink is not publication of the material to which it leads.154 Australian courts have been slower to reach the same conclusion: in Trkulja v Google LLC,155 for example, the High Court of Australia held that the question was not amenable to summary determination, as 'there can be no certainty as to the nature and extent of Google's involvement in the compilation and publication of its search engine results without disclosure and evidence'.156 In so holding, the Court appears to have disagreed with the proposition that a search engine can never be a common law publisher of materials included in search results or autocompleted queries.

Intermediaries who are further removed from the defamatory material are intrinsically less likely to face primary liability. For example, in Bunt v Tilley the ISP defendant was held incapable of being a publisher at common law despite transmitting defamatory materials from third party website operators to its subscribers.157 Instrumental in this conclusion was the lack of any 'assumption of general responsibility' by an ISP for transmitted materials, which would be necessary to impose legal responsibility.158 The same conclusion would appear likely to exonerate other mere conduits, without any need to rely on defences or safe harbours.

The hosting safe harbour is most commonly relied upon by intermediaries to shield against monetary liability for defamation.159 In McGrath v Dawkins, the retailer Amazon.co.uk was immunized against claims of liability for allegedly defamatory reviews and comments posted to the claimant's book product page.160 Similarly, in Karim v Newsquest Media Group Ltd, a claim against a website operator arising from defamatory postings made by users was summarily dismissed on the basis of safe harbours.161

The hosting safe harbour tends to encourage over-compliance by intermediaries, since removal is in most cases the only realistic response to a complaint, even if a motivated defendant might well be able to plead a substantive defence (such as truth or qualified privilege). Nevertheless, in defamation cases an informal counter-notification procedure appears to be encouraged by the English courts: if an intermediary receives two competing allegations—one claiming that material is defamatory, the other defending the lawfulness of the material—then, faced with two irreconcilable notices, the intermediary will have 'no possible means one way or the other to form a view as to where the truth lies'.162 It also appears that intermediaries will not be required to accept complaints at face value if they are defective or fail to identify facts from which unlawfulness is apparent.163 Such notices are 'insufficiently precise or inadequately substantiated' to deprive an intermediary of the hosting safe harbour.164

The Defamation Act 2013 (UK) introduced a new defence of exhaustion, which requires the claimant first to exhaust any remedies against a primary wrongdoer before proceeding against a secondary publisher, such as an intermediary. This prescribes a stricter relationship of subsidiarity between primary and secondary liability for defamation: a court cannot hear a defamation claim against a secondary party unless it is satisfied that it is not reasonably practicable for the claimant to proceed against the primary party.165 In defamation claims, a successful claim against a secondary publisher (such as a service provider) is usually treated as an example of primary liability for a second and distinct publication, even though it is derivative from another wrong (the original publication).

142  [1998] EWCA Civ 1272; [1999] FSR 1 [486] (Aldous LJ).
143  See e.g. Cartier International AG v British Sky Broadcasting Ltd [2014] EWHC 3354 (Ch) (UK); British Telecommunications plc v Cartier International AG [2018] UKSC 28; [2018] 1 WLR 3259 (UK).
144  See e.g. Godfrey v Demon Internet Ltd [2001] QB 201 (UK).
145  See e.g. Dow Jones & Co. v Gutnick (2002) 210 CLR 575 [44], [51]–[52] (Gleeson CJ, McHugh, Gummow, and Hayne JJ) (Aus.).
146  See e.g. Richardson v Facebook UK Ltd [2015] EWHC 3154 (QB) [28], [48] (Warby J) (UK) (noting obiter that the defendant could not be held to be a publisher).
147  See e.g. Bunt v Tilley [2006] EWHC 407 (QB) (UK).
148  See e.g. Kaschke v Gray [2010] EWHC 690 (QB) [40]–[41] (Stadlen J) (UK).
149  See Godfrey (n. 144) 209, 212 (Morland J).
150  See Byrne v Deane [1937] 1 KB 818, 830 (Greene LJ) (UK).
151  [2013] EWCA Civ 68 [34] (Richards LJ) (Lord Dyson MR and Sullivan LJ agreeing) (UK).
152  See also Pihl v Sweden App. no. 74742/14 (ECtHR, 7 February 2017) para. 37 (suggesting that this approach strikes an appropriate balance with claimants' Art. 8 Convention rights).
153  [2009] EWHC 1765 (QB) [124] (Eady J) (UK).
154  See Crookes v Newton [2011] SCC 47; [2011] 3 SCR 269 [42] (Abella J), [48] (McLachlin CJ and Fish J) (Can.).
155  [2018] HCA 25 [38]–[39] (Kiefel CJ, Bell, Keane, Nettle, and Gordon JJ) (Aus.).
156  ibid. [39].
157  See Bunt v Tilley (n. 147) [21]–[26], [37] (Eady J) (UK).
158  ibid. [22] (Eady J); cf. McLeod v St Aubyn [1899] AC 549, 562 (Lord Morris) (UK).
159  In many respects, the safe harbours cover similar ground—and support similar policy goals—to the defence of innocent dissemination in the law of defamation.
160  [2012] EWHC B3 (QB) (UK).
161  [2009] EWHC 3205 (QB) [15] (Eady J) (UK).
162  See e.g. Davison v Habeeb [2011] EWHC 3031 (QB) [66] (HHJ Parkes QC) (UK).
163  Tamiz (n. 150) [59]–[60] (Eady J).
164  eBay (n. 52) [122].

4.4  Hate Speech, Disinformation, and Harassment Discussion continues in the United Kingdom about how best to regulate and deter online abuse, disinformation and ‘fake news’, and cyber-bullying. Proposals have been made for intermediaries (especially social networks and media-sharing platforms) to take a more active role in policing content accessible on their platforms.166 A series of recent cases from Northern Ireland demonstrates how social networks can face non-monetary li­abil­ity to remove materials which constitute unlawful harassment or otherwise interfere in an individual’s fundamental rights. In XY v Facebook Ireland Ltd, the Court held that a Facebook page entitled ‘Keeping Our Kids Safe from Predators’ created a real risk of infringing the claimant sex offender’s rights to freedom from inhuman and degrading treatment and to respect for private and family life. The Court granted an interim injunction requiring Facebook to remove the page in question (designated by URL), but refused to order it to monitor the site to prevent similar material from being uploaded in the future due to the disproportionate burden and judicial supervision that would entail.167 However, no monetary liability was imposed on Facebook, and it appears to have been assumed without any discussion that Facebook owed a duty to prevent interferences with the claimant’s rights even though it was not necessarily the publisher of the page.168 Conversely, in Muwema v Facebook Ireland Ltd, the Irish High Court made an order requiring Facebook to identify the author of a Facebook page, but refused in­junct­ive relief because Facebook could avail itself of statutory defences and because it would be futile.169 Intermediaries and individuals can face criminal liability under section 127(1) of the Communications Act 2003 (UK), which makes it an offence to publish a message or other matter that is ‘grossly offensive’ or of an ‘indecent, obscene or menacing character’ over a public electronic 
communications network.170 Service providers have not been targeted directly, and it is difficult to envisage a situation in which an intermediary would possess the necessary mens rea for primary liability. In the absence of relevant primary or secondary liability standards, it seems likely that some form of sui generis statutory regulation would be necessary if policymakers wish to impose duties onto 165  Although the provision has not yet been litigated, it may have less application in internet disputes involving anonymous primary defendants. 166  See e.g. Digital, Culture, Media and Sport Committee, Final Report: Disinformation and ‘Fake News’ (18 February 2019) ch 2, paras 14–40. 167  [2012] NIQB 96 [16]–[20] (McCloskey J) (UK). 168  See also AB Ltd v Facebook Ireland Ltd [2013] NIQB 14 [13]–[14] (McCloskey J) (UK). 169  [2016] IEHC 519 [62]–[64] (Binchy J) (Ire.). 170  See e.g. Chambers v Director of Public Prosecutions [2012] EWHC 2157 (UK).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

86   Jaani Riordan

platforms, social networks, and other intermediaries involved in the dissemination of harmful material.171

4.5  Breach of Regulatory Obligations

Liability can also arise pursuant to statutory provisions that impose specific duties upon intermediaries in particular contexts. Intermediaries now face primary regulation in fields as diverse as network neutrality,172 network and information security,173 access to online pornography,174 and data protection.175 The consequences of breaching such a duty depend on the terms of the relevant legislative provisions, and range from monetary penalties and enforcement notices from regulators through to private claims for breach of statutory duty. These are most conventionally thought of as forms of primary liability, since they attach to breaches of primary duties owed by intermediaries under the terms of the relevant legislation.

A particularly topical example concerns interception and retention of communications data. Most developed countries have legislation providing for the monitoring and interception of metadata by service providers, under various conditions and procedures, to assist in the detection and investigation of crime. Under the Investigatory Powers Act 2016 (UK), law enforcement and intelligence agencies have statutory powers to intercept ‘communications data’ (such as emails, instant messages, HTTP requests, and file metadata) carried by telecommunications service providers (broadly defined to encompass, for example, telephone carriers, web hosts, ISPs, and social networking platforms). Such interceptions can in most cases be authorized by executive warrant. Service providers who receive a valid request are obliged to grant access to the requested data, enforceable by means of injunction. Following the disclosure of mass interception capabilities and intelligence-sharing by GCHQ and other agencies, public debate has ignited about whether these capabilities are lawful, necessary, or desirable, and what conditions and safeguards are needed to uphold fundamental rights.
In the meantime, intermediaries continue to be subject to primary duties to store communications metadata and to provide targeted and bulk access to law enforcement authorities.

171  The Final Report (n. 165) recommended the creation of a statutory body responsible for enforcing a Code of Ethics against intermediaries, including by requiring the removal of ‘harmful and illegal content’, ibid. para. 39.
172  See Regulation 2015/2120/EU of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No. 531/2012 on roaming on public mobile communications networks within the Union [2016] OJ L/310.
173  See Network and Information Systems Regulations 2018 (SI 2018/506) (UK).
174  See Digital Economy Act 2017, ss. 14, 20, 23 (UK).
175  See Regulation 2016/679/EU (n. 29).


A Theoretical Taxonomy of Intermediary Liability   87

4.6  Disclosure Obligations

Most legal systems recognize an obligation to give disclosure of information needed to bring a viable claim against an actual or suspected wrongdoer. In common law systems, disclosure of this kind is a discretionary equitable remedy that has been developed by courts of equity.176 The claimant must establish: (1) an arguable case of primary wrongdoing by someone; (2) facilitation of that wrongdoing by the defendant from whom disclosure is sought; (3) that disclosure is necessary to enable some legitimate action to be taken against the primary wrongdoer; and (4) that disclosure would, in the circumstances, be proportionate.

Whether disclosure by an intermediary is proportionate will depend on a number of factors, including the nature and seriousness of the primary wrongdoing,177 the strength of the claimant’s claim against the primary wrongdoer,178 whether the intermediary’s users have a reasonable expectation that their personal data are private,179 whether the disclosure would only affect the personal information of arguable wrongdoers,180 whether there are any realistic alternatives, and the cost and inconvenience of giving disclosure. Although there is no presumption either way, the decided cases indicate that it is exceedingly rare for disclosure to be outweighed by the fundamental rights of an intermediary or its anonymous users. The nature of the legal duty recognized in these cases is relatively limited, and such claims give rise only to non-monetary secondary liability of the most nominal kind.
The basis of the duty is that, where an intermediary is ‘mixed up in’ wrongdoing, once it has been given notice of the wrongdoing it comes under a duty to ‘stop placing [its] facilities at the disposal of the wrongdoer’.181 However, reflecting their status as legally innocent parties, intermediaries are, in an ordinary case, entitled to be reimbursed for their compliance costs and the legal costs of responding to the claim for disclosure.182

Two cases suggest that the English courts are alive to the risk that blanket disclosure orders against ISPs may be abused by claimants for the collateral purpose of extracting monetary settlements from subscribers accused of copyright infringement. In one case,183 the Court commented critically on the use of disclosed information to perpetrate a scheme in which parties were given no realistic option but to submit to a demand for payment to avoid being ‘named and shamed’ as alleged downloaders of pornographic copyright works. In another case, the Court undertook a full proportionality analysis, balancing claimants’ and subscribers’ rights and the justifications for interference, before applying the ‘ultimate balancing test’ of proportionality. However, in that case, a refusal

176  See Norwich Pharmacal Co. v Customs and Excise Commissioners [1974] AC 133 (UK).
177  See Sheffield Wednesday Football Club Ltd v Hargreaves [2007] EWHC 2375 (QB) (UK).
178  See Clift v Clarke [2011] EWHC 1164 (QB) (UK).
179  See Totalise plc v The Motley Fool Ltd [2002] 1 WLR 1233 [30] (Aldous LJ) (UK).
180  See Rugby Football Union v Consolidated Information Services Ltd (formerly Viagogo Ltd) (in liq.) [2012] 1 WLR 3333 [43], [45] (Lord Kerr JSC) (UK).
181  ibid. [15]; Cartier (n. 142) [10] (Lord Sumption JSC) (UK).
182  See Totalise (n. 178) [29] (Aldous LJ); see Cartier (n. 142) [12] (Lord Sumption JSC).
183  See Media CAT Ltd v Adams (No. 2) [2011] FSR 29 [724]–[725] (HHJ Birss QC) (UK).


to extend disclosure to copyrights that had been licensed for enforcement purposes by third party copyright owners was overturned; this did not amount to ‘sanctioning the sale of the [subscribers’] privacy and data protection rights to the highest bidder’.184

5. Conclusions

This chapter has proposed a theoretical framework for describing liability and a taxonomy for classifying the types of liability which may be imposed upon internet intermediaries. Such liability may be divided into monetary and non-monetary obligations, and further classified along a spectrum of liability rules. Additionally, all forms of liability may be described as either primary or secondary, where the latter is reserved for derivative wrongdoers whose conduct meets certain causal, relational, and normative thresholds.

This chapter has also considered two sets of policy justifications for imposing liability upon intermediaries. The first seeks to bring secondary wrongdoers’ liabilities within traditional accounts of moral agency and individual responsibility in private law. This chapter argued that doctrines of secondary liability can best be explained, at least in the context of internet intermediaries, by intermediaries’ assumption of responsibility for primary wrongdoers. The second set of justifications uses law and economics to explain why certain secondary actors may be appropriate targets of loss-shifting. Although economic analysis provides a powerful vocabulary for describing the communications policy underlying many intermediary liability rules, the agnosticism of enforcement cost analysis offers few clear answers to underlying questions of fault, responsibility, and fundamental rights.

Although our focus has been legal liability, it is also obvious that even in the absence of liability, intermediaries’ conduct may be important in a number of ways. First, voluntary self-regulation now accounts for the vast majority of online content enforcement undertaken by major platforms and search engines. For example, by the end of March 2019, Google had processed over 4 billion requests to de-index copyright-infringing URLs from search results,185 which is many orders of magnitude greater than any conceivable claims for injunctions or damages.
Secondly, the norms and practices which intermediaries choose to adopt, and their approach to self-enforcing those norms, in turn affect the likelihood of tortious material being posted, transmitted, or accessed.186 Thirdly, the possibility of liability may cause intermediaries to internalize at least some of the harms caused by their services even if they are not under a strict legal duty to do

184  Golden Eye (International) Ltd v Telefónica UK Ltd [2012] EWHC 723 (Ch) [146] (Arnold J) (UK); rev’d in part [2012] EWCA Civ 1740 (UK).
185  See Google Inc., ‘Requests to remove content due to copyright’ in Transparency Report (March 2019).
186  See e.g. Twitter Inc., ‘The Twitter Rules’ (Help Center, 2019).


so.187 Finally, the threat of more stringent future regulation may deter intermediaries from more self-interested behaviour, may incentivize greater self-regulation,188 and may even lead intermediaries to develop new capabilities which are later available to be deployed in aid of legal remedies.189 Each of these forms of conduct by intermediaries can exert a strongly regulatory effect on internet communications, even though it can hardly be said to be a consequence of imposing liability.

At least under English law, the traditional tort-based thresholds for imposing secondary liability are rarely met by passive and neutral intermediaries, which usually lack the required knowledge and intention even if they do facilitate wrongdoing. The high thresholds of intervention required for secondary liability suggest that these rules, on their own, are unlikely to be effective at regulating internet intermediaries. Limitations derived from European law, examined elsewhere in this book, further entrench the principle that faultless intermediaries should not face monetary liability or onerous duties to police third parties’ wrongdoing. Despite this general aversion to imposing liability upon intermediaries and other secondary actors, it is clear that intermediaries may face liability in a growing number of areas. This chapter has provided an overview of the main developments from the perspective of English law, which will be further analysed in specific areas elsewhere in this book.

187  See e.g. eBay Inc., ‘About VeRO’ (2019); Google Ireland Ltd, ‘How Content ID Works’ (YouTube Help, 2019).
188  See e.g. Rob Leathern, ‘Updates to Our Ad Transparency and Authorisation Efforts’ (Facebook Business, 29 November 2018) (noting a change to Facebook’s advertising policies in the UK).
189  See e.g. Lord Macdonald, ‘A Human Rights Audit of the Internet Watch Foundation’ (November 2013) 8–9 (noting the development of the IWF blacklist in response to government pressure).

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

chapter 4

Remedies First, Liability Second: Or Why We Fail to Agree on Optimal Design of Intermediary Liability

Martin Husovec*

In recent years, attention in the literature has largely focused on analysing how courts decide on the conditions leading to liability of intermediaries for their users’ actions. Numerous authors have studied issues related to the wide range of market players active in all areas of intellectual property (IP) law.1 What remains under-researched, however, is the question of the consequences that these legal qualifications entail. I am not aware of an in-depth comparative or empirical study that tries to map the extent of damages awarded, the scope and type of injunctions granted, the allocation of the burden of proof, or similar highly practical issues. Angelopoulos comes closest.2 For legal scholarship, establishing liability is very often itself the end-station. It is somehow implicitly assumed that intermediaries cannot recover and will simply shut down their services if held liable. This is striking, as it basically presupposes a world

*  I would like to thank Christina Angelopoulos, Giancarlo Frosio, and Miquel Peguera for their valuable comments on the earlier draft of this chapter. All mistakes are solely mine.
1  See e.g. Miquel Peguera, ‘Hyperlinking under the lens of the revamped right of communication to the public’ (2018) 34(5) Computer Law & Security Rev. 1099–118; Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016); Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Paul Davies, Accessory Liability (Hart 2015); Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017); Mark Bartholomew and Patrick McArdle, ‘Causing Infringement’ (2011) 64 Vand. L. Rev. 675.
2  See Angelopoulos (n. 1).

© Martin Husovec 2020.


of one-chance intermediaries which cannot err and improve. It also makes the debate overly dramatic when it does not have to be. While some of the recent developments in the area of injunctions against innocent third parties have started to challenge these assumptions,3 it seems to me that scholarly review is still lagging behind when considering the system in its entirety.

To illustrate this in the European setting, let me offer a paradox. In the EU, while deciding on when to trigger liability is left to the Member States to legislate—although subject to the limitations of the e-Commerce Directive, human rights safeguards, and increasing review by the Court of Justice of the European Union (CJEU)—the issue of the ensuing consequences has comparably more European footing in the Enforcement Directive and its regulation of remedies. In other words, the what is more European than the when. Despite this, most of the work on intermediary liability tackles the question of when to trigger consequences rather than what consequences to trigger.

1.  Three Legal Pillars

IP scholars studying intermediary liability often distinguish between primary and secondary liability. While the first is delineated by the statutory exclusive rights, the second is shaped by doctrines which come to expand those rights. These doctrines are often grounded in general tort law. The basic notion is that while the primary infringers are wrongdoers due to their own exploitation of the protected objects, the secondary infringers only become wrongdoers when they contribute, in some relevant way, to other people’s infringing actions. What is considered relevant differs substantially across the globe. However, what constitutes primary infringement differs too. The line between primary and secondary liability is more fluid than one might think, as the latest developments in copyright law4 and trade mark law testify.5 In a comparative edited volume in 2017, Dinwoodie concludes that ‘an assessment of secondary liability cannot be divorced from (and indeed must be informed by) the scope of primary liability or other legal devices by which the conduct of service providers

3  See Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable, But Not Liable? (CUP 2017); Pekka Savola, ‘Blocking Injunctions and Website Operators’ Liability for Copyright Infringement for User-Generated Links’ (2014) 36(5) EIPR 279–88.
4  See C-348/13 BestWater v Mebes and Potsch [2014] ECLI:EU:C:2014:2315; C-279/13 C. More Entertainment AB v Sandberg [2015] ECLI:EU:C:2015:199; C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644; C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.
5  In the area of trade mark law, the typical example is trade mark use by Google in its keyword advertising, which is infringing in the EU but allowed in the United States. See Graeme Dinwoodie, ‘A Comparative Analysis of the Secondary Liability of Online Service Providers’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017) 13. Another example is the emerging question about the use of trade marks by Amazon on its platform. See C-567/18, discussed also in this Handbook by Richard Arnold in Chapter 20.


or their customers is regulated’.6 In other words, although the categories of primary and secondary liability are important, they cannot simply be studied in isolation. Classification under either of them is made on the basis of our understanding of the breadth associated with each. Therefore, primary infringement and its scope cannot be uninformed by the scope of secondary infringement, and vice versa. In this chapter, I try to emphasize that this is true not only for the design of the conditions of liability, but also for that of its associated consequences.

In the EU, the question of the interface between primary and secondary liability is complicated by the fact that the choice between these two policy layers is a reaction not only to the associated trade-offs of scope, but also to EU integration. Because the CJEU has no explicit jurisdiction over non-harmonized domestic secondary liability laws, if it wants to assert some jurisdiction, its policy choices have to be made either (1) entirely within primary liability, or (2) outside it, by creating an additional layer of protection drawing on national traditions of accessory liability. What we have been observing in the last couple of years in EU copyright law could be explained in both ways.7 Only the future will tell how the Court will eventually conceptualize its own case law on communication to the public when facing further national requests for clarification.8 Similarly, some national doctrines that are dressed as primary liability in fact incorporate policy considerations of secondary liability, and hence would be more appropriately studied as such.9

Secondary liability doctrines differ across the globe. However, in IP scholarship, the term labels situations where the source of infringement is a third party’s behaviour.
In the United States, doctrines of contributory liability, including forms of inducement, and vicarious liability incorporate this policy space.10 In the UK, it is the liability of accessories.11 In Germany, it is the liability of aiders and abettors, but also reaching into case law concerning inducement, injunctive ‘disturbance liability’,12 and patent law’s liability for wrongful omissions.13 All these doctrines consider different circumstances relevant.

In the last decade, we have also witnessed developments that create a third pillar of liability, that of injunctions against innocent third parties—in our context, innocent intermediaries.14 Unlike primary and secondary liability, these injunctions impose obligations on parties that engage in no wrongdoing themselves. These injunctions are often not grounded in tortious actions, although they can relevantly cross-reference to tort law. For instance, the English concept of injunctions against innocent third parties is based on equity, and the German concept of ‘disturbance liability’ is based on an analogy with protection in property law.15 Both are dependent on wrongdoing by someone, though not by the person held responsible.16 In other countries, these instruments take the form of new statutory provisions17 or a separate administrative regulation.18 Whatever the legal basis, the characteristic feature of these orders is that they are not premised on one’s own wrongdoing; rather, they impose obligations based on considerations of efficiency or fairness. They treat intermediaries as accountable (for help), but not liable. While harm is not attributed to them, some form of assistance is required. Although some of these instruments can still come with a ‘liability’ label attached (e.g. the German concept), the consequences clearly differ. Obligations are imposed on the services by courts or legislators without trying to attribute to them liability for individual instances of user-triggered harm.

While the pillars of liability are distinct—and should be understood as mutually exclusive—their application might overlap. Although an intermediary who is a secondary infringer should not also be a primary infringer on the same set of facts, the courts might test both grounds of liability in parallel to strengthen their decisions.19 Intermediaries in a

6  See Dinwoodie (n. 5) 4.
7  For the debate, see Ansgar Ohly, ‘The broad concept of “communication to the public” in recent CJEU judgments and the liability of intermediaries: primary, secondary or unitary liability?’ (2018) 8(1) JIPLP 664–75; Jan Bernd Nordemann, ‘Recent CJEU case law on communication to the public and its application in Germany: A new EU concept of liability’ (2018) 13(9) JIPLP 744–56; Pekka Savola, ‘EU Copyright Liability for Internet Linking’ (2017) 8 JIPITEC 139.
8  See e.g. pending case C-753/18.
9  Dinwoodie (n. 5) 8 ff (discussing whether the concept of authorization also fits secondary liability).
10  See Mark Bartholomew and Patrick McArdle, ‘Causing Infringement’ (2011) 64 Vand. L. Rev. 675; Salil Mehra and Marketa Trimble, ‘Secondary Liability of Internet Service Providers in the United States: General Principles and Fragmentation’ in Dinwoodie (n. 5) 93–108; John Blevins, ‘Uncertainty as Enforcement Mechanism: The New Expansion of Secondary Copyright Liability to Internet Platforms’ (2013) 34 Cardozo L. Rev. 1821.
11  See Paul Davies, Accessory Liability (Hart 2015) 177 ff.
12  Disturbance liability still serves a dual role in German case law—i.e. to supplement missing negligence-based tortious liability of accessories, on the one hand, and to implement the EU system of injunctions against innocent intermediaries, on the other. See Martin Husovec, ‘Asking Innocent Third Parties for a Remedy: Origins and Trends’ in Franz Hofmann and Franziska Kurz (eds), Law of Remedies (Intersentia 2019).
13  Accessory liability in German private law is generally based on the provision of s. 830(2) BGB (‘Instigators and accessories are equivalent to joint tortfeasors.’), which, according to the case law, requires, as in criminal law, double intent, i.e. the acts of an accessory as well as of the main tortfeasor have to be intentional. See Mathias Habersack and others, Münchener Kommentar zum BGB (6th edn, C.H. Beck 2013) s. 830(2) para. 15. Apart from this, the Bundesgerichtshof [Supreme Court] (BGH) recognized that inducement can lead to tortfeasorship. See BGH Cybersky [2009] I ZR 57/07 (Ger.) (inducement). On the other hand, so-called ‘adopted content’ seems to fall under the rubric of primary liability. See BGH Marions-kochbuch.de [2009] I ZR 166/07 (Ger.); BGH [2016] VI ZR 34/15 para. 17 (Ger.); Posterlounge [2015] I ZR 104/14 para. 48 (Ger.). In contrast, in patent law a different senate accepted negligence-based tortious liability for omissions. See BGH MP3-Player-Import [2009] Xa ZR 2/08 (Ger.).
14  See, recognizing this, Martin Husovec, ‘Injunctions against Innocent Third Parties: The Case of Website Blocking’ (2013) 4(2) JIPITEC 116; Franz Hofmann, ‘Markenrechtliche Sperranordnungen gegen nicht verantwortliche Intermediäre’ [2015] GRUR 123; Riordan (n. 1) ss. 1.49–1.60; Husovec (n. 3); Dinwoodie (n. 5) 3; Tatiana Eleni Synodinou and Philippe Jougleux, ‘The Legal Framework Governing Online Service Providers in Cyprus’ in Dinwoodie (n. 5) 127; Ansgar Ohly, ‘The broad concept of “communication to the public” in recent CJEU judgments and the liability of intermediaries: primary, secondary or unitary liability?’ (2018) 8(1) JIPLP 672; Jan Nordemann, ‘Liability of Online Service Providers for Copyrighted Content—Regulatory Action Needed?’ (2018) 20; see also Richard Arnold in Chapter 20.
15  See Husovec (n. 3) 145 ff.
16  Note that the German situation is complicated by the convergence with tortious principles.
17  In Australia, the Copyright Amendment (Online Infringement) Act 2015, effective 27 June 2015, created a novel s. 115A titled ‘injunctions against carriage service providers providing access to online locations outside Australia’.
18  See Chapters 13 and 30.
19  See Twentieth Century Fox Film Corp. & Anor v Newzbin Ltd [2010] EWHC 608 (Ch) (UK) (Justice Kitchin finding the Newzbin service liable under the doctrines of direct liability, joint tortfeasorship, and the copyright-specific tort of authorization).


position of secondary infringers could be targeted by injunctions against innocent intermediaries when rightholders think it easier not to litigate their wrongful contributions, and to focus only on the preventive steps to be imposed by injunctions. Given these dynamics, the extent to which remedies can actually compete with one another when more than one potentially applies should not be underestimated. Remedies are tools given to plaintiffs to choose from in order to resolve their situation. Plaintiffs will select among them taking into account their costs and expected benefits, and the practical difficulties in obtaining and enforcing them. When policymakers and judges disregard this ‘remedial competition’, the result can be undesirable consequences, such as enforcement outcomes that put too much pressure, and thereby costs, on players who are innocent rather than on the real wrongdoers.20

2.  Distinguishing Reasons from Consequences

Primary and secondary liability tests both end with classifying the implicated persons as some type of infringer; that is, a wrongdoer. Despite this, the legal consequences cannot be said to be identical. Different consequences match the different policy situations that these doctrines try to resolve. Primary infringement is concerned with those who cause harm by their own behaviour. Secondary infringement is concerned with those who cause harm by relying on the behaviour of others to achieve the harmful outcomes.

The economic analysis evaluates the preventive function of tort law rules by the parameters of precaution-levels and activity-levels.21 These essentially refer to the quality and quantity of someone’s behaviour. Precaution refers to instantaneous adjustment in behaviour, such as checking authorship before reusing content, or blocking third party content on notification.22 Activity then refers to the decision to participate in an event that may generate harm, such as uploading content or providing a service altogether.23 The reason why the law wants to influence behaviour is that it aims to stimulate socially optimal levels of care so that harm is avoided at a reasonable cost to society. Hence, the law requires diligence of users and intermediaries when dealing with copyright content owned by rightholders. Users are asked to think about authorship and fair use; intermediaries, to avoid inducing others to use protected subject matter without authorization, and to remove it once they are notified of its existence on their services. If actors

20  See Husovec (n. 14) ss. 44 ff.
21  See William Landes and Richard Posner, The Economic Structure of Tort Law (HUP 1987) 66; Steven Shavell, Economic Analysis of Accident Law (HUP 1987) 25; Stephen Gilles, ‘Rule-Based Negligence and the Regulation of Activity Levels’ (1992) 21 J. of Legal Studies 319, 327 and 330 (noting that the distinction between the two, however, is not always very sharp).
22  See Keith Hylton, ‘Missing Markets Theory of Tort Law’ (1996) 90 Northwestern U. L. Rev. 977, 981.
23  ibid.
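The precaution-level/activity-level distinction can be expressed in the standard unilateral-accident model of the economic literature cited in n. 21. The following is an illustrative sketch only, not the chapter’s own formalism; the notation (b, c, p, h, s, x) is an assumption of this sketch:

```latex
% Illustrative sketch of the standard unilateral-accident model (cf. Shavell 1987).
% An actor chooses an activity level s >= 0 and a care (precaution) level x >= 0.
% b(s): benefit from the activity; c: cost of care per unit of activity;
% p(x): probability of harm per unit of activity, with p'(x) < 0; h: magnitude of harm.
\[
  W(s, x) \;=\; b(s) \;-\; s\,c\,x \;-\; s\,p(x)\,h
\]
% The social optimum (s^*, x^*) satisfies:
\[
  -p'(x^*)\,h \;=\; c, \qquad b'(s^*) \;=\; c\,x^* + p(x^*)\,h.
\]
% Under strict liability the actor internalises s p(x) h and so chooses both s^*
% and x^*. Under a negligence rule with due care set at x^*, the actor takes due
% care but bears no residual harm, so it picks s from b'(s) = c x^*, and the
% activity level exceeds s^*.
```

On this sketch, liability rules govern precaution directly but reach activity levels only through the costs the actor internalises, which is the sense in which the text says the law aims at both the quality and the quantity of behaviour.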


adopt a sufficient standard of care to respect others’ rights, the law allows them to engage in their activities.

Precaution-levels and activity-levels are also crucial for a proper understanding of remedies, and of their role in solving liability scenarios.24 First of all, the prevention of damage in online space is typically multilateral, because the behaviour of at least three parties is involved—the intermediary, users, and rightholders. And, secondly, unlike in the usual scenario of bilateral damages, the parameters of activity-level (quantity) and care-level (quality) are not necessarily vested in all the parties equally.25 IP rightholders cannot really directly influence the quantity of infringement, as they can only reduce the production of the protected subject matter or stop producing it altogether.26 Indirectly, however, by setting the prices and conditions of legitimate access to their works, some rightholders (but not all of them) might be able to influence the activity-level of the users. Intermediaries are mostly unable to directly influence the quantity of the infringements posted by their users. Indirectly, however, by creating a certain culture, or setting incentives, they might produce some outcomes. At the same time, by exercising a duty of care (e.g. taking content down or redesigning their service), they can also indirectly influence the number of infringements. However, unless they shut down the service or pre-moderate completely, they can never entirely control the infringing posts of their users. Usually, the ‘infringing ecosystem’ might be significantly broader than the service itself, which means that it can be sustained by independent, unrelated services such as pirate discussion fora, which are outside intermediaries’ control. Therefore, business models and technical set-up are only indirect means of control over other people’s behaviour.
Two services with identical governance structures can attract completely different uses and communities. Similarly, two rightholders with identical access and pricing strategies might attract different infringement patterns. Both intermediaries and rightholders, in fact, try to influence users’ behaviour in order to reduce the extent of infringement. In both cases, they are not in the driver’s seat. To summarize: primary liability relates to the addressee’s own activity, while secondary liability relates mostly to the precautions the addressee has taken (or has failed to take) regarding the occurrence of someone else’s infringement. Even intentional inducement, where the minds of an intermediary and its users are aligned in intentional wrongdoing, does not assume full control of other people’s behaviour.27 Wrongdoing users are not

24  Some of this debate is further developed in Martin Husovec, ‘Accountable Not Liable: Injunctions Against Intermediaries’, TILEC Discussion Paper No. 2016-012 (2016).
25  See also Assaf Hamdani, ‘Who’s Liable for Cyberwrongs’ (2002) 87 Cornell L. Rev. 901, 912.
26  See William Landes and Douglas Lichtman, ‘Indirect Liability for Copyright Infringement: An Economic Perspective’ (2003) 16 Harv. J. of L. and Tech. 395, 409, 405 (noting that intermediaries can only indirectly influence users).
27  This would amount to direct liability, as those who assisted without knowing would be considered instruments used to wrong, and thus innocent agents free of liability of their own. See, for common law, Paul Davies, Accessory Liability (Hart 2015) 68 ff.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

mere instruments of intermediaries but autonomously acting individuals who might be encouraged or incentivized, but still exercise their own will. In a primary liability scenario, damages are imposed on the addressee to compensate the harm caused by its own activity, and injunctions aim at prohibiting him or her from engaging again in the infringing activity. In contrast, in a secondary liability scenario, damages are also imposed as a means of compensating the rightholder’s harm resulting from an infringement, but are imposed on an addressee who did not carry out the infringement through its own activity. Damages are usually imposed on the ground that the addressee failed to take the precautions that he should have taken to avoid someone else’s infringement. Likewise, injunctions in this scenario are aimed at imposing the duty to take precautions to avoid or reduce the risk of users’ infringements (and thus other people’s activity). However, this is different in intentional inducement scenarios, where the problem is one’s own intention to stir up other people’s wrongdoing. Finally, in a scenario of mere accountability, damages are not imposed. Injunctions do not impose a duty to take precautions to avoid one’s own accessorship, but a duty to assist in preventing third parties’ infringements, similar to regulatory duties known from areas such as anti-money-laundering legislation.

3.  Typology of Consequences

To demonstrate some of these differences in practice, in the following sections I will briefly look at some of the consequences of applying each of the pillars to intermediaries. I try to demonstrate the difference by examining: (1) the scope of damages; (2) their aggregation; (3) the scope and goal of injunctions; and (4) their associated costs.

3.1  Scope of Damages

Damages come with a different scope in each of the three pillars. To begin with, damages do not exist in the system of injunctions against innocent third parties (the accountability pillar).28 With regard to infringers, the contributions of direct (primary) and indirect (secondary) infringers differ. While primary infringers exploit the protected subject matter themselves, secondary infringers usually only facilitate someone else’s unauthorized exploitation. In between the two are situations of intentional inducement, which act as an important trigger for other people’s unauthorized exploitation. From the rightholder’s perspective, there is a single harm caused by the availability of their protected subject matter. Against the primary infringers, the rightholders might often invoke two basic types of damage: (1) losses suffered by rightholders, and (2) illicit profit obtained by the primary

28  However, uncompensated compliance with injunctions leads to monetary costs.


infringer. Sometimes the damages are supplemented by licensing analogies, or statutory damages, which are meant to approximate the negative economic consequences. In the European system, damages are generally meant to put the rightholder in the position it enjoyed prior to the infringement. This means that damages ought generally to compensate for the loss actually suffered.29 However, some other countries, such as the United States, also opt for statutory damages in order to emphasize deterrence.30 The actions of infringing actors and their intermediaries are not always easy to dissect. In the relationship between the actions of an actor and an intermediary, however, the relevant wrongful contribution may often cover different parts of the resulting harm, or even be limited to one of them. For instance, imagine a negligent intermediary operating under a notice-based liability regime who becomes liable for not taking down a video after being notified. Such an intermediary acted lawfully during the entire period before the notification; however, its actions became wrongful after a reasonable period of time to process the notification had elapsed. The wrongful action of the intermediary and its relevant harm then cover the period after the failure to act, which takes place after notification, and not the pre-notification period. If the video was online for two years before being notified, but was taken down only two months after notification, the two harms (pre- and post-notification) can differ substantially. Similar situations might arise more often in instances of gross or minimal negligence, in which the intention to act in concert with the direct infringer ab initio is not present.
In cases of intentional tortfeasorship, this might pose less significant problems, as the intention of both parties often covers identical harm from the outset.31 Therefore, an intermediary that intentionally induces others to do wrong can also be attributed the harm that precedes any notification, since the notification does not trigger the wrongful behaviour and is thus irrelevant. To further complicate the problem of damages owed by secondary infringers, it is not always easy to calculate even clearly attributable harm. This is because a licence for secondary infringers is unlikely to exist and therefore cannot provide a benchmark price. Thus, a licensing analogy is not the most appropriate tool. Lost profits might be possible to estimate in general, but it is hardly practicable to prove specific fractions of time and contributions, especially if the cases are factually too complicated.32 Therefore, as Ohly suggests, targeting part of the illicit profit might be the most realistic way forward,33

29  See C-367/15 Stowarzyszenie Oławska Telewizja Kablowa v Stowarzyszenie Filmowców Polskich [2017] ECLI:EU:C:2017:36, para. 31.
30  See Pamela Samuelson and Tara Wheatland, ‘Statutory Damages in Copyright Law: A Remedy in Need of Reform’ (2009) 51 William and Mary L. Rev. 439.
31  Hence it is also less problematic to apply to it any type of damages aggregation (see later).
32  A related issue is the question of allocation of the burden of proof. It is not exceptional for tort law to presume the form of fault on the side of an alleged tortfeasor. Doing so for secondary infringers who act as accessories is another matter. After all, the very wrongfulness of their behaviour depends on their knowledge and thus, unlike with direct infringers, is not the usual situation but rather an exception to it.
33  See Ansgar Ohly, ‘Urheberrecht in der digitalen Welt—Brauchen wir neue Regelungen zum Urheberrecht und zu dessen Durchsetzung?’ [2014] NJW-Beil 50 (‘Gegen Intermediäre, die grob fahrlässig Verkehrspflichten verletzen, sollte ein Schadensersatzanspruch bestehen, der in der Höhe durch den aus der Vermittlung erzielten Gewinn begrenzt ist’ [‘Against intermediaries that grossly negligently breach duties of care, there should be a damages claim capped in amount at the profit derived from the intermediation’]).


at least for negligent secondary infringers. Admittedly, though, estimating such profits is not without problems and equally lends itself to considerable judicial discretion.

3.2  Aggregation of Damages

In relation to the attribution of damages, it is crucial to determine whether or not the secondary infringer and the primary infringer will be obliged to pay damages which are somehow aggregated. By damage aggregation, I mean a legal mechanism which connects individual debts and interlinks them at the performance stage; for example, an obligation to pay each other’s debt on request. The alternative to aggregation is that the liability of intermediaries and users is separated and proportionate to their contributions. As Angelopoulos reports34 in her comparative survey of France, Germany, and the UK, existing European tort law usually favours solidary liability for any type of secondary liability. This means that secondary infringers often owe not only their own damages, but also the damages of ‘their’ primary infringers. Although they might have recourse against the primary infringers themselves, the practical relevance of this is questionable. Since intermediaries have deeper pockets and are easily identifiable, they end up paying the damages, with little prospect of recovering anything from their users who infringed. Does solidary liability therefore always make sense from a policy standpoint in the realm of IP infringements?35 Two key issues merit further consideration. First, online IP infringements are mass torts. Unlike in typical accessory liability scenarios, litigation usually involves hundreds of cases of primary wrongdoing. This raises the stakes very high for every instance of such litigation. Secondly, the online environment is broadly anonymous. For accessories, this means that recovering any damages after paying them can be practically impossible, not only due to the transaction costs involved in mass torts, but also due to the anonymity of the primary infringers. Thinking of alternatives, the level of aggregation of damages could depend on the sub-type of secondary liability.
Especially in cases of intentional aiding and abetting,36 it is understandable that the consequences are made harsher by forcing the primary and secondary infringers into a solidary debtorship. However, the reason for doing the same for negligence-based secondary infringers is less clear, especially when one considers the mass character and anonymity of primary wrongdoers. The question of aggregation cannot be disconnected from the question of calculation. If intermediaries are exposed to effective disgorgement of illicit profits for their own wrongful contributory acts, then imposing any additional compensatory burden stemming from solidary liability with the primary infringer’s actions transforms the entire liability scheme into damages with a punitive character. And as theory shows, punitive damages can have effects equivalent to prohibitory injunctions.37 This is why not only the calculation of the secondary

34  Angelopoulos (n. 1) 328 ff (asking the same).
35  ibid. (asking the same).
36  ibid. 488.
37  See Louis Kaplow and Steven Shavell, ‘Property Rules Versus Liability Rules: An Economic Analysis’ (1996) 109 Harv. L. Rev. 715; John Golden, ‘Injunctions as More (or Less) than “Off Switches”:


infringer’s damages is crucial. If secondary infringers are subject to potential aggregation of the damages caused by primary infringers, the calculation of the damages of the primary infringers cannot be neglected either. Overlooking the issue of aggregation can drive suboptimal policy outcomes for the design of secondary liability doctrines. Davies, for instance, argues that it is precisely the issue of aggregation of damages for accessories that has held back the development of accessory tort liability in the UK. The worry is that a small proportion of their fault could expose them to the much larger damages of primary wrongdoers.38 And since they are more readily available, they might become easy targets for collecting the entire damages claims, with barely any recourse to the primary wrongdoers. He also points out that some jurisdictions, such as the United States, are already moving away from solidary liability towards a system of proportionate apportionment of liability.39 Perhaps a way forward would be to think about alternative ways of apportioning damages which are in line with the logic behind their initial calculation. In any event, designing any liability conditions without considering the effects of aggregation skips some very important questions.
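The gap between the two regimes can be put in stylized numbers. The following sketch uses purely hypothetical figures (a total harm of 100,000 with 10 per cent of the fault attributed to the intermediary); it is an illustration of the aggregation logic discussed above, not drawn from any case:

```python
def exposure(total_harm: int, own_share_pct: int) -> dict:
    """Intermediary's monetary exposure under two aggregation regimes.

    total_harm: the full harm caused by the mass of primary infringements
    own_share_pct: the share of fault attributed to the intermediary itself
    """
    return {
        # Proportionate apportionment: the intermediary pays only its share.
        "proportionate": total_harm * own_share_pct // 100,
        # Solidary (joint and several) liability: the rightholder may collect
        # the full sum from the intermediary, leaving it to seek recourse
        # against the often anonymous primary infringers.
        "solidary": total_harm,
    }

# Hypothetical figures only.
print(exposure(100_000, 10))  # {'proportionate': 10000, 'solidary': 100000}
```

The point of the sketch is Davies’ worry in numeric form: under solidary liability the intermediary’s exposure scales with the primary wrongdoing as a whole, not with its own share of fault.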

3.3  Scope and Goals of Injunctions

Injunctions have different scopes in each of these pillars. Any injunction against primary (direct) infringers requests them to cease their own wrongful behaviour. However, any injunction directed against secondary (indirect) infringers asks them to stop other people’s behaviour by adjusting their own. Clearly, secondary infringers cannot usually completely stop other people’s behaviour, unless they shut down their non-editorial services completely. They cannot coerce others into choices they do not want to make.40 If this fact is not reflected in the scope of an injunction, the consequence is that the secondary infringer is prohibited from engaging in lawful acts. Even worse, there may be no second chance for these players, and any finding of liability will simply mean the

Patent-Infringement Injunctions’ Scope’ (2012) 90 Texas L. Rev. 1402; Paul Heald, ‘Permanent Injunctions as Punitive Damages in Patent Infringement Cases’, Illinois Public Law & Legal Theory Research Paper no. 10-38 (2011) (discussing the ‘analogy injunctions can bear to punitive damages’); Henry Smith, ‘Mind the Gap: The Indirect Relation Between Ends and Means in American Property Law’ (2009) 94 Cornell L. Rev. 966 (discussing ‘property rules’ as being ‘embodied in injunctions and punitive damages’).
38  See Davies (n. 27) 216.
39  See American Law Institute, Restatement of the Law: Torts—Apportionment of Liability (1999) s. 17.
40  In this regard, consider the Daimler case, in which the CJEU held that lack of control over removal of trade mark use means that the previously infringing behaviour stops being infringing once control over the other party’s actions is relinquished—see C‑179/15 Daimler AG v Együd Garage Gépjárműjavító és Értékesítő Kft [2016] ECLI:EU:C:2016:134, para. 41 (noting ‘[h]owever, only a third party who has direct or indirect control of the act constituting the use is effectively able to stop that use and therefore comply with that prohibition’).


end of a service or a firm. Some courts are aware of this,41 but designing the wording of such injunctions is particularly difficult. Lemley and Weiser even argue that, for these reasons, we should favour damages over injunctions in such cases.42 Angelopoulos, on the other hand, argues the opposite; namely, that injunctions and not damages should govern situations of negligence-based secondary liability.43 An excellent example of how problematic it can be to issue an injunction is the well-known Napster case in the United States, which continued after the decision of the Ninth Circuit44—upholding contributory and vicarious liability—before the District Court, which had to decide on the form of the injunction.45 The District Court first enjoined Napster from copying, downloading, uploading, transmitting, or distributing copyrighted sound recordings, while the plaintiffs were obliged to provide the titles of their works, the names of the artists, one or more files on the Napster system, and a certification of ownership. Soon, however, the RIAA complained that Napster’s name-based filtering system was ineffective. The District Court ordered Napster to implement a better technical solution in order to also prevent misspelled and similar infringing files from appearing on the network. So Napster purchased access to a database of song titles with common misspellings, and designed its own automated filter to search for those as well. It also licensed audio-recognition software based on fingerprinting technology from Relatable. Even though Napster’s filter success rate was claimed to be 99.4 per cent, this was insufficient for the District Court even if true. Judge Patel stated during one of the hearings that ‘[t]he standard is to get it down to zero, do you understand that?’46 As a consequence, Napster shut down.
It is clear that the judges were trying to punish a previously negligent actor by requiring super-optimal care.47 However, as with damages, the question remains how to design injunctions that are not de facto terminal. Discussions would be less dramatic if we knew that the defendants would simply need to learn their lesson, internalize the costs, and move forward. Compared to the previous two categories, innocent third parties are subject only to a special type of injunction. This can come in the form of orders to assist enforcement by blocking, filtering, degrading, or providing information. As the European system shows, the target of such orders is not a full prohibition of the illegal behaviour of the addressee, but rather some form of positively defined assistance in enforcement. In other words,

41  See Oberster Gerichtshof [Austrian Supreme Court] (OGH) [2014] 4Ob140/14p (Aust.) (arguing that an accessory can only be prohibited from carrying out those acts that are wrongful).
42  See Mark Lemley and Philip Weiser, ‘Should Property or Liability Rules Govern Information’ (2007) 85 Texas L. Rev. 783 (‘the inability to tailor injunctive relief so that it protects only the underlying right rather than also enjoining noninfringing conduct provides a powerful basis for using a liability rule instead of a property rule’).
43  See Angelopoulos (n. 1) 491.
44  See A&M Records Inc. v Napster Inc., 239 F.3d 1004 (9th Cir. 2001) (US).
45  See A&M Records Inc. v Napster Inc., 2001 US Dist. LEXIS 2169 (ND Cal. 5 March 2001) (US).
46  Evan Hansen and Lisa Bowman, ‘Court: Napster filters must be foolproof’ (Cnet News, 12 July 2001).
47  See Michael Einhorn, ‘Copyright, Prevention, and Rational Governance: File-Sharing and Napster’ (2001) 24 Colum. J. of L. & Arts 449, 459–60.


any order is mandatory rather than prohibitive. The CJEU preaches that prevention is the core goal of the measures. In Tommy Hilfiger, the Court stressed that ‘[n]or can the intermediary be required to exercise general and permanent oversight over its customers. By contrast, the intermediary may be forced to take measures which contribute to avoiding new infringements of the same nature by the same market-trader from taking place.’48 I have argued elsewhere49 that if we were to accept that an injunction against innocent third parties can restrict their full conduct, we would indirectly expand the scope of exclusive rights, since an injunction would put otherwise abstractly allowed acts under the reservation of a rightholder’s consent. In Europe, the CJEU, and at least some national courts, seem to be aware of this. The CJEU stresses that these measures are of a different nature. The UK and Germany, despite different legal traditions, seem to converge on the consensus that these measures should prescribe positive actions, and not be worded or interpreted as prohibitory edicts.50

3.4  Costs of Injunctions

A related question concerns the costs structure associated with the remedies. By costs, I mean both compliance and procedural costs. Compliance costs are costs incurred in carrying out orders granted by the courts. If those costs are incurred by infringers, there is little reason to question that they should be the ones bearing the costs of compliance. After all, they are to blame for the infringements, hence they should bear the consequences. The same can be argued for procedural costs, since the litigation is a consequence of their actions too. It is, however, well known that some countries, like the United States, do not generally compensate procedural costs, and that this drives practical liability outcomes too. However, the situation is different for innocent third parties that are exposed to the third pillar of accountability for assistance, that of injunctions against innocent third parties. These parties are asked to assist in enforcement and not to answer for their own actions. It is therefore more understandable that policy should relieve them from procedural costs, and even compensate them for the inconvenience of compliance. In the absence of such a regime, the costs structures for infringers and innocent third parties become identical. Combined with the fact that no wrongfulness has to be proven on the side of the addressees, this creates a perverse incentive to choose this cause of action even in cases where the tortfeasors are known and could potentially be more appropriate candidates to correct the wrongdoing. Thus, in ‘remedial competition’ these measures

48  C-494/15 Tommy Hilfiger Licensing LLC and Others v Delta Center as [2016] ECLI:EU:C:2016:528, para. 34 (this builds on the earlier case law, in particular C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474).
49  See Husovec (n. 3) 107–8.
50  For the UK, see Chapter 20. For Germany, see analysis of case I ZR 64/17, para. 57 in Husovec (n. 12).


can inflate the aggregate compliance costs of parties that have never themselves acted wrongfully. For these and other reasons, some countries distinguish the costs structures of actions against intermediaries as infringers and as innocent third parties. The courts in the UK and Canada have recognized a need to compensate the latter for compliance.51 On the other hand, the French Cour de cassation rejected the idea.52 The German legislator decided to intervene by immunizing some types of providers from out-of-court and pre-trial costs, although without offering any compensation for compliance.53

4.  Putting the Cart before the Horse

As the previous debate has shown, there is a lot of ‘colour’ to any determination of the liability of intermediaries. Perhaps we should use these components to inform our design of the conditions for such liability. A debate of this type might be less difficult than it appears at first sight. In her book, Angelopoulos convincingly shows that intent-based accessory liability is resolved well through domestic laws, and offers the best starting point for European (and perhaps global) convergence.54 At the same time, her work shows that it is much harder to achieve convergence on standards of negligence-based secondary liability. A reason for this may have less to do with the factors triggering such liability than with the consequences that each country traditionally attaches to them. Two systems each assigning a different gravity of consequences are less likely to agree on when to trigger liability, even if they have the same policy outlooks and goals. Agreement is slowed down by the fact that where one scholar anticipates damages subject to solidary liability with primary infringers, the other thinks of stand-alone compensatory damages subject to proportionate apportionment. To provide a metaphor, agreeing on when policemen in two countries should use their guns is hard when those guns are pre-loaded with different projectiles. Hitting a man with a plastic or bean-bag bullet is not the same as using a dum-dum bullet. Similarly, a firm hit with two different sanctions will internalize them differently. Discussing when without clarifying what then seems very confusing and, at the very least, incomplete. To facilitate convergence, we should, as a first-order policy question, understand the

51  In the UK, this was the outcome of the Cartier case—Cartier International AG and others (Respondents) v British Telecommunications Plc and another (Appellants) [2018] UKSC 28 (UK). In Canada, a similar principle was postulated in the Rogers case—see Rogers Communications Inc. v Voltage Pictures, LLC, 2018 SCC 38 (Can.). Both decisions relate to the so-called Norwich Pharmacal jurisdiction.
52  The Cour de cassation rejected compensation of such costs. See Cour de cassation, Civil Chamber 1 [2017] Case no. 16-17.217, 16-18.298, 16-18.348, 16-18.595 (Fra.).
53  For a description of the current German approach, see Husovec (n. 12).
54  See Angelopoulos (n. 1).


ensuing consequences. Only as the next policy question can we discuss the trigger-factors of such individual consequences. In the comparative analysis, the second pillar of liability clearly matters the most. However, intent-based and negligence-based situations might present different policy challenges. We may want to distinguish intent-based accessories, as they are the most implicated in the wrongs of others and are punished virtually everywhere. What kind of damages should they be asked to pay? Do we aggregate them with those of primary wrongdoers? How do we design injunctions against them? Can we agree on a common set of principles? In parallel, we might unpack the situation of negligent secondary infringers. Are they all the same? What kind of damages should they be asked to pay? Do we really want to aggregate their own obligation to pay damages with that of primary wrongdoers? If not, what alternatives can we work out? If yes, under what circumstances? And how do we design injunctions against them? Is it even possible to come up with injunctions that do not mean the shutdown of a service? And, lastly, if countries also developed the system of injunctions against innocent third parties, how should they differ from and interact with the previous two? Who should bear the costs of what? And should burdens of proof differ across all three pillars? These all seem like very pressing questions to answer before we can design optimal conditions triggering anything.

5.  Conclusions

Adopting a consequences-based approach might help to achieve convergence of different models. To utilize this approach, comparative work first needs to recognize the basic vocabulary of intermediary liability for user-generated content incorporated in the three pillars of liability. Not all countries will rely on the same mechanisms, or offer them in the same form; however, a common vocabulary will allow us to communicate and discuss their contents, and the policy goals behind them, more clearly. I do not think we are far from accepting such a vocabulary. Secondly, legal analysis needs to better pair legal consequences (e.g. solidary debtorship under disgorgement of illicit profits) with their ability to tackle specific challenges within each liability mechanism (e.g. compensation for inducement, design of injunctions for negligence). This will allow us to better see the regulatory toolkit, and thus respond more sensibly to any questions in the liability design. Only as a third step should we try to connect these consequence–mechanism pairs with the realities of online services and the requirements triggering them. This way, the what and when questions are interconnected, allowing the latter to internalize the former’s findings. And, hopefully, such an approach could become more conducive to establishing a comparative consensus.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Chapter 5

Empirical Approaches to Intermediary Liability

Kristofer Erickson and Martin Kretschmer

Legal theory has failed to offer a convincing framework for the analysis of the responsibilities of online intermediaries. Our co-contributors to this Handbook have identified a wide range of contested issues, from due process to costs to extraterritorial matters. This chapter considers what empirical evidence may contribute to these debates. What do we need to know in order to frame the liability of intermediaries and, a fortiori, what does the relationship between theory and empirics imply for the wider issue of platform regulation? While the liability of online intermediaries first surfaced as a technical issue with the emergence of internet service providers (ISPs) during the 1990s, recently the focus has shifted to the dominance of a handful of global internet giants that structure everything we do online. Platform regulation has become the central policy focus. There is an awareness of the increasing importance of communication between users in constituting a digital public sphere, and simultaneously greater pressure to control online harm (be it relating to child protection, security, or fake news). The allocation of liability in this context becomes a key policy tool. But we know very little about the effects of this allocation for the different stakeholders. This is in part due to secrecy about the rules under which platforms operate internally. As online users, we are governed by processes and algorithms that are hidden from us. If the rules are the result of algorithmic or artificial intelligence (AI) decision-making, even service providers themselves may not fully understand why decisions are made. Still, platforms see their rules of decision-making as key to their competitive advantage as firms. We live in what has been catchily labelled a ‘black box society’.1

1  See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015). See also Chapter 35.

© Kristofer Erickson and Martin Kretschmer 2020.


So how can we as researchers open the box to let in some empirical air? There is a global trend towards increased reporting requirements for platforms. Most prominently, the German platform law of 2017—NetzDG—requires all for-profit social media services with at least two million registered users in Germany to remove obviously illegal content within twenty-four hours of notification.2 There are obligations to report every six months and high sanctions for failures to comply (fines of up to €5 million that can be multiplied by ten). During the first six months of the law’s operation (January–June 2018), Facebook received a total of 1,704 takedown requests and removed 362 posts (21.2 per cent), Google (YouTube) received 214,827 takedown requests and removed 58,297 posts (27.1 per cent), and Twitter received 264,818 requests and removed 28,645 (10.8 per cent). These are much lower numbers than were expected.3 There are also a number of antitrust inquiries that have extracted sensitive data from platforms, in particular the European Commission’s investigations of Google under EU competition law.4 The Australian Competition and Consumer Commission’s digital platforms inquiry is considering the establishment of a new platform regulator with wide-ranging information-gathering and investigative powers. The regulator would monitor platforms with revenues from digital advertising in Australia of more than AU$100 million. These firms would be subject to regular reporting requirements with the aim of controlling whether they are engaging in discriminatory conduct, for example by predatory acquisition of potential competitors or advertising practices that favour their own vertically integrated businesses.5 So in the future, there is likely to be much greater public scrutiny and knowledge-gathering about platforms’ practices.
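The NetzDG removal rates quoted above can be reproduced directly from the reported counts. The following snippet simply recomputes the percentages from the request and removal figures in the text, as a sanity check on the reported data:

```python
# NetzDG transparency figures for January-June 2018, as quoted in the text:
# (takedown requests received, posts removed).
reports = {
    "Facebook": (1_704, 362),
    "Google (YouTube)": (214_827, 58_297),
    "Twitter": (264_818, 28_645),
}

for platform, (requests, removed) in reports.items():
    rate = 100 * removed / requests  # removal rate in per cent
    print(f"{platform}: {removed}/{requests} removed = {rate:.1f}%")
```

Running it yields 21.2 per cent, 27.1 per cent, and 10.8 per cent respectively, matching the figures reported by Echikson and Knodt.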
The paradox is that we will know more only by the time new regulatory regimes, and a new framework for online intermediary liability, have been selected. We appear to be in the midst of a paradigm shift in intermediary

2  See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.), s. 3(2)(2). The obligation to remove and block content relates to a specific list of criminal offences (not including intellectual property right infringements), and there is a carve-out exempting platforms that support communications between individuals (this is designed to exempt professional networks, sales platforms, games, and messaging services).
3  See William Echikson and Olivia Knodt, ‘Germany’s NetzDG: A Key Test for Combatting Online Hate’, Centre for European Policy Studies (CEPS) Report no. 2018/09 (2018).
4  See European Commission, ‘Press Release: Commission fines Google €1.49 billion for abusive practices in online advertising’ (Case 40411 Google Search (AdSense)) (20 March 2019); European Commission, ‘Press Release: Commission fines Google €4.34 billion for illegal practices regarding Android mobile devices to strengthen dominance of Google’s search engine’ (Case 40099 Google Android) (18 July 2018); European Commission, ‘Press Release: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service’ (Case 39740 Google Search (Shopping)) (27 June 2017).
5  See Australian Competition and Consumer Commission (ACCC), Digital platforms inquiry (preliminary report, published 10 December 2018). An independent regulator has also been proposed in the UK to oversee a new statutory ‘duty of care’ for platforms. See Department for Digital, Culture, Media & Sport and Home Department, Online Harms (White Paper, CP 59, 2019).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

liability, moving from an obligation to act once knowledge is obtained to an obligation to prevent harmful content appearing. This imminent shift towards filtering, even general monitoring,6 is exemplified by the controversial Article 17 (formerly Art. 13) of the 2019 EU Directive on Copyright in the Digital Single Market, which makes certain platforms (‘online content sharing services’) directly liable under copyright law for the content uploaded by their users.7

This chapter evaluates what we already know after almost two decades of operation of one liability regime: the so-called safe harbour introduced in the United States by the Digital Millennium Copyright Act (DMCA), and in a related form in EU Member States with the e-Commerce Directive.8 Immunity for ‘Online Service Providers’ that act expeditiously to remove infringing material was first introduced in the United States under section 512 of the US Copyright Act (as amended by the DMCA 1998). Section 512 specifies a formal procedure under which service providers need to respond to requests from copyright owners to remove material. Rightholders who wish to have content removed must provide information ‘reasonably sufficient to permit the service provider to locate the material’ (such as a URL) and warrant that the notifying party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed. The practice is known as ‘notice and takedown’. Importantly, ‘counter notice’ procedures are also specified under which alleged infringers are notified that material has been removed and can request reinstatement.

Under the EU Directive on Electronic Commerce (2000/31/EC), hosts of content uploaded by users become liable only upon obtaining knowledge of the content and its illegality. The safe harbour of the e-Commerce Directive applies to all kinds of illegal activity or information, not only copyright materials.
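The formal elements of a section 512 notice described above can be sketched as a simple completeness check. This is an illustrative toy, not any OSP’s actual intake logic; the field names are our own shorthand for the statutory elements:

```python
# Hypothetical completeness check for the formal elements of a s. 512
# notice. Field names are illustrative shorthand, not any OSP's schema.
REQUIRED_FIELDS = [
    "signature",             # physical or electronic signature
    "work_identified",       # identification of the copyrighted work
    "material_location",     # e.g. a URL 'reasonably sufficient' to locate it
    "contact_information",   # how to reach the complaining party
    "good_faith_statement",  # belief that the use is not authorized
    "accuracy_statement",    # accuracy, and authority to act for the owner
]

def missing_elements(notice: dict) -> list:
    """Return the formal elements absent from a notice."""
    return [f for f in REQUIRED_FIELDS if not notice.get(f)]

notice = {
    "work_identified": "Song X",
    "material_location": "https://example.com/allegedly-infringing",
    "good_faith_statement": True,
}
print(missing_elements(notice))
# → ['signature', 'contact_information', 'accuracy_statement']
```

An intake form built this way also helps explain a finding discussed later in the chapter: once Google moved to structured web forms, formal error rates collapsed, because incomplete notices simply cannot be submitted.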
But unlike the DMCA, the e-Commerce Directive does not regulate the procedure for receiving the necessary knowledge. This is left to the Member States. The regime is sometimes characterized as ‘notice and action’.9 Even though the notice and takedown regime established under the DMCA is narrow (applying to copyright only) and limited to one jurisdiction, it has become the dominant paradigm for organizing liability of online intermediaries. This is for at least two reasons: (1) through Google’s practices, notice and takedown has become a global standard even in jurisdictions that do not have safe harbour laws;10 (2) copyright liability affects a far

6  Expressly prohibited as an obligation on service providers by Art. 15 of the EU e-Commerce Directive and the case law of the Court of Justice of the European Union (CJEU). See also Chapter 28.
7  See Directive 2019/790/EU of the European Parliament and the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17.
8  See Digital Millennium Copyright Act, Pub. L. no. 105-304, 112 Stat. 2860; Directive 2000/31/EC of the European Parliament and the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.
9  cf. Chapter 27.
10  According to Google’s 2019 Transparency Report, in total more than four billion copyright takedown requests have been received. They are processed under DMCA formalities, regardless of whether the country in which the request was filed prescribed those formalities or had any safe harbour laws: ‘It is our policy to respond to clear and specific notices of alleged copyright infringement. The form of


wider range of user practices than, for example, defamation, obscenity, and other forms of illegal use. So, the lessons from the operation of the notice and takedown regime may have a wider application, addressing questions of transparency, due process, cost allocation, and freedom of expression.

We now proceed to review the body of empirical studies on copyright intermediary liability during the twenty-year period from 1998 (the year that the DMCA was passed) to 2018. We use a snowball sampling method enhanced by a seed sample of empirical papers drawn from the Copyright Evidence Wiki, an open-access repository of findings related to copyright’s effects.11 Based on the initial sample, we searched forwards and backwards in time among those articles to identify further published research. The sample is focused exclusively on work deemed empirical (containing new data gathered or constructed by the authors). It includes articles, books, reports, and published impact assessments.

Based on our survey of this body of research, we identify five key sub-fields of empirical inquiry pursued so far. These relate to: the volume of takedown requests, the accuracy of notices, the potential for over-enforcement or abuse, transparency of the takedown process, and the costs of enforcement borne by different parties (see Table 5.1). Each of

Table 5.1  Thematic foci of empirical research on notice and takedown, with key studies per policy issue

Volume of use of notice and takedown: Urban and Quilter (2006); Seng (2014); Karaganis and Urban (2015); Cotropia and Gibson (2016); Strzelecki (2019)

Accuracy of notices: Urban and Quilter (2006); Seng (2015); Bar-Ziv and Elkin-Koren (2017); Urban, Karaganis, and Schofield (2017)

Over-enforcement/abuse: Ahlert and others (2004); Nas (2004); Urban, Karaganis, and Schofield (2017); Bar-Ziv and Elkin-Koren (2017); Erickson and Kretschmer (2018); Jacques and others (2018)

Transparency/due process: Seng (2014); Perel and Elkin-Koren (2017); Fiala and Husovec (2018)

Balancing innovation/costs: Heald (2014); Schofield and Urban (2016); Urban, Karaganis, and Schofield (2017)

notice we specify in our web form is consistent with the Digital Millennium Copyright Act (DMCA) and provides a simple and efficient mechanism for copyright owners from countries around the world. To initiate the process to remove content from Search results, a copyright owner who believes a URL points to infringing content sends us a take-down notice for that allegedly infringing material. When we receive a valid take-down notice, our teams carefully review it for completeness and check for other problems. If the notice is complete and we find no other issues, we remove the URL from Search results.’ See Google, Transparency Report.
11  See Copyright Evidence Wiki, Empirical Evidence for Copyright Policy, CREATe Centre, University of Glasgow.


these areas is discussed in further detail in the remainder of the chapter. We conclude by identifying some of the gaps and limitations in this existing body of scholarship on intermediary liability for copyright, and offer some recommendations for future research.

1.  Volume of Notices

In their 2006 study of notice and takedown, Urban and Quilter found that Google had received 734 notices between March 2002 (when the Chilling Effects12 database started collecting Google reports) and August 2005, the cut-off date of their study.13 The majority of takedown requests in their sample related to search engine links, but the overall quantity was relatively small. Since then, a number of follow-up studies have noted an explosion in the quantity of DMCA section 512 notices sent by rightholders to online service providers (OSPs), including Google. This sharp increase in volume did not occur immediately after the introduction of the DMCA and e-Commerce Directive; rather, uptake took nearly a decade and accelerated from 2010 onwards.14 There is a range of explanations for this increase, from rightholder frustration following rejection of the Stop Online Piracy Act (SOPA)15 to new technical affordances introduced by Google to receive and process requests.

A general observation from this research is that notice and takedown processes can, at the very least, be considered successful in terms of uptake and use by rightholders and OSPs, with the safe harbour the regime provides to internet intermediaries viewed as important for commercial innovation.16 However, the volume of notices may also produce challenges for OSPs in terms of cost of compliance, with implications for other thematic areas such as due process. Several empirical studies have examined these issues in significant detail.17

A 2014 study by Seng used data on takedown notices received by Google, and by other services which report to the database, obtained from the Chilling Effects repository and from Google’s Transparency Report. The dataset consisted of 501,286 notices and some

12  The Chilling Effects (now Lumen) database was founded by Wendy Seltzer in 2001 and is currently maintained by the Berkman Klein Center for Internet and Society at Harvard University.
The project website collects and enables analysis of takedown requests received by online intermediaries.
13  See Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act’ (2006) 22 Santa Clara Computer and High Tech. L.J. 621.
14  See Daniel Seng, ‘The state of the discordant union: An empirical analysis of DMCA takedown notices’ (2014) 18 Va. J. of L. & Tech. 369.
15  See Daniel Seng, ‘“Who Watches the Watchmen?” An Empirical Analysis of Errors in DMCA Takedown Notices’, SSRN Research Paper no. 25632023 (2015) 3.
16  See Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown: Online Service Provider and Rightsholder Accounts of Everyday Practice’ (2017) 64 J. Copyright Soc. 371.
17  Recently see Artur Strzelecki, ‘Website removal from search engines due to copyright violation’ (2019) 71(1) Aslib J. of Information Management 54–71.


56,991,045 individual takedown requests.18 The dataset covered the period from 2001 to 2012, including the period of significant increase in volume. Seng notes that the increase affected not only Google but other OSPs as well, lending support to his view that legislation was key to explaining the increase. For example, Twitter submitted fewer than 500 notices to the repository in 2010, but reported more than 4,000 requests the following year in 2011, and more than 6,000 in 2012.19

Seng found that a large majority of notices were sent by industry associations, collecting societies, and third party enforcement agencies. The British Phonographic Industry (BPI) was the top issuer of notices in the study, with 191,790 notices sent, or 38 per cent of the total sample. Agents such as Degban and WebSheriff also made up a significant portion of the total volume of notices sent.20 The music industry accounted for the largest share of total notices sent, by sector, with nearly 60 per cent of the total notices. Seng observed an increasing concentration among the top issuers of notices over time. The top 50 issuers accounted for 23.9 per cent of notices in 2010, but reached 74.7 per cent of notices in the dataset for the year 2012.21 In other words, a smaller number of organizations, such as BPI and RIAA, generated the bulk of notices, skewing the average. Most providers identified in the dataset issued only one notice. Industry organizations and enforcement agents also packed more claims and requests (sometimes numbering in the thousands) into each notice, while individual claimants tended to include fewer claims and requests together in the same notice.

Other studies have examined the volume of takedown notices received by non-commercial institutions such as universities and academic libraries.
Both groups receive fewer takedown requests than commercial internet companies, but empirical studies show changing practices over time and concern about possible volume increases in the future.

Cotropia and Gibson surveyed US four-year colleges and universities, drawing from a population of 1,377 institutions.22 They sent the survey to the 680 institutions that had a DMCA agent registered with the US Copyright Office. The presence of registered DMCA agents was skewed towards large, higher-ranked institutions. Among the 532 institutions that had working contact information and received the survey, the authors achieved a response rate of approximately 15 per cent (80 responses).23 The average number of takedown requests received per institution was 200 (standard deviation 329.35). The majority of institutions surveyed (72.5 per cent) received between 0 and 270 notices, with a smaller number of schools receiving larger quantities of notices. Some 12.5 per cent of responding institutions spent more than 500 hours per year dealing with takedown requests, but most institutions (62 per cent) reported spending 50 hours or less.24 The majority of DMCA notices received (67.6 per cent) related to cases in which students used institutions’

18  See Seng (n. 14) 382.
19  ibid. 444.
20  ibid. 448.
21  ibid. 393.
22  See Christopher Cotropia and James Gibson, ‘Commentary to the US Copyright Office Regarding the Section 512 Study: Higher Education and the DMCA Safe Harbors’, SSRN Research Paper no. 2846107 (2016).
23  ibid. 4.
24  ibid. 12.


networks to download copyright-infringing material onto personal computers (falling under the transitory communications safe harbour under section 512(a)).25 Universities were found to be using a number of technical measures to deter copyright-infringing behaviour by users, in a similar manner to the DMCA-plus OSPs discussed by Urban and co-authors.26 Techniques used by universities included requiring individual network logins, port banning/firewalls, packet shaping, bandwidth throttling, and monitoring of suspicious network traffic. Some institutions reported that these techniques had reduced the number of DMCA notices received.27

The findings were limited by the low response rate and the potential for response bias this introduced. Since universities may be wary of reputational damage or increased liability for infringing behaviour, there are disincentives to disclose the number of DMCA requests received or to discuss copyright-infringing behaviours in general. However, the data offer original and unique insight into the sophistication of legal representatives within higher education with regard to DMCA provisions, and the specific techniques institutions have developed to confront these challenges.

2.  Accuracy of Notices

One important focus for empirical investigation has been the accuracy of notices received by service providers. Here, research is concerned with the incentive structure for the various parties in the notice and takedown regime, and with the regime’s effectiveness in identifying and removing actually infringing content. Since the volume of takedown requests being sent has increased substantially, the problem of accuracy is potentially amplified. Directly studying the accuracy of notices is challenging without access to the notices themselves and, ideally, the targeted work. The Chilling Effects (now Lumen) database has been used extensively as a source of data for empirical research on accuracy.28

One finding consistent across studies is that a small number of issuers are responsible for a disproportionate share of the takedown requests received by platforms, with consequences for the accuracy of notices. Bar-Ziv and Elkin-Koren found that 65 per cent of their sample of takedown notices were sent by a single entity,29 while a study by Urban, Karaganis, and Schofield identified a single individual responsible for 52 per cent of the takedown notices in their separately collected sample.30 The concentration of issuers appears to be related both to the ease of filing using automated web tools and to the emergence of third party services which aggregate requests and work on behalf of rightholders. In fact, only 1 per cent of the copyright notices analysed by Bar-Ziv and Elkin-Koren were filed by private individuals, while 82 per cent of the requests were filed by third party

25  ibid. 13.
26  See Urban, Karaganis, and Schofield (n. 16) 382.
27  See Cotropia and Gibson (n. 22) 14.
28  See Lumen (n. 12).
29  See Sharon Bar-Ziv and Niva Elkin-Koren, ‘Behind the Scenes of Online Copyright Enforcement: Empirical Evidence on Notice & Takedown’ (2018) 50 Connecticut L. Rev. 1.
30  See Urban, Karaganis, and Schofield (n. 16) 99.


enforcement services.31 The authors note this has a potentially negative side effect: increasing the number of steps between actual rightholders and recipients of takedown notices may contribute to the greater number of inaccuracies observed in bulk requests sent by third party agencies. As Seng put it, ‘[i]f the price of each arrow is low or minimal, to improve his chances, the reporter will fire off as many arrows as he could to hit a target, regardless of the accuracy or precision.’32

In his 2015 study of notice accuracy, Seng observed some improvements in accuracy rates since earlier studies, but worrying issues related to the substantive content of claims remained. He analysed 501,286 notices for the presence or absence of formalities such as rightholder signature, statement of good faith, statement of accuracy, and statement of authorization to act on behalf of a copyright owner. Seng found an extremely low quantity of errors among the notices analysed, decreasing over time: the error rate for formalities measured in 2012 was less than 0.1 per cent.33 This can be explained by Google’s adoption of a structured web form to intake notices, which requires senders to complete fields and provides instructions on how to complete them before sending.

Seng then analysed substantive errors—those related to the nature of the copyright claim itself. In order to do this, he collected notices where the name of the copyright owner had not been redacted in the Chilling Effects dataset, to evaluate whether the claim was legitimate. He identified three specific cases in which an employee of a company (rather than the copyright owner herself) was erroneously identified in the request.
Since individual notices can contain thousands of requests, these three systematic errors amounted to a total of 380,379 takedown requests, of which Google complied with over 90 per cent.34 As Seng elaborates, ‘what is alarming is the magnitude, frequency and systematic nature of these errors, which remained undetected and uncorrected for months on end. While we may excuse these errors on the basis that they arose from programs that are misconfigured with wrong information, automated systems propagated these errors across hundreds and thousands of takedown requests.’35 Seng reports that his findings represent a lower bound on the error rate, since he was unable to test accuracy in other ways, such as by observing removed content directly. Direct observation is rendered difficult by the swiftness with which requests are processed and content taken down.

Overall, it appears on the one hand that accuracy has been improved via automated systems, such as Google’s preferential ‘Trusted Copyright Removal Program’, by providing structured web forms, clear instructions, and negative consequences (revoked membership in the programme) for submitting inaccurate notices. On the other hand, ‘robonotices’,36 which may be generated in large numbers by third party enforcement agencies not closely tied to rightholders, can introduce and amplify errors affecting significant quantities of works.

31  See Bar-Ziv and Elkin-Koren (n. 29) 26.
32  Seng (n. 15) 48.
33  ibid. 19.
34  ibid. 36.
35  ibid. 37.
36  Joe Karaganis and Jennifer Urban, ‘The Rise of the Robo Notice’ (2015) 58 Comm. ACM 28, 28–30.



3.  Over-Enforcement and Abuse

Over-enforcement occurs when non-infringing material is removed, for example because the content has been erroneously identified (e.g. a false positive), or because either the sender or receiver of a notice has not sufficiently considered exceptions such as fair use. Depoorter and Walker argue that over-enforcement can be caused by a range of factors, including uncertainties in copyright law, the automation of enforcement, the frequent presence of both infringing and non-infringing uses on the same platform, and the high legal costs of defending one’s right to use copyright material.37

In the first major qualitative study of notice and takedown, Urban, Karaganis, and Schofield assessed the potential for over-enforcement by interviewing twenty-nine OSPs and six senders of high volumes of takedown notices. Respondents included ‘video and music hosting services, search providers, file storage services, e-commerce sites, web hosts, connectivity services, and other user-generated content sites’.38 Data were anonymized before publication. Overall, the authors found that notice and takedown procedures were taken seriously by both OSPs and rightholders. OSP respondents stated that the safe harbour provisions in the DMCA were central to their ability to operate, providing a stable framework for managing liability. On the other hand, some OSPs claimed that the fear of liability might lead them to over-enforce, as they struggled to decide what to do about inaccurate or invalid takedown requests.

The authors differentiated OSPs into groups depending on the nature of the takedown notices they received. The group termed ‘DMCA Classic’ received individual takedown requests from single rightholders and dealt with them using human review.39 However, some service providers received massive numbers of takedown notices (more than 10,000 per year), often sent by electronic means, which the authors deemed ‘DMCA Auto’.40 These OSPs developed computerized systems for dealing with the large volume of requests, and made these tools available to rightholders. These OSPs were not always able to engage in human review of bulk requests. A final group, deemed ‘DMCA Plus’, took further steps to limit the upload of content that might trigger notices, at the same time as offering more direct automated tools to rightholders to manage the detection and removal of potentially infringing content.41 Like the second group, these providers were unable to conduct human review of all requests, leading to issues of transparency, accuracy, and over-enforcement.

In addition to the concerns raised by Urban, Karaganis, and Schofield, another area of focus for research has been the protection of legitimate reuse of material, such as provided

37  Ben Depoorter and Robert Kirk Walker, ‘Copyright False Positives’ (2013) 89 Notre Dame L. Rev. 319.
38  Urban, Karaganis, and Schofield (n. 16) 376.
39  ibid. 379.
40  ibid. 382.
41  For additional discussion of ‘DMCA Plus’ behaviours, see Annemarie Bridy, ‘Copyright’s Digital Deputies: DMCA-plus Enforcement by Internet Intermediaries’ in John Rothchild (ed.), Research Handbook on Electronic Commerce Law (Edward Elgar 2016).


by fair use in the United States and by specific copyright exceptions in the UK and Europe. Due to the high volume of requests observed by researchers, and the seemingly strong incentives for platforms to over-comply with requests, there is potential for limitations and exceptions to copyright to be eroded in this system.

In one study of copyright exceptions, Erickson and Kretschmer longitudinally examined a dataset of user-generated parody videos hosted on YouTube, recording at yearly intervals whether the videos had been taken down.42 The research was conducted in collaboration with Jacques and others, who observed additional time periods. In all, the research covered a dataset of 1,839 videos between 2011 and 2016. The overall takedown rate across the whole four-year period was 40.8 per cent of videos, with 32.9 per cent of all takedowns attributable to copyright requests.43 Parodies varied in terms of uploader skill, parodic intent, and the nature of material borrowed to make the parody. Underlying musical works varied in terms of genre, territory, size of publisher, and commercial success. A hazards model (a statistical technique relating survival over time to one or more covariates) was used to analyse the effect of those variables on the likelihood that a given parody would be taken down.

The findings showed that parodist technical skill and production values were significant in reducing the odds of a takedown. More popular parodies with more views also had lower odds of being removed.44 Rightholder behaviour varied significantly by music genre: rock music rightholders were significantly more tolerant of parodies than hip-hop and pop music rightholders.45 The significance of borrowed sound recordings in predicting a takedown, while controlling for other aspects, suggests that algorithmic detection techniques such as Content ID are shaping rightholder behaviour.
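To make the survival-analysis setup concrete, the sketch below computes a Kaplan–Meier survival curve, a simpler cousin of the hazards model used in the study, over an invented set of parody observations; a full Cox model would additionally regress the hazard on covariates such as genre and production quality. All numbers here are made up for illustration:

```python
# Toy survival data for parody videos observed in yearly waves:
# (wave, event), where event=True means the video was taken down at
# that wave and event=False means it was still online when
# observation ended (right-censored). All values are invented.
observations = [(1, True), (2, False), (2, True), (3, False),
                (3, True), (4, False), (4, False), (4, True)]

def kaplan_meier(data):
    """Return [(t, S(t))]: estimated probability of surviving past wave t."""
    survival, s = [], 1.0
    for t in sorted({t for t, event in data if event}):
        at_risk = sum(1 for ti, _ in data if ti >= t)        # still observed
        events = sum(1 for ti, e in data if ti == t and e)   # takedowns at t
        s *= 1 - events / at_risk
        survival.append((t, round(s, 3)))
    return survival

print(kaplan_meier(observations))
# → [(1, 0.875), (2, 0.75), (3, 0.6), (4, 0.4)]
```

A Cox model replaces the single curve with a hazard rate modelled as a function of covariates, which is what allows the study to attribute differences in takedown risk to genre, skill, and popularity.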
The availability of a parody exception to copyright in the UK did not appear to deter rightholders from issuing takedown requests. Artists originating from the United States were significantly more tolerant of parodies than their UK counterparts.

In a related study, Jacques and others found that the diversity of content is potentially harmed by automated takedown. Using the same dataset of 1,839 parodies, the authors checked whether music video parodies had been manually removed or blocked via the Content ID automated system. Differences in the way that YouTube informs would-be viewers about these missing videos allowed the researchers to distinguish between regular takedowns and Content ID blocking.46 The researchers attributed 32.1 per cent of the takedowns measured in 2016 to algorithmic takedown and 6.4 per cent to manual takedown.47

The authors then examined the effect of takedown on cultural diversity. They differentiated between ‘supplied’ diversity, which they define as diversity in the array of messages

42  See Kris Erickson and Martin Kretschmer, ‘This Video is Unavailable: Analyzing Copyright Takedown of User-Generated Content on YouTube’ (2018) 9 JIPITEC 75.
43  ibid. 83.
44  ibid. 86.
45  ibid. 87.
46  See Jacques and others, ‘The Impact on Cultural Diversity of Automated Anti-Piracy Systems as Copyright Enforcement Mechanisms: An Empirical Study of YouTube’s Content ID Digital Fingerprinting Technology’ (2018) 15(2) SCRIPTed 277, 291.
47  ibid. 298.


that could be watched on the platform, and ‘consumed’ diversity, which is measured in terms of what viewers actually choose to watch.48 They used the Simpson Index of diversity to calculate differences in the ‘effective number of parties’; that is, the concentration of availability of certain expressions, before and after takedowns are detected. The authors find that within the sample there is already a strong difference between ‘supplied’ and ‘consumed’ diversity, related to a skew in viewers’ preference for certain pop songs and specific parodies. This finding mirrors other research on the concentration (‘bottlenecking’) of content consumption online.49 Interestingly, the authors find that the application of automated takedown also reduces the effective number of parties in the sample, resulting in an overall loss of diversity of content. Their index of consumed diversity contracted from twenty-seven in 2013 to twenty in 2016 after takedowns had occurred.50 However, the effect of automated content blocking on diversity is overwhelmed by the built-in lack of diversity in demand (which occurs due to algorithmic sorting and the limited number of search results offered by YouTube). As the authors put it, ‘if [Content ID] removes the most popular parody, would the next most popular simply replace it, almost . . . as a forest fire which, in taking out the old trees, gives breathing space for new growth? If that is the case, then while the take-down . . . may be a personal tragedy for the creator of the most popular parody, this will have little to no effect on diversity or possibly even welfare.’51

Beyond studies of unanticipated effects on freedom of expression, another troubling possibility is that malicious actors could use copyright claims to remove content they find politically disagreeable, or for other arbitrary reasons unrelated to copyright.
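The ‘effective number of parties’ used in the diversity analysis above can be reproduced in a few lines. The view counts below are invented; only the calculation, the inverse Simpson index over consumption shares, follows the measure the authors describe:

```python
# Inverse Simpson index: the 'effective number of parties' implied by
# a set of consumption shares. View counts are invented.
def effective_number(view_counts):
    total = sum(view_counts)
    return 1 / sum((v / total) ** 2 for v in view_counts)

views_before = [50_000, 30_000, 10_000, 5_000, 5_000]  # five parodies online
views_after = [50_000, 30_000, 10_000]                 # two removed by takedown

print(round(effective_number(views_before), 2))  # → 2.82
print(round(effective_number(views_after), 2))   # → 2.31
```

Even though only the two least-watched parodies are removed here, the effective number of parties falls from 2.82 to 2.31: takedowns concentrate consumption, which is the diversity loss the authors measure.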
In one pair of studies, researchers tested the level of scrutiny that ISPs gave to takedown notices before acting. These studies each created simulated web pages containing non-infringing content, and then sent notices to ISPs asking for it to be removed.52 In both cases, the authors used work clearly in the public domain (published pre-twentieth century), minimizing doubt that the content could be infringing. The study by Ahlert and others used portions of a chapter of John Stuart Mill’s On Liberty, while the experiment by Sjoera Nas used a work by Eduard Douwes Dekker dating from 1871. The researchers sent takedown notices, purporting to originate from the non-existent ‘John Stuart Mill Heritage Foundation’ and the ‘E.D. Dekkers Society’, from anonymous email addresses.

In the two cases tested by Ahlert and others (one UK and one US ISP), the UK web host acted immediately to block the test web page, while the US ISP appeared willing to take action, but asked the researchers for further information before removing the content. Specifically, the US ISP asked the researchers to provide a statement ‘that the

48  ibid. 292.
49  See Matthew Hindman, The Myth of Digital Democracy (Princeton U. Press 2008).
50  See Jacques and others (n. 46) 299.
51  ibid. 303.
52  See Christian Ahlert, Chris Marsden, and Chester Yung, ‘How “Liberty” Disappeared from Cyberspace: the Mystery Shopper Tests Internet Content Self-regulation’ (2004); Sjoera Nas, ‘The Multatuli Project: ISP Notice and Takedown’ (2004).


complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner’ as well as a statement that the information contained in the takedown notice was accurate.53 The researchers decided not to pursue the experiment at that point.

In the Dutch example, Nas carried out a similar experiment using web pages created on ten Dutch ISPs. Of those, seven removed or blocked the web page containing public domain material by Dekker, sometimes before notifying the owner of the website. A further two ISPs ignored the requests, while one ISP refused to take the material down because it correctly determined it to be in the public domain.

These two experimental studies, while illustrative of the troubling fact that it ‘takes only a Hotmail account to bring a website down’,54 are limited in certain important respects. Employees of the targeted ISPs, having various levels of subject-area expertise, could not be expected to consistently identify a literary text in the public domain, even a well-known one. Because they targeted a limited number of ISPs using a controlled scenario, these studies were unable to provide data on the actual levels of abuse of notice and takedown mechanisms, which would considerably extend the usefulness of this research. However, both studies suggest that placing legal requirements on notice issuers (e.g. statements of ‘good faith’) may deter abusive behaviour.

116   Kristofer Erickson and Martin Kretschmer

4.  Due Process and Transparency

Procedural justice, in particular relating to the interests of users, is an issue that surfaces in a number of chapters in this Handbook.55 This has also been investigated empirically in relation to notice and takedown. In a 2018 study, Fiala and Husovec examined the economic incentives that drive over-removal of content and under-use of the counter-notification mechanism among users.56 The main problem identified by the authors is that, ‘according to theory and empirical evidence, [notice and takedown] leads to many false positives due to over-notification by concerned parties, over-compliance by providers, and under-assertion of rights by affected content creators’.57 To investigate the causes of under-use of counter-notification, the authors designed a laboratory experiment to model the relationship between service providers (platforms) and content creators. In the experiment, players were given the task of evaluating whether or not a maze had a valid solution, within a 15-second time limit. This was intended to simulate the real-world decision by platform providers about whether to comply with a takedown request. Creators were given more time to study the maze, simulating their familiarity with their own work. A subsequent round allowed the opportunity to ‘punish’ service providers for incorrect removal of content. A stimulus condition simulated a hypothetical dispute-resolution process in which creators were given more power and financial incentive to dispute incorrect decisions by providers. The authors ran the experiment with eighty subjects drawn from university students in the Netherlands, and used real payouts.

The researchers found that, unlike the baseline condition in which providers tended to over-enforce and creators tended not to dispute decisions, an alternative dispute resolution (ADR) treatment resulted in fewer mistakes by providers and a more profitable condition for creators overall.58 After repeated iterations of the game, the researchers observed that the existence of a credible mechanism through which mistakes can be identified and corrected, represented by the ADR process, decreased the rate of incorrect takedowns from 35 per cent to 19 per cent.59

Another problem for transparency in the notice and takedown process is the presence of automated or algorithmic methods of identification and removal. Algorithms are often complex, walled off behind commercial secrecy, not subject to public oversight, and unpredictable as they adapt to changing conditions over time. As Perel and Elkin-Koren write, ‘proper accountability mechanisms are vital for policymakers, legislators, courts and the general public to check algorithmic enforcement. Yet algorithmic enforcement largely remains a black box. It is unknown what decisions are made, how they are made, and what specific data and principles shape them.’60 Perel and Elkin-Koren advocate ‘black box tinkering’ as a method for uncovering the hidden functionality of algorithms and holding them accountable. In the context of the intermediary liability regime, this means conducting experiments on live platforms under conditions controlled by the researcher to test how algorithms such as YouTube’s Content ID system react to various inputs.

53  Ahlert, Marsden, and Yung (n. 52) 21. 54  Nas (n. 52) 6. 55  See in particular Chapters 34 and 35. 56  See Lenka Fiala and Martin Husovec, ‘Using Experimental Evidence to Design Optimal Notice and Takedown Process’, TILEC Discussion Paper no. 2018-028 (2018) . 57  ibid. 1.
Perel and Elkin-Koren did this by gathering data on the behaviour of OSPs over the life cycle of a typical uploaded work of user-generated content: from filtering at the moment of upload, to receipt and handling of takedown notices, to the removal of content and notification of the uploader. To accomplish this, they uploaded and tracked various purpose-made clips with paired controls. For example, one clip contained non-infringing footage but copyrighted music, while another contained only the footage. The researchers obtained ethical approval for the study from their university ethics committee, and notified the platforms at the conclusion of the study that they had conducted an experiment. They found that 25 per cent of the video-sharing websites and 10 per cent of the image-sharing websites tested in Israel made use of some ex ante filtering technology at the point of upload.61 Fifty per cent of the video-sharing websites removed infringing content on receipt of a notice, while only 12.5 per cent of the image-sharing websites did so. After removing content, all of the video-sharing websites tested did notify the uploader about the removal, while only 11 per cent of the image websites did so.62 58  ibid. 15. 59  ibid. 17. 60  Mayaan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Fla. L. Rev. 181, 184. 61  ibid. 208. 62  ibid. 209.
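The paired-control logic of such an experiment can be expressed as a small tally routine: each platform receives a treatment clip (containing the copyrighted element) and a control clip (identical but for that element), and per-platform rates are computed from the observed outcomes. The sketch below is illustrative only; the records and field names are hypothetical and not drawn from the study’s dataset.

```python
def rates(observations):
    """Share of platforms that filtered ex ante and that complied with
    takedown notices, counting only treatment clips; control clips exist
    to detect false positives (lawful content that was still touched)."""
    filtered, complied, platforms, false_positives = set(), set(), set(), set()
    for platform, clip, blocked_at_upload, removed_after_notice in observations:
        platforms.add(platform)
        if clip == "treatment":
            if blocked_at_upload:
                filtered.add(platform)
            if removed_after_notice:
                complied.add(platform)
        elif blocked_at_upload or removed_after_notice:
            false_positives.add(platform)  # control clip should never be touched
    n = len(platforms)
    return len(filtered) / n, len(complied) / n, false_positives

# Hypothetical observations from a black-box-tinkering-style run:
observations = [
    # (platform, clip, blocked_at_upload, removed_after_notice)
    ("video-host-A", "treatment", True,  True),
    ("video-host-A", "control",   False, False),
    ("video-host-B", "treatment", False, True),
    ("video-host-B", "control",   False, False),
    ("image-host-A", "treatment", False, False),
    ("image-host-A", "control",   False, False),
]

ex_ante, notice_compliance, false_pos = rates(observations)
```

Aggregating in this way keeps the unit of observation at the platform level, which is how the percentages reported by the study are expressed.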


The wide variation in practices between online platforms suggests problems with procedural justice in the Israeli setting observed by the researchers. They also noted the presence of false positives (removal of non-infringing content when asked to do so) as evidence of the failure of human/algorithmic systems of handling notice and takedown procedures.63 Methodologically, the authors noted that, ‘to study algorithmic enforcement by online intermediaries may often need to overcome different contractual barriers imposed by the examined platforms or software owners’.64 The authors highlight terms of use which prohibit such tinkering. Internet companies have not readily shared information with researchers about how they process and handle takedown requests, probably because they are wary of increased scrutiny from rightholders and regulators, or because the technical filtering mechanisms are a source of competitive advantage. In fact, none of the studies reviewed in this chapter obtained data with the cooperation of private companies, other than those made available via the Chilling Effects/Lumen database, or independently through experimentation such as by Perel and Elkin-Koren.
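The incentive problem documented in this section, over-compliance by providers and under-assertion by creators, can be sketched as a simple expected-cost comparison in the spirit of Fiala and Husovec’s model. All payoff numbers and function names below are illustrative assumptions, not parameters from the study.

```python
# Illustrative sketch of the over-compliance incentive; every number
# here is hypothetical, chosen only to show the direction of the effect.

def expected_cost_of_removal(p_infringing: float, adr_penalty: float) -> float:
    """Provider's expected cost if it removes on notice: with probability
    (1 - p_infringing) the content was lawful, and the provider may be
    penalized through the dispute (ADR) mechanism."""
    return (1 - p_infringing) * adr_penalty

def expected_cost_of_keeping(p_infringing: float, liability: float) -> float:
    """Provider's expected cost if it keeps the content up: with probability
    p_infringing the content infringes and the provider risks liability."""
    return p_infringing * liability

def provider_removes(p_infringing: float, liability: float, adr_penalty: float) -> bool:
    return (expected_cost_of_removal(p_infringing, adr_penalty)
            < expected_cost_of_keeping(p_infringing, liability))

# Baseline: no credible dispute mechanism, so wrongful removal is free
# and even a doubtful notice (10% likely well founded) triggers removal.
baseline = provider_removes(p_infringing=0.10, liability=100.0, adr_penalty=0.0)

# ADR treatment: wrongful removal now carries a cost, so the same
# doubtful notice is no longer worth complying with.
treated = provider_removes(p_infringing=0.10, liability=100.0, adr_penalty=20.0)
```

On these toy numbers, a notice with only a 10 per cent chance of being well founded is complied with whenever wrongful removal is costless, and rejected once a credible ADR penalty is introduced, mirroring the drop in incorrect takedowns observed in the experiment.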

5.  Balancing of Responsibilities and Costs

An ongoing debate in copyright policy relates to the burden of responsibility for identifying and removing potentially infringing works. While the original DMCA takedown mechanism placed responsibility on the shoulders of rightholders to monitor, identify, and request takedown of infringing material, recent policy discussions have focused on re-evaluating that balance. Some rightholder groups would like additional responsibilities placed on platforms (e.g. the obligation to ensure that content ‘stays down’ after removal).65 Legislation adopted in Europe in 2019 would add a licensing obligation that may lead service providers to filter content at the point of upload.66 Consequently, an important empirical question relates to understanding how the costs of enforcement have been distributed so far, and what the effects of rebalancing those costs may be for internet companies, rightholders, and users.

In a study of the market for out-of-commerce musical works, Heald proposed that notice and takedown regimes, in tandem with automatic detection systems, may create a market for previously unavailable works.67 The labour of digitizing, uploading, and disseminating the work is borne by the uploader, while the rightholder, once notified, may simply elect to monetize the work and collect advertising revenue from it. Heald examined a dataset consisting of ninety songs which reached number one on the pop music charts in Brazil, France, and the United States between 1930 and 1960, and an additional set of 385 songs dating from 1919 to 1926 (which should be out of copyright in the United States).68 He found that 73 per cent of the in-copyright works in his sample from the United States were monetized by a rightholder, with a lower rate of monetization in France (62 per cent) and Brazil (39 per cent).69 New uploads were less likely to have been monetized, while older uploads, particularly those with a higher number of views, were more likely to be monetized.70 Similarly to findings by Erickson and Kretschmer, Heald found that uploader creative practices were important in determining rightholder response. Videos consisting of straight recordings were more likely to be monetized by rightholders than amateur creative videos or cover performances.71 However, even those preferences varied by territory: French rightholders monetized a higher proportion of amateur videos and a lower proportion of straight recordings. In general, Heald found that there were similarly high rates of availability of older in-copyright works (77 per cent had an upload on YouTube) and public domain songs from 1919 to 1926 (75 per cent availability on YouTube).72 This rate is high compared to other media such as books, for example, where only 27 per cent of New York Times bestsellers from 1926 to 1932 were found to have copies available to purchase.73 The higher availability of in-copyright works on YouTube, despite the availability of takedown to rightholders, leads Heald to conclude that the Content ID system creates an efficient form of licensing which reduces transaction costs and enables uploaders to communicate market demand to rightholders.

63  ibid. 210. 64  ibid. 215. 65  See Martin Husovec, ‘The Promises of Algorithmic Copyright Enforcement: Takedown or Staydown? Which Is Superior? And Why’ (2018) 42 Colum. J. of L. & Arts 53. 66  See Directive 2019/790/EU (n. 7) Art. 17. 67  See Paul Heald, ‘How Notice-and-Takedown Regimes Create Markets for Music on YouTube: An Empirical Study’ (2014) 83 UMKC L. Rev. 313.
Urban, Karaganis, and Schofield found that algorithmic ‘DMCA plus’ techniques might be a source of competitive advantage for large incumbent platforms. Based on their qualitative interviews with large and small firms, the authors note that ‘In some striking cases, it appears that the vulnerability of smaller OSPs to the costs of implementing large-scale notice and takedown systems and adopting expensive DMCA Plus practices can police market entry, success, and competition.’74 Respondents cited the high costs involved, for example, in replicating bespoke systems such as Google’s Content ID, or outsourcing to third party fingerprinting services such as Audible Magic which was quoted as costing up to $25,000 per month.75 The ability of larger incumbent firms such as Google to monetize all kinds of user-generated content via AdSense and share that revenue with rightholders via Content ID was also seen as a competitive advantage from the perspective of smaller OSPs. Rather than provide rightholders with the option of leaving such content on their platforms, OSPs without that technology were limited to taking down videos in their entirety in response to takedown requests.

68  ibid. 314. 69  ibid. 316. 70  ibid. 319. 71  ibid. 322. 72  ibid. 324. 73  See Paul Heald, ‘How Copyright Keeps Works Disappeared’ (2014) 11 J.  of Empirical Legal Studies 829. 74  Urban, Karaganis, and Schofield (n. 16) 64. 75  ibid. 64.


In addition to differences between large and small commercial enterprises, there are also concerns about the costs of complying with notice and takedown procedures for non-commercial institutions. For example, Schofield and Urban analysed the effects of DMCA and non-DMCA takedown requests on the practices of academic digital libraries. Since libraries have undertaken more digitization as part of their open-access public missions, and since public repositories increasingly allow contributions from user-uploaders, this potentially exposes them to DMCA takedown requests. As the authors point out, libraries have historically had ‘sophisticated, careful and public-minded approaches to copyright’ through their handling of physical material.76 At the same time, libraries and academic repositories are not typically equipped with resources to handle large volumes of takedown requests such as those received by internet companies. Schofield and Urban surveyed respondents about institutional practices (how libraries dealt with notices once received, whether they forwarded them to other departments, etc.), the volume of requests received, and the nature of those requests (copyright or non-copyright). In total, eleven libraries returned surveys and an additional five interviews were carried out.77 Since 2013, libraries had noted an increase in notices received, and some had put in place procedures to deal with DMCA takedown requests. Respondents reported that handling DMCA notices put pressure on staff time. Some libraries stated that a lack of legal confidence and a requirement to protect their reputation added uncertainty to their roles. Some erroneous takedown requests were reported to have been received.
In one case, the IT department removed a deposited article, which the library later reinstated after careful review (the article was in the public domain).78 Overall, the authors found that librarians were more confident dealing with non-copyright removal requests. These included concerns about privacy, sensitivity, and security. Librarians had developed institutional norms over time to deal with these matters, but had not yet accomplished this in the realm of digital copyright. This, combined with the high degree of scrutiny and attention paid to evaluating DMCA notices, made institutions potentially vulnerable to an increase in costs related to handling takedown requests.

6.  Conclusion: Limitations, Gaps, and Future Research

A number of studies reviewed here used data contained in the publicly accessible Chilling Effects/Lumen database.79 These are rich data, and the Lumen project provides an interface for researchers to sort and query the voluminous archive. However, use of this database could skew empirical findings in the direction of a small group of intermediaries who share their data (e.g. Google) and focus attention on particular units of observation (individual notices and claims). A wider range of publicly accessible datasets on takedown would enrich the possibilities for more diverse empirical work. As noted in this chapter, some research has already considered the way that takedown practices are handled in settings such as universities and public libraries by seeking out data directly from those organizations.

The empirical studies reviewed here also demonstrate that the study of copyright notice and takedown is a moving target: patterns of behaviour measured by Urban and Quilter in 2006 had shifted in later observations using the same data source. There was an explosion in the quantity and diversity of takedown requests and adoption of new practices by both intermediaries and issuers. Even if the current legal regime remains stable, it is likely that practices will continue to shift: new business models may emerge, and rightholders that were once keen users of notice and takedown procedures may drop off as new users appear. For example, the adoption of subscription-based revenue models by firms such as Microsoft and Adobe may result in waning investment in enforcement focused on piracy websites. Other rightholders may find it advantageous to enforce in this manner. The concentration of takedown notices directed at Google Search and YouTube might also change if new platforms become dominant, or if new practices of sharing potentially infringing material emerge.

Empirical analysis of such trends is held back by a lack of access to data. As we have seen, what is happening inside ‘black box’ systems often has to be reverse-engineered, or revealed by experimental approaches. However, standardized and transparent automated data-collection methods are entirely feasible. They will be increasingly demanded by regulators that are tasked with overseeing the new obligations and duties imposed on platforms by legislators.

76  Brianna Schofield and Jennifer Urban, ‘Takedown and Today’s Academic Digital Library’ (2016) 13(1) I/S: A J. of L. and Policy 125, 130. 77  ibid. 131–2. 78  ibid. 138. 79  See Lumen (n. 12).
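A standardized, machine-readable reporting format of the kind such regulators might demand could be as simple as a common per-notice record that every intermediary emits. The schema below is a hypothetical illustration; the field names are not drawn from any existing regulation or platform API.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class TakedownRecord:
    # Minimal, hypothetical schema for standardized transparency reporting.
    notice_id: str
    received: date
    legal_basis: str           # e.g. "copyright", "defamation", "privacy"
    sender_type: str           # e.g. "rightholder", "agent", "government"
    automated_detection: bool  # was the notice machine-generated?
    action: str                # "removed", "rejected", "counter-noticed"
    decision_days: int         # days from receipt to action

record = TakedownRecord(
    notice_id="2020-000001",
    received=date(2020, 1, 15),
    legal_basis="copyright",
    sender_type="rightholder",
    automated_detection=True,
    action="removed",
    decision_days=2,
)

# A regulator could aggregate rows like this across platforms without
# ever seeing the underlying content.
report_row = asdict(record)
```

The point of such a schema is that rejection rates, response times, and the share of automated notices become comparable across intermediaries of very different sizes.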
It would have been more rational to enable better understanding before changes to the liability regime were enacted. The current state of evidence suggests that, despite its flaws, the notice and takedown regime is working. A significant (and after 2013, vast) number of takedown notices are being sent by rightholders of various types, and processed expeditiously by service providers large and small. The concept of providing safe harbour to innovators while enabling a mechanism for rightholders to protect their copyrights appears to be achieving its purpose. Links to infringing materials are being pushed out of the top search results, infringing videos are being removed from sharing websites, and institutions are removing infringing materials hosted on their networks. The problems, as outlined in this review, remain significant. They relate to redressing contextual imbalances between differently situated intermediaries, holding rightholders and platforms to account for the accuracy of takedown issuance and compliance, and providing meaningful due process for users whose content is removed. These shortcomings may be addressed through tweaking, rather than overhauling, the safe harbour regime.

There is a deep tension between bringing platforms into the regulatory sphere and delegating regulatory functions (e.g. monitoring and filtering) to platforms themselves. The first approach may establish liability rules that platforms cannot escape; the second approach may lead to due process being bypassed. Who, for example, oversees Facebook’s machine learning, AI, and computer vision technology and their 30,000 human content moderators?80 The online world appears to have entered a phase of radical experimentation, exploring new liability rules and new powers for regulatory agencies at the same time, without fully understanding the efficacy or shortfalls of the safe harbour paradigm established two decades ago. Our review indicates that designing effective reporting requirements is critical to enabling empirical assessment of changes to the liability regime. A range of regulatory agencies is now crowding the field, from content, data, and competition regulators to electoral regulators. Is notice and takedown still a valid mechanism for addressing these issues, or has it run its course?

80  See Casey Newton, ‘The Trauma Floor: The Secret Lives of Facebook Moderators in America’ (The Verge, 25 February 2019) .

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Chapter 6

The Civic Role of OSPs in Mature Information Societies

Mariarosaria Taddeo

Online service providers (OSPs) have gone from offering connecting and information-sharing services to their paying members to providing open infrastructure and applications that facilitate digital expression, interaction, and the communication of information. This evolution has put OSPs in a peculiar position, for they design and provide key services on which information societies depend. It raises questions as to what role they have in our societies, what moral responsibilities this role entails, and how OSPs should discharge these responsibilities.

Over the years, the discussion concerning the responsibilities of OSPs has moved from defining measures that OSPs should deploy to correct their market bias and ensure a pluralistic web, to the impact that OSPs have on the internet, on the flourishing of democratic values, and on societies at large.1 The debate spans different fields, from information and computer ethics, corporate social responsibility, and business ethics, to computer-mediated communication, law,2 and public policy. Topics of analysis range from biases and skewing of information indexed by search engines,3 the protection of users’ privacy4 and security,5 to the impact of OSPs on democratic processes,6 and their duties with respect to human rights.7

Elsewhere,8 I have analysed the relevant literature on the moral responsibilities of OSPs and identified three important aspects of the debate, concerning: (1) expectations about the conduct of OSPs; (2) an increasing consensus about the relevance of OSPs in our societies; and (3) the lack of agreement on the values and principles that should inform their conduct. I shall deal with points (2) and (3) in the second section of this chapter. Let me focus now on point (1), as expectations about the behaviours of OSPs underpin much of the debate on their role and moral responsibilities. Users, but also scholars and policymakers, often expect OSPs to align their goals with the needs of our societies.9 OSPs are expected to perform their tasks well and according to principles of efficiency, justice, fairness, and respect of current social and cultural values (emphasis added).10 As Shapiro stresses:

in democratic societies, those who control the access to information have a responsibility to support the public interest. . . . these gatekeepers must assume obligation as trustees of the greater good.11

1  See Mariarosaria Taddeo and Luciano Floridi, ‘The Debate on the Moral Responsibilities of Online Service Providers’ [2015] Science and Engineering Ethics . 2  See Giancarlo Frosio, ‘Why Keep a Dog and Bark Yourself? From Intermediary Liability to Responsibility’ (2018) 26 Int’l J. of L. and IT 1. 3  See Lucas Introna and Helen Nissenbaum, ‘Shaping the Web: Why the Politics of Search Engines Matters’, Social Science Research Network Scholarly Paper ID 222009 (2006) ; Laura Granka, ‘The Politics of Search: A Decade Retrospective’ (2010) 26 The Information Society 364.

© Mariarosaria Taddeo 2020.

However, what these obligations may be remains an open question. Proposals range from Google’s generic motto ‘don’t be evil’ to much more specific guidelines concerning the protection of the public interest and respect for basic democratic principles, for example openness, transparency, freedom of the internet, security, and legal certainty, as identified in the 2011 G8 Deauville Declaration.12 At the same time, the international and multicultural contexts in which OSPs operate complicate the definition of their obligations, for this requires an ethical framework able to square the different ethical views and stakeholders’ interests that OSPs face.

In this chapter, I will describe the debate on the moral responsibilities of OSPs with respect to managing access to information (Section 1) and human rights (Section 2). I will then analyse the role and the nature of the responsibilities of OSPs in mature information societies (Section 3).13 I will conclude the chapter by applying Floridi’s soft ethics to consider what responsibilities the civic role of OSPs entails and how they should discharge them (Section 4).

4  See Chi Zhang and others, ‘Privacy and Security for Online Social Networks: Challenges and Opportunities’ (2010) 24 IEEE Network 13. 5  See Vinton Cerf, ‘First, Do No Harm’ (2011) 24 Philosophy & Technology 463; Mariarosaria Taddeo, ‘Cyber Security and Individual Rights, Striking the Right Balance’ (2013) 26 Philosophy & Technology 353; Mariarosaria Taddeo, ‘The Struggle Between Liberties and Authorities in the Information Age’ [2014] Science and Engineering Ethics 1. 6  See Eli Pariser, The Filter Bubble: What The Internet Is Hiding From You (Penguin 2012); Cass R. Sunstein, Republic.Com (Princeton U. Press 2001); Luciano Floridi, ‘Mature Information Societies—a Matter of Expectations’ (2016) 29 Philosophy & Technology 1. 7  See Dennis Broeders and Linnet Taylor, ‘Does Great Power Come with Great Responsibility? The Need to Talk about Corporate Political Responsibility’ in Mariarosaria Taddeo and Luciano Floridi (eds), The Responsibilities of Online Service Providers (Springer 2017) 315–23. 8  For a more extensive analysis of the debate on the moral responsibilities of OSPs, see Taddeo and Floridi (n. 1). 9  See Robert Madelin, ‘The Evolving Social Responsibilities of Internet Corporate Actors: Pointers Past and Present’ (2011) 24 Philosophy & Technology 455; Denis McQuail, Media Performance: Mass Communication and the Public Interest (Sage 1992) 47. 10  McQuail (n. 9) 47. 11  Granka (n. 3) 365. 12  See 2011 G8 Deauville Declaration .

1.  Managing Access to Information

The organization and management of access to information available online raises problems concerning the way in which OSPs select and rank such information.14 The research on this topic initially focused exclusively on search engines but, with the emergence of Web 2.0, social networks and news aggregators also became objects of analysis, for these OSPs too can skew users’ access to online information. Introna and Nissenbaum analyse the role of search engines in defining the scope of access to online information and stress the relation between such a scope and the development of a pluralistic democratic web.15 They advocate diversity of the sources of information as a guarantee of the fairness of information-filtering processes and the democratic development of the internet.16 Corporate, market-oriented interests of the private companies running indexing and ranking algorithms can jeopardize both these aspects. In addition, Introna and Nissenbaum compare search engines to conventional publishers and suggest that, like publishers, search engines filter information following market regulations; that is, according to consumers’ tastes and preferences, and favour powerful actors.17 This promotes the so-called ‘rich gets richer’ dynamic.18 Namely, websites include links to popular websites in order to be ranked higher by search engines, which makes the popular site even more well-known and, thus, ranked even higher. Conversely, this system makes less known those websites that are already poorly linked and hence ranked lower. This vicious circle eventually leads to expunging niche, less renowned sources of information from the web, thus endangering the plurality and diversity of the internet. Two corrective mechanisms are then suggested: embedding the ‘value of fairness as well as [a] suite of values represented by the ideology of the Web as a public good’19 in the design of indexing and ranking algorithms, and transparency of the algorithms used by the search engines.

The call for transparency of the search and ranking algorithms is not uncontroversial,20 as disclosing the structure of the algorithms could facilitate malicious manipulation of search results, while not bringing any advantage to the average non-tech-savvy user. It is also unclear to what extent market regulation of the internet really threatens the diversity of information sources. On the contrary, Granka maintains that in a market-regulated environment companies will devote their attention to the quality of the search results, which will have to meet the different needs and expectations of each single user, thereby guaranteeing diversity of the sources and fairness of the ranking. In this respect, the article also objects to the analogy describing OSPs, search engines in particular, as publishers. Search engines:

parse through the massive quantities of available information . . ., the mechanisms whereby content is selected for inclusion in a user’s search result set is fundamentally different than in traditional media—search engines universally apply an algorithm, whereas traditional news media makes case-by-case decisions.21

13  See Floridi (n. 6). 14  See Nicholas Negroponte, Being Digital (new edn, Coronet Books 1996). 15  See Introna and Nissenbaum (n. 3). 16  Other relevant contributions on the diversity of the sources and information available on the web have been provided in the literature in information and communication studies, law, and public policy. The interested reader may find useful the following articles: Sandeep Pandey and others, ‘Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results’ in Klemens Böhm and others (eds), Proc. 31st International Conference on Very Large Databases (VLDB 2005); Frank Pasquale, ‘Rankings, Reductionism, and Responsibility’, Social Science Research Network Scholarly Paper no. 888327 (2006) ; Eszter Hargittai, ‘The Social, Political, Economic, and Cultural Dimensions of Search Engines: An Introduction’ (2007) 12 J. of Computer-Mediated Communication 769; Elizabeth Van Couvering, ‘Is Relevance Relevant? Market, Science, and War: Discourses of Search Engine Quality’ (2007) 12 J. of Computer-Mediated Communication 866; Amanda Diaz, ‘Through the Google Goggles: Sociopolitical Bias in Search Engine Design’ in Amanda Spink and Michael Zimmer (eds), Web Search (Springer 2008); Lawrence Hinman, ‘Searching Ethics: The Role of Search Engines in the Construction and Distribution of Knowledge’ in Spink and Zimmer ibid.; Dirk Lewandowski, ‘The Influence of Commercial Intent of Search Results on Their Perceived Relevance’ (8 February 2011) .
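The ‘rich gets richer’ feedback loop described above can be illustrated with a toy preferential-attachment simulation, in which each new page links to an existing page with probability proportional to that page’s current link count (a crude stand-in for linking to whatever already ranks highly). The parameters are arbitrary and purely illustrative.

```python
import random

def simulate(pages: int = 1000, seed: int = 42) -> list[int]:
    """Return the link count of each page after `pages` pages have joined,
    each newcomer linking to one existing page chosen in proportion to
    that page's current link count."""
    random.seed(seed)
    links = [1, 1]  # start with two pages, one link each
    for _ in range(pages - 2):
        # random.choices weights the draw by current link counts
        target = random.choices(range(len(links)), weights=links)[0]
        links[target] += 1
        links.append(1)  # the newcomer starts with a single link
    return links

links = simulate()
top_share = sum(sorted(links, reverse=True)[:10]) / sum(links)
```

Run with these settings, a handful of early pages accumulate a disproportionate share of all links while most pages never receive a second one, which is the concentration dynamic the critique targets.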

OSPs’ editorial role is also analysed by Goldman, who describes search engine bias as a necessary consequence of OSPs’ editorial work:

to prevent anarchy and preserve credibility, search engines unavoidably must exercise some editorial control over their systems. In turn, this editorial control will create some bias.22

While the analysis recognizes that such filtering may reinforce the existing power structure in the web and bias search results towards websites with economic power,23 it also advocates that the correction of search bias will follow from the fine-tuning of the search results with users’ preferences. No extra moral responsibilities should be ascribed to OSPs in that respect. A similar position has also been expressed by Lev-On and Manin.24 Their articles suggest that, given the huge set of data filtered by search engines, unintentional exposure to diverse and non-mainstream information cannot be excluded. The issue then arises as to whether incidental exposure to diverse information suffices to maintain an open, pluralistic web and unbiased access to information. Tailoring search results leads to an organic refinement of searching and ranking algorithms in order to accommodate users’ preferences and, at the same time, it may correct the distortion operated by OSPs and foster the diversity of the sources and the information circulating in the web. This is, for example, the argument proposed by Goldman.25

However, personalization of search results is far from being the solution to the problems of information-filtering. It has been objected to as a threat to democratic practices. The misuse of social media to tamper with US presidential elections, for example, has shown the concrete risks that personalization can pose to democratic processes.26 Custom-tailoring of search results challenges the affirmation of deliberative democracies, insofar as it undermines the possibilities of sharing different cultural backgrounds, views, and experiences and reduces the chances of users being exposed to sources, opinions, and information which may support or convey different world views. Several analyses have raised this issue.27 Sunstein, for example, criticizes any approach relying on users’ preferences and market dynamics to shape information access and communication:

it is much too simple to say that any system of communication is desirable if and because it allows individual to see and hear what they choose. Unanticipated, unchosen exposures, shared experiences are important too.28

17  See Introna and Nissenbaum (n. 3). 18  See Bernardo Huberman, The Laws of the Web: Patterns in the Ecology of Information (MIT Press 2003). 19  Michael Santoro, ‘Engagement with Integrity: What We Should Expect Multinational Firms to Do about Human Rights in China’ (1998) X(1) Business and the Contemporary World 30. 20  See Granka (n. 3). 21  ibid. 365. 22  Eric Goldman, ‘Search Engine Bias and the Demise of Search Engine Utopianism’ (2006) 8 Yale J. of L. and Tech. 188, 196. 23  See Niva Elkin-Koren, ‘Let the Crawlers Crawl: On Virtual Gatekeepers and the Right to Exclude Indexing’ (2001) 26 U. Dayton L. Rev. 179.

He argues that custom-tailored access to information leads to a world fragmented in different versions of ‘the daily me’29 in which each individual is isolated in her or his informational bubble,30 from which conflicting views are excluded. Pariser has proposed a similar argument, stressing that the personalization of access to online information promotes the emergence of personal informational ecosystems that undermine the development and fostering of democracy and pluralism.31

The Civic Role of Osps in Mature Information Societies   127

The combination of personalization of information with artificial intelligence (AI)-based profiling and nudging techniques has made the risks highlighted by Pariser and Sunstein even more serious. AI can undermine and erode human self-determination due to its invisibility and influencing power:

With their predictive capabilities and relentless nudging, ubiquitous but imperceptible, AI systems can shape our choices and actions easily and quietly. . . . AI may also exert its influencing power beyond our wishes or understanding, undermining our control on the environment, societies, and ultimately on our choices, projects, identities, and lives. The improper design and use of invisible AI may threaten our fragile, and yet constitutive, ability to determine our own lives and identities and keep our choices open.32

24  See Azi Lev-On and B. Manin, ‘Happy Accidents: Deliberation and Online Exposure to Opposing Views’ in Todd Davies and Seeta Peña Gangadharan (eds), Online Deliberation: Design, Research and Practice (CSLI Publications 2009) 105; Azi Lev-On, ‘The Democratizing Effects of Search Engine Use: On Chance Exposures and Organizational Hubs’ in Spink and Zimmer (n. 16).
25  Goldman (n. 22).
26  See Nathaniel Persily, ‘Can Democracy Survive the Internet?’ (2017) 28 J. of Democracy 63.
27  Concern for the implications that the filtering of information may have for participative democracy and the nature of the web has also been expressed in Lawrence Lessig, Code: And Other Laws of Cyberspace (Basic Books 1999).
28  Sunstein (n. 6) 131.
29  Negroponte (n. 15).
30  Pariser (n. 6).

AI may empower OSPs to do more things, from preventing suicide33 to offering more tailored content and enhancing the filter-bubble mechanism. Establishing appropriate governance and defining the moral responsibilities of OSPs are necessary steps for ensuring that possible misuses34 of AI, as well as of the other services that OSPs offer, will not trump the proper uses of this technology.

2.  Human Rights: Harmful Content and Internet Censorship

In a commentary, Vinton Cerf touched directly on the role of OSPs in preventing harmful uses of the web, stating that:

it does seem to me that among the freedoms that are codified . . . should be the right to expect freedom (or at least protection) from harm in the virtual world of the Internet. The opportunity and challenge that lies ahead is how Internet Actors will work together not only to do no harm, but to increase freedom from harm.35

Following this view, ascribing moral responsibilities to OSPs with respect to the circulation of harmful material may be desirable. However, this also raises further problems when considering the duties that those responsibilities could prompt; for example, policing and filtering the content available online, and the possible breaches of individual rights, such as freedom of speech and information.36 Striking the balance between the security of users and users’ right to freedom of speech and information is problematic. While OSPs should be held responsible for respecting that balance, it should not be their duty to define it arbitrarily and independently and to decide, for example, how much freedom of information can be sacrificed in the name of users’ safety and security. This is not desirable for OSPs—who may find themselves standing between laws curtailing freedom of speech, information, and anonymity, and citizens’ right to internet freedom; nor is it desirable for societies, since it could lead to the privatization of judging power and pose issues of transparency and accountability.37 Consider, for example, OSPs acting as both ‘judge and jury’38 with respect to the decision of the Court of Justice of the European Union on the right to be forgotten.39 To avoid that risk, it is crucial to separate the responsibilities of OSPs from the duties and authority of the state and supranational authorities, which should set clear norms shaping OSPs’ conduct with respect to human rights.

At the moment, however, the debate focuses on whether OSPs have any responsibilities with respect to human rights. The discussion was reignited in late 2018 when it became clear that Google was considering entering the Chinese market again,40 and before that in 2012 when the UN Human Rights Council declared the right to ‘internet freedom’ a human right.

31  ibid.
32  Mariarosaria Taddeo and Luciano Floridi, ‘How AI Can Be a Force for Good’ (2018) 361 Science 751, 752.
33  See Norberto Nuno Gomes de Andrade and others, ‘Ethics and Artificial Intelligence: Suicide Prevention on Facebook’ (2018) 31 Philosophy & Technology 669.
34  See Luciano Floridi and others, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds and Machines 689.
35  Cerf (n. 5) 465.
This right calls on states to promote and foster access to the internet and to ensure that the rights to freedom of expression and information, as presented in Article 19 of the Universal Declaration of Human Rights, would be upheld online as well as offline.41 In the same vein, a report released by the UN in 2011 stressed that:

[g]iven the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States.42

Some authors, such as Chen,43 have argued that OSPs, and in particular social networks, bear both legal and moral responsibilities to respect human rights because of the centrality of their role on the web and of their knowledge of the actions undertaken by other agents, for example governmental actors, in the network. At the same time, both the Universal Declaration of Human Rights and the Resolution on the Promotion, Protection and Enjoyment of Human Rights on the Internet mainly address state actors, making problematic the expectation that OSPs should be held responsible for respecting and fostering human rights.44 This problem not only concerns OSPs; it also involves several other private actors, especially those working in the international market,45 making this issue a central topic in the literature on business ethics. Consider, for example, the cases of human rights violations reported by Human Rights Watch which concern energy industries such as Royal Dutch/Shell’s operations in Nigeria, British Petroleum’s in Colombia, and Total and Unocal’s construction works in Burma and Thailand.46 Santoro47 and Brenkert48 stress the need to consider the context in which companies act before assessing their moral responsibilities. Santoro proposes a ‘fair share theory’ to assess the moral responsibilities of multinational companies complying with the requests of an authoritarian state. According to that theory, the responsibilities for respecting and fostering human rights are ascribed differently depending on the capability of the company. In particular, Santoro poses two conditions for evaluating the capabilities of private companies and ascribing responsibilities: (1) the company has to be able to make a difference, that is, change local government policies; and (2) it has to be able to withstand the losses and damages that may follow from diverging from local government directions and laws.

36  Internet censorship and freedom of speech have also been at the centre of a debate focusing on the balance between individual rights and state power. The topic does not fall within the scope of this chapter. The interested reader may find useful Mariarosaria Taddeo, ‘Cyber Security and Individual Rights, Striking the Right Balance’ (2013) 26 Philosophy & Technology 353; Taddeo, ‘The Struggle Between Liberties and Authorities in the Information Age’ (n. 5).
37  See Felicity Gerry and Nadya Berova, ‘The Rule of Law Online: Treating Data like the Sale of Goods: Lessons for the Internet from OECD and CISG and Sacking Google as the Regulator’ (2014) 30 Computer Law & Security Rev. 465.
38  Matt Warman, ‘Google is the “Judge and Jury” in the Right to be Forgotten’ (Telegraph, 14 July 2014).
39  See Jeffrey Rosen, ‘Protecting Privacy on the Internet Is the User’s Responsibility’ (Philadelphia Enquirer, 2015); Luciano Floridi, ‘Should You Have The Right To Be Forgotten On Google? Nationally, Yes. Globally, No.’ (2015) 32 New Perspectives Quarterly 24.
40  See, for the 2018 debate on Google’s Operation Dragonfly aiming to reintroduce Google in the Chinese market, Mark Bergen, ‘Google in China: When “Don’t Be Evil” Met the Great Firewall’ (Bloomberg, 8 November 2018).
41  See UN General Assembly Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (2012) A/HRC/20/L13. See also Florian Wettstein, ‘Silence as Complicity: Elements of a Corporate Duty to Speak Out Against the Violation of Human Rights’ (2012) 22 Business Ethics Quarterly 37; Nicola Lucchi, ‘Internet Content Governance and Human Rights’ (2013) 16 Vand. J. of Ent. & Tech. L. 809.
42  Frank La Rue, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (16 May 2011) A/HRC/17/27, s. 85.
43  See Stephen Chen, ‘Corporate Responsibilities in Internet-Enabled Social Networks’ (2009) 90 J. of Business Ethics 523.
44  See David Jason Karp, ‘Transnational Corporations in “Bad States”: Human Rights Duties, Legitimate Authority and the Rule of Law in International Political Theory’ (2009) 1 Int’l Theory 87.
45  See Geraint Anderson, Just Business (Headline 2012).
46  See Human Rights Watch, The Enron Corporation: Corporate Complicity in Human Rights Violations (Human Rights Watch 1999).
47  See Santoro (n. 19).
48  See George Brenkert, ‘Google, Human Rights, and Moral Compromise’ (2009) 85 J. of Business Ethics 453; Santoro (n. 19).


Both conditions shed little light on OSPs’ responsibilities with respect to human rights, as they can be used to support both sides of the argument. For example, major OSPs may have the means to effect change and they could withstand the consequences of diverging from the directions of local governments. Facebook’s CEO commented on this point, stating that:

Today we’re blocked in several countries and our business is still doing fine. If we got blocked in a few more, it probably wouldn’t hurt us much either.49

At the same time, however, condition (1) offers a justification for any private company which may breach human rights; it is hard to determine the (in)ability to make a difference to governmental policies, which could then allow a company to claim no moral responsibility for any violation of the human rights in which it participated while collaborating or complying with a local government directive. Condition (2) is at best too generic, since it justifies breaches of (possibly any) human rights when respecting them would harm a company’s profit. Other scholars support a different view and hold private actors morally responsible for protecting and fostering human rights.50 The Preamble to the Universal Declaration of Human Rights is often mentioned to support this point. It states that:

every individual and every organ of society, keeping this Declaration constantly in mind, shall strive by teaching and education to promote respect for these rights and freedoms . . .51

The responsibility of all members of societies to promote human rights is mentioned and further elaborated in the Declaration of Human Duties and Responsibilities (the so-called Valencia Declaration),52 which focuses on the moral duties and legal responsibilities of the members of the global community to observe and promote respect for human rights and fundamental freedoms. The global community encompasses state and non-state actors, individuals, and groups of citizens, as well as the private and the public sectors. Private companies are also expressly mentioned as responsible for promoting and securing the human rights set out in the Universal Declaration of Human Rights and in

49  Mark Zuckerberg (Facebook, 16 March 2015).
50  See Denis Arnold, ‘Transnational Corporations and the Duty to Respect Basic Human Rights’ (2010) 20 Business Ethics Quarterly 371; Wesley Cragg, ‘Business and Human Rights: A Principle and Value-Based Analysis’ in George Brenkert and Tom L. Beauchamp (eds), The Oxford Handbook of Business Ethics (OUP 2010); Wettstein (n. 41).
51  Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res 217 A(III) (UDHR) Preamble.
52  Declaration of Human Duties and Responsibilities (1998) (DHDR).


the Preamble to the UN Norms on the Responsibilities of Transnational Corporations and Other Business Enterprises.53

One of the most debated cases in the relevant literature about the moral responsibilities of OSPs and respect of human rights (freedom of speech in particular) concerns compliance by some OSPs, such as Google, Microsoft, and Yahoo!, with requests made by the Chinese government on internet censorship and surveillance. OSPs have responded in different ways. Some, like Google (in 2010) and Yahoo! (in 2015), decided not to comply with the requests and withdrew from the Chinese market. Others refer to the so-called consequentialist argument to justify their business in China or in other contexts in which human rights are under sharp devaluative pressure.54 The argument was first used by Google to support its initial compliance with the Chinese government’s requests. It holds that while the Chinese people cannot access some sources of information due to local censorship, they can still use Google’s services to access much more online information. Facebook and Microsoft have proposed the same argument. As Facebook’s CEO states:

I believe we have a responsibility to the millions of people in these countries who rely on Facebook to stay in touch with their friends and family every day. If we ignored a lawful government order and then we were blocked, all of these people’s voices would be muted, and whatever content the government believed was illegal would be blocked anyway.55

Those who maintain that private companies ought to respect human rights, because these take precedence over local governmental actions, criticize the consequentialist argument:

Multinationals . . . should respect the international rights of those whom they affect, especially when those rights are of the most fundamental sort.56

Dann and Haddow maintain the same position and ascribe moral responsibilities to company executives, who make the final decisions and shape their companies’ conduct.57

Dann and Haddow maintain the same position and ascribe moral responsibilities to company executives, who make the final decisions and shape their companies’ conduct.57 53  The document was approved on 13 August 2003 by the UN Sub-Commission on the Promotion and Protection of Human Rights. See UN Norms on the Responsibilities of Transnational Corporations and Other Business Enterprises (2003) E/CN.4/Sub2/2003/12/Rev2 . 54  Governmental censorship has spread throughout the globe with the internet; the literature on OSPs’ responsibilities in China casts an interesting light on a problem that concerns several other countries around the world. See Giuseppe Aceto and others, ‘Monitoring Internet Censorship with UBICA’ in Moritz Steiner, Pere Barlet-Ros, and Olivier Bonaventure (eds), Traffic Monitoring and Analysis (Springer 2015). 55  Zuckerberg (n. 49). 56  Thomas Donaldson, The Ethics of International Business (OUP 1992) 68. 57  See Gary Elijah Dann and Neil Haddow, ‘Just Doing Business or Doing Just Business: Google, Microsoft, Yahoo! And the Business of Censoring China’s Internet’ (2007) 79 J. of Business Ethics 219.


Brenkert provides a different account and suggests the notion of ‘obedient complicity’:

[t]his would occur when a business follows laws or regulations of a government to act in ways that support its activities that intentionally and significantly violate people’s human rights.58

The notion rests on the idea of permissible moral compromise. This is the compromise that agents make with themselves to forgo or even violate some of their moral principles in order to fulfil other, more important, values. OSPs operating in countries requiring internet censorship face conflicting responsibilities towards different stakeholders, not just users but also local employees and shareholders. For that reason, those OSPs may be justified in engaging in a moral compromise that may violate human rights if it enables the achievement of more important objectives. Brenkert’s article proposes the so-called ‘all things considered’ approach to assess whether an OSP may be in a position to violate its moral principles or universal rights. The article considers the immediate context in which OSPs operate and the multiple responsibilities that this implies. For example, an OSP may be put in a position to compromise its moral values or to disregard human rights and comply with local laws lest its employees working in a given territory be held liable for the company’s decision, or to avoid damaging the shareholders’ interest. According to Brenkert, a moral compromise is justified in such cases.

As with any consequentialist approach, the ‘all things considered’ view enables a wide range of responsibilities of private companies to be covered and permits them to be assessed with regard to the company’s maximum utility. This is problematic, because an assessment of the moral responsibilities of a company depends on the scope of the context being considered. If one focuses on the immediate context, for example a specific country and the company’s interest in that country, the approach could facilitate the acceptance of moral compromise and justify disregarding human rights. But if a wider context is taken into consideration, for example the global reputation of the company and the impact that breaching human rights could have on the company’s public image, then the approach may justify compromising shareholders’ interests for the sake of human rights. Hence, while the approach was intended to mitigate the burden of OSPs’ moral responsibilities, it actually offers a further argument in favour of the duty of OSPs to respect and foster human rights.

Given the global relevance and impact that OSPs have on information societies, it is increasingly less acceptable to maintain that OSPs, as private companies, are only responsible to their employees and shareholders.59 This is a point highlighted, for example, in the report to the Human Rights Council of the Special Rapporteur on freedom of expression, David Kaye, who stressed that:

58  George Brenkert, ‘Google, Human Rights, and Moral Compromise’ (2009) 85(4) J. of Business Ethics 453, 459.
59  See Chen (n. 43); Taddeo and Floridi (n. 1); Emily Laidlaw, ‘Myth or Promise? The Corporate Social Responsibilities of Online Service Providers for Human Rights’ in Taddeo and Floridi (n. 7) 135–55.


Among the most important steps that private actors should take is the development and implementation of transparent human rights assessment procedures. They should develop and implement policies that take into account their potential impact on human rights.60

At the same time, the specification of the responsibilities of OSPs requires contextualizing the role of OSPs within the broader changes brought about by the information revolution and the role that they play in mature information societies.61 This will be the task of the next section.

3.  The Civic Role of OSPs in Mature Information Societies

Floridi defines mature information societies as societies whose members have developed an unreflective and implicit expectation to be able to rely on information technologies to perform tasks and to interact with each other and with the environment.62 Over the past two decades, we have witnessed a growing reliance on these technologies for performing a number of tasks, ranging from individual daily practices to matters relating to public life and the welfare of our societies. More recently, with big data and artificial intelligence, we have started to rely on computing technology not just to perform tasks but to take sensitive decisions, in areas ranging from medical diagnosis to the administration of justice.63

As the main designers and developers of information technologies, OSPs play a central role in mature information societies. Some contributions to the literature identify this role as information gatekeeping.64 ‘Gatekeepers’ are agents who have a central role in the management of resources and infrastructure that are crucial for societies.65 The notion of gatekeepers has been studied in business ethics, social sciences, and legal and communication studies since the 1940s. For example, in 1947 Lewin famously described mothers and wives as gatekeepers, as they were the ones deciding and managing the access to and consumption of food for their families.

60  David Kaye, ‘Freedom of Expression and the Private Sector in the Digital Age’ (2016).
61  Luciano Floridi, ‘Mature Information Societies—a Matter of Expectations’ (2016) 29 Philosophy & Technology 1.
62  ibid. 1.
63  See Stuart Russell, ‘Robotics: Ethics of Artificial Intelligence: Take a Stand on AI Weapons’ (2015) 521 Nature 415; Julia Angwin and Jeff Larson, ‘How We Analyzed the COMPAS Recidivism Algorithm’ (ProPublica, 2016); Guang-Zhong Yang and others, ‘The Grand Challenges of Science Robotics’ (2018) 3 Science Robotics eaar7650.
64  See Taddeo and Floridi (n. 7).
65  See Kurt Lewin, ‘Frontiers in Group Dynamics’ (1947) 1 Human Relations 143.


According to Metoyer-Duran’s definition, an agent is a gatekeeper if that agent:

(a) controls access to information, and acts as an inhibitor by limiting access to or restricting the scope of information; and
(b) acts as an innovator, communication channel, link, intermediary, helper, adapter, opinion leader, broker, and facilitator.66

Conditions (a) and (b) entail moral responsibilities insofar as gatekeepers have a regulatory function. The private nature of gatekeepers, along with the responsibilities entailed by (a) and (b), is one of the cruxes generating the problems concerning their moral responsibilities.67 In our societies, OSPs would be information gatekeepers as they control access to and flows of data and information.68 As gatekeepers, OSPs exercise a regulatory function69 which entails moral responsibilities towards the public good. Framing the discussion of the moral responsibilities of OSPs using the notion of gatekeepers unveils OSPs’ public role, and explains the expectations that users and regulators have with respect to their behaviour.

However, the gatekeeping role only partially describes the function that OSPs have acquired in our societies and hence the responsibilities that they bear. OSPs increasingly play a more central role in public and policy debate, working to influence national politics and international relations.70 In this respect, they differ quite radically from other transnational corporations.71 Broeders and Taylor argue that OSPs behave as political agents and thus should bear corporate political responsibilities:

OSPs exercise power over their users and are a counter power to state power in all corners of the world. . . . they are also political actors who merit serious diplomatic attention owing to their vital role in digital life . . .72

Yet this conceptualization of OSPs’ role is also limited, since it focuses mostly on the impact of OSPs in the international arena and disregards their central role as designers of the

66  Cheryl Metoyer-Duran, ‘Information Gatekeepers’ (1993) 28 Annual Rev. of Information Science and Tech. (ARIST) 111.
67  See Jody Freeman, ‘Private Parties, Public Functions and the New Administrative Law’, Social Science Research Network Scholarly Paper no. 165988 (1999); Julia Black, ‘Decentring Regulation: Understanding the Role of Regulation and Self Regulation in a “Post-Regulatory” World’ (2001) 54 Current Legal Problems 103.
68  See Craig J. Calhoun (ed.), Dictionary of the Social Sciences (OUP 2002); Andrew Shapiro, The Control Revolution: How the Internet Is Putting Individuals in Charge and Changing the World We Know (PublicAffairs 2000); Lawrence Hinman, ‘Esse Est Indicato in Google: Ethical and Political Issues in Search Engines’ (2005) 3 Int’l Rev. of Information Ethics 19; Emily Laidlaw, ‘Private Power, Public Interest: An Examination of Search Engine Accountability’ (2008) 17 IJLIT 113.
69  Metoyer-Duran (n. 66) 111.
70  See Broeders and Taylor (n. 7).
71  See Andreas Georg Scherer and Guido Palazzo, ‘The New Political Role of Business in a Globalized World: A Review of a New Perspective on CSR and Its Implications for the Firm, Governance, and Democracy’ (2011) 48 J. of Management Studies 899.
72  See Broeders and Taylor (n. 7) 322.


online environment. This is a key aspect that neither the gatekeeping nor the political conceptualization of OSPs fully grasps, and that can be better analysed when contextualizing the role of OSPs within the conceptual changes brought about by the information revolution.73

The blurring of the line dividing real and virtual is one of those changes. This blurring has been noted and analysed by social scientists74 and psychologists,75 as well as by philosophers.76 Before the information revolution, being real was tantamount to (coupled with) being tangible, perceivable, and physical in the Newtonian sense. The information revolution decoupled real and tangible and coupled real and virtual. Reality in the information age includes virtual entities and environments along with tangible (physical) ones, making interactability—and no longer tangibility—the mark of reality.77 Think, for example, of the way in which Alice and her grandfather Bob enjoy their music: Bob may still own a collection of his favourite vinyl, while Alice simply logs on to her favourite streaming service (she does not even own the files on her computer). E-books, movies, and pictures all serve as good examples of the case in point. This decoupling and recoupling process has widened the range of what we consider real and has also blurred the very distinction between online and offline environments. As Floridi put it:

‘onlife’ designates the transformational reality . . . in contemporary developed societies.78

One difference still stands, though; that is, that online environment is designed, shaped, and developed by humans more than the physical one and tech companies, including OSPs, often lead this process. The services that enable our access to, and which shape our activities in, the online environment have a more central role than the one of gatekeepers or political actors. For, through their services, they shape our affordances. They contribute to inform the space of opportunities in which individuals and societies can flourish and evolve, and eventually impact how we understand reality and how we interact with each other and with the environment. As leading designers of online environments, OSPs make decisions that impact private and public lives, social welfare, and individual well-being. For this reason, 73  See Luciano Floridi, The Fourth Revolution, How the Infosphere Is Reshaping Human Reality (OUP 2014). 74  See Monroe E. Price, Media and Sovereignty: The Global Information Revolution and Its Challenge to State Power (MIT Press 2002). 75  See Uwe Hasebrink, Comparing Children’s Online Opportunities and Risks across Europe: CrossNational Comparisons for EU Kids Online :[European Research on Cultural, Contextual and Risk Issues in Children’s Safe Use of the Internet and New Media (2006–2009)] (EU Kids Online 2008) . 76  See Diana Coole and others, New Materialisms: Ontology, Agency, and Politics (Duke U. Press Books 2010); Mariarosaria Taddeo, ‘Information Warfare: A Philosophical Perspective’ (2012) 25 Philosophy and Technology 105; Luciano Floridi, ‘Digital’s Cleaving Power and Its Consequences’ (2017) 30 Philosophy & Technology 123. 77  See Luciano Floridi, Ethics of Information (OUP 2013). 78  Luciano Floridi, The Onlife Manifesto—Being Human in a Hyperconnected Era (Springer 2014) 61.


OSPs play a civic role in mature information societies. And, hence, they have civic responsibilities with respect to the way they conduct their business. These responsibilities require OSPs to consider the impact of their services and business models on the societies in which they operate and to take into account potential ethical benefits and risks. Ethical considerations need to become a constitutive part of their design process and business models. OSPs can discharge this civic responsibility by ensuring that:

Social acceptability or, even better, social preferability must be the guiding principles for any [digital innovation] project with even a remote impact on human life, to ensure that opportunities will not be missed.79

Given the international and multicultural contexts in which OSPs operate, the specification of what is socially acceptable and preferable will be effective—that is, it will be regarded as ethically sound, appropriate, and desirable—only insofar as it rests on an approach able to reconcile the different ethical views and stakeholders’ interests that OSPs face. Human rights and other principles80 offer guidance as to what fundamental values should shape OSPs’ practices, but these will have to be implemented considering different cultural and moral values. Frictions between fundamental and context-dependent values are to be expected, and resolving them will require collaboration between different stakeholders, including OSPs themselves, national and supranational political actors, as well as civil society. In this scenario, OSPs (as well as policymakers and decision-makers) need to develop appropriate analyses to identify ethical opportunities and risks, so as to harness the former and to avoid or mitigate the latter. The civic role of OSPs requires them to develop such analyses in the first place and to establish processes to ensure the ethical governance of their services.

4.  Conclusion: The Duty of Ethical Governance

Ethical governance of the digital should not be confused with the legal regulations in place to shape the design and use of digital technologies, nor is it something that erodes the space of legal compliance. Floridi distinguishes between hard ethics and soft ethics.81 If hard ethics is what enables us to shape fair laws or to challenge unfair ones, soft ethics goes over and above legal compliance. In some corners of the world, where

79  Floridi (n. 77). See also Mariarosaria Taddeo, ‘Cyber Security and Individual Rights, Striking the Right Balance’ (2013) 26 Philosophy & Technology 353.
80  See Josh Cowls and Luciano Floridi, ‘Prolegomena to a White Paper on an Ethical Framework for a Good AI Society’, Social Science Research Network Scholarly Paper no. 3198732 (2018).
81  Luciano Floridi, ‘Soft Ethics and the Governance of the Digital’ (2018) 31 Philosophy & Technology 1.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

laws respect and foster fundamental values, the governance of the digital is a matter of soft ethics. As he put it:

[C]ompliance is necessary but insufficient to steer society in the right direction. Because digital regulation indicates what the legal and illegal moves in the game are, so to speak, but it says nothing about what the good and best moves could be to win the game—that is, to have a better society. This is the task of both digital ethics . . .82

At least when operating in open and democratic societies, the responsibilities of OSPs pertain to the ethical governance of the digital, and soft ethics is essential to discharging them. OSPs need to embed ethical83 considerations in the design and development of their services at the outset in order to consider possible risks, prevent unwanted consequences, and avoid the cost of missed opportunities.84 OSPs need to develop ethical foresight analyses,85 which will offer a step-by-step evaluation of the impact of practices or technologies deployed in a given organization on crucial aspects—like privacy, transparency, or liability—and may identify preferable alternatives and risk-mitigating strategies. This will bring a dual advantage. As an opportunity strategy, foresight methodologies can help to leverage ethical solutions. As a form of risk management, they can help to prevent, or mitigate, costly mistakes by avoiding decisions or actions that are ethically unacceptable. This will lower the opportunity costs of choices not made or options not seized for lack of clarity or fear of backlash. Ethical governance of the digital is a complex, but necessary, task. The alternative may lead to the devaluation of individual rights and social values, the rejection of OSPs, and the loss of the opportunities that digital technologies bring to the benefit of societies.

82  ibid. 4. 83  See Luciano Floridi and Mariarosaria Taddeo, ‘What Is Data Ethics?’ (2016) 374 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences . 84  See Luciano Floridi, ‘Technoscience and Ethics Foresight’ (2014) 27 Philosophy & Technology 499; Mariarosaria Taddeo and Luciano Floridi, ‘How AI Can Be a Force for Good’ (2018) 361 Science 751. 85  Floridi (n. 84).


Chapter 7

Intermediary Liability and Fundamental Rights

Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko

The ethical implications of the role of Online Service Providers (OSPs) in contemporary information societies raise pressing social challenges. The OSPs’ role is unprecedented in its capacity to influence users’ interactions within the informational environment. Fundamental rights protection therefore lies at the core of any policy debate regarding OSPs.1 It is in fact a trifecta of competing fundamental rights in tension with each other that exacerbates the conundrum of regulating online intermediary liability. As gatekeepers of the information society, OSPs make regulatory choices—increasingly through algorithmic tools—that can profoundly affect the enjoyment of users’ fundamental rights, such as freedom of expression, freedom of information, and the right to privacy and data protection. On the other side, OSPs will make choices and implement obligations dealing with the blocking and sanitization of a vast array of allegedly infringing content online that affect IP rightholders and creators. Finally, imposing too strict or expansive obligations on OSPs raises further fundamental rights challenges in connection with a possible curtailment of their freedom to conduct business. 1  On the increasing influence of human and fundamental rights on the resolution of intellectual property (IP) disputes, see Christophe Geiger, ‘Constitutionalising Intellectual Property Law? The Influence of Fundamental Rights on Intellectual Property in Europe’ (2006) 37(4) IIC 371; ‘Fundamental Rights as Common Principles of European (and International) Intellectual Property Law’ in A. Ohly (ed.), Common Principles of European Intellectual Property Law (Mohr Siebeck 2012) 223; ‘Reconceptualizing the Constitutional Dimension of Intellectual Property’ in Paul Torremans (ed.), Intellectual Property and Human Rights (3rd edn, Kluwer Law Int’l 2015) 115; ‘Implementing Intellectual Property Provisions in Human Rights Instruments: Towards a New Social Contract for the Protection of Intangibles’ in Geiger (ed.) (n. 1) 661.

© Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko 2020.


In particular, in the online enforcement of IP and other rights, algorithms take decisions reflecting policy assumptions and interests that have very significant consequences for society at large, yet there is limited understanding of these processes.2 In this context, tensions have been highlighted3 between algorithmic enforcement and the European Convention on Human Rights (ECHR) and the Charter of Fundamental Rights of the European Union (EU Charter),4 with special emphasis on filtering, blocking, and other monitoring measures, which would fail to strike a ‘fair balance’ between copyright and other fundamental rights.5 This chapter will briefly map the complex conundrum triggered by the effects of intermediary liability and regulation on the competing fundamental rights of users, OSPs, and IP owners.

1.  Users’ Rights

Users’ fundamental rights can be highly affected by the regulation of intermediaries and by obligations imposed on them that change the way in which users enjoy online services. Most obviously, users’ freedom of information and freedom of expression—or freedom to receive and impart information—can be affected, but so can the right to privacy.6 2  This very much recalls a previous lively discussion that emerged in the context of the 2001 Copyright Directive regarding the technical protection measures (TPMs) protected by that legislation, sometimes at the expense of the limitations and exceptions foreseen by the same Directive. In that context, it was the technique, not the law, which could decide what could or could not be used, leaving the balance of copyright law in the hands of technical systems blind to the fundamental rights of users and their privileged positions. This generated strong criticism from academia. History seems to have repeated itself. See on the issue, Christophe Geiger, ‘The Answer to the Machine should not be the Machine, Safeguarding the Private Copy Exception in the Digital Environment’ (2008) 30 EIPR 121; Christophe Geiger, ‘Right to Copy v. Three-Step Test: The Future of the Private Copy Exception in the Digital Environment’ (2005) 1 Computer L. Rev Int’l 7. 3  See Joan Barata and Marco Bassini, ‘Freedom of Expression in the Internet: Main Trends of the Case Law of the European Court of Human Rights’ in Oreste Pollicino and Graziella Romeo (eds), The Internet and Constitutional Law: The Protection of Fundamental Rights and Constitutional Adjudication in Europe (Routledge 2016); Akdeniz v Turkey App. no. 20877/10 (ECtHR, 11 March 2014); C-314/12 UPC Telekabel Wien [2014] ECLI:EU:C:2014:192; C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85, paras 36–8; Stefan Kulk and Frederik Zuiderveen Borgesius, ‘Filtering for Copyright Enforcement in Europe after the Sabam Cases’ (2012) 34 EIPR 791, 791–4; Evangelia Psychogiopoulou, ‘Copyright enforcement, human rights protection and the responsibilities of internet service providers after Scarlet’ (2012) 34 EIPR 552, 555. 4  See Charter of Fundamental Rights of the European Union, 2012 OJ (C 326) 391. 5  See Chapter 29. 6  See Charter (n. 4) Arts 8, 11. The concept of user rights has received the most explicit development in the recently rendered CJEU judgments in Funke Medien and Spiegel Online (C-469/17 Funke Medien NRW GmbH v Bundesrepublik Deutschland [2019] ECLI:EU:C:2019:623, para. 70, and C‑516/17 Spiegel Online GmbH v Volker Beck [2019] ECLI:EU:C:2019:625, para. 54). For further discussion of copyright user rights in the case law of the CJEU, including in the context of the judgments in Funke Medien and Spiegel Online, see Chapter 29, as well as Christophe Geiger and Elena Izyumenko, ‘The Constitutionalization of Intellectual Property Law in the EU and the Funke Medien, Pelham and Spiegel Online Decisions of



1.1  Freedom of Information and Internet Access

The right to freedom of expression, as protected by Article 10 ECHR and Article 11 EU Charter, benefits from a privileged position in the European constitutional order and is sometimes even called the ‘European First Amendment’.7 This right, which guarantees not only the right to impart information but also the right of the public to receive it,8 has in the course of recent years evolved towards the inclusion of a genuine ‘right to internet access’, which has been increasingly construed as a fundamental right.9 The human rights nature of access to the internet has been sustained by noting that ‘the Internet, by facilitating the spreading of knowledge, increases freedom of expression and the value of citizenship’.10 Former US President Barack Obama declared on a visit to Shanghai that ‘freedom of access to information is a universal right’.11 The Council of Europe has specifically noted, also in response to three-strikes legislation proposals, that access to the internet is a ‘fundamental right’.12 In a decision of 10 June 2009 on the first HADOPI law, the French Constitutional Council, for instance, stated explicitly that ‘[i]n the current state of the means of communication and given the generalized development of public online communication services and the importance of the latter for the participation in

the CJEU: Progress, But Still Some Way to Go!’, Centre for International Intellectual Property Studies (CEIPI) Research Paper No. 2019-09, available at or ; IIC (forthcoming 2020). 7  Dirk Voorhoof, ‘Het Europese “First Amendment”: de vrijheid van expressie en informatie en de rechtspraak van het EHRM betreffende art 10 EVRM (1994–1995)’ (1995) Mediaforum (Amsterdam) 11. See also Dirk Voorhoof, ‘Freedom of expression and the right to information: Implications for copyright’ in Geiger (n. 1) 331; Christophe Geiger, Droit d’auteur et droit du public à l’information, Approche de droit comparé (Paris Litec 2004) 166. 8  See e.g. Times Newspapers Ltd (Nos 1 and 2) v United Kingdom App. nos 3002/03 and 23676/03 (ECtHR, 10 March 2009) para. 27; Ahmet Yildirim v Turkey App. no. 3111/10 (ECtHR, 18 December 2012) para. 50; Guseva v Bulgaria App. no. 6987/07 (ECtHR, 17 February 2015) para. 36; Cengiz and Others v Turkey App. nos 48226/10 and 14027/11 (ECtHR, 1 December 2015) para. 56. On the public’s right to receive information, see also Christophe Geiger, ‘Author’s Right, Copyright and the Public’s Right to Information: A Complex Relationship’ in Fiona Macmillan (ed.), New Directions in Copyright Law (Edward Elgar 2007), Vol. 5, 24. 9  See on this question, Nicola Lucchi, ‘Access to Network Services and Protection of Constitutional Rights: Recognizing the Essential Role of Internet Access for the Freedom of Expression’ (2011) 19(3) Cardozo J. of Int’l and Comp L. 645; Molly Land, ‘Toward an International Law of the Internet’ (2013) 54(2) Harv. Int’l L.J. 393. 10  See Marshall Conley and Christina Patterson, ‘Communication, Human Rights and Cyberspace’ in Steven Hick, Edward Halpin, and Eric Hoskins (eds), Human Rights and the Internet (Macmillan 2000) 211. 11  Transcript of President Barack Obama’s 16 November 2009 Town Hall meeting with Chinese students in Shanghai, as released by the White House (CBS News, 16 November 2016) . 
12  See Monika Ermert, ‘Council of Europe: Access to Internet is a Fundamental Right’ (IPWatch, 8 June 2009); ‘Internet Access is a Fundamental Right’ (BBC News, 8 March 2010) .


democracy and the expression of ideas and opinions, [the right to freedom of expression] implies freedom to access such services.’13 Since then, the fundamental right nature of access to the internet has been stressed by several international and national bodies, such as the UN Human Rights Council,14 the ITU-UNESCO Commission,15 the Costa Rican Constitutional Court declaring internet access essential to the exercise of fundamental rights,16 or the Finnish government officially making broadband a legal right.17 In Europe, a survey carried out by the European Court of Human Rights (ECtHR) of the legislation of twenty Member States of the Council of Europe revealed that:

the right to Internet access is protected in theory by the constitutional guarantees applicable to freedom of expression and freedom to receive ideas and information. The right to Internet access is considered to be inherent in the right to access information and communication protected by national Constitutions and encompasses the right for each individual to participate in the information society and the obligation for States to guarantee access to the Internet for their citizens. It can therefore be inferred from all the general guarantees protecting freedom of expression that a right to unhindered Internet access should also be recognised.18

Accordingly, any measure that is bound to have an influence on the accessibility of the internet engages the responsibility of the state under Article 10 ECHR.19 Within the framework of that Article—and the corresponding Article 11 EU Charter—a court deciding website-blocking cases will have to look at (1) the manner of the site’s usage and (2) the effects of blocking on legitimate communication, but also (3) the public interest in the disabled information and (4) whether the alternatives to accessing such information 13  Conseil constitutionnel [Constitutional Council], Decision no. 2009-580 DC of 10 June 2009, s. 12 (Fra.). On this decision, see Christophe Geiger, ‘Honourable Attempt but (ultimately) Disproportionate Offensive against Peer-to-peer on the Internet (HADOPI), A Critical Analysis of the Recent Anti-Filesharing Legislation in France’ (2011) 42(4) IIC 457, 467 ff. This ‘right to access the internet’ was based on the freedom of expression of Art. 11 of the French Declaration of the Rights of Man, the court specifying that any infringement of that right must therefore be strictly limited. 14  UN General Assembly, Human Rights Council Resolution, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ A/HRC/RES/20/8, twentieth session, 16 July 2012. See also UN General Assembly Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (Resolution) (20 June 2014) A/HRC/26/L.24. Prior to that, the right to internet access was only implicitly considered by the UN as a human right, insofar as it is inherent in the ‘freedom . . . to seek, receive and impart information and ideas through any media and regardless of frontiers’. Universal Declaration of Human Rights (10 December 1948) 217 A (III), Art. 19. 15  See Kaitlin Mara, ‘ITU-UNESCO Broadband Commission Aims at Global Internet Access’ (IPWatch, 10 May 2010). 16  ‘Acceso a Internet es un derecho fundamental’ (Nacion, 8 September 2010).
17  ‘Finland Makes Broadband a Legal Right’ (BBC News, 1 July 2010) . 18  Ahmet Yildirim v Turkey App. no. 3111/10 (ECtHR, 18 December 2012) para. 31. 19  ibid. para. 53.


were available. Under certain circumstances, it will further be pertinent to consider (5) the Article 10 implications not only for internet users but also for the intermediaries concerned. The European Commission stressed a similar point by noting:

Any limitations to access to the Open Internet can impact on end-users’ freedom of expression and the way in which they can receive and impart information. Although operators need to manage Internet traffic in order to ensure the proper functioning of the networks (including managing network congestion, security threats, etc.), there are many instances when unjustified blocking and throttling occurs.20

The ECtHR provided some guidance on the potential implications of the general public interest in information affected by intermediaries’ actions, by recalling its established case law, in accordance with which: while Article 10 § 2 of the Convention does not allow much leeway for restrictions of freedom of expression in political matters, for example, States have a broad margin of appreciation in the regulation of speech in commercial matters . . ., bearing in mind that the breadth of that margin has to be qualified where it is not strictly speaking the ‘commercial’ expression of an individual that is at stake but his participation in a debate on a matter of general interest.21

The Court referred to its earlier findings in Ashby Donald—a case that concerned the conviction in France of three fashion photographers for copyright infringement by taking photographs of designers’ clothes and publishing them online without the consent of the rightholders. There, likewise, it was noted that, ‘although one cannot deny that the public is interested in fashion in general and haute couture fashion shows in particular, it could not be said that the applicants took part in a debate of general interest when restricting themselves to making photographs of fashion shows accessible to the public’.22 In the light of that case law, the Court was not convinced that the case of Akdeniz raised an important question of general interest. The ECtHR thus seemed to imply that in other cases with greater public interest in information the blocking might not be justified in terms of Article 10 ECHR. It is 20 European Commission, ‘Staff Working Document, Impact Assessment Accompanying the Document Proposal for a Regulation of the European Parliament and of the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent, and amending Directives 2002/20/EC, 2002/21/EC and 2002/22/EC and Regulations (EC) No. 1211/2009 and (EU) No. 531/2012’ SWD (2013) 331 final, s. 3.4. 21  Akdeniz (n. 3) para. 28. On this case (and other freedom of expression-related IP cases), see Christophe Geiger and Elena Izyumenko, ‘Intellectual Property before the European Court of Human Rights’ in Christophe Geiger, Craig Nard, and Xavier Seuba (eds), Intellectual Property and the Judiciary (Edward Elgar 2018) 9. 22  Ashby Donald and Others v France App. no. 36769/08 (ECtHR, 10 January 2013) para. 39, translation from French published in (2014) 45(3) IIC 354. 
For an extensive comment, see Christophe Geiger and Elena Izyumenko, ‘Copyright on the Human Rights Trial: Redefining the Boundaries of Exclusivity through Freedom of Expression’ (2014) 45(3) IIC 316, trying to extract guidelines from the case law of the ECtHR for an Art. 10 ECHR scrutiny of copyright enforcement measures.


notable that the general public interest in information is not reduced to the political context. In effect, such an interest had previously been recognized by the Court in information on, for example, sporting matters23 or performing artists,24 as well as when the material at issue related to the moral position advocated by an influential religious community.25 The standards of scrutiny would also typically be more stringent for artistic, cultural, or otherwise ‘civil’ expression.26 Similar issues have also been raised in connection with the right to be forgotten and, more broadly, data protection rights on the internet.27 In 2014, the Court of Justice of the European Union (CJEU) ruled that an internet search engine operator is responsible for the processing that it carries out of personal data which appear on web pages published by third parties.28 Thus, under certain circumstances, search engines can be asked to remove links to web pages containing personal data. The recognition by the EU of a so-called ‘right to be forgotten’ has ignited critical reactions pointing to the fact that the right to be forgotten would endanger freedom of expression and access to information.29 According to a Communication from the Organization for Security and Co-operation in Europe (OSCE) on open journalism, ‘the legitimate need to protect privacy and other human rights should not undermine the principal role of freedom of the media and the right to seek, receive and impart information of public interest as a basic condition for democracy and political participation’.30 As Floridi and Taddeo noted:

[s]triking the correct balance between the two is not a simple matter. Things change, for example, depending on which side of the Atlantic one is. According to the European approach, privacy trumps freedom of speech; whereas the American view is that freedom of speech is preeminent with respect to privacy. 
Hence, defining the responsibilities of OSPs with respect to the right to be forgotten turns out 23 See Nikowitz and Verlagsgruppe News GmbH v Austria App. no. 5266/03 (ECtHR, 22 February 2007) para. 25 (society’s attitude towards a sports star); Colaço Mestre and SIC—Sociedade Independente de Comunicação, SA v Portugal App. nos 11182/03 and 11319/03 (ECtHR, 26 April 2007) s. 28 (an interview by the president of the sports club); Ressiot and Others v France App. nos 15054/07 and 15066/07 (ECtHR, 28 June 2012) para. 116 (doping practices in professional sport). 24 See Sapan v Turkey App. no. 44102/04 (ECtHR, 8 June 2010) para. 34 (a book about a Turkish pop star). 25 See Verlagsgruppe News GmbH and Bobi v Austria App. no. 59631/09 (ECtHR, 4 December 2012) para. 76. 26  See David Harris and others, Harris, O’Boyle, and Warbrick: Law of the European Convention on Human Rights (OUP 2009) 457. 27  See Chapter 25. 28  See C-131/12 Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González [2014] ECLI:EU:C:2014:317. 29  See e.g. Miquel Peguera, ‘The Shaky Ground of the Right to Be Delisted’, 15 Vand. J. of Ent. & Tech. L. 507 (2015); Jeffry Rosen, ‘The Right to Be Forgotten’ (2015) 64 Stan. L. Rev. Online 88; Stefan Kulk and Frederik Zuiderveen Borgesius, ‘Google Spain v. González: Did the Court Forget About Freedom of Expression?’ (2014) 3 European J. of Risk Regulation 389; Jonathan Zittrain, ‘Don’t Force Google to Forget’ (New York Times, 14 May 2014) . 30  OSCE Representative on Freedom of the Media, Dunja Mijatović, 3rd Communiqué on Open Journalism, 29 January 2016, 2 .


to be quite problematic, as it involves the balancing of different fundamental rights as well as considering the debate on the national versus international governance of the Internet.31

In this regard, the CJEU stated that a person’s right to privacy overrides, ‘as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name’.32 However, the CJEU also noted that this general rule should not apply if there is a preponderant public interest in having access to the information ‘for particular reasons, such as the role played by the data subject in public life’.33 On 26 November 2014, the European data protection authorities (DPAs) assembled in the Article 29 Working Party (WP29) adopted guidelines on the implementation of the Google Spain judgment.34 They include common criteria to be used by national DPAs when addressing complaints. According to WP29, a balance must be struck between the nature and sensitivity of the data and the interest of the public in having access to that information.35 However, if the data subject plays a role in public life, the public interest will be significantly greater.36 Therefore, the guidelines concluded, the impact of delisting on individual rights to freedom of expression and access to information will be very limited: when DPAs assess the relevant circumstances, delisting will not be appropriate if the public interest overrides the rights of the data subject.37 The guidelines also contain thirteen key criteria which the national DPAs will apply when handling complaints following refusals of delisting by search engines. These criteria have to be read in the light of ‘the interest of the general public in having access to [the] information’.38 Following in the footsteps of the WP29 clarifications and criteria, European national courts and privacy authorities further performed the necessary balancing between personal privacy interests and the public interest in access to information by noting that the Costeja ruling ‘does not intend to protect individuals against all negative communications on the Internet, but only against “being pursued” for a long time by “irrelevant”, “excessive” or “unnecessarily defamatory” expressions’.39 Instead, for example, conviction for a serious crime will in general provide information about an individual that will remain relevant,40 users cannot obtain the delisting of search results of recent news with a relevant public 31  Mariarosaria Taddeo and Luciano Floridi, ‘The Debate on the Moral Responsibility of Online Service Providers’ (2015) Sci. Eng. Ethics 1, 18–19. 32  C-131/12 (n. 28) para. 81. 33  ibid. 34  See Art. 29 Data Protection Working Party, ‘Guidelines on the Implementation of the CJEU Judgment on Google Spain v. Costeja’ (2014) 14/EN WP 225 (hereafter WP29 Guidelines). 35  ibid. 2. 36  ibid. 37  ibid. 38  ibid. 11, 13–19. 39  Rechtbank [District Court] Amsterdam [2014] ECLI:NL:RBAMS:2014:6118 (Neth.), as translated in Joran Spauwen and Jens van den Brink, ‘Dutch Google Spain ruling: More Freedom of Speech, Less Right To Be Forgotten For Criminals’ (Inforrm’s Blog, 27 September 2014). 40  ibid.


interest,41 or, again, the personal data included in the Commercial Registry ‘cannot be cancelled, anonymized, or blocked, or made available only to a limited number of interested parties’, given the prevailing interest in promoting market transparency and protecting third parties.42

1.2  Freedom of Expression

Freedom of expression as the freedom to impart information must also be considered among the countervailing rights that must be balanced within the intermediary liability dilemma. In Google v Louis Vuitton, the Advocate General of the CJEU pointed to the fact that general rules of civil liability (based on negligence)—rather than strict liability IP law rules—best suit the governance of the activities of internet intermediaries. His argument—crafted in the context of trade mark infringement online—stressed that:

[l]iability rules are more appropriate, since they do not fundamentally change the decentralised nature of the internet by giving trade mark proprietors general— and virtually absolute—control over the use in cyberspace of keywords which correspond to their trade marks. Instead of being able to prevent, through trade mark protection, any possible use—including, as has been observed, many lawful and even desirable uses—trade mark proprietors would have to point to specific instances giving rise to Google’s liability in the context of illegal damage to their trademarks.43

According to this argument, a negligence-based system would better serve the delicate balance between the protection of IP rights, access to information, and freedom of expression that the online intermediary liability conundrum entails. As Van Eecke noted, ‘the notice-and-take-down procedure is one of the essential mechanisms through which the eCommerce Directive achieves a balance between the interests of rightholders, online intermediaries and users’.44 In this regard, recital 46 of the e-Commerce Directive explicitly requires the hosting provider to respect the principle of freedom of expression when deciding on a takedown request.45 Imperfect as it is, a notice-and-takedown mechanism embeds a fundamental safeguard for freedom of information as long as it forces intermediaries to actually consider the infringing nature of the materials before deciding whether to take them down. 41  See Garante per la Protezione dei Dati Personali [Data Protection Authority], Decision no. 618 (18 December 2014) (It.). 42  See C-398/15 Camera di Commercio, Industria, Artigianato e Agricoltura di Lecce v Salvatore Manni [2016] ECLI:EU:C:2016:652, Opinion of AG Bot. 43  C-236–238/08 Google France, SARL & Google Inc. v Louis Vuitton Malletier SA, Viaticum SA, Luteciel SARL v Centre National de Recherche en Relations Humaines (CNRRH) SARL, Pierre‑Alexis Thonet, Bruno Raboin, Tiger, a franchisee of Unicis ECLI:EU:C:2009:569, AG’s Opinion, para. 123. 44  Patrick Van Eecke, ‘Online Service Providers and Liability: A Plea for a Balanced Approach’ (2011) Common Market L. Rev. 1455, 1479–80. 45  See Directive 2000/31/EC of the European Parliament and of the Council on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) [2000] OJ L178/1, recital 46.


In contrast, an ex ante mechanism based on filtering and automatic infringement-assessment systems that online intermediaries might deploy to monitor potentially infringing user activities might disproportionately favour property rights over other fundamental rights, as consistent jurisprudence of the CJEU has highlighted.46 At the present level of technological sophistication, false positives might cause significant chilling effects and negatively impact users’ fundamental right to freedom of expression. Automated systems cannot replace the human judgment needed to flag a certain use as fair or as falling within the scope of an exception or limitation, in particular since the boundaries of privileged uses are often blurred and a difficult case-by-case analysis is needed to determine whether a particular use is infringing.47 Also, complexities regarding the public domain status of certain works might escape the discerning capacity of content-recognition technologies. In the words of the CJEU, these measures:

could potentially undermine freedom of information, since that system might not distinguish adequately between unlawful content and lawful content, with the result that its introduction could lead to the blocking of lawful communications. Indeed, it is not contested that the reply to the question whether a transmission is lawful also depends on the application of statutory exceptions to copyright which vary from one Member State to another. In addition, in some Member States certain works fall within the public domain or may be posted online free of charge by the authors concerned.48

Following the approval by the European Parliament of Article 17 of the Directive on copyright and related rights in the Digital Single Market,49 the implications of an increase in the general monitoring of content uploaded onto Online Content Sharing Service Providers’ services, and of the increased use of automated filtering and enforcement systems in this regard, raise important questions relating to the preservation of users’ fundamental rights to expression and information.50

Following approval by the European Parliament of Article 17 of the Directive on copyright and related rights in the Digital Single Market,49 the implications of an increase in the general monitoring of content uploaded onto Online Content Sharing Service Providers’ services and the increased use of automated filtering and enforcement systems in this regard, raises important questions relating to the preservation of users’ fundamental rights to expression and information.50 46  See C-360/10 (n. 3) para. 52. 47  Furthermore, in the EU the manner in which the Member States implement even the very same exceptions can vary considerably from country to country. E.g. a quotation exception has very a different scope across Europe. See Bernt Hugenholtz and Martin Senftleben, ‘Fair Use in Europe: in Search of Flexibilities’, Institute for Information Law Research Paper no. 2012/33 (2012) 15–17. See also in this sense, Christophe Geiger and Franciska Schönherr, Frequently Asked Questions (FAQ) of Consumers in relation to Copyright, Summary Report (EUIPO 2017) (noting that ‘[c]opyright law throughout the EU does not give unanimous answers to Consumers’ 15 Frequently Asked Questions. . . . The result is the following: even if a few common basic principles can certainly be identified, the exceptions to these principles as well as their implementation vary significantly’). The study lists exceptions and limitations to copyright as one of the areas of major divergence in national copyright law. See ibid. 6–8. 48  See C-360/10 (n. 3) para. 50. 49  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92. 50  Many scholars have raised this issue. See recently in this sense, e.g. 
Martin Senftleben, ‘Bermuda Triangle—Licensing, Filtering and Privileging User-Generated Content Under the New Directive on Copyright in the Digital Single Market’ (April 2019) SSRN Research Paper no. 3367219 .

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Intermediary Liability and Fundamental Rights   147

Some courts have also recognized that website blocking engages the freedom of expression of the internet service providers (ISPs).51 As noted in the Advocate General's Opinion in Telekabel, '[a]lthough it is true that, in substance, the expressions of opinion and information in question are those of the ISP's customers, the ISP can nevertheless rely on that fundamental right by virtue of its function of publishing its customers' expressions of opinion and providing them with information.'52 In support of this contention, the Advocate General referred to an established body of ECtHR case law, in accordance with which 'Article 10 guarantees freedom of expression to "everyone", [with] [n]o distinction [being] made in it according to the nature of the aim pursued or the role played by natural or legal persons in the exercise of that freedom.'53 According to the ECtHR, although 'publishers do not necessarily associate themselves with the opinions expressed in the works they publish, . . . by providing authors with a medium they participate in the exercise of the freedom of expression . . .'54

ISPs' freedom of expression claims are, as such, not unusual in the practice of the ECtHR. In the Delfi AS case, the Grand Chamber considered this right in connection with the liability of Estonia's largest internet portal for hosting infringing content generated by its users.55 On the facts, however, no violation of Article 10 was established. That said, in a recent judgment on a similar issue the Court found that holding an ISP liable for user comments did indeed violate that ISP's freedom of expression.56 At the national level, a series of UK cases also highlighted that website blocking affects the freedom of expression of ISPs and website operators.57 Nevertheless, in those particular cases, the property rights of copyright owners clearly outweighed the ISPs' competing rights.58

1.3  Right to Privacy and Data Protection

Intermediaries' regulation and enforcement actions also struggle to find the proper balance between privacy and freedom of expression. On the one hand, courts have stressed the importance of imposing liability on intermediaries by noting that 'violations of privacy of individuals and companies, summary trials and public lynching of innocents are routinely reported, all practiced in the worldwide web with substantially increased damage because of the widespread nature of this medium of expression'.59 Upholding the protection of the right to privacy against freedom of expression, courts reinforce this 'internet threat' discourse by noting that, on the internet, '[d]efamatory and other types of clearly unlawful speech, including hate speech and speech inciting violence, can be disseminated like never before, worldwide, in a matter of seconds, and sometimes remain persistently available online.'60

On the other hand, courts have highlighted that the unqualified deployment of filtering and monitoring obligations may also impinge on users' right to protection of personal data. The CJEU has outlined the disproportionate impact of these measures on users' privacy by concluding that:

requiring installation of the contested filtering system would involve the identification, systematic analysis and processing of information connected with the profiles created on the social network by its users. The information connected with those profiles is protected personal data because, in principle, it allows those users to be identified.61

51  See on this issue extensively, Christophe Geiger and Elena Izyumenko, 'The Role of Human Rights in Copyright Enforcement Online: Elaborating a Legal Framework for Website Blocking' (2016) 32(1) American U. Int'l L. Rev. 43; and from the same authors Chapter 29 in this Handbook.
52  C-314/12 UPC Telekabel Wien [2013] ECLI:EU:C:2013:781, Opinion of AG Villalón, para. 82.
53  Öztürk v Turkey App. no. 22479/93 (ECtHR, 28 September 1999) para. 49.
54  ibid.
55  See Delfi AS v Estonia App. no. 64569/09 (ECtHR, 16 June 2015).
56  See Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App. no. 22947/13 (ECtHR, 2 February 2016).
57  See Twentieth Century Fox Film Corp. & Others v British Telecommunications Plc [2011] EWHC 1981 (Ch), [200] (UK); EMI Records Ltd & Others v British Sky Broadcasting Ltd & Others [2013] EWHC 379 (Ch), [94] (UK).
58  See Twentieth Century Fox (n. 57) [200]; EMI Records (n. 57) [107].

In Scarlet and Netlog, the CJEU recognized for the first time a 'fundamental right to the protection of personal data, which is not fairly balanced with copyright holders' rights when a mechanism requiring the systematic processing of personal data is imposed in the name of the protection of intellectual property'.62 According to the ECtHR, which tends to be critical of systems that intercept communications, secrecy of communication or the right to respect for private life63 could also be impinged on by filtering technologies, especially when those systems monitor the content of communications.64

2.  OSPs, Freedom of Business, and Innovation

Apart from freedom of expression, another fundamental right relevant to the balancing exercise when imposing obligations on OSPs is the freedom to conduct a business under Article 16 EU Charter.65 In contrast to the long constitutional tradition of freedom of expression, the freedom to conduct a business is not known to any international human rights instrument other than the EU Charter, which also makes the freedom to conduct a business a relatively young right.66 As a consequence, 'to date the case law has not . . . provided a full and useful definition of this freedom'.67 Both the textual context and the judicial history of Article 16, therefore, point to its lack of qualification, and allow the state a wide power to interfere with it.68 This particularly 'weak' nature of the right69 has arguably allowed the CJEU to also rule in favour of rightholders in cases where the Advocate General has failed to find a 'fair balance'.70

Obviously, new obligations imposed on online intermediaries increase barriers to innovation by making it more expensive for platforms to enter and compete in the market. Online intermediaries might be called on to develop and deploy costly technology to cope with legal requirements and obligations. The CJEU has emphasized the economic impact on OSPs of new obligations, such as filtering and monitoring obligations. The CJEU reasoned that monitoring all the electronic communications made through a network, without any limitation in time, directed at all future infringements of existing and yet-to-be-created works 'would result in a serious infringement of the freedom of the hosting service provider to conduct its business'.71 Hosting providers' freedom of business would be disproportionately affected, since an obligation to adopt filtering technologies would require the ISP to install a complicated, costly, and permanent system at its own expense.72 In addition, according to the CJEU, this obligation would be contrary to Article 3 of the Enforcement Directive, providing that 'procedures and remedies necessary to ensure the enforcement of the intellectual property rights . . . shall not be unnecessarily complicated or costly [and] shall be applied in such a manner as to avoid the creation of barriers to legitimate trade'.73

59  Delfi (n. 55) para. 110.
60  Google Brazil v Dafra, Special Appeal no. 1306157/SP (Superior Court of Justice, 24 March 2014).
61  C-360/10 (n. 3) para. 49.
62  Gloria González Fuster, 'Balancing Intellectual Property Against Data Protection: a New Right's Wavering Weight' (2012) 14 IDP 34, 37.
63  See EU Charter (n. 6) Art. 7.
64  See Kulk and Borgesius (n. 3) 793–4.
65  See EU Charter (n. 4) Art. 16. On this relatively recent fundamental right and its relation to IP, see Gustavo Ghidini and Andrea Stazi, 'Freedom to conduct a business, competition and intellectual property' in Geiger (n. 1) 410.

3.  IP Owners and Property Rights

Another important fundamental right implicated by intermediary liability and regulation is the right to property of IP holders. This is also the right against which the rights mentioned in the previous sections need to be balanced. In Europe, this balancing is predetermined by the fundamental-right status of IP as per Article 17(2) EU Charter and (somewhat implicitly) Article 1 Protocol No. 1 ECHR, as well as by considerations of how effective an enforcement of this fundamental right must be. The latter, in turn, would depend on what the European legislator regards as a valid objective of copyright protection.

In 2013, the ECtHR noted that '[a]s to the weight afforded to the interest of protecting the copyright-holders, the Court would stress that intellectual property benefits from the protection afforded by Article 1 of Protocol No. 1 to the Convention.'74 Consequently, as stated in that case and further reiterated since, when balancing two competing interests both protected by the Convention on the level of human rights, states are afforded 'a particularly wide' margin of appreciation.75

The European position that IP by definition enjoys human rights protection was predetermined by a number of prior developments at both the judicial and legislative levels. First of all, the EU Charter, unlike the ECHR, expressly enshrined the protection of the right to IP in its catalogue of rights under Article 17(2).76 The ECHR, for its part, although not containing a specific IP clause, has been interpreted by the Strasbourg Court as extending its Article 1 Protocol No. 1 protection to the entire range of traditionally recognized IP rights.77 Most indicative in this regard is the oft-quoted Anheuser-Busch case, which expanded Article 1 First Protocol protection to 'mere' applications for the registration of trade marks.78 Certain scholars have thus stressed that this European approach to an autonomous fundamental-right nature of IP might lead, however, to a more rigorous protection of IP interests vis-à-vis other fundamental rights involved in balancing, including the users' information rights and the ISPs' business freedom.79

However, the reach of the (human) right to property protection for IP should not be overestimated. As noted by the CJEU in Telekabel, 'there is nothing whatsoever in the wording of Article 17(2) EU Charter to suggest that the right to intellectual property is inviolable and must for that reason be absolutely protected'.80 Article 17(2) EU Charter could then be considered to be nothing more than a simple clarification of Article 17(1), which clearly recalls that '[t]he use of property may be regulated by law in so far as is necessary for the general interest'.81 Likewise, the first paragraph of Article 1 First Protocol ECHR provides for the possibility of restrictions of the right 'in the public interest', while the second paragraph of the same provision allows the state 'to enforce such laws as it deems necessary to control the use of property in accordance with the general interest . . .'82

This limited nature of the right to property was clearly envisaged by the drafters of the ECHR and the EU Charter. As the preparatory works of the First Protocol to the ECHR demonstrate, the newly introduced property paradigm was viewed as being of a 'relative' nature as opposed to an absolute right to own property.83 A similar logic, clearly excluding an 'absolutist' conception of IP, accompanied the preparatory documents of the EU Charter, insofar as the drafters took care to specify that 'the guarantees laid down in paragraph 1 [of Art. 17] shall apply as appropriate to intellectual property' and that 'the meaning and scope of Article 17 are the same as those of the right guaranteed under Article 1 of the First Protocol to the ECHR'.84

This approach also matches the traditional construction of Article 27(2) of the Universal Declaration of Human Rights (UDHR) and Article 15(1)(c) of the International Covenant on Economic, Social and Cultural Rights (ICESCR), both of which secure for authors the benefits of the 'protection of the moral and material interests resulting from [their] scientific, literary or artistic production'. As stressed in the report by the UN Special Rapporteur in the field of cultural rights, although it is tempting to infer from the wording of these provisions that Article 15(1)(c) recognizes a human right to protection of intellectual property, 'this equation is false and misleading'.85 According to the UN Committee on Economic, Social and Cultural Rights (CESCR)—the body in charge of the implementation of the ICESCR—an evident distinction exists in principle between standard IP rights and the human rights protection given to creators in accordance with Article 15(1)(c).86 Thus, Article 15(1)(c) guarantees some sort of protection; however, for the UN Committee it cannot be interpreted as guaranteeing IP rights or as elevating IP to the human rights regime.87

66  Note, however, that some national constitutions provided for the protection of the freedom to conduct a business long before these supranational developments. See e.g. Italian Constitution of 1947, Art. 41; Spanish Constitution of 1978, Art. 38; Croatian Constitution of 1990, Art. 49; and Slovenian Constitution of 1991, Art. 74.
67  C-426/11 Mark Alemo-Herron and Others v Parkwood Leisure Ltd [2013] ECLI:EU:C:2013:82, Opinion of AG Villalón, para. 49.
68  See C-283/11 Sky Österreich GmbH v Österreichischer Rundfunk [2013] ECLI:EU:C:2013:28, para. 46; T-587/13 Miriam Schwerdt v Office for Harmonisation in the Internal Market [2015] ECLI:EU:T:2015:37, para. 55.
69  See Xavier Groussot, Gunnar Thor Pétursson, and Justin Pierce, 'Weak Right, Strong Court—The Freedom to Conduct Business and the EU Charter of Fundamental Rights' in Sionaidh Douglas-Scott and Nicholas Hatzis (eds), Handbook on EU Law and Human Rights (Edward Elgar 2017) 326–44; EMI Records (n. 57) [107].
70  See C-314/12 (n. 3) para. 49.
71  C-360/10 (n. 3) para. 46.
72  ibid.
73  See Directive 2004/48/EC of the European Parliament and of the Council on the enforcement of intellectual property rights [2004] OJ L157, Art. 3.
74  Neij and Sunde Kolmisoppi v Sweden App. no. 40397/12 (ECtHR, 19 February 2013) para. 10. See also Geiger and Izyumenko (n. 22) 316.
75  Neij and Sunde Kolmisoppi (n. 74) para. 11. See also Akdeniz (n. 3) para. 28; Ashby Donald (n. 22) para. 40.
76  See Christophe Geiger, 'Intellectual "Property" after the Treaty of Lisbon: Towards a Different Approach in the New European Legal Order?' (2010) 32(6) EIPR 255; Christophe Geiger, 'Intellectual Property Shall be Protected!? Article 17(2) of the Charter of Fundamental Rights of the European Union: A Mysterious Provision with an Unclear Scope' (2009) 31(3) EIPR 113; Jonathan Griffiths and Luke McDonagh, 'Fundamental Rights and European IP Law: the Case of Art 17(2) of the EU Charter' in Christophe Geiger (ed.), Constructing European Intellectual Property: Achievements and New Perspectives (Edward Elgar 2012) 75 ff.
77  See in the field of copyright: Akdeniz (n. 3); Neij and Sunde Kolmisoppi (n. 74); Ashby Donald (n. 22); Balan v Moldova App. no. 19247/03 (ECtHR, 29 January 2008); Melnychuk v Ukraine App. no. 28743/03 (ECtHR, 5 July 2005); Dima v Romania App. no. 58472/00 (ECtHR, 26 May 2005). In the field of trade marks: Paeffgen Gmbh v Germany App. nos 25379/04, 21688/05, 21722/05 and 21770/05 (ECtHR, 18 September 2007); Anheuser-Busch Inc. v Portugal App. no. 73049/01 (ECtHR, 11 January 2007). In the field of patent law: Lenzing AG v United Kingdom App. no. 38817/97 (ECommHR, 9 September 1998); Smith Kline & French Lab. Ltd v Netherlands App. no. 12633/87 (ECommHR, 4 October 1990).
78  See Anheuser-Busch (n. 77). See also Klaus-Dieter Beiter, 'The Right to Property and the Protection of Interests in Intellectual Property—A Human Rights Perspective on the European Court of Human Rights' Decision in Anheuser-Busch Inc v Portugal' (2008) 39(6) IIC 714, 717.
79  See e.g., for a critique of the approach towards treating IP rights and other fundamental rights 'as if they were of equal rank', Alexander Peukert, 'The Fundamental Right to (Intellectual) Property and the Discretion of the Legislature' in Geiger (ed.) (n. 1) 132; Robert Burrell and Dev Gangjee, 'Trade Marks and Freedom of Expression: A Call for Caution' (2010) 41(5) IIC 544; Christina Angelopoulos, 'Freedom of Expression and Copyright: The Double Balancing Act' (2008) 3 IPQ 328. See, however, Christophe Geiger, 'Fundamental Rights, a Safeguard for the Coherence of Intellectual Property Law?' (2004) 35 IIC 268; Christophe Geiger, 'Copyright's Fundamental Rights Dimension at EU Level' in Estelle Derclaye (ed.), Research Handbook on the Future of EU Copyright (Edward Elgar 2009) 27; Christophe Geiger, 'The Social Function of Intellectual Property Rights, or How Ethics Can Influence the Shape and Use of IP Law' in Graeme Dinwoodie (ed.), Methods and Perspectives in Intellectual Property (Edward Elgar 2013) 153, underlining that the property aspects protected at a constitutional level refer to property of a special kind, property with a strong social function, which should therefore not be equated with physical property since it is far more limited in its scope. Under this understanding, IP has to be considered as having a more limited nature than the right to physical property, which should be taken into account when balancing it with other fundamental rights. In fact, as the legislature has a vast margin of appreciation when defining the contours of rights on intangibles, the constitutional protection remains rather 'weak' in comparison with other rights.
80  C-314/12 (n. 3) para. 61.
81  ibid.
82  ibid.
83  See Council of Europe, 'Preparatory Work on Article 1 of the First Protocol to the European Convention on Human Rights' (13 August 1976) CDH (76) 36, 12 and 16.
84  Note from the Praesidium, 'Draft Charter of Fundamental Rights of the European Union, Explanations on Article 17 of the EU Charter' (2000), 19–20.

4.  Conclusions

The implications of online intermediaries' liability and regulation raise important questions relating to the preservation of the fundamental rights of users, OSPs, and IP rightholders. The tension between the competing rights to freedom of expression, property, privacy, and freedom of business leads to the unavoidable constriction of some rights in favour of others, depending on policy choices. This 'conundrum' brought about by the multiple opposing rights, interests, and players involved in intermediary liability regulation has yet to find a sustainable policy solution. While legislators have so far tended to privilege some of the relevant interests over others, increasingly resorting to mandatory or voluntary promotion of filtering, monitoring, and automated enforcement technologies, courts have often found middle grounds based on a case-by-case analysis, thus securing a better balance between multiple fundamental rights. This flexibility needs to be preserved, and could lead policymakers to reflect on adopting rules that allow the judiciary to adapt to the specific situation of a case, while also considering other legal mechanisms based on remuneration rights which would 'legalize' several infringing situations.88

85  UN General Assembly, 'Report of the Special Rapporteur in the field of cultural rights, Farida Shaheed, Copyright Policy and the Right to Science and Culture, Human Rights Council' (2014) A/HRC/28/57, s. 26. On this report, see Christophe Geiger (ed.), Intellectual Property and Access to Science and Culture: Convergence or Conflict? (CEIPI/ICTSD 2016) (including a Foreword by the Special Rapporteur).
86  See CESCR, 'General Comment no. 17 on Article 15(1)(c) of the ICESCR' (12 January 2006) E/C12/GC/17.
87  ibid. For a detailed analysis of these documents, see Christophe Geiger, 'Implementing Intellectual Property Provisions in Human Rights Instruments: Towards a New Social Contract for the Protection of Intangibles' in Geiger (ed.) (n. 1) 661 ff.
88  See Christophe Geiger, 'Challenges for the Enforcement of Copyright in the Online World: Time for a New Approach' in Paul Torremans (ed.), Research Handbook on the Cross-Border Enforcement of Intellectual Property (Edward Elgar 2014) 704; Christophe Geiger, 'Promoting Creativity through Copyright Limitations, Reflections on the Concept of Exclusivity in Copyright Law' (2010) 12(3) Vand. J. of Ent. & Tech. L. 515. More recently, see Joao Pedro Quintais, Copyright in the Age of Online Access: Alternative Compensation Systems in EU Law (Kluwer Law Int'l 2017). On the possibility of achieving flexibility under European copyright law via the introduction of an open-ended flexible exception to copyright infringement based on the balancing factors of European freedom of expression law, see Christophe Geiger and Elena Izyumenko, 'Towards a European "Fair Use" Grounded in Freedom of Expression' (2019) 35(1) Am. U. Int'l L. Rev. 1.


PART III

SAFE HARBOURS, LIABILITY, AND FRAGMENTATION


Chapter 8

An Overview of the United States' Section 230 Internet Immunity

Eric Goldman

47 USC § 230 says that websites and other online services are not liable for third party content. This legal policy is simple and elegant, but is hardly intuitive, and it has had extraordinary consequences for the internet and our society. This chapter provides an overview of section 230 and how it compares to some foreign counterparts.

1.  Pre-Section 230 Law

Typically, liability for third party content attaches when the disseminator has the discretion to publish it or not. If a disseminator cannot exercise editorial control—such as telephone service providers functioning in their legal status as common carriers—the disseminator is not legally responsible for third party content it had to disseminate.1 In contrast, if the disseminator can exercise editorial control over what to disseminate—such as traditional publishers—the disseminator accepts legal liability for the decisions it makes.2 Thus, traditional publishers are usually liable for all content they, in their editorial discretion, choose to publish.

With respect to third party content, many online intermediaries do not closely resemble either common carriers or traditional publishers. Unlike offline publishers, many online intermediaries do not have humans pre-screen online content before disseminating it.

1  See e.g. Restatement (Second) of Torts s. 612 cmt. g (1977) (US).
2  See Rodney Smolla, 1 Law of Defamation (Thomson Reuters 2018) s. 4:87.

© Eric Goldman 2020.


The volume of content often makes pre-screening too expensive or slow;3 and pre-screening is effectively impossible for real-time content like live-streamed video. At the same time, most online intermediaries are not legally defined as common carriers,4 and regulators expect them to routinely refuse services that support abusive, objectionable, or criminal outcomes.5

Given the ill-fitting paradigms, how should courts apply traditional editorial control/liability principles? Before Congress enacted section 230, two cases addressed this issue.

The first was Cubby v CompuServe, a 1991 case from the Southern District of New York. At the time, CompuServe provided its subscribers with dial-up access to its network plus content resources and databases that they could browse. CompuServe charged subscribers per minute they were online, and CompuServe could share a portion of those subscriber charges as licence fees with content publishers. CompuServe had a licensing arrangement to carry a third party newsletter called Rumorville. Rumorville periodically uploaded its content electronically to CompuServe's servers. CompuServe employees did not pre-screen these uploads.

Perhaps not surprisingly (given its name), Rumorville was accused of defamation. The plaintiff sued CompuServe and others. The court dismissed CompuServe,6 holding that, for defamation purposes, CompuServe acted as the 'distributor' of Rumorville, not as the publisher, and therefore could be liable only if it knew or had reason to know of the defamatory content. Due to Rumorville's automated uploads, CompuServe lacked such scienter.

Though CompuServe won the ruling, the court's legal standard was not necessarily good news for other defendants. The ruling suggested that an online intermediary would be liable for third party content whenever it knew, or should have known, of the defamation.
This created a notice-and-takedown standard for defamation, including 'hecklers' vetoes' where aggrieved individuals can easily remove content by making unfounded allegations of defamation.7 Cubby also left unresolved what should happen with claims other than defamation or when humans pre-screen third party content before publication. Nevertheless, the Cubby ruling provided some guidance to the nascent industry. Many online intermediaries chose to take a light-handed approach to moderating or removing third party content to position themselves, like CompuServe, as the relatively passive recipients of third party content.

3  e.g. in 2015, YouTube got 400 hours of newly uploaded video every minute. See Statista.
4  See e.g. Religious Technology Center v Netcom On-Line Communication Services, 907 F.Supp. 1361 (ND Cal. 1995) (US).
5  See Sen. Ron Wyden, 'The Consequences of Indecency' (TechCrunch, 23 August 2018).
6  See Cubby Inc. v CompuServe Inc., 776 F.Supp. 135 (SDNY 1991) (US).
7  See Zeran v America Online Inc., 129 F.3d 327 (4th Cir. 1997), cert. denied, 524 US 937 (1998) (US) (discussing the risks of hecklers' vetoes for defamation).


The second pre-section 230 ruling shook up that practice. The 1995 New York state court case of Stratton Oakmont v Prodigy involved the online message boards of Prodigy, a CompuServe competitor. A subscriber allegedly posted defamatory remarks about Stratton Oakmont, an investment bank subsequently depicted unfavourably in the movie The Wolf of Wall Street.8 The investment bank sued Prodigy for $100 million. The court held that Prodigy could be liable for the subscriber's defamatory message board posts because Prodigy had marketed itself as a family-friendly service and had taken steps to remove objectionable content from its message boards. The court said that collectively these efforts had turned Prodigy into a 'publisher' of the subscriber-supplied message board content and exposed it to liability.9

1.1  The Moderator's Dilemma

In 1995, the United States experienced a techno-panic about children's access to online pornography.10 Congress wanted and expected online intermediaries to aggressively screen out pornography and other objectionable content.11 It would be counterproductive for the law to deter online intermediaries from taking these socially valuable steps. Arguably, the Stratton Oakmont ruling did exactly that. According to Stratton Oakmont, endeavouring to remove objectionable content, but doing the job imperfectly, left the online intermediary legally exposed for everything it missed—with potentially business-ending legal exposure for each and every missed item.

This dynamic creates the 'Moderator's Dilemma'.12 Stratton Oakmont drives online intermediaries to choose between:

• moderate or remove user content to promote a safe or family-friendly environment, but at the cost of accepting legal responsibility for anything it misses, which the service will manage by aggressively screening/removing third party content to mitigate that liability—or by eliminating third party content entirely; or
• do as little as possible to manage user content, in which case it can argue that it does not 'know or have reason to know' about any legally problematic content and possibly escape legal exposure for that content.

The latter tactic is exactly what Congress did not want online intermediaries to do, because it would lead to more children being exposed to pornography online. However, following the Cubby and Stratton Oakmont cases, many online intermediaries were likely to prioritize risk mitigation and reduce their efforts to screen out objectionable content.

Section 230 eliminates the Moderator's Dilemma. As the legislative history explains, section 230 was intended to overrule Stratton Oakmont 'and any other similar decisions which have treated providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material'.13

8  See The Wolf of Wall Street (2013).
9  See Stratton Oakmont Inc. v Prodigy Services Co., 1995 WL 323710 (NY Sup. Ct. 1995) (US).
10  See e.g. Philip Elmer-Dewitt, 'Cyberporn' (Time, 3 July 1995).
11  cf. 47 USC § 230(b)(4) (US) ('It is the policy of the United States . . . to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material').
12  See Eric Goldman, 'Sex Trafficking Exceptions to Section 230', Santa Clara U. Legal Studies Research Paper no. 13/2017 (2017).

2.  Section 230’s Protections for Defendants Section 230(c)(1) says: ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’14 This translates into three prima facie elements of the immunity. (1) The immunity applies to a ‘provider or user of an interactive computer service’. The statutory definition of ‘interactive computer service’15 contemplates the operations of CompuServe, Prodigy, America Online, and other mid-1990 services that packaged dial-up internet/server access with access to proprietary content. However, courts have interpreted ‘providers’ expansively to include virtually any service available through the internet.16 Furthermore, ‘users’ of interactive computer services should, in theory, cover everyone who is a customer of a pro­vider.17 This means, in practice, that everyone online should satisfy this first element. (2) The immunity applies to any claims that treat the defendant as a ‘publisher or speaker’. This standard makes sense in the context of defamation, where ‘publication’ is an express element of the plaintiff ’s prima facie case.18 However, courts usually read this element more broadly so that it applies regardless of whether the claim’s prima facie elements contain the term ‘publisher’ or ‘speaker’. Typically, courts find this element satisfied whenever the plaintiff seeks to hold the defendant responsible for third party content, regardless of which specific causes of action the plaintiff actually asserts, unless the claim fits into one of the statutory exceptions.19 (3) The immunity applies when the plaintiff ’s claim is based on information ‘provided by another information content provider’. This element seeks to divide the universe of content into first party and third party content, and defendants are 13  HR 104-458 (1996). 14  47 USC § 230(b)(4). 15  47 USC § 230(f)(2). 
16  See Ian Ballon, 4 E-Commerce & Internet Law (2017 update) 37.05[2] (‘almost any networked computer would qualify as an interactive computer service, as would an access software provider’). 17 See Barrett v Rosenthal, 40 Cal. 4th 33 (2006) (US). 18  See Smolla (n. 2) s. 1:34. 19  See Ballon (n. 16) 37.05[1][C].

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Overview of the United States' Section 230 Internet Immunity   159

immunized only for the latter. Sometimes, the division between first party and third party content is obvious. Employee-authored content normally qualifies as first party content;20 user-submitted content normally qualifies as third party content. However, plaintiffs have deployed a vast array of legal theories—some more successful than others—to muddy the distinction and hold defendants liable for what is otherwise fairly obviously third party content.

A recap of these three prima facie elements: typically, section 230(c)(1) applies to anyone connected to the internet for any claim (other than statutorily excluded claims) that is based on third party content. In other words, websites and online services are not liable for third party content.

Section 230(c)(1) says nothing about the defendant's scienter. As a result, a defendant can have scienter about problematic content—for example, they can 'know' that they are publishing tortious or illegal third party content—and still avoid liability for that content. For that reason, the section 230 jurisprudence has not been plagued by the same tendentious philosophical inquiries into what and when an online service 'knows' about user content that we have seen in the copyright context with the Digital Millennium Copyright Act's (DMCA) online safe harbour.21

Furthermore, section 230(c)(1) applies even if the defendant does human pre-screening or post-publication reviews of third party content. This means section 230(c)(1) is equally available to a service that exercises the same level of editorial control as a traditional publisher—or zero editorial control. Thus, section 230 remarkably collapses the historical legal distinctions between traditional publishers and common carriers. Section 230(c)(1) says that online intermediaries can function like traditional publishers but receive the favourable legal treatment of common carriers.
As you can imagine, this counterintuitive result frequently baffles plaintiffs and occasionally baffles judges.

Section 230(c)(1) provides substantial immunity for defendants, but it is also highly valued by defendants for its procedural benefits.22 Frequently, section 230(c)(1) defendants win on a motion to dismiss. In those situations, the case ends quickly and at relatively low cost, without expensive discovery, summary judgment motions, or trials. Furthermore, because of its broad scope, plaintiffs' attempts to work around section 230(c)(1) usually fail, so creative plaintiffs cannot artfully plead past a motion to dismiss. Contrast the quick and easy section 230(c)(1) dismissals with the often-arduous defence wins in section 512 cases.23 The procedural benefits make a huge substantive difference to defendants.

20  But see Delfino v Agilent Technologies Inc., 145 Cal. App. 4th 790 (Cal. App. Ct. 2006) (US) (employer qualified for s. 230 immunity for employee activity). 21  See Eric Goldman, 'How the DMCA's Online Copyright Safe Harbor Failed' (2015) 18 Korea U. L. Rev. 103. 22  See Eric Goldman, 'Why Section 230 Is Better Than the First Amendment' (2019) Notre Dame L. Rev. Online (forthcoming) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3351323>. 23  See e.g. UMG Recordings Inc. v Shelter Capital Partners LLC, 667 F.3d 1022 (9th Cir. 2011) (US) (Veoh went bankrupt defending its eligibility for the safe harbour).


Online intermediaries also can benefit from the section 230(c)(2) safe harbour. That provision protects the intermediary's good faith decisions to block or remove content the intermediary considers 'to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable',24 as well as the provision of blocking/filtering instructions (e.g. providing anti-spam or anti-spyware software).25 Compared to section 230(c)(1), online intermediaries take advantage of section 230(c)(2) relatively rarely, for several reasons.

• In the mid-2000s, courts emphatically held that section 230(c)(2) protected anti-spyware vendors.26 This functionally eliminated lawsuits by software manufacturers who were aggrieved that their software had been classified as spyware.

• Otherwise, section 230(c)(2) applies to claims by a service's content uploaders who are unhappy that the service removed their content. In many cases, any particular content item will not be significant enough for an aggrieved uploader to sue, but there are exceptions. For example, if a record label puts a lot of money behind marketing the URL of a YouTube video, and then YouTube removes the video, the marketing investments may be wasted.27 However, these plaintiffs—the content uploaders—almost always agree to the service's user agreement, which will contain many protective provisions such as termination for convenience, a disclaimer of any obligation to publish (or a reservation of the service's unrestricted editorial discretion), limitations of liability, and more. Section 230(c)(2) could be a helpful complement to the contract protections,28 but it is rarely essential.

• Unlike section 230(c)(1), section 230(c)(2) considers whether the defendant acted in 'good faith'. Plaintiffs routinely allege bad faith removal or blocking by the defendant, even if the plaintiff does not actually have any evidence to back up the allegation.
These unsupported allegations increase the odds that the case will survive a motion to dismiss and advance to discovery and summary judgment—an expensive and time-consuming proposition. Unless defendants can find some other way to defeat the lawsuit on a motion to dismiss, section 230(c)(2)'s delayed resolution is not that helpful.

2.1  Section 230's Statutory Exclusions

Section 230 has four statutory exclusions where it is categorically unavailable.

24  47 USC § 230(c)(2)(A). 25  ibid. § 230(c)(2)(B). 26  See e.g. Zango Inc. v Kaspersky Lab Inc., 568 F.3d 1169 (9th Cir. 2009) (US). 27  See e.g. the YouTube ‘remove-and-relocate’ cases, including Darnaa v Google LLC, 2018 WL 6131133 (9th Cir. 2018) (US); Kinney v YouTube LLC, 2018 WL 5961898 (Cal. App. Ct. 2018) (US); Song Fi v Google Inc., 2018 WL 2215836 (ND Cal. 2018) (US); Bartholomew v YouTube LLC, 17 Cal. App. 5th 1217 (Cal. App. Ct. 2017) (US); Lancaster v Alphabet Inc., 2016 WL 3648608 (ND Cal. 2016); Lewis v YouTube LLC, 244 Cal. App. 4th 118 (Cal. App. Ct. 2015) (US). 28 See Eric Goldman, ‘Online User Account Termination and 47 USC s230(c)(2)’ (2012) 2 U. C. Irvine L. Rev. 659.


(1) ECPA/state law equivalents. Section 230 does not apply to plaintiffs' claims based on the Electronic Communications Privacy Act or state law equivalents.29 This exception is functionally a null set because it is almost impossible for a defendant to simultaneously violate the ECPA and qualify for section 230.

(2) Intellectual property (IP) claims. Section 230 does not apply to 'intellectual property' claims.30 'Intellectual property' is not defined in the statute, and this exception is more complicated than it might appear. In Perfect 10 v CCBill,31 the Ninth Circuit held that the exclusion only applied to federal IP claims, so section 230 pre-empted all state IP claims. State IP claims can include state trade mark claims, state copyright claims, trade secret claims, publicity right claims, and possibly others. Thus, in the Ninth Circuit, the CCBill precedent has been invoked numerous times to defeat lawsuits predicated on third party state IP claims.32 Courts outside the Ninth Circuit do not agree with the CCBill ruling,33 so state IP claims remain viable in those jurisdictions despite section 230. All courts agree that federal IP claims, such as federal copyright and federal trade mark claims, based on third party content are clearly excluded from section 230. However, Congress expressly said the Defend Trade Secrets Act (DTSA), the federal trade secret law, is not an IP law, so any DTSA claims based on third party content are immunized by section 230.34

(3) Federal criminal prosecutions. Prosecutions of federal crimes are not immunized by section 230.35 The US Department of Justice (DOJ) rarely pursues websites for third party content, but the exceptions are noteworthy.
For example, in 2007, Google, Yahoo, and Microsoft paid a total of $31.5 million to settle complaints about running illegal gambling ads.36 In 2011, Google paid $500 million to settle complaints about running ads for illegal pharmacies.37 And Ross Ulbricht, the purported operator of the Silk Road marketplace for illegal items, received a life sentence following the DOJ's prosecution.38

While federal criminal enforcement is excluded from section 230, civil claims based on federal criminal law are pre-empted to the extent they are predicated on

29  See 47 USC § 230(e)(4). 30  ibid. § 230(e)(2). 31 See Perfect 10 Inc. v CCBill LLC, 488 F.3d 1102 (9th Cir. 2007) (US). 32  See Ballon (n. 16) 37.05[5]. 33  See e.g. Atlantic Recording Corp. v Project Playlist Inc., 603 F.Supp.2d 690 (SDNY 2009) (US). 34  See Eric Goldman, ‘The Defend Trade Secrets Act Isn’t an Intellectual Property Law’ (2017) 33 Santa Clara High Tech. L.J. 541. 35  See 47 USC § 230(e)(1). 36  See Corey Boles, ‘U.S. Fines Google, Microsoft, Yahoo’ (Wall Street Journal, 20 December 2007) . 37  See ‘Google Forfeits $500 Million Generated by Online Ads & Prescription Drug Sales by Canadian Online Pharmacies’ (US DOJ, 24 August 2011) . 38 See US v Ulbricht, 858 F.3d 71 (2d Cir. 2017) (US).


third party content.39 Similarly, except for FOSTA's limited exceptions, state criminal prosecutions are pre-empted to the extent they are predicated on third party content.40

(4) FOSTA. In 2018, Congress created several new exceptions to section 230 in the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA). Those exceptions are beyond the scope of this chapter.41

3.  Section 230's Implications

Section 230 is an exceptionalist statute: it treats the internet differently than other media. Indeed, section 230 is the quintessential exceptionalist statute from the mid-1990s exceptionalism peak.42 To demonstrate section 230's exceptionalist nature, consider this example:

Scenario A: an author submits a 'letter to the editor' to her local newspaper. The letter contains defamatory statements. The newspaper publishes the letter in its 'dead trees' print edition. The newspaper will be liable for defamation along with the letter's author.43

Scenario B: the same author submits the same letter to her local newspaper, but the local newspaper only publishes the letter in its online edition, not in its print edition. The author will still be liable, but section 230 protects the newspaper from liability.

It is counterintuitive that scenarios A and B produce different outcomes. The letter contains the exact same content, by the exact same author, disseminated by the exact same publisher, yet the liability results diverge. As the maxim goes, the medium matters.44

Section 230 can be characterized as a type of legal privilege. Section 230 'privileges' online publishers over offline publishers by giving online publishers more favourable legal protection. This leads to a financial privilege by reducing online publishers' pre-publication costs and post-publication financial exposure. These legal and financial

39 See Doe v Bates, 2006 WL 3813758 (ED Tex. 2006); Jane Doe No. 1 v Backpage.com LLC, 817 F.3d 12 (1st Cir. 2016) (US). 40  See Eric Goldman, 'The Implications of Excluding State Crimes from 47 U.S.C. s 230's Immunity', Santa Clara U. Legal Studies Research Paper no. 23/13 (2013) . 41  For an explanation of FOSTA and its effect on s. 230, see Eric Goldman, 'The Complicated Story of FOSTA and Section 230' (2019) 17 First Amendment L. Rev. 279. 42  See Eric Goldman, 'The Third Wave of Internet Exceptionalism' in Berin Szoka and Adam Marcus (eds), The Next Digital Decade: Essays on the Future of the Internet (TechFreedom 2011) 165. 43  See Smolla (n. 2) s. 3:87; New York Times Co. v Sullivan, 376 US 254 (1964). 44  cf. Marshall McLuhan, Understanding Media: The Extensions of Man (MIT Press 1964).


privileges make sense in the 1990s context, where most major newspapers were de facto monopolies in their local communities45 and many online publishers of third party content were small hobbyists.46 But in the modern era, where internet giants like Google and Facebook are among the most highly valued companies in the world47 and most newspapers are struggling to survive,48 this allocation of privileges might seem even more counterintuitive.

However, focusing on section 230's benefit to the internet giants fundamentally misunderstands section 230's immunity. Section 230 allows companies to dial up or down their level of editorial control to reflect the needs of their community. Other services can adopt different editorial practices than Google or Facebook, so there can be a wider array of options for authors and readers. More importantly, new marketplace entrants do not need to make the same upfront investments in content moderation that Google and Facebook make. If new entrants had to develop industrial-grade content moderation procedures from day one, we would see far fewer new entrants. That makes section 230's benefit to Google and Facebook almost beside the point. Section 230's real payoff comes from keeping the door open for new entrants that compete with—and hope ultimately to dethrone—Google and Facebook.

Regulators often think curtailing section 230 will be a good way to hurt internet giants like Google and Facebook, but they are completely mistaken. Google and Facebook can afford to accommodate new regulatory obligations in ways that new entrants cannot. Thus, reducing section 230's immunity entrenches Google and Facebook's marketplace dominance by making it more difficult for new entrants to emerge and compete.
Sometimes people say that 'the internet' no longer needs section 230's 1990s-style exceptionalist privilege,49 but this claim makes sense only by equating 'the internet' with 'Google and Facebook'. Unless we are at the end of the innovation curve for new internet services based on third party content—and we are likely still closer to the beginning than to the end—section 230's immunity fosters and encourages the yet-to-be-born services that we will all benefit from.

Another common complaint is that section 230 legally authorizes online intermediaries to do nothing about problematic content, which is what every profit-maximizing

45  See e.g. Fed. Comm. Comm'n, The Media Landscape . 46  Such as bulletin-board services (BBSes), which were often run as non-commercial or small enterprises. See Eric Schlachter, 'Cyberspace, the Free Market, and the Free Marketplace of Ideas: Recognizing Legal Differences in Computer Bulletin Board Functions' (1993) 16 Hastings Comm. & Ent. L.J. 87. 47  See e.g. Elvis Picardo, 'Eight of the World's Top Companies Are American' (Investopedia, 20 October 2018) . 48  See e.g. Michael Barthel, 'Newspaper Fact Sheet' (Journalist.com, 13 June 2018) . 49  See e.g. Fair Housing Council of San Fernando Valley v Roommates.com LLC, 521 F.3d 1157 fn. 15 (9th Cir. 2008) (US) ('The Internet is no longer a fragile new means of communication that could easily be smothered in the cradle by overzealous enforcement of laws and regulations applicable to brick-and-mortar businesses').


service will choose to do. Accordingly, the critics believe section 230 facilitates the creation of 'cyber cesspools';50 that is, services that solicit, and profit from, anti-social content. It is true that section 230 protects all services from liability for third party content, even services that do not moderate content and therefore become hotbeds of anti-social content. Despite this, many online intermediaries spend a lot of money on content moderation. Why?

As a practical matter, cyber cesspools tend to fail quickly in the market, despite section 230's protection. In the late 2000s, for example, gossip-focused services like JuicyCampus and People's Dirt generated substantial angst—until they failed.51 More recently, Yik Yak's anonymous local service garnered substantial criticism. It, too, is gone.52 It turns out that 'cyber cesspools' develop terrible reputations and become poor business investments. People will always keep trying to build them, but section 230 does not ensure their survival. In contrast, any reputable service will invest in content moderation efforts as part of building trust with its users. These services will not just do the minimum.

More conceptually, section 230 critics often assume that reducing the immunity will reduce the total incidence of anti-social content, but this is almost certainly false. All legitimate online intermediaries voluntarily undertake socially valuable efforts to screen out anti-social content—just as the designers of section 230 had hoped they would.
If amendments to section 230 reinstate the Moderator's Dilemma, some of those services will turn off or reduce their voluntary policing efforts, and we could counterintuitively see a net society-wide increase in anti-social behaviour because of the services that opt for the do-nothing approach.53 Stated differently, even if some rogue services abuse section 230's immunity, section 230 might still produce the lowest net level of anti-social content of any policy option.

There is another alternative: we could restore the offline publishers' liability rule and hold online services liable for all third party content they publish. This would likely force online services to pre-screen all third party content. Pre-screening would dramatically reduce the level of anti-social content, but it also would eliminate many of the internet's best aspects.

50  See e.g. Saul Levmore and Martha Nussbaum (eds), The Offensive Internet: Speech, Privacy, and Reputation (HUP 2012). 51 See Jeffrey Young, ‘JuicyCampus Shuts Down, Blaming the Economy, Not the Controversy’ (Chronicle of Higher Education, 5 February 2009) ; Donna St. George and Daniel de Vise, ‘Slanderous Web Site Catering to Teens Is Shut Down’ (Washington Post, 10 June 2009) . 52  See Valeriya Safronova, ‘The Rise and Fall of Yik Yak, the Anonymous Messaging App’ (New York Times, 27 May 2017) . 53  See Eric Goldman, ‘Balancing Section 230 and Anti-Sex Trafficking Initiatives’, Santa Clara U. Legal Studies Research Paper no. 17/2017 (2017) .


For example, imagine how Twitter might work with human-pre-screened tweets. If Twitter could even afford to stay in business given the labour costs, all tweets would be delayed by minutes, hours, or longer. There might still be some social role for a time-delayed Twitter, but more likely it would lose most of its value. Many other services we enjoy daily would not make sense financially or functionally if humans had to pre-screen each item of content, with a substantial time delay to publication.

Section 230 creates winners and losers, but so does every other policy alternative. Section 230 strikes a balance between promoting innovation and motivating industry players to voluntarily undertake socially valuable screening efforts. Its results are stunning: aided in large part by section 230, the internet has grown into an integral part of our society, companies have created trillions of dollars of economic wealth and millions of new jobs, and we have benefited from new services and content that never would have emerged under a different liability regime. Most people benefit from section 230-protected internet services on an hour-by-hour or even minute-by-minute basis.

4.  Comparative Analysis

Section 230 is a globally unique policy solution. No other country has yet adopted anything so categorically favourable to intermediaries. However, NAFTA 2.0 (the US–Mexico–Canada Agreement (USMCA)), agreed to in 2018, requires Canada and Mexico to adopt section 230-like protections.54 This creates the possibility that North America will move in a direction opposite to the rest of the world. This section contrasts section 230 with some examples of other jurisdictions' rules.

4.1  EU's 'Right to Be Forgotten'

In the 2014 Costeja case, the Court of Justice of the European Union (CJEU) concluded that 'the activity of a search engine consisting in finding information published or placed on the internet by third parties, indexing it automatically, storing it temporarily and, finally, making it available to internet users according to a particular order of preference' constituted 'the processing of personal data'.55 This means search engines are required to comply with the 1995 European Data Protection Directive (95/46/EC). Accordingly, search engines must 'remove from the list of results displayed following a search made on the basis of a person's name links to web pages, published by third

54  See Eric Goldman, 'Good News! USMCA (a/k/a NAFTA 2.0) Embraces Section 230-Like Internet Immunity' (Technology & Marketing Law Blog, 3 October 2018) . 55  See C-131/12 Google Spain SL and Google Inc. v Agencia Española de Protección de Datos and Mario Costeja González [2014] ECLI:EU:C:2014:317, para. 21. See also Chapter 25.


parties and containing information relating to that person',56 even if that information remains on the third party website, and even if the third party lawfully published the information. The right of people to erase search results about them 'override[s] . . . not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject's name'.57 However, the erasure right can be trumped by a 'preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question', such as the person's role 'in public life'.58

The court's inscrutable language did not specify the exact grounds on which a person may request erasure of search results, putting the burden on search engines, and the data protection authorities who will hear complaints about the search engines' decisions, to figure it out. Google59 decided it will remove links to 'irrelevant, outdated, or otherwise objectionable' information from search results for the individual's name; but Google will not de-index a search result if 'there's a public interest in the information', such as information 'about financial scams, professional malpractice, criminal convictions, or public conduct of government officials'.60

A de-indexing request only affects results when the person's name is searched; searches for other keywords can still display those results. Meanwhile, even if Google de-indexes the search result, the original source material remains online. Thus, if an online newspaper article is subject to a de-indexing request, Google might remove the article from name-search results, but the original article is still available at the online newspaper.
Google's de-indexing standards—'irrelevant, outdated, or otherwise objectionable' content that is not 'in the public interest'—are obviously problematic. For example, if 99 per cent of searchers would find an information item irrelevant but 1 per cent would find it exceptionally relevant, what should Google do? Similarly, how can Google determine when information is outdated, 'otherwise objectionable', or not in the public interest? Frequently, it is not obvious what information has historical significance, and information's historical significance can change over time. Something that appears irrelevant at one point in time might emerge as essential information at a different time.61

The CJEU made it clear that it does not want search engines to rely on third party publishers' judgments of what is relevant or important. In Costeja's case, he complained

56  ibid. para. 88. 57  ibid. para. 99. 58 ibid. 59  I focus on Google's responses because it lost the case and has a 90+ per cent market share in most European countries. Google also receives the vast bulk of de-indexing requests submitted to all search engines. 60 See European Privacy Requests Search Removals FAQs, Google, . 61  e.g. Justice Kavanaugh's Summer 1982 calendar (see ) appeared to be historically insignificant for decades—until it became a crucial piece of evidence in the Senate confirmation hearings for his appointment to the US Supreme Court.


about an article referencing him that was published in a traditional newspaper. Normally we would assume a newspaper only publishes items of 'public interest' (at least to its audience). Yet the CJEU said that level of public interest was not sufficient.

The CJEU ruling shocked many Americans because US law would not permit a similar result. As an online content publisher, Google is protected by the First Amendment's free speech and free press clauses.62 Thus, any regulatory mandate that Google include or exclude information in its search index is almost certainly unconstitutional.63 Furthermore, section 230 (both (c)(1) and (c)(2)) statutorily immunizes search engines for their indexing decisions, including their refusal to de-index content (even if that content is tortious).64 Collectively, US law makes it clear that search engines, including Google.com, cannot be legally compelled to implement a right to be forgotten (RTBF).65

4.2  EU Electronic Commerce Directive

In 2000, the EU enacted the Electronic Commerce Directive (Directive 2000/31/EC), including safe harbours for 'mere conduits' (Art. 12), caching (Art. 13), and hosting (Art. 14). These provisions parallel the DMCA online safe harbours, in part because the Directive was enacted shortly after the DMCA and modelled on it.

Like the DMCA's section 512(c), the Directive's hosting safe harbour is built around a notice-and-takedown scheme. However, this provision differs from the section 512(c) hosting safe harbour in several key ways. First, the Directive requires EU Member States to enact a law consistent with the Directive, but Member States are not required to

62  See US Const. Amend. 1. 63  See e.g. Search King Inc. v Google Technology Inc., 2003 WL 21464568 (WD Okla. 2003) (US); Langdon v Google Inc., 474 F.Supp.2d 622 (D. Del. 2007) (US); Zhang v Baidu.com Inc., 10 F.Supp.3d 433 (SDNY 2014) (US); Google Inc. v Hood, 96 F.Supp.3d 584 (SD Miss. 2015) (US) (vacated on other grounds); e-ventures Worldwide v Google Inc., 2017 WL 2210029 (MD Fla. 2017) (US); Eugene Volokh and Donald Falk, 'First Amendment Protection For Search Engine Search Results' (20 April 2012) . See also Martin v Hearst Corp., 777 F.3d 546 (2d Cir. 2015) (US) (publication cannot be obligated to remove article about an expunged arrest). 64 See e.g. Maughan v Google Technology Inc., 143 Cal. App. 4th 1242 (Cal. App. Ct. 2006) (US); Murawski v Pataki, 514 F.Supp.2d 577 (SDNY 2007) (US); Shah v MyLife.Com Inc., 2012 WL 4863696 (D. Or. 2012) (US); Merritt v Lexis Nexis, 2012 WL 6725882 (ED Mich. 2012) (US); Nieman v Versuslaw Inc., 2012 WL 3201931 (CD Ill. 2012) (US); Getachew v Google Inc., 491 Fed. Appx. 923 (10th Cir. 2012) (US); Mmubango v Google Inc., 2013 WL 664231 (ED Pa. 2013) (US); O'Kroley v Fastcase Inc., 831 F.3d 352 (6th Cir. 2016) (US); Fakhrian v Google Inc., 2016 WL 1650705 (Cal. App. Ct. 2016) (US); Despot v Baltimore Life Insurance Co., 2016 WL 4148085 (WD Pa. 2016) (US); Manchanda v Google Inc., 2016 WL 6806250 (SDNY 2016) (US). 65  In 2013, California passed an 'online eraser' law that requires user-generated content websites to let minors remove posts they have made. See Cal. Bus. & Profs. Code § 22580-82. This law has not been subject to a constitutional challenge, but it may violate the websites' First Amendment interests. See Eric Goldman, 'California's New "Online Eraser" Law Should Be Erased' (Forbes Tertium Quid Blog, 24 September 2013) . Furthermore, in contrast to Europe's RTBF, California's law only applies to content posted by a minor; the statute explicitly says that the minor cannot remove any third party content.


follow this provision identically. As a result, there are national differences in the Directive's implementation. Secondly, the Directive's hosting provision governs all claims related to user-generated content, not just copyright. Thus, the Directive's notice-and-takedown system applies to claims asserted under all legal doctrines, including those that would be covered by section 230 in the United States. Thirdly, the Directive contemplates that hosts may be liable without ever receiving a takedown notice. In practice, courts in the EU find hosts liable in the absence of takedown notices more frequently than we see in section 512(c) cases. Fourthly, unlike section 512(c)(3), the Directive does not define what constitutes a proper takedown notice, so web hosts are more likely to take action in response to any notice submitted by anyone.

Section 230 provides dramatically more protection for web hosts in the United States than the Electronic Commerce Directive provides for European hosts. Accordingly, US user-generated content websites launching localized services for European users typically screen and remove third party content in Europe using more rigorous standards than they use in the United States—usually, blocking and removing content that would have been left untouched in the United States.

4.3  The UK Defamation Law

Section 5 of the UK Defamation Act 2013 says that web hosts are not liable for user-generated defamatory content until they have received a takedown notice that meets statutorily specified criteria. However, a web host cannot qualify for the safe harbour if it 'has acted with malice' towards the offending post, setting up potential fights about a web host's scienter before any takedown notice arrives.

To qualify for this safe harbour, web hosts must be able to identify the offending user with enough specificity to enable a lawsuit against the user. In effect, the law eliminates unattributable user content in the UK.66 Any web host that wants to qualify for the safe harbour (and who would not?) must know enough about its users to turn them over to anyone who complains about their posts. The postings themselves can be publicly presented anonymously or pseudonymously, but users will know that their identity will be revealed effectively on request, and that may deter posting. To the extent unattributed public discourse has value, the UK Defamation Act forecloses that possibility. (Note also that the attributability of user-generated content only relates to a safe harbour for defamation, but plaintiffs with other types of claims can demand the poster's identity because the web host will have everyone's contact information on file.)

66  See Eric Goldman, ‘UK’s New Defamation Law May Accelerate the Death of Anonymous UserGenerated Content Internationally’ (Forbes Tertium Quid, 9 May 2013) .

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Overview of the United States' Section 230 Internet Immunity   169

Overall, the UK Defamation Act is worse for web hosts than section 230 in at least two main ways. First, section 230 does not require content removal upon a takedown notice. Secondly, section 230 applies even if the web host does not know who is doing the posting. Also, for authors, section 230 indirectly preserves their ability to write on an unattributed basis, which may encourage a more robust public discourse.

4.4  Germany's Network Enforcement Law (NetzDG)

In 2017, Germany enacted the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG). The law requires sites to maintain procedures to remove 'unlawful' user content, to remove 'manifestly unlawful' user content within twenty-four hours of getting complaints about it (and within seven days for merely 'unlawful' user content), and to make various public reports about content removals. The law imposes substantial fines of up to €50 million for non-compliance. The law nominally targeted social media platforms, but it applies to any online service with 2+ million registered users in Germany. 'Unlawful' content includes a wide range of content, including 'public incitement to crime', 'violation of intimate privacy by taking photographs', defamation, 'treasonous forgery', forming criminal or terrorist organizations, and 'dissemination of depictions of violence'. The NetzDG supplements the EU Directive in several important ways, all of which section 230 would forbid. First, it provides a specific turnaround time for takedowns. Secondly, it starts to creep into regulating the mechanics of a service's content moderation/removal operations. Thirdly, it requires the production of reports about complaints received and the service's responses, which may spur further regulatory oversight. Finally, the statute specifies penalties that can be punitive in practice, which surely will spur the regulated services to over-remove content.

4.5  Brazil's Internet Bill of Rights

In 2014, Brazil enacted an 'Internet Bill of Rights' (Marco Civil da Internet).67 'Providers of internet applications'—presumably including both websites and mobile apps—generally are not civilly liable for user-generated content until a court orders its removal. However, copyright claims apparently are not covered by the law; claims related to 'honor, reputation or personality rights' can be fast-tracked to a small claims court; and claims related to non-consensual pornography are subject to a notice-and-takedown regime. By generally deferring intermediary liability until a court order, the Marco Civil more closely resembles section 230 than the European notice-and-takedown rules. Still, it does not go as far as section 230. For example, a court cannot order a US web host to remove user-generated content; any lawsuit directly against the web host is pre-empted

67  See Chapter 10.


by section 230, and a court order in any other lawsuit cannot reach the web host due to Federal Rule of Civil Procedure 65(d)(2).68

4.6  Section 230 and Foreign Judgments

In 2010, Congress enacted the SPEECH Act.69 Among other provisions, the law says that a foreign court judgment for defamation cannot be enforced in the United States if the result would have violated section 230 had it been litigated in a US court; and unsuccessful plaintiffs must pay the attorneys' fees of the section 230-immunized entity. Thus, even if foreign laws are less protective of web hosts or other intermediaries than section 230, a foreign court's defamation judgment will not be enforced in US courts if it violates section 230.70

5.  What's Next for Section 230?

In 2018, Congress enacted FOSTA,71 its first significant reduction in section 230's protective immunity in twenty-two years. While FOSTA is bad policy,72 its passage has emboldened many of section 230's critics. Numerous victims' advocacy groups now hope that they too can get specific exceptions to section 230 for their situations; and other critics now believe that the immunity can be undermined more broadly. A wide range of anti-social content—including non-consensual pornography, opioid promotions, election interference, and terrorist-supplied content—has been identified as the basis for possible new section 230 exceptions. Furthermore, a number of senators have prominently criticized section 230. For example, Senator Ted Cruz (R-TX) repeatedly (but completely falsely) claims that section 230 only applies to 'neutral public forums'73 and thus should not protect any service that removes conservative-leaning contributions. Separately, Senator Mark Warner (D-VA) wrote a white paper of policy proposals, including eviscerating section 230.74

68  See e.g. Blockowicz v Williams, 630 F.3d 563 (7th Cir. 2010) (US); Giordano v Romeo, 76 So.3d 1100 (Fla. Dist. App. Ct. 2011) (US). See also Hassell v Bird, 5 Cal. 5th 522 (2018) (US) (Yelp cannot be compelled to honour a removal injunction).
69  See 28 USC §§ 4101–5.
70  See e.g. Trout Point Lodge Ltd v Handshoe, 729 F.3d 481 (5th Cir. 2013) (US).
71  See Allow States and Victims to Fight Online Sex Trafficking Act of 2017, HR1865 (115th Cong. 2017–18) (US).
72  See Goldman (n. 41).
73  See Sen. Ted Cruz, 'Sen. Ted Cruz: Facebook Has Been Censoring or Suppressing Conservative Speech for Years' (FoxNews.com, 11 April 2018) .
74  See Sen. Mark Warner, 'Potential Policy Proposals for Regulation of Social Media and Technology Firms' (July 2018) .


Conspicuously, amendments to section 230 may have bipartisan appeal in an era where partisanship ordinarily leads to gridlock. Also, section 230 is inextricably linked to the widespread regulatory antipathy towards Google and Facebook. Each time those companies make highly visible missteps—which seems to be constantly—prevailing anti-section 230 views grow a little stronger. Thus, section 230 appears to be quite imperilled. Yet, amid this doom and gloom, the USMCA embraced section 230, potentially extending section 230's reach to make it a North American legal standard and limiting Congress's ability to undermine it significantly. USMCA was the first time in over two decades that the expansion of internet immunity was included in any trade agreement; and the 'exporting' of section 230-like immunity to other countries would seem to represent a major endorsement of its policy outcomes. So which way will section 230 go? Are we near the end of its functional life? Or are we on the cusp of more universally embracing it? The data points provide reason for both optimism and deep, deep pessimism about section 230's future.


Chapter 9

The Impact of Free Trade Agreements on Internet Intermediary Liability in Latin America

Juan Carlos Lara Gálvez and Alan M. Sears

Free trade agreements (FTAs) signed by the United States in the current century have consistently included provisions attempting to harmonize copyright rules, including the regulation of liability for internet intermediaries for online copyright infringement.1 The regulation in those agreements follows the model established in US domestic federal law through the Digital Millennium Copyright Act (DMCA), which sets out certain conditions for liability exemptions for internet intermediaries for acts of copyright infringement taking place over the internet.2

1  These include free trade agreements with Chile (2004), Singapore (2004), Bahrain (2006), Morocco (2006), Oman (2006), Peru (2007), Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic (2005), Panama (2012), Colombia (2012), and South Korea (2012).
2  The DMCA also regulates the circumvention of controlled access to copyrighted works. Third party content posted online, outside copyrighted material, continues to be governed by section 230 of the Communications Decency Act (47 USC § 230) (US), which was enacted shortly before the DMCA with the explicit goal of promoting internet development. Section 230 of the CDA gives internet intermediaries complete immunity for third party or user-generated content (UGC), immunizing ISPs and internet users from liability for torts committed by others using their website or online forum, even if the provider fails to take action after receiving actual notice of the harmful or offensive content. It also gives 'good Samaritan' immunity for restricting access to certain material or giving others the technical means to restrict access to that material, without receiving requests to do so.

© J. Carlos Lara and Alan M. Sears 2020.


The DMCA was a largely innovative piece of legislation when it became effective in 1998, and was enacted as part of the implementation of two international agreements known as the 'Internet Treaties', signed under the World Intellectual Property Organization (WIPO) in 1996: the WIPO Copyright Treaty (WCT) and the WIPO Performances and Phonograms Treaty (WPPT), which were meant to adapt a traditional copyright framework to address novel issues brought about by the information age.3 The DMCA took the broad provisions of the WIPO Internet Treaties and implemented them with a higher degree of detail. At the time, several features were unique to the DMCA. Among them were its intermediary liability provisions. Every bilateral FTA—as well as a number of multilateral FTAs—the United States has entered into since 2002 has contained similar provisions, thus promoting their inclusion in the national law of other countries. However, these provisions are controversial, and whether they drive the internet economy or create a more restrictive online space is a matter of debate. In this chapter, the impact of such provisions in Latin American countries will be analysed.

1.  Background: Notice and Takedown in the DMCA

The DMCA introduced a system that classifies obligations for different online service providers (OSPs) and sets out different requirements for each class to claim a safe harbour in the case of an alleged copyright violation through their systems. The OSPs covered are the intermediaries that provide services of: transitory digital network communications (§ 512(a)); system caching (temporary storage services, § 512(b)); information storage at the direction of users (hosting, § 512(c)); and information location tools (search engines and directories, § 512(d)). Of these, the most important provisions in practical terms are those aimed at hosting services (§ 512(c)).4 In general, OSPs are required to designate a representative to receive notifications of alleged infringement,5 they must adopt a policy to terminate the accounts of repeat infringers,6 and they must accommodate standard technical measures used to prevent unauthorized access to or reproduction of copyrighted works.7 The OSPs are under no obligation to monitor for copyright infringement, and they must establish a notification/counter-notification system to address challenges to takedown decisions.8

3  See Ruth Okediji, 'The Regulation of Creativity Under the WIPO Internet Treaties', Minnesota Legal Studies Research Paper 30/09 (2009) .
4  For a detailed analysis of the characteristics and trends of the notice-and-takedown system, with a close view of the content of the notices, see Jennifer Urban and others, 'Notice and Takedown in Everyday Practice', UC Berkeley Public Law Research Paper no. 2755628 (2017) .
5  17 USC § 512(c)(2) (US).
6  ibid. § 512(i)(1)(A).
7  ibid. § 512(i)(1)(B).
8  A counter-notification system would be put in place to contest the removal of content, in order to restore it, if the original uploader challenges the content of the notice.


Safe harbours from liability for third party or UGC provide a mechanism whereby OSPs can shield themselves from secondary copyright liability. The safe harbour is triggered by removing or blocking the allegedly infringing content from their systems, after receiving a notice that infringing content is made available on their networks. Section 512(c) provides that OSPs are not liable for copyright infringement when they satisfy the following legal conditions: (1) they do not receive a financial benefit directly attributable to the infringing activity; (2) they are not aware of the presence of infringing material or of any facts or circumstances that would make infringing material apparent; and (3) upon receiving notice from copyright owners or their agents, they act expeditiously to remove the purported infringing material. The last requirement has become a key feature of the American framework: the secondary liability exemption is triggered by (a rightholder) providing knowledge through ([effective] 'notice') and the subsequent removal ('takedown') of content. The law is dependent on receiving a mere private communication, even an electronic one. This has led to a large number of notifications,9 and to the automation of both the notification and the removal processes.10 As a result, both purposeful and inadvertent notifications have drawn criticism of actors abusing automated processes to remove or block non-infringing content.11 The DMCA system is, in effect, the set of rules under which platforms that disseminate UGC, such as YouTube or Facebook, have functioned for a long time in the United States, with content removals based on private notices of infringement happening every second, without a review of the merits of the allegation of infringement. A vast body of scholarship in the United States and abroad has criticized the DMCA provisions. The criticisms can be categorized as either general criticisms of the legal framework or criticisms of its practical application,12 including concerns about transparency and accountability regarding the legal merit of the requests, which can lead to the unfair removal of lawful content en masse.13 Many, if not most, of these takedowns are automated, and are in fact generated

9  See Urban and others (n. 4).
10  See Maayan Perel and Niva Elkin-Koren, 'Accountability in Algorithmic Copyright Enforcement' (2016) 19 Stan. Tech. L. Rev. 473.
11  A database of exemplary cases of content removal, with a large portion of cases of non-infringing content removed on copyright grounds, is the Lumen Database, which can be found at .
12  See Annemarie Bridy and Daphne Keller, 'U.S. Copyright Office Section 512 Study: Comments in Response to Notice of Inquiry', Social Science Research Network Scholarly Paper no. 2757197 (2016) .
13  See Urban and others (n. 4) (finding that 31 per cent of takedown requests were 'potentially problematic', in that they were either fundamentally flawed or of questionable validity). This may have even worsened with time. According to Google, for over 16 million notices sent by one submitter, 99.97 per cent of the related URLs were not in Google's search index, and in total 99.95 per cent of the URLs were not in Google's index. See Michael Geist, 'Bogus Claims: Google Submission Points to Massive Fraud in Search Index Takedown Notices' (Michaelgeist.ca, 22 February 2017) .


by bots,14 which may have contributed to the dramatic increase in takedown notices seen over time.15 Nevertheless, there has also been some discussion of its positive role in the growth of internet companies, as a balanced approach to internet regulation.16 Conversely, there are arguments that it is not a sufficiently strict model to prevent or deter infringement, and that more tools for copyright holders are needed.17 Its adoption, twenty years after its passage, is still a matter of debate.

2.  Free Trade Agreements and Notice-and-Takedown Provisions

The DMCA model for notice and takedown was exported through FTAs signed between the United States and many other countries, including several in Latin America.18 The inclusion of intellectual property frameworks is a reflection of the overall strategy of the US government to use FTAs to further its goals.19 Intellectual property rights (IPRs) expanded from being negotiated in WIPO to include another forum, the World Trade Organization, a less egalitarian venue for participating countries. These negotiations happened shortly before the WIPO Internet Treaties, and led to the signature of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) in 1994. But after developing countries demanded flexibility to implement their TRIPs obligations, through the 'Doha Declaration on the TRIPS Agreement and Public Health' of the 2001 WTO Ministerial Conference, the forums expanded yet again. It has been suggested that this is why developed countries began promoting bilateral or regional treaties to advance their interests outside multilateral fora, in the form of investment protection agreements and FTAs.20

14  See Urban and others (n. 4).
15  See Daniel Seng, 'The State of the Discordant Union: An Empirical Analysis of DMCA Takedown Notices' (2014) 18 Va. J. of L. & Tech. 369, 389–90 and 444 (referencing Table 1: Takedown Notices by Recipient and Year).
16  See David Kravets, '10 Years Later, Misunderstood DMCA is the Law That Saved the Web' (Wired, 27 October 2008) .
17  See Donald Harris, 'Time to Reboot?: DMCA 2.0' (2015) 47 Arizona State L.J. 801. A similar debate has occurred in Europe, resulting in the passage of a Copyright Directive in early 2019 that mandates the use of 'upload filters'. See James Vincent, 'Europe's controversial overhaul of online copyright receives final approval' (The Verge, 26 March 2019) .
18  See Office of the United States Trade Representative, 'Free Trade Agreements' .
19  Jeffrey Schott, 'Free Trade Agreements and US Trade Policy: A Comparative Analysis of US Initiatives in Latin America, the Asia-Pacific Region, and the Middle East and North Africa' (2006) 20(2) Int'l Trade J. 95–138.
20  See Beatriz Busaniche, 'Intellectual Property Rights and Free Trade Agreements: A Never-Ending Story' in David Bollier and Silke Helfrich (eds), The Wealth of the Commons (Levellers Press 2012).


What could not be obtained in multilateral fora could be achieved through FTAs. To that end, the Bipartisan Trade Promotion Authority Act of 2002 was passed,21 and similar to the earlier Trade Act of 1974,22 it allowed the executive branch of the US government to negotiate and sign FTAs as congressional-executive agreements (which need the majority vote of each house of Congress)23 as opposed to treaties (which need the approval of two-thirds of the US Senate, as mandated by the US Constitution).24 The Act included the requirement that 'the provisions of any multilateral or bilateral trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law'. Such a provision promotes a model—based on the US version of intellectual property law—that benefits its own industries.25 Under this authority, the Office of the United States Trade Representative (USTR) negotiated several FTAs later signed by the US government, all of them containing intellectual property provisions that mirrored US law to varying degrees of detail, including internet intermediary liability for copyright infringement. It was under this authority that the United States signed such agreements with Chile (2003), CAFTA-DR (Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic, 2004), Peru (2006), Colombia (2006), and Panama (2007). Although the agreements are currently in force, their complete implementation in most cases is pending. Relatively recently, a new delegation of authority was signed into law—known as the Bipartisan Congressional Trade Priorities and Accountability Act of 2015—which extends the power of the executive to promote trade through 2021, with slight additions to the language of the previous Act from 2002.
As discussed later, this was meant to be exercised in the passage of the Trans-Pacific Partnership Agreement (TPP), an agreement involving Latin American countries that included similar internet intermediary liability provisions.

3.  Implementation of FTA Intermediary Liability Provisions in Latin America

Provisions demanding the implementation of rules similar to those of the DMCA already exist in the aforementioned FTAs, which suggests that similar rules should become national law in each country, thus enacting the same system in several places of Latin America. However, because these agreements usually cover large portions of each

21  See 19 USC c.24 (US).
22  See 19 USC § 2101.
23  Jane Smith and others, 'Why Certain Trade Agreements Are Approved as Congressional-Executive Agreements Rather Than Treaties' (Congressional Research Service, 15 April 2013) .
24  See US Constitution, Art. II, Section 2(2).
25  See Vivencio Ballano, Sociological Perspectives on Media Piracy in the Philippines and Vietnam (Springer 2015) 44–5.


country's economy and foreign trade (e.g. tariffs and importation restrictions), and the fact that even the intellectual property provisions cover a large range of subject matter (e.g. trade marks, patents, pharmaceutical patents, substantive copyright issues, and enforcement mechanisms and penalties), intermediary liability provisions are generally not among the aspects of implementation given priority. Notwithstanding the amount of detail that the FTAs include for the regulation of these matters, implementation so far has still allowed for some degree of flexibility. As discussed further later, more recent efforts by the USTR have attempted to prevent such flexibilities in future agreements, as well as push to implement intellectual property provisions without further delay. One of the most frequent tools of pressure by the USTR is its annual Special 301 Report, published since 1989, which reviews intellectual property enforcement in foreign countries, in order to highlight those that in its view do not offer enough protection for intellectual property holders from the United States, and to explain where its concerns lie. In its 2018 edition, four Latin American countries were on its 'Priority Watch List', and eight were on the 'Watch List'.26

3.1  A Comparison of Provisions on Effective Notice in FTAs with Latin American Countries

In all agreements signed or negotiated by the United States since 2003, internet intermediary liability provisions are virtually identical to those in the DMCA: the OSP must expeditiously remove the materials upon notification, there must be a publicly designated representative to receive notifications, the OSP must adopt a policy for repeat infringers and for accommodating standard technical measures, and there is no obligation to monitor copyright infringement, among other requirements. The text in all of the provisions is virtually identical between agreements, as exemplified by the rules that trigger secondary liability exemptions for hosting and search and indexing services, compared below in Table 9.1. Yet one agreement, negotiated by the USTR and signed by all of its negotiating parties, included text with a slight variation even more in line with US law. The TPP, which included Chile, Mexico, Canada, the United States, and eight other countries around the Pacific Ocean, represented an attempt by the United States to establish rules even closer to those of the DMCA, and in fact to go beyond them to reinforce enforcement.27 The internet intermediary liability provisions were strongly promoted

26  See USTR, 2018 Special 301 Report . The 'Priority Watch List' countries are those with serious IPR deficiencies that warrant increased attention concerning the problem areas, and the 'Watch List' countries are those with similar deficiencies but not rising to the level of 'Priority Watch List'. See John Masterson, International Trademarks and Copyright: Enforcement and Management (ABA 2004) 20.
27  See Sean Flynn and others, 'U.S. Proposal for an Intellectual Property Chapter in the Trans-Pacific Partnership Agreement' (2012) 28(1) American U. Int'l L. Rev. 105–202, 202.



Table 9.1  Provisions on actual knowledge of infringement in free trade agreements with the United States

Chile

(c) With respect to functions (b)(iii) [hosting] and (iv) [search engines], the limitations shall be conditioned on the service provider: . . . (ii) expeditiously removing or disabling access to the material residing on its system or network upon obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, including through effective notifications of claimed infringement in accordance with subparagraph (f) [minimum notice requirements]; and . . .28

CAFTA-DR

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .29

Peru

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .30

Colombia

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .31

Panama

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .32

28  United States–Chile Free Trade Agreement, Art. 17.11, subsection 23.
29  Central America–Dominican Republic–United States Free Trade Agreement (CAFTA-DR), Art. 15.11, subsection 27.
30  Peru–United States Trade Promotion Agreement, Art. 16.11, subsection 29.
31  Colombia–United States Trade Promotion Agreement, Art. 16.11, subsection 29(b).
32  United States–Panama Trade Promotion Agreement, Art. 15.11, subsection 27.



TPP (2016)

(a) With respect to the functions referred to in paragraph 2(c) [hosting] and paragraph 2(d) [search engines], these conditions shall include a requirement for Internet Service Providers to expeditiously remove or disable access to material residing on their networks or systems upon obtaining actual knowledge of the copyright infringement or becoming aware of facts or circumstances from which the infringement is apparent, such as through receiving a notice of alleged infringement from the rightholder or a person authorised to act on its behalf . . .33

USMCA

(a) With respect to the functions referred to in paragraph 2(c) [hosting] and paragraph 2(d) [search engines], these conditions shall include a requirement for Internet Service Providers to expeditiously remove or disable access to material residing on their networks or systems upon obtaining actual knowledge of the copyright infringement or becoming aware of facts or circumstances from which the infringement is apparent, such as through receiving a notice of alleged infringement from the rightholder or a person authorized to act on its behalf . . .34

by the USTR ever since entering the negotiations on behalf of the United States in 2008 (yet without authorization from Congress until 2015), which is evidenced by the proposal of such provisions to become part of the agreement, according to an early 2011 version of the text.35 The TPP had a long, controversial history, including harsh indictments of its internet intermediary liability provisions.36 Eventually, after the agreement was signed, President Trump withdrew the US participation,37 and the rest of the countries came to an agreement suspending many of the intellectual property provisions, as discussed in the following section. Notwithstanding the failure of the TPP to create new intermediary liability rules, the same language was again pushed for—and ultimately included—in the revision of the North American Free Trade Agreement (NAFTA), which became the US–Mexico–Canada Agreement (USMCA).38 The most salient difference between the language in the TPP and the USMCA and the language in prior trade agreements is the explicit specification of the source of the effective notice in the TPP and the USMCA. While previous agreements did not mention where the notice could come from (thus allowing for flexibility to, for instance, require a judicial or administrative notice), the TPP and the USMCA make it clear that such a

33  Trans-Pacific Partnership Agreement, Art. 18.82, subsection 3.
34  United States–Mexico–Canada Agreement, Art. 20.J.11, subsection 3(a). Text is subject to legal review for accuracy, clarity, and consistency, and is subject to language authentication.
35  See Knowledge Ecology International, 'The Complete Feb 10, 2011 Text of the US Proposal for the TPP IPR Chapter' (Keionline, 10 March 2011) .
36  See Matthew Rimmer, 'Back to the Future: The Digital Millennium Copyright Act and the Trans-Pacific Partnership' (2017) 6(3) Laws 11.
37  See 'Trump Abandons Trans-Pacific Partnership, Obama's Signature Trade Deal' (New York Times, 23 January 2017).
38  This agreement is known as CUSMA in Canada, and T-MEC in Mexico.


notice may come 'from the rightholder or a person authorized to act on its behalf'. This small but important change in the position of the USTR might be explained by the current state of the implementation of prior FTAs, including those in Latin America, examined later.

3.2  Completed Implementation

Only two countries in Latin America have completed the process of implementing their FTA obligations concerning intermediary liability for copyright infringement: Chile and Costa Rica.

3.2.1 Chile

In Chile, safe harbour for online copyright infringement was implemented in May 2010 through Law No. 20,435, which modified large portions of the existing copyright law. The draft bill, introduced in April 2007, was perceived by the executive (in charge of drafting the bill implementing the US–Chile FTA) as an opportunity to update existing copyright law well beyond trade agreement obligations, providing a more balanced approach between the exclusive rights of authors, the interests of copyright holders, and the interests of the public in using copyrighted works for legitimate purposes. The executive introduced a number of limitations and exceptions to copyright claims along with new rights and tools for rightholders.39

With regard to internet intermediary liability, Chilean law mostly follows the structure and provisions of the Chile–US FTA. Liability limitations apply only where the service provider does not initiate transmission, or select material or recipients, and some additional formal conditions apply. Following the provisions of the FTA, the law explicitly excludes an obligation for service providers to actively monitor content.40 The system is intended to protect from liability those service providers who do not provide content directly but act passively as intermediaries.41 To that end, the law classifies service providers into different categories, depending on whether the entity is: (1) transmitting, routing, or connecting; (2) caching through an automatic process; (3) providing storage of material at the direction of a user (hosting); and/or (4) referring or linking users to an online location by using information location tools (search engines), including hyperlinks and directories.42 In general,

39  See Daniel Alvarez Valenzuela, ‘The Quest for Normative Balance: The Recent Reforms to Chile’s Copyright Law’, International Centre for Trade and Sustainable Development Policy Brief 12 (2011).
40  See Law No. 17,336, Art. 85 P (Chile).
41  See Claudio Ruiz Gallardo and J. Carlos Lara Gálvez, ‘Liability of Internet Service Providers (ISPs) and the Exercise of Freedom of Expression in Latin America’ in Eduardo Bertoni (ed.), Towards an Internet Free of Censorship: Proposals for Latin America (Centro de Estudios en Libertad de Expresión y Acceso a la Información (CELE) 2011) 18.
42  It should be noted that the same provider could fall under any one of the categories, so long as the activity at the time fits the description.


The Impact of Free Trade Agreements in Latin America   181

caching and hosting providers are required to establish a policy to terminate the service of repeat infringers, not to interfere with technological protection measures, and not to be involved in the creation or selection of content or its recipients. They must also designate contact information to receive copyright infringement notices.

The most important innovation of Chile’s framework, which moves away from the DMCA model, lies in the triggers for the secondary liability exemption.43 The removal of infringing content is subject to judicial proceedings, which allow a court to order that removal when several formal requirements are met. That is to say, the standard for ‘actual knowledge’ from the FTA, as implemented in Chilean law, is the receipt of a court order for content removal or blocking. The procedure is ‘brief and summary’, conducted before a civil judge, and the request for removal must carefully document the rights infringed, the copyright holder, the infringing content, the type of infringement, and its location in the service provider’s network. The court must then issue a resolution ordering removal of the content. To be exempt from secondary liability, the intermediary is required to expeditiously remove or block content from its systems only when such a court order to remove or block content is received.44 The statute does not define ‘expeditious’, nor does it require content to be removed within a specific time period after notification.

Private notices of infringement are not completely excluded from the Chilean implementation, but they do not hold nearly as much weight as in the United States.
Private notices can be redirected towards infringers, and some rightholders have had such notices forwarded for ‘educational’ purposes,45 but they create no legal obligation either to remove content or to stop providing services to the alleged infringer.46

The US government criticized Chile’s approach from the time of its discussion in Congress, where alternative models to judicial enforcement were rejected.47 After its entry into force, the USTR denounced Chile’s implementation of its FTA obligations as insufficient to protect the interests of copyright holders. Since 2010, the USTR has kept Chile on the ‘Priority Watch List’ in its annual Special 301 Report, explicitly calling on the Chilean government to ‘correct’ its law so as ‘to permit effective and expeditious action against online piracy’.48 The Chilean government has consistently either rejected the conclusions of the Special 301 Report or dismissed it as a unilateral instrument outside the dialogue mechanism created in the FTA.49

43  See Alberto Cerda Silva, ‘Limitación de responsabilidad de los prestadores de servicios de Internet por infracción a los derechos de autor en línea’ (2015) 42 Revista de derecho (Valparaíso) 121–48.
44  See Law No. 17,336, Art. 85 Ñ (Chile).
45  The International Federation of the Phonographic Industry sent thousands of such notices to be forwarded in 2013, explicitly mentioning Art. 85 U of the Intellectual Property Law. See ‘IFPI envía 4000 notificaciones por usar P2P a usuarios de internet en Chile’ (FayerWayer, 30 July 2013).
46  See Law No. 17,336, Art. 85 U (Chile).
47  See Cerda Silva (n. 43).
48  See USTR (n. 26) 62.
49  See Dirección General de Relaciones Económicas Internacionales, Ministerio de Relaciones Exteriores, ‘Comunicado Oficial: Acerca del Reporte 301 EE.UU.’ (27 April 2016).


A compromise was nonetheless reached in the TPP that could defer to the implementation of the Chile–US FTA. In its Annex 18-F, the TPP allowed its parties to implement Article 17.11.23 of the Chile–US FTA instead of the TPP provisions on ISP liability. Though Chile has taken this as a success in retaining its system (as Canada also did in Annex 18-E),50 it must be noted that the language does not make it explicit that the current law in Chile is seen as a sufficient implementation of the bilateral FTA.

3.2.2  Costa Rica

The government of Costa Rica avoided a long discussion in Congress over the implementation of the internet intermediary liability provisions by approving Executive Decree No. 36,880 in December 2011.51 Interestingly enough, its validity as an act implementing CAFTA-DR is questionable for two reasons. First, as an executive order, it does not have the hierarchical status of a law within the Costa Rican system, and it was not the subject of Congressional debate. Secondly, the Decree limits its own reach from the outset: Article 2 limits its applicability to service providers that voluntarily submit themselves to its rules, while also establishing measures to address potential copyright infringement.

Similar to the CAFTA-DR, the Decree classifies service providers into four categories, distinguishing between providers of services of connection and routing, caching, hosting, and linking or referencing (including search engines).52 It does not require monitoring of infringing activities, notwithstanding that ordered by a court of law or carried out through the use of technological protection measures.53 It establishes general conditions for liability exemptions, requiring, inter alia, the service provider: (1) to have a policy to terminate the service of repeat infringers; (2) not to interfere with technological protection measures, or with content, its origin, or recipients; (3) to designate a recipient of notices (for providers of hosting and referencing services); and (4) to remove infringing content in accordance with the Decree (for providers of caching, hosting, and linking and referencing services).54

The ‘actual knowledge’ standard set by Executive Decree 36,880 has a peculiar formulation.
Liability limitations apply to hosting providers and search engines that expeditiously remove or block access to allegedly infringing content, at the time of:

obtaining actual knowledge of the infringement or noticing the facts [or circumstances, for hosting services] from which infringement is evident, including through a notice received according to Article 11 [containing notice requirements within a collaborative scheme] or through a notification from a competent judicial authority ordering the removal of content or blocking of access to it.55

In other words, actual knowledge comes either by facts or circumstances that stem from a court order (as in Chile) or from a private notice (somewhat similar to the United

50  See Michael Geist, ‘The Trouble with the TPP’s Copyright Rules’ in Scott Sinclair and Stuart Trew (eds), The Trans-Pacific Partnership and Canada: A Citizen’s Guide (Lorimer 2016) 158–68.
51  See Decreto Ejecutivo no. 36880-COMEX-JP, Reglamento sobre la limitación a la responsabilidad de los proveedores de servicios por infracciones a Derechos de Autor y Conexos de Acuerdo con el Artículo 15.11.27 del Tratado de Libre Comercio República Dominicana–Centroamérica–Estados Unidos (CR) (hereafter Executive Decree 36880).
52  ibid. Art. 7.
53  ibid. Art. 5.
54  ibid. Arts 6–10.
55  ibid.


States). As a result, two secondary liability exemption schemes exist in the Decree: judicial enforcement and a collaboration procedure.

The first scheme in the Decree resembles proper judicial enforcement. The rightholders or their agents must file a petition, within a judicial procedure or as a preliminary injunction, detailing the rights infringed and their ownership, the nature of the infringing content and form of infringement, as well as its location on the networks or systems of the service provider. The court will then decide under general copyright rules and order appropriate measures, such as cancelling the account, removing or blocking the content identified as infringing, or other measures deemed necessary, as long as they are the least costly option for the service provider.56

In the second of these schemes, the collaboration procedure, rightholders whose rights under copyright law are infringed can send a written sworn statement to the service provider (for caching, hosting, and indexing). The service provider has up to fifteen days to determine whether it requires additional information from the copyright holder. After this period, the service provider must send a notice to the content provider (the user) within thirty days of the first notice. The content provider has fifteen days to voluntarily remove the infringing content or file a counter-notice to the copyright holder. If the user does not respond to the first notice, the service provider may cancel their account, or remove the content or block access to it. The user may still file a counter-notice to challenge the removal. This collaborative procedure roughly resembles a ‘notice and notice’ system.57 In its Special 301 Report, the USTR has kept Costa Rica on its ‘Watch List’, stating that this forty-five-day term may have a negative effect on combating ‘online piracy’.
A ‘good faith’ removal can also allow for a safe harbour, provided that the affected user is given a chance to file a counter-notice, unless the rightholder has informed the service provider that it will commence judicial proceedings against the alleged infringer.58 The Decree explicitly mentions that actual knowledge can derive from ‘facts and circumstances’ surrounding the upload.

3.3  Pending Implementation

Of all the other Latin American countries with FTA obligations concerning internet intermediary liability provisions, only Colombia has officially attempted to pass an implementing act. The Colombia–United States Trade Promotion Agreement (CTPA) has been the


subject of several failed attempts to implement copyright-related provisions, each publicly identified as ‘Ley Lleras’ followed by a number.59 The USTR objected to Colombia’s 2018 entry into the Organization for Economic Co-operation and Development (OECD) because of its outstanding and unimplemented copyright obligations.60

The first attempt to implement the internet intermediary liability provisions of the CTPA came through Bill No. 241 of 2011.61 The Bill contained language largely similar to that found in the FTA. The sponsors of the Bill attempted a quick turnaround in Congress, and an open public discussion of its contents was avoided.62 The Colombian Bill closely followed the DMCA model of notice and takedown, with a private notice required to trigger removal of content and activate the limitation of liability for intermediaries. After a quick committee discussion and approval, some criticism over its likely unconstitutionality, and strong public outcry,63 the Bill was archived in November 2011. Subsequent attempts at copyright reform, each known as another iteration of ‘Ley Lleras’, addressed different parts of the law and did not include internet intermediary liability provisions.

The USTR had attempted to reduce flexibility in the intermediary liability framework of the CTPA. A side letter, signed on 22 November 2006, the same date as the signature of the CTPA, was filed by the then Deputy United States Trade Representative. The ‘ISP Side Letter’ gave a very detailed example of a model that would constitute an ‘effective written notice or counter-notification’ under the CTPA.64 Such a private notification as described in the side letter would be sufficient ‘effective notice’, thereby introducing a private notice as the proper way of establishing ‘actual knowledge’.
The letter is understood by both parties (represented by the USTR and the Colombian Minister of Commerce) to be an ‘integral part of the [trade promotion] agreement’.65 The commitment under the CTPA to adopt the necessary legislation was scheduled to be completed by May 2013. As a result of this failure in implementation, the USTR placed Colombia on the ‘Priority Watch List’ in its Special 301 Report for the year 2018, citing a ‘lack of meaningful progress’ in implementing the CTPA, including ISP liability provisions,66 downgrading its status after decades on the ‘Watch List’.

59  The most recent attempt, ‘Ley Lleras 6’, contains substantive provisions on copyright, strengthening exclusive rights as well as penalties and enforcement mechanisms, and implementing obligations against the circumvention of technological protection measures that prevent access to or reproduction of copyrighted works.
60  ‘Colombia invited to join OECD despite some US objections’ (Financial Times, 25 May 2018).
61  See Proyecto de Ley 241 de 2011 (Col.).
62  See Marcela Palacio Puerta, Derechos de autor, tecnología y educación para el siglo XXI: El tratado de libre comercio entre Colombia y Estados Unidos (2016) 164–6.
63  See Carlos Cortés Castillo, ‘El debate pendiente en Colombia sobre la protección de derechos de autor en Internet. El caso de la “Ley Lleras”’, Fundación Karisma Working Paper 4/2013 (2013).
64  See US–Colombia ISP Side Letter.
65  ibid.
66  See USTR (n. 26) 10.


A similar situation occurred in Peru. Following the signature of the Peru–United States Trade Promotion Agreement (PTPA), an ‘ISP Side Letter’—with content similar to the Colombian side letter—was sent by the USTR in April 2006, as an ‘understanding’ between the parties that would in essence become part of the agreement.67 The same text appeared years later in the February 2011 text of the TPP, as part of the US proposal for an ISP Side Letter. Almost identical content may be found in side letters to FTAs already signed with Singapore (2003), Australia (2004), Bahrain (2004), Morocco (2004), and Oman (2006). However, the PTPA framework, including its side letter, has not yet been implemented in national law. Peru appeared on the ‘Watch List’ of the 2018 Special 301 Report, with a passing recommendation to implement the PTPA’s intermediary liability rules. By the end of 2018, there had been no legislative or executive action to implement those FTA provisions.

The remaining Latin American countries (Panama, the Dominican Republic, and the CAFTA bloc, with the exception of Costa Rica) have neither adopted new legislation, nor introduced draft bills in their respective legislatures, nor signed new executive decrees. Only time will tell whether the US government will push for further implementation of existing FTA commitments, and whether those efforts will face resistance.

3.4  The Current Promotion of the DMCA Model in FTAs

The TPP was an ambitious attempt to export DMCA rules into an agreement spanning continents. It was initially signed by Chile, Peru, and Mexico, along with another nine countries.68 At the time Peru joined the negotiations for the TPP, it had yet to implement the DMCA-like provisions included in its earlier FTA with the United States. Mexico, on the other hand, did not have any pre-existing obligations, despite being a signatory to NAFTA with the United States. The TPP may have represented a renewed effort by the United States to promote the DMCA model with Chile and Peru, and to expand its coverage so as to include Mexico.

The withdrawal of the United States from the TPP in early 2017 was thought to derail the deal; it was the largest party in the agreement and a force driving the negotiations.69 However, the remaining parties reached an agreement in March 2018 on an amended version of the TPP, now dubbed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), which notably suspended Article 18.82: Legal

67  See US–Peru ISP Side Letter.
68  The original parties were Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, Vietnam, and the United States.
69  See Xin En Lee, ‘What is the TPP?’ (CNBC, 3 April 2018).


Remedies and Safe Harbours.70 In April 2018, the United States informally expressed some interest in joining the agreement,71 and it remains to be seen whether it will do so and attempt to revive these ‘suspended’ provisions.

More recently, the Trump administration focused on renegotiating the North American Free Trade Agreement (NAFTA). NAFTA (the United States, Mexico, and Canada being the treaty’s signatories) has been in force since 1994; however, it does not include internet intermediary liability provisions, and neither does Mexican federal law. The renegotiation of NAFTA presented the US government with an opportunity to promote its DMCA system among its neighbours. The DMCA model had already been touted as a ‘foundation’ of the digital economy in the United States by internet industry representatives promoting its inclusion in the revised NAFTA.72 Although it appeared that the United States was willing to move forward on a bilateral trade deal with Mexico that included provisions on intermediary liability,73 Canada was able to agree to terms in late September 2018 to form the US–Mexico–Canada Agreement (USMCA).74 The final version contains language nearly identical to that used in the TPP in Article 20.J.11: Legal Remedies and Safe Harbors.75 As a consequence of Mexico not having any intermediary liability provisions,76 only Mexico must implement this article, within three years of the USMCA’s entry into force.77

The USMCA marks a return to the mandatory counter-notice (and put-back) provisions found in the bilateral FTAs,78 whereas under the TPP they were optional.79 If material is mistakenly removed or disabled, the service provider must restore the material upon receipt of a counter-notice, unless the rightholder who sent the original takedown notification initiates civil judicial proceedings within a reasonable period of time.80 Under the

70  See Comprehensive and Progressive Agreement for Trans-Pacific Partnership, Annex 7(k). It should be noted that the treaty needs six ratifications to come into effect, and currently there are only three.
71  See ‘Trump to reconsider joining TPP trade pact’ (BBC News, 13 April 2018).
72  See Internet Association, ‘Modernizing NAFTA for today’s economy’, White Paper (2017).
73  See William Mauldin, ‘Trade Deal Could Move Ahead Without Canada, U.S. Official Says’ (Wall Street Journal, 25 September 2018); United States–Mexico Trade Fact Sheet: Modernizing NAFTA into a 21st Century Trade Agreement.
74  See John Paul Tasker and Elise von Scheel, ‘Canada, U.S. have reached a NAFTA deal—now called the USMCA’ (CBC News, 30 September 2018).
75  See USMCA, Art. 20.J.11, subsection 3(a).
76  Canada meets the requirements of Annex 20-A to be exempt.
77  See USMCA, Art. 20.90, subsection 3(g).
78  See e.g. Colombia–United States Trade Promotion Agreement, Art. 16.11, subsection 29(b)(x).
79  cf. USMCA, Art. 20.J.11, subsection 4 with the TPP, Art. 18.82, subsection 4.
80  This time period is determined according to domestic laws or regulations.


TPP, this provision applied only if a state party already had a similar system for counter-notices in domestic law.

Mexico has ratified the agreement, while Canada and the United States have yet to do so. While ratification in the United States was in a state of flux,81 after a number of amendments the agreement is set to be ratified, and Canada is expected to follow suit.82

4.  The Convenience of the DMCA Approach for Notice and Takedown in Latin America

Internet intermediary liability rules are supposed to reflect a balance between the interests of copyright holders, of the users uploading content, and of the public in general.83 The liability model established by the DMCA relies on a safe harbour system in which a copyright holder need only send a private notice of infringement to an intermediary in order to see content removed from the internet, so that the service provider can be exempted from secondary liability for copyright infringement on its network or systems.

Enacting DMCA-like rules for such a secondary liability framework is inadvisable, not only in Latin American countries but also in the rest of the world. The first reason is the fundamental lack of evidence supporting the convenience of this approach for Latin American countries, which have very different concerns, economies, and development goals than the United States.84 FTAs as promoted by the United States (with their detailed language, well beyond the WIPO Internet Treaties framework), while advancing the interests of US industries, have effectively limited the potential of national policies in broad areas such as intellectual property.

Moreover, the existing FTAs do not provide any guidance on, or promotion of, a broader balance between the substantive provisions of copyright law and its enforcement mechanisms and the legitimate interests of internet users that may be affected. In other

81  See Jacob Pramuk, ‘House Dems, Trump administration fail to reach deal on USMCA trade agreement’ (CNBC, 21 November 2019); Erica Werner and others, ‘Trump’s North American trade deal at risk of stalling in Congress’ (Washington Post, 29 March 2019).
82  Erica Werner and Rachel Siegel, ‘Senate approves new North American trade deal with Canada and Mexico’ (Washington Post, 16 January 2020).
83  See Cerda Silva (n. 43).
84 See Pedro Roffe and Maximiliano Santa Cruz, Intellectual Property Rights and Sustainable Development: A Survey of Major Issues (Economic Commission for Latin America and the Caribbean 2007).


words, DMCA-like rules would establish mechanisms for easy content removal, based on alleged copyright infringement, in countries where such infringement can occur through trivial acts not covered by balancing provisions such as copyright limitations and exceptions or the fair use doctrine, which are not equally promoted in FTAs.85 Enhancing exclusive rights and enforcement mechanisms, without counterweights in the public interest, has the effect of reducing the rights of users to pursue their own legitimate purposes such as expression or education.

This criticism of the DMCA shows that it remains a controversial piece of legislation. As such, FTA provisions that follow its framework should not be implemented in Latin American countries that have agreements with the United States without strong public debate and an analysis of their adequacy to domestic law, including its constitutional and human rights principles. Otherwise, their transposition into very different legal and cultural landscapes would entail the implementation of a one-size-fits-all solution, without accounting for national context or input from local stakeholders.86

Participation in these negotiations and implementation processes remains a sore point. For all its controversy, the DMCA can still be said to be the product of Congressional debate in the United States, whereas the growing demand for certain provisions through FTAs leaves little room for input from local stakeholders or different interest groups, both in closed-door FTA negotiations and in their implementation by legislatures.
The strong opposition from academia and civil society groups to the TPP, including opposition to its intellectual property and ISP liability provisions, was partly based on the exclusion of the viewpoints of relevant stakeholders and public interest representatives,87 while interest groups from the US intellectual property industries had far more access to the negotiators.88 This lack of transparency creates problems not only for the substance of internet intermediary liability provisions, but also for democratic debate itself as the source of the rules that govern societal relations.

Finally, the importance of the need for balance cannot be downplayed. Beyond copyright protections and exceptions, content uploads, and removals lie questions of balance between fundamental rights that are at the crux of criticism of the DMCA system. The exercise, via the internet, of fundamental rights recognized in international human rights law demands a higher level of participation, democratic debate, and risk-assessment analysis to allow for better informed public policy.

85  See e.g. Jonathan Band, ‘Evolution of the Copyright Exceptions and Limitations Provision in the Trans-Pacific Partnership Agreement’, Social Science Research Network Scholarly Paper no. 2689251 (2015). In the TPP, it was only after a high level of public pressure that some degree of balance was introduced into the text of the agreement. However, its length and detail are minuscule compared to the provisions enhancing IP rights.
86  See e.g. Cecilia Lerman, Impact of Free Trade Agreements on Internet Policy, a Latin America Case Study (Internet Policy Observatory 2015) 25.
87  See ‘More Than 400 Civil Society Groups Write Congress on TPP and Fast Track’ (Public Citizen, 4 March 2013).
88  See Margot Kaminski, ‘The Capture of International Intellectual Property Law Through the U.S. Trade Regime’ (2014) 87 Southern California L. Rev. 977.



5. Conclusions

The DMCA model for internet intermediary liability has been proposed in other FTAs negotiated by the United States, including in the revision of NAFTA now known as the USMCA. However, it is questionable whether implementing the DMCA model is ideal for Latin American countries, and whether it should form part of future FTAs.

First, the USTR’s appeals to combat ‘online piracy’ lack a proper evaluation of the supposed damage to the copyright industries in the countries where this rhetoric is promoted. In addition, there is no consensus as to whether the DMCA is the best model for balancing the rights of copyright holders and the general public. Furthermore, this debate opens questions regarding the legitimacy of the democratic processes that might be undermined by FTA obligations and the fundamental rights that might be affected if the intermediary liability regime were implemented in the same manner as in the United States.


Chapter 10

The Marco Civil da Internet and Digital Constitutionalism

Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes*

The laws on intermediary liability lie at the intersection of several complex legal debates. The way these laws are conceived, drafted, and interpreted has important implications for shaping the digital world: by empowering or limiting private and state actors, by promoting or harming free speech and privacy, and by promoting or hindering experimentation and innovation. By limiting online service providers’ (OSPs) liability for the conduct of their users, these rules can even shape business models, and well-crafted intermediary liability rules can remove, under certain conditions, internet platforms’ incentives to interfere with their users’ rights and activities.

In Brazil, the landmark statute on the liability of internet intermediaries is Law 12.965/2014, which has been labelled the Civil Rights Framework1 for the Internet (Marco Civil da Internet, or MCI).2 The MCI was enacted by Congress in 2014, with enormous support from both civil society entities and internet companies, and it was

*  We would like to thank Clara Iglesias Keller and the editor of this Handbook for helpful comments, and Renan Medeiros de Oliveira for excellent research assistance.
1  ‘Framework’ has become the most frequent translation for ‘Marco’. Critics have pointed out, however, that it is not a precise description of the MCI—an internet-centric, ordinary federal law that ‘establishes the principles, guarantees, rights and obligations for the use of Internet in Brazil’ (Francis Augusto Medeiros and Lee Bygrave, ‘Brazil’s Marco Civil da Internet: Does it live up to the hype?’ (2015) 31 Comp. L. & Sec. Rev. 1, 121).
2  See Marco Civil da Internet 2014 (BR) (hereafter MCI 2014).

© Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes 2020.


praised internationally as model legislation for the protection of freedom of expression and other human rights.3

In substance, the MCI shifted the balance of power between private and state actors. It created safeguards against intrusion on users’ communications and privacy by both law enforcement authorities and internet access providers, while also creating an obligation for the latter to retain certain types of data that would allow the identification of users engaging in illegal activities. The MCI also created ‘safe harbours’—clauses that establish procedures and conditions that, if followed by OSPs, limit their civil and criminal liability for acts committed by their users. In doing so, the MCI reduced the incentives for OSPs to police the online activity of their users, for example by removing controversial content shared by users so as to avoid liability.

While it adopts specific rules on liability, the MCI also establishes broad principles and expansive guidelines, and expressly affirms the protection of freedom of expression and privacy in the digital sphere against both public and private actors. These and other traits of the MCI (e.g. the high level of civil society mobilization around its drafting and enactment) have encouraged some observers to discuss the MCI as a ‘Constitution for the Internet’ or a ‘Digital Bill of Rights’.4 Indeed, as we will see later, the MCI does embody some of the elements highlighted by the growing literature on ‘digital constitutionalism’.5

In this chapter, we briefly review the history of the Brazilian Marco Civil da Internet, explaining how different business, government, and civil society actors pushed for and shaped this law. We then engage with the substance of the MCI, focusing specifically on its main intermediary liability provisions and how they have so far been applied in practice by Brazilian courts.
In examining the contrast between the formal provisions and the ‘law in action’, we will briefly explore key aspects of Brazilian intermediary liability law, such as the conditions intermediaries must fulfil in order to benefit from the immunities the MCI grants them and the requirements for a valid court order targeting illegal content, which vary depending on the kind of user activity involved.

3  Paper’s newsroom, ‘Projeto brasileiro de Marco Civil da Internet é modelo internacional, diz relator da ONU’ (ONUBR, 9 April 2014) . 4  Dillon Mann, ‘Welcoming Brazil’s Marco Civil: A World First Digital Bill of Rights’ (World Wide Web Foundation, 26 March 2014) . On a more sceptical note, see Medeiros and Bygrave (n. 1) (pointing out that the MCI ‘does not contain any right that has not been enacted elsewhere in the world’, and that it ‘fleshes out rights that already exist in Brazil (albeit in a latent or vague form), rather than creating entirely new rights’). 5  For a critical review of the main themes in this literature, see Edoardo Celeste, ‘Digital Constitutionalism: Mapping the Constitutional Response to Digital Technology's Challenges’, HIIG Discussion Paper Series no. 2018-02 (2018).


Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes

1.  The Civil Rights Framework for the Internet

1.1  The Path Towards the MCI’s Enactment

Prior to 2014, Brazil had no specific legislation regulating the liability of internet intermediaries for their users’ activities. The absence of such rules had been criticized as a source of legal uncertainty that was harmful to internet platforms, as both companies and judges often adopted disproportionate measures to try to curb the spread of potentially illegal content online. An early example of such pre-MCI measures was the judicial decision in the Cicarelli case,6 which led to the blocking of the video-streaming platform YouTube for almost forty-eight hours. The famous model and TV host Daniela Cicarelli and her boyfriend sued YouTube and demanded the removal of an unauthorized video, as well as damages for the violation of privacy and publicity rights. The video depicted the couple engaged in intimate scenes on a beach near the city of Cadiz, in Spain. The plaintiffs requested a preliminary injunction to remove the videos from YouTube. Although the request was denied by the trial judge, the decision was later reversed by the Fourth Chamber of the Court of Appeals of the State of São Paulo, which required YouTube to block online access to the videos.7 Enforcing the blocking decision proved to be trickier than expected. Copies of the original video kept resurfacing on the platform. Considering the potential damage to the plaintiff’s privacy and publicity, the trial judge ordered several internet access providers to block the YouTube platform itself. After the decision was implemented by two backbone companies, the Court of Appeals of São Paulo reversed the blocking and said that its original decision had been misinterpreted. The court clarified that the original blocking decision targeted only the specific content that infringed the plaintiff’s rights, and therefore should not reach the video platform in its entirety. The case was dismissed at trial level in 2007. 
In ruling favourably for YouTube, the judge considered that no privacy rights had been violated, since the intimate scenes had been filmed in a public space. The Court of Appeals reversed that decision in June 2008, determining that YouTube should adopt all necessary measures to exclude the videos, including warning and ultimately punishing all users who re-uploaded the video. The court also set a daily fine in case of non-compliance with the decision. In 2015, the Superior Court of Justice (STJ), in a new appeal in that case, reviewed the amount of damages

6  For additional details and documents and an overview of the case in English, see ‘YouTube case: Non-compliance with judicial requests for content removal’ (Bloqueios.info, 9 January 2007) . 7  See Fernando Porfírio, ‘Justiça confirma veto ao vídeo de Cicarelli na internet’ (ConJur, 28 September 2006) .


awarded to the plaintiffs in the lower courts, and eventually decreased them from almost 100 million reais (in total) to 250,000 reais for each plaintiff.8 Although the blocking of YouTube played a decisive role in sparking debate on intermediary liability in Brazil, other cases reached the STJ as early as 2010. These cases further illustrate the legal uncertainty around the liability of internet intermediaries in the pre-MCI era. On one of the first occasions that the STJ assessed the liability of internet intermediaries, the court ruled favourably for state prosecutors from the Brazilian state of Rondônia (RO), in a class action. The State Prosecutor’s Office sought a preliminary injunction ordering Google to remove and to prevent the re-uploading of defamatory content against minors posted by the users of the social network Orkut. Although the company successfully removed the illegal content identified, it did not create a filtering mechanism to prevent the re-uploading of similar content, leading the Rondônia State Court to impose a fine of 5,000 reais per day for failing to comply with the court’s order. Google appealed to the STJ, arguing that it had no technical means for pre-screening or monitoring all the content being uploaded by its users. Although Google promptly removed the content identified as illegal, the Second Chamber of the STJ ruled, in a March 2010 decision, that removing the illegal content was not enough, and that Google should be held jointly liable (together with its users) if new, similar content was posted.9 The same court, however, pointed in a different direction in a December 2010 decision, also involving Orkut.10 In that case, the Third Chamber of the STJ developed an argument to set aside the strict liability rule required by the Consumer Protection Code. 
Although it recognized that Google should be subject to consumer protection rules, the court asserted that filtering content to prevent illegal activities was not an ‘intrinsic feature’ of the service Google provided. Therefore, the service should not be considered defective. The court also rejected the idea that, in providing its services, Google had exposed its consumers to a ‘risk’—a scenario in which Brazilian law would allow for the application of a strict liability rule. Instead, OSPs such as Google would be required simply to remove the content promptly upon notice and to adopt measures to allow the identification of their users. Failing to do so, the court said, could make OSPs liable for their users’ illegal activities.11 The same rationale would then be used to decide one of the first cases against the Google search engine, but with two important twists. In a case dealing with the deindexation of defamatory content, the STJ moved beyond its previous decisions in affirming that, as the plaintiff had been able to identify the precise URL to be delisted, she could be expected to have been able to identify the original publisher of the harmful content. If that was the case, reasoned the court, then it would be more appropriate for the plaintiff to bring an action against the original publisher and not against the search

8  See Superior Tribunal de Justiça [Superior Court of Justice] (STJ) Renato Aufiero Malzoni Filho v Google and Youtube [2015] REsp 1492947 (Bra.). 9  See STJ Ministério Público do Estado de Rondônia v Google [2010] REsp 1.117.633/RO (Bra.). 10  See STJ I P DA S B v Google [2010] REsp 1.193.764/SP (Bra.). 11  ibid.


engine, which would only index publicly available information. In other words, the decision indicated that search engines cannot be compelled to remove content from their indexes, even when that content is deemed illegal.12 In a case decided in December 2013, however, the court crafted an exception to that rule, affirming that the search engine could be sued even if the original content had been removed or altered in the original source, as long as it still appeared as a search result. Therefore, the decision indicated that a search engine can be compelled to update its cache memory to reflect the current, actual state of the original source.13 In 2011, another case involving Orkut was decided by the Fourth Chamber of the STJ. The plaintiff requested the platform to exclude all defamatory content against him, as well as the payment of damages. In a preliminary injunction, the trial judge ordered Orkut to remove, within forty-eight hours, all negative topics relating to the plaintiff. The trial court decision was confirmed by the Rio Grande do Sul State’s Appeals Court, and the case ultimately reached the STJ after an appeal by Google. The STJ affirmed that the platform should be able to filter uncontroversially defamatory content. In its decision, the STJ said that, as Google had created this ‘untameable monster’, it should bear responsibility for the ‘disastrous consequences’ of its lack of oversight of its users. The decision also stated: (1) that even if automated filtering was impossible in the case of controversial speech, the company should bear responsibility for assessing the content and keeping it online at its own risk; and (2) that a large internet company should be aware of the existence of a potentially offensive message as soon as it is posted, regardless of notification by the person offended. 
The decision thus upheld and expanded the trial judge’s decision that required Google to proactively monitor and remove the content from the Orkut social network.14 Intellectual property (IP) issues were also brought to the STJ before the MCI was enacted. In the Dafra case (decided in 2014), YouTube was found liable for copyright infringement due to a parody of a commercial of Dafra (a motorcycle manufacturer) that made negative comments about the company’s products. YouTube removed the video after a takedown request. However, Dafra filed a lawsuit requesting YouTube to actively prevent the exhibition of any video using the company’s name or trade mark, regardless of whether it bore the same title as the first video and regardless of which user had uploaded it. The plaintiff also sought to identify the user who posted the parody and asked YouTube to pay monetary damages due to its negligence. These claims were accepted by the court even though parodies are among the exceptions to copyright protection in Brazilian copyright law.15 Moreover, the decision affirmed that, after being formally notified by the plaintiff, YouTube had an obligation to remove all existing videos with the same title, even without the

12  See STJ Maria da Graça Xuxa Meneghel v Google [2012] REsp 1.136.921-RJ (Bra.). 13  See STJ Segunda Turma Recursal dos Juizados Especiais Cíveis e Criminais do Estado do Acre v Google [2013] REcl 5072-AC (Bra.). 14  See STJ Tiago Valenti v Orkut [2011] REsp 1.175.675-RS (Bra.). 15  See Copyright Law, art. 47 (Bra.).


precise identification of the URL.16 The opinion of Judge Luis Felipe Salomão (in the STJ) considered that YouTube would be able to filter new versions of the video re-uploaded to the platform. However, the opinion also emphasized that, because this issue had not been raised by the plaintiff, it was beyond the scope of the appeal. The obligation to indicate the URL of the content to be removed was eventually settled in a different case by the Second Section of the STJ. In another appeal (REcl 5.072/AC), decided in December 2013, the court stated that the specific URL is an essential requirement for a court order to determine the removal of content.17 Behind these three cases, and several others, we find a common rationale: internet platforms should bear the full risks of illegal activities by their users. As detailed in the next section, the MCI was drafted, among other reasons, to change and limit this state of affairs, bringing more legal certainty for internet users and intermediaries. However, even today, some important pre-MCI cases are still pending in the Brazilian courts. The most important of these relates to the applicability of the Consumer Protection Code’s (CDC) strict liability rules to Orkut.18 A high school teacher, Ms Aliandra Vieira, requested Orkut to exclude a community in which some of her students were posting defamatory comments. As Orkut refused to take the community down, Ms Vieira filed a lawsuit against the company before a small claims court in the state of Minas Gerais, requesting an injunction to take down the community, as well as damages from Google for hosting the content. The court awarded Ms Vieira 10,000 reais in damages. Google’s appeal to the Court of Appeals of Minas Gerais was rejected, and the court affirmed that the company should indeed be considered strictly liable for user-posted content. 
Google appealed to the Federal Supreme Court (STF), but the case is still pending in the court’s docket. The STF has so far only asserted its jurisdiction over the case, recognizing that the appeal has ‘general repercussion’. This means, however, that until the STF reaches a final decision, all similar litigation on the topic throughout the country must be suspended. As these and other cases made their way through the Brazilian courts, the media, and the legal community, legislators in Congress were pushing for the enactment of a statute regulating the activities of internet intermediaries. Among many such legislative efforts, one that garnered significant public attention was a ‘cybercrime’ bill proposed by Senator Eduardo Azeredo.19 The bill was regarded by scholars and civil society organizations as a move to censor online speech and to strengthen the protection of IP rights. 


It triggered reactions from different social sectors and led to a public campaign for the internet not to be regulated from a criminal standpoint. In that scenario, it became clear that innovation and the protection of human rights online would depend on the enactment of a civil rights framework for the internet.

1.2  The MCI Legislative Process

In 2009, the Brazilian Internet Steering Committee enacted a resolution with a ‘Charter of Principles for the Governance and Use of the Internet’.20 Almost at the same time, the Brazilian Congress was facing mounting pressure from organized civil society groups against the ‘Cybercrime Bill’ presented by Senator Eduardo Azeredo. The Azeredo Bill aimed to create a national legal framework for preventing and fighting cybercrime. His proposal was largely inspired by the Budapest Convention, a controversial international treaty sponsored by the Council of Europe and approved in 2001, shortly after the 9/11 attacks in the United States.21 The Azeredo Bill raised serious concerns about privacy and surveillance among Brazilian internet activists. It established mandatory data retention, for a period of five years, for all internet service providers (ISPs) active in the country. Furthermore, public interest groups voiced concerns that the Bill could be employed to criminalize common and widely accepted behaviours by internet users, as well as to strengthen the enforcement of IP rights online.22 Reactions to the Cybercrime Bill eventually converged on an alternative proposal for regulating the internet—instead of establishing crimes and penalties for misuse, the idea would be to affirm principles, rights, duties, and responsibilities relating to use of the internet. Presented under the label of a ‘Civil Rights Framework for the Internet’, the proposal was received with enthusiasm by government officials. In 2009, the Ministry of Justice partnered with the Center for Technology and Society of the Getulio Vargas Foundation (CTS/FGV), a non-profit research and higher education institution, to develop a blueprint for an online public consultation process as part of the drafting of the new legislation. 
The draft presented for consultation was officially named the ‘Marco Civil da Internet.’ The process began with public debates on the general principles that should govern the internet (broadly inspired by the CGI Charter of Principles for the Governance and Use

20  See Comitê Gestor da Internet no. Brasil (CGI.br), Brazilian Internet Steering Committee Principles for the Governance and Use of the Internet . 21  See Budapest Convention . 22  See Luiz Moncau, Ronaldo Lemos, and Thiago Bottino, ‘Projeto de Lei de Cibercrimes: há outra alternativa para a internet brasileira?’ (2008) 249 Revista de Direito Administrativo 273–94 . See also Guilherme Varella, ‘PL Azeredo: a contramão dos direitos e liberdades na Internet’ (IDEC, 28 July 2011) .


of Internet).23 The outcome of those debates served as a foundation and orientation for the first draft of the Bill, which was then subject to a new round of public consultation. The level of public engagement with the consultations24 was high. During the process, individual users and governmental and non-governmental entities submitted more than 2,000 contributions, and the substance of that input was in many ways reflected in the final text of the Bill. One of the main changes adopted during the public consultations was the replacement of a ‘notice-and-takedown’ regime for all types of illegal online activity with a set of safe harbour rules, under which intermediaries would be liable only if they failed to comply with a direct court order requiring the removal of infringing content. The final draft was presented to Congress on 24 August 2011.25 It encompassed users’ rights and general principles and rules for the regulation of the internet, including on data retention, ISPs’ liability, and net neutrality. In spite of having garnered public support during the consultation process, the proposal was stopped in its tracks in Congress. Telecommunication companies fiercely attacked the net neutrality provisions; broadcasters lobbied against any change that could amount to making copyright more flexible; and public authorities lobbied for longer data retention periods and reduced safeguards and controls on their access to users’ data. It would take an external event to change this state of affairs. The Snowden leaks on surveillance by the US National Security Agency (NSA) changed the whole political landscape, re-energizing public support for increased control over how power can be exercised over individuals on the internet. 
On 11 September 2013, as a direct response to information revealed by the leaks, President Dilma Rousseff issued a message endowing the MCI Bill with ‘urgent’ status for the purposes of the legislative process in Congress.26 In practice, this meant that each of the two legislative chambers, the Chamber of Deputies and the Federal Senate, would have forty-five days to vote on the Bill. The urgency procedure hastened deliberations in the Chamber, but several points in the text had to be negotiated between political leaders before the Bill was finally approved by the Deputies on 25 March 2014. The fact that the NETmundial meeting was taking place in São Paulo probably made things faster in the Senate, where the Bill was quickly discussed and approved on 22 April and officially converted into law during the NETmundial meeting on 23 April 2014.

23  See CGI.br, Principles for the Governance and Use of the Internet . 24  See Ministério da Cultura, Cultura Digital, Online Public Consultation for the Marco Civil da Internet . See also the public consultation by the Ministry of Justice for the decree issued to provide further details on the implementation of the MCI. See Ministério da Justiça, Pensando o Direito, Public Consultation on the MCI 2014 Decree . 25  For further information about the Bill and its amendments in Congress, see Câmara dos Deputados, PL 2126/2011 . 26  See Dilma Rousseff, Message 391/2013 .



2.  The ‘MCI on the Books’

In this section, we will review the main traits of the MCI’s original text. In substance, the MCI covers much more than intermediary liability. It addresses key topics of internet governance such as data retention, network neutrality, privacy, and jurisdiction and conflict of laws issues, as well as defining the rights and principles for the internet in Brazil.

2.1  General Provisions of the MCI

Data retention. The Azeredo Bill would have established a period of five years of mandatory data retention for internet intermediaries. The MCI has more moderate provisions to that effect: the period of mandatory data retention for access logs for ISPs providing connectivity services is only one year.27 Due to lobbying and public pressure by the federal police, however, other internet services—application services, meaning all websites and over-the-top (OTT) services—are also subject to a regime of mandatory data retention (six months).28 The provision was criticized due to the risks it poses to privacy. However, it provided a clear answer to the question of the obligations of internet intermediaries to provide information to authorities investigating users engaging in illegal activities online. Moreover, it created an important privacy safeguard in establishing that the authorities can only access the data thus retained by means of a court order.

Net neutrality. Before the MCI, debates in Congress around the Azeredo Bill considered requiring intermediaries to proactively and pre-emptively monitor their networks and platforms, as well as to secretly inform authorities of potentially illegal activities by their users. This was a highly demanding obligation for ISPs, and it expressed how cybercrime concerns had moved centre stage in the Brazilian legislative debates on internet regulation at that point. In contrast, the net neutrality provision adopted in the MCI expressly rejects such an obligation. Article 9, § 3º establishes that access providers cannot block, monitor, filter, or analyse the content of data packets flowing through their networks, with the only exception being made for technical measures necessary to provide the internet service itself.29 The rule may provide an important defence against judicial decisions that would require an access provider to block specific content for law enforcement purposes.

27  The MCI establishes that access providers must retain connection records/logs (‘the set of information pertaining to the date and time of the beginning and end of a connection to the internet, the duration thereof and the IP address used by the terminal to send and receive data packages’) and must not retain application logs (‘the set of information regarding the date and time of use of a particular internet application from a particular IP address’). See MCI 2014 (n. 2) art. 5, VI and VIII and arts. 13 and 14. 28  ibid. art. 15. 29  ibid. art. 9, s 3.


However, there are ongoing lobbying efforts by copyright holders to promote new rounds of legislation, in addition to the MCI, that would authorize the blocking of websites ‘dedicated to criminal activity’, including in cases of copyright infringement.30

Privacy. In Brazil, as elsewhere, the Snowden revelations brought privacy concerns to the fore, in addition to the freedom of expression concerns that had typically characterized a previous generation of internet governance debates.31 A small number of explicit privacy provisions were included in the MCI. They include a user’s right to be clearly informed about how their personal data is being collected and processed, and a right not to have one’s data processed for purposes other than those for which the data were originally collected.32 The MCI also requires that the collection of a user’s personal data be subject to their specific consent.33

Other rights and principles. Article 7, IV of the MCI also establishes a right of internet users to stay connected to the internet, ‘except if [the disconnection of the service] is due to a debt directly originating from its use’.34 This means, in practice, that the MCI forbids law enforcement policies or agreements between private actors that would allow for the disconnection of a user engaged in illegal activities (e.g. the ‘three-strikes approach’ discussed in the early 2000s).35

2.2  Intermediary Liability Rules The MCI established a broad immunity for internet access providers and a narrower, conditional immunity for other OSPs (‘application providers’, in the statute’s wording).36 Article 18 states that ‘[t]he provider of connection to internet shall not be liable for civil damages resulting from content generated by third parties.’37 Article 19 establishes the intermediary liability rules for application providers, stating that an application provider ‘can only be subject to civil liability for damages resulting from content generated by third parties if, after a specific court order, it does not take any steps to, within the framework of their service and within the time stated in the order, make unavailable the content that was identified as being unlawful’.38 The MCI, therefore, does not take into account whether the intermediary was aware of the existence of infringing content on its platform. Requiring a court order means 30  See International Intellectual Property Alliance, ‘2018 Special 301 Report on Copyright Protection and Enforcement’ (8 February 2018) 95 . 31  See Dennis Redeker, Lex Gill, and Urs Gasser, ‘Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights’ (2018) 80 Int’l Communication Gazette 311. 32  See MCI 2014 (n. 2) art. 7, VI and VIII. 33  ibid. art. 7, IX. 34  ibid. art. 7, III. 35  For a broader account of those topics, see Luiz Moncau and Pedro Mizukami, ‘Brazilian chamber of deputies approves Marco Civil Bill’ (Info Justice, 25 March 2014) . See also Nicolo Zingales, ‘The Brazilian approach to internet intermediary liability: blueprint for a global regime?’ (28 December 2015) 4(4) Internet Policy Rev. . 36  See MCI 2014 (n. 2). 37  ibid. art. 18. 38  ibid. art. 19.


rejecting the use of such awareness as a standard for holding OSPs liable. Moreover, it makes it clear that it is not for the OSP to decide which content is legal. This perspective is further developed in the first paragraph of article 19, which states that the court order must provide a clear identification pointing to the specific infringing content, ‘allowing the unquestionable location of the material’.39 While the court order standard serves to rule out any strict liability regime, as well as the proactive monitoring of content, the specific and clear identification of the infringing content prevents the imposition of ‘staydown’ obligations (i.e. the obligation to ensure that the removed content ‘stays down’ and does not reappear on the platform or service).40 An important exception, however, can be found in the second paragraph of article 19. Originally, the MCI would have adopted a ‘horizontal’ approach, by applying the rules equally to all types of illegal content. However, lobbying efforts from copyright holders and broadcasters led to the creation of this specific provision, which explicitly states that copyright infringement is an exception to the general rule of article 19.41 As the current Brazilian copyright law dates from 1998 and does not have any specific provisions relating to the internet, this clause has led, in practice, to the adoption of a notice-and-takedown regime for copyright infringement, following the model established in the United States by the Digital Millennium Copyright Act and in Europe by the e-Commerce Directive.42 During the MCI debates in Congress, members of parliament were concerned with the possibility that illegal and harmful content would stay online for long periods before courts could issue a decision determining its removal. 
To address this concern, the third paragraph of article 19 was created to allow citizens to file lawsuits in special ‘small claims courts’, which were designed in the 1990s to adjudicate less complex, low-profile cases with increased speed, simpler procedural requirements, and in which parties could represent themselves (i.e. without having to hire a lawyer) under certain conditions.43 Paragraph four of article 19 also allows judges to issue preliminary injunctions that may pave the way for a partial or full takedown request in the future, ‘as long as the requisites of plausibility of the author’s claims, the reasonable concern of irreparable damage, or damage that is difficult to repair’ are met.44 The second important exception in the MCI’s horizontal approach is the non-consensual dissemination of intimate images, popularly known as ‘revenge porn’. The law explicitly affirms that the removal of this kind of content requires only the ‘receipt of notice by the participant or his/her legal representative’.45 Following the general rule of article 19, the notice ‘must contain sufficient elements that allow the specific identification of the

39  ibid. art. 19(1). 40  For the different types of notice and action mechanisms, see Chapter 27. 41  See MCI 2014 (n. 2) art. 19(2). 42  See the Digital Millennium Copyright Act of 1998, 17 USC § 512 (US); Directive (EC) 2000/31 of the European Parliament and the Council on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1. 43  See MCI 2014 (n. 2) art. 19(3). 44  ibid. art. 19(4). 45  ibid. art. 21.


material said to violate the right to privacy of the participant-user and the confirmation of the legitimacy of the party presenting the request’.46

2.3  Intermediary Liability Beyond the MCI

While the copyright exception in the MCI has not been matched by specific rules in the copyright laws, other statutes—such as some electoral laws47 and child protection laws48—create specific rules of intermediary liability and address the question of online content removal. In the electoral sphere, after public debates on whether removal of content that violates electoral law would require a court order, recent statutes have clearly created a judicial decision requirement, thus reconciling electoral laws with the general rule of the MCI. Brazilian electoral laws are updated before each election. Law 9.504 states that promoted content on the internet can only be removed after a specific court order.49 The Superior Electoral Court (TSE) Resolution no. 23.551/2017 repeats, in part, the wording of the MCI: its article 33, § 1º determines that ‘to ensure freedom of expression and prevent censorship, court orders may determine the removal of content that infringes the electoral law’.50 Article 33 also establishes other details, such as that court orders must point to the URL of the infringing content; that such orders must not require the removal of content in less than twenty-four hours (except in ‘extraordinary’ cases); and that the effects of those court orders will cease after the end of the electoral period.51 The Child and Adolescent Statute (ECA) criminalizes the act of offering, trading, making available, distributing, or publishing by any means photographs, videos, or any records of a sexual nature involving children or adolescents.52 The crime is punishable with imprisonment, a penalty that can also be applied to anyone who provides the means or services for access to that type of content. 
However, § 1º of article 241-A states that the provision is only applicable when the service provider, once notified, does not disable access to the infringing content.53 In practice, this means that service providers must respond to takedown notices to avoid criminal liability under the ECA.

46  ibid. art. 21(1).
47  Brazil holds elections every two years. During the election period, many requests to remove content are addressed by intermediaries. This can be clearly observed in the Google Transparency Report, ‘Solicitações governamentais de remoção de conteúdo’. A broad account of lawsuits seeking content removal based on the electoral law can be found in the Ctrl-X project.
48  See lei no. 8.069 de 13 Julho 1990 [Child and Adolescent Statute (ECA) of 1990] (Bra.).
49  See lei no. 9.504 de 30 Setembro 1997, art. 57-B(4) (Bra.).
50  Superior Electoral Court (TSE) resolution no. 23.551/2017, art. 33(1) (Bra.).
51  ibid. art. 33 (3, 4, 6).
52  See ECA 1990 (n. 48) art. 241-A.
53  ibid. art. 241-A(1).


202   Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes

3.  The MCI in Practice

3.1  The Brazilian Justice System

The Brazilian federation has a dual judicial system, with both state and federal courts.54 The federal courts’ jurisdiction is not defined by subject matter, but rather ratione personae: all cases in which one of the parties is the federal government, one of its agencies, or one of its companies must be litigated before the federal courts.55 Moreover, the Brazilian justice system is further divided into specialized branches with separate labour courts, electoral courts, and military courts—each of which has its own specific court of last instance. Both state and federal courts are divided into trial and appellate levels (instâncias), and both include internal, specialized ‘small claims’ courts for simpler cases. At both levels, the small claims and the ordinary courts can, in principle, hear cases arising under federal law. In the case of the MCI, litigation typically takes place in the state court system, unless the federal government or one of its entities is a defendant or plaintiff. The state and federal systems connect, at the top, in one of the two apex or ‘high’ courts: the Supremo Tribunal Federal (STF), which has jurisdiction over appeals and abstract review lawsuits in which a constitutional question is raised, and the Superior Tribunal de Justiça (STJ).56 Appeals in cases involving the MCI typically reach the STJ as the third and last instance for appeals. However, in Brazilian legal practice it is not particularly hard to make the case that a constitutional controversy is involved. In the case of the MCI, for example, it can be argued that the (mis)application of the MCI in a given case violates free speech or privacy, thus potentially triggering the STF’s jurisdiction. Still, in quantitative terms, decisions on MCI-related issues are expected to come more often from the STJ than from the STF.

The relationship between high court rulings (‘jurisprudência’) and the behaviour of the lower courts is a complex issue in Brazilian law. Formally, there is no system of binding precedent. When a high court decides a concrete case, the conventional, traditional view is that such a ruling is binding only on the litigating parties; Brazilian judges have traditionally been trained to see high court interpretations of the law, and even of the constitution, as persuasive, non-binding authority. This scenario has changed considerably since the 1990s, and especially since the Judicial Reform of 2005 and the Civil Procedure Code of 2015.57 These most recent rules create many mechanisms by which high courts can quickly dismiss appeals that are in tension with their established case law. In the specific case of the STF, statutory and constitutional interpretations adopted when deciding appeals can, under certain procedural conditions, be treated as binding—for most practical purposes, if not as a formal matter—by the lower courts.58 Regarding the timing of decisions, judicial developments on the MCI are shaped by a couple of features of the justice system. First, the Brazilian judiciary—especially at the appellate level and in the higher courts—is known for its very large docket and potentially very long proceedings. It is not rare for appeals at the STJ and STF to take several years before being decided. Secondly, and most importantly, at the STF and the STJ the reporting judge in a case and a panel’s presiding judge have complete discretion, in practice, to decide when a case is ready for judgment. There is no binding deadline or effective judicial custom in place constraining these judicial actors in that regard.59 This arrangement means that, in spite of the very large backlog, some cases are selected for a quick decision, while others will linger in the docket for years and sometimes decades. The combination of these features makes it very hard to predict when controversies involving new statutes, such as the MCI, will be settled by the STJ or perhaps the STF. Indeed, as we will see, some key lawsuits concerning the interpretation and implementation of the MCI have been lingering in these two courts’ dockets for years, and there is no reliable way of estimating when they will be decided.

54  For general information on the Brazilian judicial system, see Keith Rosenn, ‘Judicial Review in Brazil: Developments under the 1988 Constitution’ (2000) 7 Sw. J. L. & Trade Am. 291.
55  See Constitution 1988, art. 109(I) (Bra.).
56  While all trial judges and the majority of appellate judges in both state and federal courts are career professionals, who enter the judiciary by means of a public examination at the trial level, high court judicial appointments are more political. STF judges are appointed by the President and confirmed by the Senate; STJ judges follow the same procedure, but the appointees must be chosen from a list of names drafted by the Prosecutor’s Office (Ministério Público), state courts of appeals, and federal courts of appeals.

3.2  Relevant Decisions in High Courts

In its pre-MCI decisions, as we saw earlier, the STJ had been pointing in conflicting directions. Depending on how they articulated legal ideas of strict liability, joint liability, and liability arising from ‘risks’ created by economic activity, STJ decisions could move either towards or away from the idea of safe harbours for internet intermediaries that promptly removed illegal content, when requested to do so, and that provided the means to identify users engaged in illegal activities.60 The MCI was drafted, among other reasons, precisely to eliminate some of these legal uncertainties. With the MCI, Brazilian legislation expressly created a safe harbour provision as a general rule. The law became an important line of defence not only for internet companies but also for freedom of expression.

57  See, generally, Maria Angela de Santa Cruz Oliveira and Nuno Garoupa, ‘Stare Decisis and Certiorari Arrive to Brazil: A Comparative Law and Economics Approach’ (2012) 26(2) Emory Int’l L. Rev. 555, 555–98.
58  For an overview of these reforms, see Maria Angela de Santa Cruz Oliveira, ‘Reforming the Brazilian Supreme Federal Court: A Comparative Approach’ (2006) 5 Wash. U. Global Stud. L. Rev. 99.
59  See Diego Werneck Arguelhes and Ivar Hartmann, ‘Timing Control without Docket Control: How Individual Justices Shape the Brazilian Supreme Court’s Agenda’ (2017) 5(1) J. of Law and Courts 105–40 (including a discussion of (discretionary) timing control mechanisms at the level of the STF).
60  See e.g. STJ IP DA SB v Google [2010] REsp 1.193.764-SP (Bra.).


Currently, more than 100 decisions concerning the liability of internet intermediaries have been issued by the STJ. In a recent official compilation of its decisions on the topic, the court affirmed that its case law has repeatedly stated that internet intermediaries: (1) are not strictly liable for user-generated illegal content; (2) cannot be compelled to pre-emptively filter information uploaded by users; (3) must remove infringing content as soon as they are made aware of illegal content on their platforms, and have to pay damages if they fail to do so; and (4) must develop and maintain minimally effective mechanisms to enable the identification of their users.61 The STJ decisions also seem to assert, as a general rule, that those seeking the removal of infringing content on the internet must clearly specify that content’s specific URL. Indeed, the STJ takes the indication of the URL as a requirement for any court order determining content removal, in order both to protect application providers and to mitigate negative impacts on freedom of expression. The STJ’s case law also emphasizes that the use of URLs to clearly identify the infringing content to be removed is a safe, objective criterion for the assessment of OSPs’ compliance with court orders.62 Another relevant feature of the STJ’s case law concerns the liability of search engines for the indexing of content publicly available on the internet. As mentioned in Section 2.1, the pre-MCI decisions by the STJ had asserted that search engines would not be liable for the content indexed by their services. However, the court maintained the same position even after enactment of the MCI, in a case decided in December 2016. Following the pre-MCI rationale, the STJ’s Third Chamber decided that a search engine cannot be compelled to remove infringing content from its index.63 However, this position is not settled within the court.

In a case decided in May 2018 involving a ‘right to be forgotten’ claim,64 the same Third Chamber decided, by a majority, that Google had to delist from its search results page a number of links pointing to news content connecting the plaintiff to a possible fraud in a civil service public examination, whenever the search query was performed using the plaintiff’s name. Interestingly, the decision is not based on data protection rules. It was issued prior to the recently approved (and still not in force) Brazilian Data Protection Law, which created a legal framework in Brazil very similar to the current European laws on data protection.65 Therefore, many important issues were not clearly addressed in the decision, including (and most importantly) the legal grounds that would justify the delisting from search engines of content that could be considered perfectly legal.66

61  See ‘Provedores, redes sociais e conteúdos ofensivos: o papel do STJ na definição de responsabilidades’ (Superior Tribunal de Justiça Notícias, 17 September 2017).
62  See e.g. STJ Cristiane Leal de Oliveira v Google [2018] REsp 1.698.647 (Bra.). The decision mentions that, without the specific URL, the intermediary would have to monitor all the content. As mentioned earlier, prior to the MCI different panels of the STJ (the third and fourth chambers, specifically) were issuing conflicting decisions on the necessity of identifying the URL of the infringing content. The requirement of URL identification eventually prevailed in the decision of the STJ’s Second Section (which gathers the third and fourth chambers of the court) in REcl 5.072/AC, decided in December 2013. The same conclusion was reached by the Second Section in a case involving copyright infringement on the Orkut platform (REsp 1.512.647-MG, decided May 2015).
63  In that case, the plaintiff was seeking to remove from Google’s index the links to search results that would violate her privacy. See STJ SMS v Google [2016] REsp 1.593.873-SP (Bra.).
64  See STJ Denise Pieri Nunes v Google [2018] REsp 1.660.168-RJ (Bra.).

3.3  Cases Pending Before the Federal Supreme Court

In addition to the pre-MCI Aliandra case (discussed in Section 2.1), three relevant cases are still pending before the STF. In the first case (Recurso Extraordinário no. 1.037.396), the STF has accepted a discussion of the constitutionality of the MCI’s article 19.67 Two other cases (ADI 5527 and ADPF 403/2016)68 involve recent trial court decisions determining the nationwide suspension of the WhatsApp service for failing to provide information requested by authorities during the course of criminal investigations.

(a) The constitutionality of the MCI’s article 19. In a case originally brought before the small claims courts of São Paulo, the plaintiff requested the exclusion of a fake profile that used her name, the payment of monetary damages from Facebook, and the IP address of the computer used to create and manage the profile.69 The trial judge agreed with the plaintiff’s claims, with the exception of the payment of monetary damages, as he considered that Facebook had acted promptly after the court ordered the removal of the profile. After the plaintiff’s appeal, the 2nd Panel of the Piracicaba Special Appeals Court (Colégio Recursal) decided that Facebook was liable for not removing the fake profile and for not providing the proper tools to remove the content. The judges decided that article 19’s limited intermediary liability was incompatible with consumer protection rules enshrined in the Brazilian Constitution and the Consumer Protection Code, and awarded the plaintiff 10,000 reais in damages.70 On appeal, Facebook requested the STF to affirm the constitutionality of article 19, dispelling the alleged conflict between the MCI and the consumer protection rules. The STF has so far only determined that the constitutional controversy underlying the appeal has ‘general repercussions’ and has therefore agreed to decide the case.71 This is one of the most relevant pending cases in Brazil on the issue of intermediary liability, and the fate of the safe harbour regime adopted in article 19 of the MCI hinges on the STF’s decision.

(b) The ‘WhatsApp blockage’ cases in the STF. Since WhatsApp arrived in Brazil, trial judges have ordered the suspension of the messaging application (owned by Facebook) on four different occasions, three of which resulted in the complete blocking of the service nationwide. These injunctions were issued due to the company’s alleged non-compliance with previous judicial orders, in the course of criminal investigations, requesting data about investigated individuals. As some of the cases were not public, it is not entirely clear whether they involved requests for data that the company would be required to provide under the MCI (e.g. application logs), or whether they involved the actual content of WhatsApp conversations, which would be protected by end-to-end encryption. The blockages were extremely unpopular and heavily criticized, leading two political parties to file constitutional review lawsuits directly before the STF. In ADI 5527,72 filed by the Partido da República (PR), the main claim is that article 12, III and IV of the MCI should be considered unconstitutional. These are the legal provisions that allow for the suspension of services that fail to protect users’ personal data and privacy. In ADPF 403,73 filed by the Partido Popular Socialista (PPS), the plaintiff asked the STF to declare the same provisions unconstitutional in order to protect freedom of expression (art. 5, IX of the Brazilian Constitution), and because suspending a service in that fashion would be a clearly disproportionate measure.
At the time of writing, the cases are still pending before the STF.74

65  Brazil’s Data Protection Law will become enforceable in August 2020. See Regulation 2016/679/EU of the European Parliament and the Council on the General Data Protection Regulation (the GDPR) [2016] OJ L119/1 or Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31.
66  Other issues would include the public nature of the plaintiff (a public prosecutor, in that case), and the impact of the passage of time on the recognition of a ‘right to be forgotten’ in a given case. Most of these issues were completely ignored by the decision, which does very little to provide legal certainty as to the existence and applicability of a right to be forgotten in Brazilian law. For a discussion of recent developments around the right to be forgotten in Brazil and in the region, see Diego Werneck Arguelhes and Luiz Fernando Marrey Moncau, ‘Privacy Law in Latin America’ in Conrado H. Mendes and Roberto Gargarella (eds), The Oxford Handbook of Constitutional Law in Latin America (OUP 2020).
67  For more information and documents, see Felipe Octaviano Delgado Busnello, ‘Case n. 1037396’ (WILMap, 4 April 2018).
68  See Luiz Fernando Marrey Moncau, ‘Supreme Court, Arguição de Descumprimento de Preceito Fundamental 403/2016’ (WILMap, 19 April 2016).
69  See Delgado Busnello (n. 67).
70  ibid.

4.  The MCI and Digital Constitutionalism

Digital constitutionalism (DC) has emerged as a term used to link ‘a constellation of initiatives that sought to articulate a set of political rights, governance norms, and limitations on the exercise of power on the internet’.75 Instead of focusing on the technical architecture of the internet, the documents adopt a political perspective by committing to specific rights and legal principles that must shape online interactions between citizens, companies, and the state.76 While DC and related expressions have been employed in a variety of senses since the 1990s, they are arguably connected by the commitment to create ‘a normative framework for the protection of fundamental rights and the balance of powers in the digital environment’.77 As a full review of these debates is beyond the scope of this chapter, in this section we briefly attempt to contextualize and position Brazil’s MCI within the broader backdrop of this ‘constellation of initiatives’ and the normative commitments they express.78 In one influential attempt to map the state of DC initiatives globally, Redeker and others identified thirty-two documents embodying proposals to regulate the internet with the following shared traits: (1) substantively, they address ‘fundamental political questions that have an inherently constitutional character’, by articulating rights, structuring governance for both state and private actors online, and laying down limits to state power; (2) they ‘speak to a particular and defined political community’; (3) they at least aspire to formal and legitimate political recognition in such a community, including the appropriate enforcement mechanisms, while not necessarily being formally enacted as a statute or constitutional norm; (4) they are, at least on some level, comprehensive efforts, in the sense that they favour broad principles instead of focusing on specific policy issues; (5) they ‘represent the views of some organization, coalition, state or other organized group of some kind’, thus excluding documents that express the position of specific individuals.79 Evaluated against these criteria, the MCI arguably stands out as a particularly intense manifestation of DC. Out of the thirty-two documents compiled by Redeker and others, the MCI is the only one that currently has the status of a formal statute. Most examples of DC worldwide are non-binding charters, official statements, or bills that are still being debated by legislatures.80 Moreover, although in the authors’ classification the MCI is coded as originating from the government, and while it is true that the executive branch

71  ibid.
72  For a summary of the case, see Felipe Mansur, ‘ADI 5527 and appblocks: a problem in the wording of the Law or in its Interpretation?’ (Bloqueios.info, 18 November 2016).
73  For a summary of the case, see Paula Pécora de Barros, ‘ADPF 403 in STF: are whatsapp blocking constitutional?’ (Bloqueios.info, 21 November 2016).
74  A timeline of all service blocks in Brazil and case materials is available at .
75  Redeker, Gill, and Gasser (n. 31) 303. Claudia Padovani and Mauro Santaniello, ‘Digital constitutionalism: Fundamental rights and power limitation in the Internet ecosystem’ (2018) 80 Int’l Communication Gazette 4, 296.
76  See Micheal Yilma, ‘Digital privacy and virtues of multilateral digital constitutionalism—preliminary thoughts’ (2017) 25 Int’l J. of L. and IT 2, 117; Padovani and Santaniello (n. 75) 296 (‘these documents have a political rather than a technical primary scope . . . digital constitutionalists foster a normative framework embedded in, or informed by, international human rights law and domestic democratic constitutions’).
77  Edoardo Celeste, ‘Digital Constitutionalism: Mapping the Constitutional Response to Digital Technology’s Challenges’, HIIG Discussion Paper Series No. 2018-02 (2018) 15 (also noting that there is no consensus among the analysed scholars on the aim of digital constitutionalism).
78  See Celeste ibid. for a useful critical review of the literature, as well as a proposal to organize the debate in conceptual terms.
79  The emphasis is on ‘constitutionalism’ not on ‘constitutions’ because the aspirations and values that underlie those documents are, from that perspective, more important than their formal status in the legal system. See Redeker, Gill, and Gasser (n. 31) 303.
80  See Padovani and Santaniello (n. 75) 298.


played an important role in shaping and promoting the proposal, the MCI was enacted by the Brazilian Congress after a high level of participation and engagement by civil society groups, scholars, and business interests. While the executive branch did play an important role, the final text was hailed in the Brazilian political community as a landmark statement of the binding principles that would shape online life from that moment forward.81 This perspective on DC emphasizes the similarities and continuities between the documents and traditional constitutional concerns, arguments, ideas, and texts. It includes documents that, while not having constitutional or even statutory status, adopt constitutional language and aspirations in attempting to regulate the internet, focusing on broad principles aiming to protect rights and regulate how power can be exercised over individuals online. The exact scope of application of these aspirations, however, has been debated. In the most recent version of their study, Redeker and others acknowledge that, across the set of documents they map, ‘power’ might or might not include both public and private actors.82 In contrast, some authors use DC to refer specifically or primarily to attempts to translate constitutional aspirations into the realm of private companies’ governance.

In such perspectives, manifestations of DC are seen first and foremost as a reaction to the fact that, in contemporary societies, and especially on the internet, individual rights are put at risk by private actors as often as (if not more than) by state or quasi-state actors.83 Private power is typically exercised online by corporations that control, in many ways, a user’s access to and enjoyment of rights and liberties online.84 After the 2000s, corporations increasingly became the central arena for the challenge of controlling power and protecting rights online.85 In this scenario, beyond noting descriptive regularities in such movements for protecting rights online, some commentators have adopted a normative dimension and proposed specific versions of constitutionalist ideology for the virtual world.86 In doing so, they make the claim that we should update and translate the traditional concerns of constitutionalism—protecting rights and limiting power—into a realm where the enjoyment of rights is very much threatened by private companies that hold the resources and tools to shape our online lives.87

How do the MCI’s provisions measure up against these aspirations? The MCI has been criticized for simply affirming, on the internet, the applicability of the same rights as those enjoyed outside the virtual sphere.88 But this, in itself, is a consequential legislative decision, with implications beyond the specific rights affirmed. Asserting the applicability of fundamental rights online provides a justification for stricter state regulation of the private governance of the internet. Moreover, in certain legal communities it might also pave the way, at least in principle, for the possibility of judicial enforcement against private actors, such as OSPs, of freedom of expression, access to information, privacy, and property, among other fundamental rights. The MCI includes general provisions that express a commitment to protecting fundamental rights against private parties, although the exact scope of this protection is not necessarily the same for all the rights included in the statute. Article 8 states that ‘guaranteeing the rights to privacy and freedom of expression in communications is a precondition for the full exercise of the right to access to the internet’.89 Article 7 establishes that ‘access to the internet is essential to the exercise of citizenship’.90 These broad principles are then detailed in several clauses, which create specific rights that, in the MCI’s terms, override contractual clauses that purport to regulate those matters differently.91 Consider the following examples.

First, the privacy and communications provisions. Although falling short of articulating a complete data protection regime, the MCI mandates that OSPs clearly inform their users of how their personal data is collected and processed, and users have the right not to have such data employed for purposes other than those for which consent was originally given.92 Moreover, article 10 and its provisions establish that data concerning internet connection and application logs, as well as personal data and the content of private communications, must be stored and made available in compliance with the rights to ‘intimacy, private life, personal honour, and the image of the parties directly or indirectly involved’. The MCI also states specifically that the content of communications can only be made available by means of a court order.93

Secondly, as already mentioned, article 7, IV of the MCI affirms the right of internet users not to be disconnected from the internet, except when the disconnection of the service is due to a debt directly originating from its use.94 Within the MCI framework, access to the internet is fundamentally connected to citizenship, and the specific right to stay connected to the internet is designed to be enforceable against some private parties (access providers). Article 7, V takes a further step and creates a right to the ‘maintenance of the quality of the internet connection contracted with the provider’.95 The wording of these provisions and the legislative history of the MCI leave no doubt that this limited right to internet access is enforceable against private companies.96 Beyond judicial enforcement, however, such provisions can serve to enable a framework for regulation aimed at limiting violations, by private companies, of other dimensions of individual access to the internet.

Thirdly, there is the issue of protecting freedom of expression against private parties. While the MCI states that freedom of expression will be protected online,97 it does not explicitly state that private parties must respect internet users’ freedom of expression to the same extent that state actors are bound. However, in article 19 the MCI seems to at least hint in that direction. Before laying out the rules for (conditional) intermediary liability, article 19 states that such rules are created ‘in order to ensure freedom of expression and prevent censorship’.98 Indeed, at least some trial and appellate court decisions have treated article 19 as a sufficient legal basis for enforcing users’ freedom of expression against OSPs.

81  The strong connection between national politics and the MCI might even be a problem for the ‘exportability’ of this proposal to other communities. See Guy Hoskins, ‘Draft Once; Deploy Everywhere? Contextualizing Digital Law and Brazil’s Marco Civil da Internet’ (2018) 19 Television & New Media 5 (arguing that the specific political conflicts that shaped the MCI’s legislative process, as well as Brazil’s recent dictatorial past, make it harder to see this document as a potential blueprint for affirming online rights in other contexts).
82  See Redeker, Gill, and Gasser (n. 31) 304 (‘[e]fforts towards digital constitutionalism may aim [to] limit the power of both public authorities and private corporations through the recognition of rights’).
83  See Nicolas Suzor, ‘The Role of the Rule of Law in Virtual Communities’ (2010) 25 Berkeley Tech. L.J. 1818, 1818–86. See also Celeste (n. 77) 15 (arguing that ‘digital constitutionalism . . . is a concept which refers to a specific context, the digital environment, where private actors emerge beside nation states as potential infringers of fundamental rights’).
84  See e.g. Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (2018) Social Media + Society 1.
85  Redeker, Gill, and Gasser (n. 31) 314.
86  See e.g. Suzor (n. 83) (for a specific analysis/application of constitutional, rule of law values to the governance of online platforms).
87  In this sense, see Suzor (n. 84) 13 (noting that ‘a process that seeks to articulate a set of limits on private power that will best encourage innovation and autonomy and simultaneously protect the legitimate interest of participants in the increasingly important virtual spaces’); Celeste (n. 77) 16 (highlighting that ‘in a context where both public and private actors can affect the protection of fundamental rights, the aim of digital constitutionalism does involve the limitation of power of both these categories of actors’).
88  See Medeiros and Bygrave, ‘Brazil’s Marco Civil da Internet’ (n. 1).
89  See MCI 2014 (n. 2) art. 8.
90  ibid. (‘Art. 7. Access to the internet is essential to the exercise of citizenship, and the following rights are guaranteed to the users’).
91  ibid. (‘Art. 8. The guarantee of the right to privacy and freedom of speech in the communications is a condition for the full exercise of the right to access to the internet. Sole Paragraph: Contractual clauses that are in breach of the provision [caput] above are void by operation of law, such as those that: I – cause an offense to the inviolability and secrecy of private communications over the internet; or II – in adhesion contracts, do not provide an alternative to the contracting party to adopt the Brazilian forum for resolution of disputes arising from services rendered in Brazil’).
In 2018, the Third Chamber of the Santa Catarina Court of Appeals (TJSC), for example, confirmed a trial court decision affirming that, in the absence of a specific court order, Google could not remove content posted by a user after receiving a notification from a third party claiming that there was a copyright violation. In their reading of article 19, the judges stated that ‘freedom of expression prevails until a court order says otherwise’, and found Google liable for damages caused by its violation of the user’s right.99 The majority opinion also found that YouTube’s terms of service were in violation of the MCI in that regard, and therefore should not be enforced.100 Similar decisions can be found in other state courts, and some of them also point to procedural concerns in the way OSPs deal with removal requests.101 That is, even when the MCI authorizes content to be removed, it establishes a procedure for user participation and defence that must be followed. Still, neither of the two high courts (the STF and STJ) has taken a clear position on the issue. It remains to be seen whether this is the direction that Brazilian judges will follow in implementing the MCI’s basic commitment to protecting rights in a virtual world shaped and governed by private actors.102

93  Art. 10(2) (‘The content of private communications may only be made available by court order, in the cases and in the manner established by law, and in compliance with items II and III of art. 7’).
94  See MCI 2014 (n. 2) art. 7(IV).
95  ibid. art. 7(V).
96  There is a Constitutional Amendment Proposal (PEC 185/2015) in Congress that, if approved, would add the ‘right to internet access’ to art. 5 of the Brazilian Constitution as an explicit, specific fundamental right. The proposal can be found at .
97  See MCI 2014 (n. 2) arts 3(I) and 8.
98  ibid. art. 19.
99  See Luciano Pádua, ‘Google é condenado por tirar paródias das músicas 10% e Malandramente do YouTube’ (Jota, 9 April 2018) (a case involving the removal, by YouTube, of a parody of a copyrighted song that had been posted on the platform).

5.  Concluding Remarks

Brazil’s MCI was the outcome of an open legislative process with high levels of public engagement from business, government, and civil society sectors. Against the previous scenario of legal uncertainty, it attempted to define a clear regime for intermediary liability, balancing freedom of expression, privacy, and due process concerns, as well as affirming a connection between access to the internet and citizenship. The MCI includes several provisions affecting the activities of internet intermediaries, concerning matters such as data retention, net neutrality, privacy, and consumers’ rights. With regard to intermediary liability, it established a broad immunity for internet access providers and a narrower, conditional immunity for other OSPs. As a general rule, application providers will not be liable unless they fail to remove infringing content after a court orders its removal. The only two exceptions crafted by the law are copyright violations and the non-consensual dissemination of intimate images (popularly known as ‘revenge porn’).

100  ibid. Note that the MCI includes a provision stating that art. 19 claims involving copyright violations will be covered by specific rules, to be enacted in the future. See MCI 2014 (n. 2) art. 2 (providing that ‘[t]he implementation of the provisions of this article for infringement of copyright or related rights is subject to a specific legal provision, which must respect freedom of speech and other guarantees provided for in art. 5 of the Federal Constitution’). 101  See e.g. Redação, ‘Justiça determina que rede social restabeleça página cancelada’ (Tribunal de Justiça do Estado de São Paulo, 3 May 2018) ; BEA, ‘Turma mantém condenação do facebook por desativar página de deputado’ (TJDFT, 30 July 2018) . 102  The provisions mentioned earlier can still be considered limited in their scope if measured against a more ambitious normative vision of digital constitutionalism. E.g. the MCI contains very little in terms of procedural guarantees and decision-making rules that can shape how the platforms themselves are governed. It prohibits platforms from violating certain substantive rights of their users, but it does not establish specific procedures on how their norms of governance must be created, changed, and enforced. See Suzor (n. 83) for an outline of such a proposal.


By requiring such a court order, the MCI rules out strict liability regimes and proactive monitoring of content. Moreover, requiring specific and clear identification of the infringing content prevents the imposition of ‘staydown’ obligations (i.e. the obligation to ensure that the removed content ‘stays down’ and does not reappear on the platform or service).103 In practice, however, other laws may come into play in such cases, depending on the type of content disseminated on internet platforms. Brazilian electoral laws—which are typically updated before each election and play an important role in the removal of online content during electoral periods—currently reaffirm the MCI standards and clearly state that a URL must be indicated by any party seeking content removal. In doing so, they replicate judicial constructions of the MCI provision determining that a court order must provide a clear identification pointing to the specific infringing content, ‘allowing the unquestionable location of the material’. Electoral laws, however, go beyond the MCI in providing that content removal orders must not require the removal of content in under twenty-four hours (except in ‘extraordinary’ cases). The Child and Adolescent Statute (ECA) also established a specific framework for content removal, by defining crimes relating to distributing or publishing, by any means, photographs, videos, or any records of a sexual nature involving children or adolescents. In practice, this means that service providers must respond to takedown notices to avoid criminal liability under the ECA. That said, the MCI’s rights and principles have only just begun to be litigated in the courts, and three important cases are pending before the STF. The most important case will discuss the constitutionality of the MCI’s article 19, the main provision on the liability of internet intermediaries for user-generated content.

Two other cases involve recent trial court decisions determining a nationwide suspension of the WhatsApp service for failing to provide information requested by the authorities during the course of criminal investigations. As these cases illustrate, the MCI’s rights and principles will have to be interpreted by judges, companies, and regulators in the near future. In this process, many of the conflicts arising in the MCI’s drafting process will resurface, potentially leading to both expansive and restrictive reinterpretations. In that process, the question of the formal status of the MCI will be a key feature of what the future holds for the regulation of the internet in Brazil. Formally, the MCI is simply an ordinary federal statute; it does not have constitutional status—or even the ‘supra-legal, but infra-constitutional’ status that the STJ has conferred on international human rights norms.104 The MCI can thus be superseded by future legislation, and new rounds of legislation are expected on related issues such as copyright, trade mark, and publicity and image rights. Moreover, on these topics, it is still debatable whether even a more recent statute like the MCI has successfully displaced, in the judicial mindset and judicial decisions, older

103  See Chapter 27. 104  See Antonio Moreira Maués, ‘Supra-legality of International Human Rights Treaties and Constitutional Interpretation’ (2013) 18 SUR Int’l J. on Human Rights.


statutes of the same rank, for example the Civil Code or the Consumer Protection Code. As we have seen, there are still high-profile cases on these statutory conflicts waiting to be decided by the STJ and even the STF. While it is unlikely that the STF will consider large parts of the MCI to be unconstitutional at that point, the court might still restrict its scope by giving greater weight to rules emanating from different legal regimes. That is not to say that the MCI is in a particularly vulnerable position. In Brazil, many high-profile, landmark statutes are in the same situation as the MCI: the Civil Code, the Criminal Code, the Code of Civil Procedure, and the Consumer Protection Code are all ordinary statutes that, considering how much thought, effort, and (in the case of the MCI) public participation has gone into their drafting, are expected to last, as a coherent whole, for longer than a few legislatures. But this normative aspiration of scholars and practitioners has rarely carried the day in Brazil. Even the high-profile Civil, Criminal, and Procedure Codes are often tweaked and adjusted by specific, posterior laws—and their relationship to one another, however important each might be in principle in its specific domain, is always a matter to be decided and shaped by judicial decisions. At least in its first half-decade of existence, however, and in spite of permanent lobbying efforts by business sectors, the MCI has so far resisted significant change, either through new rounds of legislation or through judicial interpretation. While this resilience remains to be tested and observed over a longer period, it might suggest that the MCI has gathered political and doctrinal relevance beyond its formal status as an ordinary federal law—a consequential outcome for global debates on digital constitutionalism.


Chapter 11

Intermediary Liability in Africa: Looking Back, Moving Forward?

Nicolo Zingales*

How far does the conventional discourse on the protection of intermediaries for third party content apply to law and policymaking in the African continent? The dynamics of intermediary liability differ significantly in the African context. This chapter reviews key developments in the region, documenting a trend of progressive diffusion of intermediary liability protections running in parallel to an increasing pressure on intermediaries to fulfil broad and open-ended public policy mandates. It is argued that the tension between these two forces undermines the value of intermediary protections, creating an uncertain terrain for a wide range of actors, and potentially affecting both technological progress and the creation of local content in one of the most underdeveloped regions of the world. This is seen against the backdrop of the great potential offered by the African Union in driving harmonization in the field, and the blueprint of the most sophisticated model of intermediary protections in the region: the South African Electronic Communications Act.

1.  The Slow Rise of the Intermediary Liability Discourse in Africa

Limitations of intermediary liability for third party content have not traditionally taken centre stage in discussions about the regulatory framework for information technology

*  The author would like to acknowledge the support of the Fundação Getulio Vargas and the CyberBRICS project, in the context of which he was hosted at the FGV Law School in January and February 2019. See . © Nicolo Zingales 2020.


in the African region, which have typically focused on more immediate concerns such as internet access (or the lack thereof), the use of technology for developmental purposes,1 and cybersecurity. This may help to explain why liability limitations began to surface relatively late compared to other jurisdictions around the world. By way of illustration, a series of studies2 conducted by the Association for Progressive Communications (APC) between 2012 and 2013 on intermediary liability in Nigeria, Kenya, South Africa, and Uganda showed that only half of the surveyed countries provided for some sort of limitation, one of which was of fairly recent introduction (Uganda, in 2011). Note that this was more than ten years after the establishment of liability safe harbours in the e-Commerce Directive in Europe, and almost thirteen and fifteen years, respectively, after the ‘safe harbours’ of the Digital Millennium Copyright Act (DMCA) and the Communications Decency Act (CDA) in the United States. One may naively question the reason for such a lag, especially considering that policymakers in Europe and the United States had been discussing intermediary liability across the Atlantic since the 1990s as an integral part of concepts such as the ‘information superhighway’3 and the ‘information society’.4 However, it is possible to offer at least one economic reason why intermediary liability has been far from a priority in the African continent: the major connectivity gap that has affected the continent since the beginning of the internet hindered not only the developmental potential of African markets, but also the hosting of African internet sites (mainly based in the United States and Europe)5 and the ability of intermediary services in the region to make substantial profits.
This, along with a perception of less reliable enforcement of laws across the continent6 and a widespread use in international contracts of exclusive jurisdiction clauses electing forum outside the African region, may well have served as a sufficient deterrent to the unfolding of important intermediary liability battles. On the other hand, precisely in the light of the crucial role of internet service providers (ISPs) in delivering connectivity across the region, one could argue that failing to explicitly regulate the liability of technology providers in large infrastructure investments in the region amounts to a glaring omission from a policymaking perspective. Understanding the net value of intermediary protection legislation requires an appreciation of the realpolitik of liability in each jurisdiction, which goes beyond the scope of this chapter. However, an analysis

1  This is the field of research and policy called ‘information technology for development’, or ‘ICT4D’. One notable initiative in this space is InfoDev, a global multi-donor programme in the World Bank Group that supports growth-oriented entrepreneurs through business incubators and innovation hubs and aims to foster the attainment of African development goals by building local capacity. See . 2  The entire collection of APC papers on this theme can be found at . 3  See US Vice-President Al Gore’s remarks at the International Telecommunications Union on 21 March 1994 . 4  European Commission, ‘Europe’s Way to the Information Society. An Action Plan’ COM(94) 347 (1994). 5  See e.g. Mike Jensen, ‘Africa: Internet Status, 10/29/00’, University of Pennsylvania African Studies Centre (2000) ; and the periodic status reports at . 6  For concrete measurements, see the World Bank, ‘Strength of legal rights index’ .


of legal provisions addressing intermediary liability provides an interesting picture of this evolving framework.

2.  First-generation Liability Limitations

2.1  South Africa

Whatever the reason for the slow emergence of the concept of intermediary liability in the continent, it is apparent that South Africa was at the forefront of this discussion. As early as July 1999, just about the time the EU legislator was finalizing the drafting of the framework for intermediary liability in Europe, the South African Department of Communications issued a Discussion Paper on E-Commerce.7 The department called for laws regarding the liability of online intermediaries that ought to be ‘clear about the role of each participant along the chain of on-line services and transactions’,8 noting that such intermediaries seldom have direct responsibility for, or even knowledge of, the specific content of the websites that may be hosted on their facilities.9 The discussion teased out in the paper led to another document for public consultation, the Green Paper on Electronic Commerce for South Africa, which raised a number of fundamental questions about the nature of enforcement in an intermediated environment.10 Ultimately, the process culminated in the enactment in 2002 of the Electronic Communications and Transactions Act (ECTA), which constitutes to date the most articulate framework for dealing with intermediary liability in Africa.11 The present section illustrates that framework in detail. ECTA was above all a piece of legislation designed to enable the growth of e-commerce in South Africa. Among its stated goals, it aimed to remove and prevent barriers to electronic communications and transactions in the Republic; promote legal certainty and confidence

7  See Department of Communications Republic of South Africa, Discussion Paper on Electronic Commerce Policy (1999) . 8  ibid. 29. 9  ibid. 10  See Department of Communications Republic of South Africa, A Green Paper on Electronic Commerce for South Africa (2000) . (It included the following questions: ‘1. How should liability be determined in copyright infringements given intangibility and documents in transit? 2. What is the potential liability of end-users “reproducing” infringing copies (transient copies) of copyrighted works by the mere act of viewing them on their PCs, etc.? 3. How do we strike a balance of enforcing and monitoring intellectual property rights with the need to promote use of e-commerce and cyberspace publishing? 4. Is framing (the incorporation of a website within a website) and hyperlinking (the creation of digital paths linking two or more websites) an infringement of copyright? 5. How are the expenses, efforts, duration and technical evidence demands for enforcing copyright protection in court going to be implemented?’) 11  Electronic Communications and Transactions Act (ECTA) no. 25 of 30 August 2002 (S. Africa).


in respect of electronic communications and transactions; ensure that electronic transactions in the Republic conform to the highest international standards; and encourage investment and innovation in respect of electronic transactions in the Republic.12 In line with those aims, ECTA devoted an entire chapter (Part XI) to limitations of liability for service providers, defined as ‘any person providing information system services’, which includes ‘the provision of connections, the operation of facilities for information systems, the provision of access to information systems, the transmission or routing of data messages between or among points specified by a user and the processing and storage of data, at the individual request of the recipient of the service’. This definition was broad enough to capture four different types of activity, mimicking the corresponding categories of intermediation enshrined in the US DMCA: mere conduit; caching; hosting; and information location tools. Much like in the DMCA and in the EU’s e-Commerce Directive, the safe harbours were accompanied by the overarching principle that the law cannot be construed to require a service to monitor the data which it transmits or stores, or to actively seek facts or circumstances indicating an unlawful activity. However, this was made subject to the proviso that the Minister of Communications may, in accordance with section 14 of the Constitution (which enshrines the right to privacy), prescribe procedures for service providers to (a) inform the competent public authorities of alleged illegal activities undertaken or information provided by recipients of their service; and (b) communicate to the competent authorities, at their request, information enabling the identification of recipients of their service.
Thus, it is important to clarify from the outset that the safe harbour provisions do leave some wiggle room for the government to require intermediaries to deviate from the ‘no monitoring’ principle for special categories of cases, although this power has never been used to date. It is also worth noting that, as with the safe harbour for hosts in the DMCA, it is not crystal clear what happens if a notification does not follow the specified takedown procedure, specifically whether it may nonetheless be sufficient to trigger actual or constructive knowledge on the part of the provider. Conceivably, traditional common law standards apply. For example, the common law of defamation finds negligence sufficient to trigger liability of the media.13 Similarly, in the field of copyright South African courts have defended the idea that indirect liability is triggered when a copyright owner suffers an economic loss that was foreseeable by a defendant who was under a legal duty to prevent it;14 and have rejected defences grounded on pleaded ignorance of the illegality of the works being distributed through the services of the ISP, finding sufficient the notice of facts that would

12  ibid. s. 2. See also the preamble to the Act, which lists as its purpose ‘to provide for the facilitation and regulation of electronic communications and transactions; to provide for the development of a national e-strategy for the Republic; to promote universal access to electronic communications and transactions and the use of electronic transactions by SMMEs; to provide for human resource development in electronic transactions; to prevent abuse of information systems; to encourage the use of e-government services; and to provide for matters connected therewith’. 13  See s. 4(4). 14  See Arthur E. Abrahams & Gross v Cohen and others [1991] (2) SA 301 (S. Africa).


suggest to a reasonable person that a copyright infringement was being committed.15 What constitutes knowledge will thus vary depending on the type of content in question, a point of difference with the US (copyright-focused) framework which creates some problems in the overlap between ECTA and other regimes.16 An even more distinct peculiarity of the South African model of intermediary protections is that it conditions the limitations of liability on the fulfilment of two additional requirements: (1) the intermediary’s membership of an industry representative body (IRB); and (2) the adoption and implementation of the corresponding code of conduct.17 Section 71 clarifies that such recognition can only be obtained upon application to the Minister, provided that (a) members of the representative body under examination are subject to a code of conduct; (b) membership is subject to adequate criteria; (c) the code requires continued adherence to adequate standards of conduct; and (d) the representative body is capable of monitoring and enforcing the code of conduct adequately. The proportionality of the first requirement has been questioned, particularly for small service providers,18 as the imposition of upfront costs in that regard may effectively chill conduct that would otherwise benefit from the liability limitations, including speech and business activity that would be protected in other legal regimes.19 The third requirement, concerning the ‘code of conduct’, was integrated by the Minister of Communications in 2006 with the issuance of the ‘Guidelines for recognition of industry representative bodies of information system service providers’.20 The Guidelines largely confirm the ‘hands-off’ approach chosen by the legislator by stating that ‘the only monitoring or control done by the State . . . is to ensure that the IRB and its ISPs meet certain minimum requirements’.
However, this does not take away the important point that, through the Guidelines and the ministerial approval, the government is able to subordinate liability limitations to the fulfilment of minimum standards of conduct, encourage the attainment of ‘preferred’ (i.e. non-compulsory) standards, and promote respect for a number of overarching principles.21 A crucial task of the Guidelines is to direct the IRB to define a specific takedown procedure, published on the IRB’s website and to which members must provide a link from their websites. The Guidelines point out that this procedure needs to be in line

15  See Gramophone Co. Ltd v Music Machine (Pty) Ltd and others [1973] (3) SA 188, 188 (S. Africa); Paramount Pictures Corp. v Video Parktown North (Pty) Ltd [1986] (2) SA 623 (T), 251 (S. Africa). 16  See s. 4(4). 17  See ECTA (n. 11) s. 72. 18  See Alex Comninos, ‘Intermediary Liability in South Africa’, APC Intermediary Liability in Africa Research Papers 3/2012 (2012). 19  Fees for membership of the Internet Service Providers Association (which has for a number of years been the only IRB) are not negligible, amounting to a minimum of R5,250.00 + R735.00 VAT per month (more than US$400 in total). Actual rates depend on the size of the applicant’s business. See ISPA, Membership . 20  Government Notice, Gazette 14 December 2006, no. 29474. 21  The minimum requirements include, e.g.: the adoption of professional conduct; the use of standard terms with a commitment not to spam or infringe intellectual property; the promise of reasonably feasible service levels; the protection of copyright and minors, and protection against spam and cybercrime; and the availability of a complaint procedure, a disciplinary procedure, and one for monitoring compliance. Preferred requirements entail higher levels of commitment under those categories.


with the requirement set out by section 77(1) of ECTA, which specifies the particulars a complainant has to provide in order to notify a service provider or its designated agent of unlawful activity (e.g. location, nature of the infringing material, remedial action required, and contact details). The same section grants service providers immunity from liability for wrongful takedowns in response to a notification, but imposes such liability on any person who has made material misrepresentations leading to damages. However, the absence of detailed provisions in the Guidelines creates a situation where ISPs are free not to establish any ‘counter-notice’ or ‘notice and putback’ mechanism allowing the user to respond to the allegations of infringement and request the restoration of the allegedly infringing content. In fact, the Internet Service Providers Association (ISPA)—the only IRB to date—refrained from inserting such a safeguard mechanism in the takedown procedure.22 This came under the spotlight in 2012, when the Minister of Communications proposed, in the Electronic Communications and Transactions Act Amendment Bill of 2012, the introduction of section 77A, entitled ‘Right to remedy on receipt of a take-down notice’. The amendment aimed, according to the Explanatory Note accompanying the Bill, to allow for a right of reply in accordance with the principle of administrative justice and the audi alteram partem rule. However, the mechanism by which it endeavoured to do so was wholly inadequate, as it merely requested ISPs to respond to a ‘first take-down notice’ within ten business days (or less, if the complainant could demonstrate irreparable or substantial harm), instead of informing the user concerned and allowing him to intervene in the process by making representations before the ISP.
Further, the proposed amendment did not foresee any kind of consequence for the ISP for failure to respond to such a notice: it established liability only in case of failure to implement a ‘final take-down notice’, meaning a notice that a complainant is entitled to issue if: (1) after due consideration of the response by the ISP, he considers that the matter has not been resolved to his satisfaction; or (2) he has received no response from the ISP within the allotted time period. Therefore, under the proposed amendment, it would ultimately be for the complainant to decide whether something should be removed by the ISP, much to the detriment of the principle of due process. An additional problem with the proposed amendment is that the genuineness of the adversarial process may be undermined by the misalignment between the interests of users and the incentives of ISPs: ISPs would not only be affected by the threat of liability for failure to remove potentially illegal content, but also further burdened by the costs involved in defending the user’s interest. Unsurprisingly, given the aforementioned flaws, the proposal never made it into law, with the result that the takedown process remains controversial—if not unconstitutional. While the South African model of liability limitations is far from perfect, it has nevertheless been crucial in setting a standard for the development of similar laws in the region. Although the majority of African countries still lack a dedicated framework for intermediary liability, three examples of jurisdictions that followed South Africa are provided below.

22  See ISPA, Take Down Procedure v 3.2 .



2.2  Ghana

Ghana’s model of liability protection was clearly inspired by the South African ECTA. Equivalent liability limitations were introduced in 2008 with the Electronic Transactions Act (ETA), which lists in its article 1 all the aforementioned ECTA objectives and follows a very similar structure. Liability limitations are covered in articles 90 to 93 of the Act, followed by a detailed takedown procedure (art. 94), the ‘no monitoring’ principle, and the carve-outs for contractual liability and compliance with orders imposed by a court or a competent authority (art. 95).23 Additional carve-outs are then provided by article 97, referring inter alia to obligations imposed under licensing or other regulatory regimes or under any law. The latter allows the government to deviate from the established intermediary liability protections, even without being required to follow a specific procedure as prescribed in ECTA. There are two more important nuances which make ETA different from ECTA. The first concerns the liability of service providers for wrongful takedowns, which is specifically prescribed—rather than excluded—by ETA. This is a welcome modification, as it encourages providers to consider the merits of a claim before proceeding to content removal. It thus reduces the well-known chilling effects connected with the notice-and-takedown process,24 whereby providers respond to one-sided incentives to avoid liability for failure to remove. The second is that ETA takes a more hands-off approach to industry self-regulation, envisaging the possibility of establishing a voluntary industry code to deal with any matter provided for in the Act;25 contrary to ECTA, however, it refrains from imposing membership of a representative body following a code of conduct as a condition for enjoying the liability limitations.
This, too, may be considered an improvement, particularly from the perspective of small to medium-sized enterprises, although on the other hand it lessens the ability of regulators to nudge compliance with other public policies.

2.3  Zambia

As was the case for Ghana, the Zambian framework for intermediary liability largely follows the South African model. The influence of the latter goes as far as to lead to an identical title for the implementing act, the Electronic Communications and Transactions Act of 2009. The objectives listed in the preamble to the Act differ only slightly from those of the South African ECTA.26 However, the most significant difference regarding intermediary liability concerns the liability for wrongful takedown. Perhaps due to the conflict in that

23  See Electronic Transactions Act (ETA) no. 772 of 2008 (Ghana). 24  See Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24 Harv. J. of L. & Tech. 171. 25  See ETA (n. 23) s. 89 (Ghana). 26  Most notably, they also include the creation of a Central Monitoring and Coordination Centre for the interception of communications.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Intermediary Liability in Africa   221 regard between the South African model and its first implementer (Ghana), the drafters of the Zambian ECTA decided to leave that matter outside the safe harbour, as is currently the case in many jurisdictions around the world. This means that service providers may in principle be subject to liability under common law with respect to any damages arising from wrongfully executed takedowns, although as a matter of fact it might not always be practical to establish the provider’s responsibility where it is merely a consequence of wrongful third parties notices. For that purpose, it may be relevant to examine the due diligence of a provider in affording aggrieved parties an opportunity to make representations and request a reinstatement of the content, as well as in considering applicable statutory defences.27

2.4 Uganda

In 2011, it was Uganda’s turn to introduce its own framework for electronic communications and transactions, titled the Electronic Transaction Act28 and bearing a great resemblance to the South African ECTA. For the sake of brevity, let us skip over the objectives, which are by and large identical to those identified in ECTA, and zoom in on three notable differences between the two Acts.

First, an important peculiarity exists in the wording of the conduit safe harbour, which unlike the South African model explicitly provides shelter against both civil and criminal liability.29 Secondly, it also differs from that model (and from international practice) in that it does not require the access provider to remain passive as to the initiation of the transmission, the selection of the addressee, and the modification of the transmitted content. Although one could argue that ‘merely providing access’ presupposes a passive character of the intermediary activity, it is doubtful that such phrasing constitutes a sufficiently clear delimitation of the activities for which the safe harbour may be invoked, especially insofar as the clarification in section 29(3) expands the notion of ‘providing access’ to include the automatic and temporary storage of the third party material for the purpose of providing access. Finally, under the heading of ‘jurisdiction of the courts’, the Act not only establishes the exclusive jurisdictional competence of courts presided over by a Chief Magistrate or Magistrate Grade 1 over all offences in the Act, but also provides that, notwithstanding anything to the contrary in any written law, such courts have the power to impose a penalty or punishment in respect of any offence under the Act. This provision thus dangerously opens the door to the imposition of liability under the Act even in circumstances falling within the safe harbours.

27  The existence of a duty to consider such defences was affirmed in the United States by the Ninth Circuit Court of Appeals in Lenz v Universal Music Corp. 801 F.3d 1126 (9th Cir. 2015) (US).
28  Electronic Transaction Act, Act Supplement no. 4 of 18 March 2011, Uganda Gazette no. 19 Vol. CIV of 18 March 2011 (Ug.).
29  ibid. s. 29(1) (according to which: ‘A service provider shall not be subject to civil or criminal liability in respect of third-party material which is in the form of electronic records to which he or she merely provides access if the liability is founded on—(a) the making, publication, dissemination or distribution of the material or a statement made in the material; or (b) the infringement of any rights subsisting in or in relation to the material’).

3.  Interstate Cooperation in the African Region: Implications for Intermediary Liability

Given the diverging state of intermediary liability across the African region, and the notable absence of intermediary liability protection in a number of African states, it is useful to examine what instruments and institutions may be leveraged to promote harmonization. The history of regional interstate cooperation in Africa began in 1963, when thirty-two signatories adopted the treaty establishing the Organization of African Unity (OAU). The purposes of the OAU included the promotion of the unity and solidarity of African states; the defence of their sovereignty, territorial integrity, and independence; and the eradication of all forms of colonialism from Africa.30 From the outset, there was an understanding that social and economic issues would be part of the purview of the OAU: the Assembly of the Heads of States and Governments, the OAU’s supreme organ, established a specialized ministerial commission to deal with social and economic issues. The OAU set as its goal the creation of stronger cooperation to intensify interstate trade and eliminate trade barriers, which led, for instance, to the Economic Community of West African States in 1975, the Preferential Trade Area for Eastern and Southern Africa in 1981,31 and the Treaty establishing the African Economic Community in 1991; support for further coordination among African states has meanwhile been given since 1958 by the United Nations Economic Commission for Africa. It was only in the 1980s that the scope of cooperation expanded to encompass human rights, with the signature of the African Charter on Human and Peoples’ Rights, which entered into force in 1986. The Charter marked a significant departure from the principle of unfettered state sovereignty on which the OAU had been based,32 conferring oversight and interpretation of the Treaty on the African Commission on Human and Peoples’ Rights.
The role of the Commission was further reinforced by the Protocol on the Establishment of an African Court on Human and Peoples’ Rights, adopted by members of the OAU in June 1998, which came into force on 25 January 2004.33

30  See OAU Charter of 25 May 1963, Art. 2.
31  See G.W. Uku, ‘Organization of African Unity’ (1994) 47(1) Pakistan Horizon 29, 32.
32  See Claude Welch, ‘The Organisation of African Unity and the Promotion of Human Rights’ (1991) 29(4) J. of Modern African Studies 535–55.
33  However, only eight of the thirty states that ratified the protocol have made the declaration recognizing the competence of the court to receive cases from NGOs and individuals.


The Charter provides a strong foundation for the protection of fundamental rights that might be affected by intermediary liability. However, unlike other regional instruments for the protection of human rights, it does not contain (with limited exceptions) a list of circumstances under which each right may be interfered with. Instead, it contains a general clause stating that ‘The rights and freedoms of each individual shall be exercised with due regard to the rights of others, collective security, morality and common interest’,34 and in one instance (the right to liberty) it entirely carves out from protection measures taken ‘in accordance with national law’. This approach can be criticized for departing from the strict necessity test applicable under other human rights frameworks, as it results in little external restraint on a government’s power to create laws contrary to the spirit of the right.35 In an effort to overcome this deficiency, a specific reference to the necessity test commonly used in international human rights law was introduced by the Declaration of Principles on Freedom of Expression in Africa, providing that ‘[a]ny restrictions on freedom of expression shall be provided by law, serve a legitimate interest and be necessary in a democratic society.’36 However, the Declaration merely serves as a recommendation, stating that ‘State Parties to the African Charter on Human and Peoples’ Rights should make every effort to give practical effect to these principles’.37

In 2001, the OAU was replaced by the African Union (AU), which entailed a stronger commitment to supranational cooperation and a broader set of institutions to develop AU policy: these included the Assembly, the Executive Council, the Pan-African Parliament, the Commission, the Permanent Representatives Committee, and, most importantly, the African Court of Justice to settle legal disputes among member states.
The aims of the AU, which administers a number of treaties amongst its fifty-five member states, include a number of objectives which could arguably be used as a springboard for the development of intermediary liability principles: for instance, ‘to accelerate the political and socio-economic integration of the continent’, ‘to promote sustainable development at the economic, cultural and social levels as well as the integration of African economies’, ‘to promote and protect human and peoples’ rights in accordance with the African Charter on Human and Peoples’ Rights and other human rights instruments’, and ‘to coordinate and harmonize the policies between the existing and future regional economic communities for the gradual attainment of the objectives of the Union’.38 To accomplish those goals, the AU does not need to go as far as adopting interstate treaties or agreements: the Pan-African Parliament can propose model laws for consideration and approval by the Assembly (composed of the heads of state).39

Unfortunately, the only fully fledged initiative taken to date directly relating to electronic communications is the AU Convention on Cyber Security and Personal Data Protection. The Convention was adopted in 2014 following a joint effort with the UN Economic Commission for Africa (UNECA) to establish cyber legislation based on the continent’s needs and adhering to the legal and regulatory requirements on electronic transactions, cybersecurity, and personal data protection. Among other provisions, the Convention requires states parties to take the necessary legislative and/or regulatory measures to criminalize a range of conduct that may be engaged in by an intermediary, for instance to ‘produce, register, offer, manufacture, make available, disseminate and transmit an image or a representation of child pornography through a computer system’, and to ‘create, download, disseminate or make available in any form writings, messages, photographs, drawings and any other representation of ideas or theories of racist or xenophobic nature through a computer system’.40 More worryingly from an intermediary standpoint, the AU Convention contains aggravations rather than limitations of liability: it requires states parties to ensure that, in the case of offences committed through a digital communication medium, the judge can hand down additional sanctions;41 and it explicitly envisages that states parties ensure that legal persons can be held criminally responsible for the offences listed in the Convention, without excluding the liability of natural persons who are perpetrators or accomplices.42

34  African Charter on Human and Peoples’ Rights of 27 June 1981 (entered into force 21 October 1986), OAU Doc. CAB/LEG/67/3 rev. 5, 21 ILM 58 (1982), Art. 27.
35  See Emmanuel Bello, ‘The Mandate of the African Commission on Human and Peoples’ Rights’ (1988) 1(1) African J. of Int’l L. 55; Richard Gittleman, ‘The Banjul Charter on Human and Peoples’ Rights: A Legal Analysis’ in Claude Welch and Ronald Meltzer (eds), Human Rights and Development in Africa (Albany 1984) 152–76.
36  African Commission, Declaration of Principles on Freedom of Expression in Africa of 2002, Art. 2.
37  ibid. Art. 16.
38  Constitutive Act of the African Union of 11 June 2000, Art. 3(c), (h), (j), and (k).

4.  Second-generation Liability Limitations: the Rise of Hybrid Instruments

It is important to note that the AU Convention mentioned in the previous section currently has no binding value: it has been ratified by only three AU Member States, as opposed to the fifteen required for it to enter into force.43 Nevertheless, it would be disingenuous to ignore the awareness-raising effect triggered by the process that led to the adoption of the Convention, which has put cybersecurity and the protection of personal data on the agenda of several AU Member States. This has also provided states with the opportunity of introducing liability limitations into their legal frameworks, sometimes with an added nuance when compared to the first generation of liability protection examined in Section 2. The present section thus offers a snapshot of recent legislative initiatives in the field bearing notable implications for the liability of internet intermediaries.

39  ibid. Art. 8 (relating to the Pan-African Parliament).
40  AU Convention on Cyber Security and Personal Data Protection of 27 June 2014, Art. 29(a) and (e).
41  ibid. Art. 31(2)(a).
42  ibid. Arts 31(2)(a) and 30(2).
43  ibid. Art. 36.

4.1 Malawi

In 2016, Malawi introduced comprehensive legislation, the Electronic Transactions Act,44 with the twofold overarching objective of regulating electronic transactions and attending to cybersecurity concerns.45 Specifically, in line with the AU Convention, the Act establishes in its Part X a number of offences relating to the improper use of computer systems, namely child pornography, cyber harassment, offensive communications, cyber stalking, hacking, unlawfully disabling a computer system, spamming, and the broader categories of illegal trade and commerce and attempting, aiding, and abetting a crime. As with the AU Convention, some of those offences lend themselves to an interpretation that would create intermediary liabilities: for instance, the distribution and transmission of any pornographic material through an information system,46 knowingly permitting any electronic communications device to be used for cyber harassment,47 and aiding and abetting any other person to commit any of the offences under the Act.48

At the same time, the Act devotes its Part IV to the ‘liability of online intermediaries and content editors and the protection of online users’. By way of a connecting thread between these different themes, Part IV begins with an affirmation of the freedom of communication, along with a closed list of exceptions.49 While these carve-outs may be used to justify obligations imposed on internet intermediaries in relation to the offences described above, perhaps most concerning is the open-endedness of the last exception: its reference to ‘the enhancement of compliance with the requirements of any other written law’ provides a hook to justify legislative action entailing further restrictions, ostensibly even where this may be in conflict with the liability protections enshrined in the Act.
This virtually nullifies the value of section 100, according to which ‘in case of inconsistency between provisions of the Act and a provision of any other written law relating to the regulation of electronic transactions, the former prevails to the extent of the inconsistency’.

The remainder of Part IV contains a rather conventional set of intermediary safe harbours, including immunity for wrongful takedowns, clarification of ‘no monitoring obligations’, and a saving clause for liability under contract, law, ministerial direction, and court order. Unlike other frameworks, however, Part IV includes in the safe harbours a general provision on the liability of an ‘intermediary service provider’ which exempts it from both civil and criminal liability where it has not initiated the message and has no actual or constructive knowledge,50 as well as for any act done ‘in good faith’ under the same section.51 How this general provision is supposed to interact with the more specific safe harbours remains unclear.

44  Electronic Transactions Act (ETA) no. 33 of 2016 (Mal.).
45  ibid. Preamble.
46  ibid. s. 85(2)(e) (emphasis added).
47  See ibid. s. 86(c).
48  See ibid. s. 93(2).
49  These include the prohibition of child pornography, incitement of racial hatred, xenophobia or violence, and justification for crimes against humanity; the promotion of human dignity and pluralism in the expression of thoughts and opinions; the protection of public order and national security; the facilitation of technical restrictions to conditional access to online communication; and the enhancement of compliance with the requirements of any other written law.

4.2 Ethiopia

Also from 2016 is the Ethiopian Computer Crime Proclamation,52 the overarching objective of which was ‘to provide for computer crime’ (sic).53 Some of the crimes established by the Proclamation could conceivably be committed by an intermediary: for example, the distribution of a computer device or computer program designed or adapted exclusively for the purpose of causing damage to a computer system, computer data, or network;54 the distribution of any picture, poster, video, or image through a computer system that depicts a minor or a person appearing to be a minor engaged in sexually explicit conduct;55 the intimidation of a person or his family with serious danger or injury by disseminating any writing, video, audio, or any other image;56 or the dissemination of any writing, video, audio, or any other picture that incites violence, chaos, or conflict among people.57 However, unlike other jurisdictions introducing cybercrimes, the prohibitions are crafted with explicit intentionality requirements, effectively ruling out intermediary liability in the absence of specific knowledge of the nature of the intermediated speech.

At the same time, knowledge of the commission of any of the crimes stipulated in the Proclamation, or even of the dissemination of ‘any illegal content data by third parties’ through the administered computer systems, triggers a duty to report to the police and take appropriate measures,58 which can be particularly effective for turning intermediaries into ‘cyber-police’59 when seen in conjunction with the blanket duty to retain computer traffic data for one year.60

Yet the most peculiar part of the Proclamation is section 16, which establishes the criminal liability of a service provider under sections 12 to 14 of the Proclamation (and, thus, not all of the offences mentioned earlier) for any illegal computer content data disseminated through its computer systems by third parties if: (1) it was directly involved in the dissemination or editing of the content data; or (2) upon obtaining actual knowledge that the content data was illegal, it failed to take any measure to remove or to disable access to the content data; or (3) it failed to take appropriate measures to remove or to disable access to the content data upon obtaining notice from the competent administrative authorities. In principle, this is the opposite of a safe harbour, as there is no guarantee in the Act that the intermediary will be shielded from liability by engaging in the opposite of the proscribed conduct. As a matter of practice, however, one can expect that the specific intentionality requirements in the offences will be sufficient to minimize the circumstances in which secondary criminal liability can be imposed. Nevertheless, the flip side of having some sort of liability limitation (rectius, clarification) in that law is that its focus on criminal responsibility potentially leaves intermediaries wide open to other types of claim, including civil liability, administrative orders, and injunctions.

52  Ethiopian Computer Crime Proclamation no. 958 of 2016, Official Gazette, 22nd Year no. 83, 91904 (Eth.).
53  ibid. Preamble.
54  ibid. s. 7(2).
55  ibid. s. 12(1).
56  ibid. s. 13.
57  ibid. s. 14.
58  ibid. s. 27.
59  For more discussion on the dangers associated with turning intermediaries into ‘cyber-police’, see Luca Belli, Pedro Augusto Francisco, and Nicolo Zingales, ‘Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police’ in Luca Belli and Nicolo Zingales (eds), Platform Regulations: How Platforms are Regulated and How They Regulate Us (Fundação Getulio Vargas Press 2017).
60  Computer Crime Proclamation, s. 24 (Eth.).

4.3 Kenya

Kenya adopted its cybersecurity law, the Computer Misuse and Cybercrimes Act, in 2018.61 Once again, it is clear from the preamble that the focus of the law is predominantly to provide for offences relating to computer systems, while ‘connected purposes’ is briefly mentioned at the end. In addition to more conventional cybercrimes, the Act captures some of the most recent concerns, such as the publication of false information with the intent that the data will be considered or acted on as authentic,62 its aggravated form when it is calculated to result or does result in panic, chaos, or violence among citizens or is likely to discredit the reputation of a person,63 cyber harassment,64 wrongful distribution of obscene or intimate images,65 and cyber-terrorism.66 As is the case with Ethiopia, these provisions are carefully subordinated to the existence of intentionality, which should in principle rule out the liability of service providers in the absence of knowledge. Nevertheless, the regime of liability protection is strengthened by section 56, which provides a tripartite framework of liability limitation:

• first, a service provider shall not be subject to any civil or criminal liability, unless it is established that it had actual notice, actual knowledge, or wilful and malicious intent, and not merely through omission or failure to act, and that it had thereby facilitated, aided, or abetted the use by any person of any computer system controlled or managed by a service provider in connection with a contravention of the Act or any other written law;
• secondly, a service provider shall not be liable under the Act or any other law for maintaining and making available the provision of their service;
• thirdly, a service provider shall not be liable under the Act or any other law for the disclosure of any data or other information that the service provider discloses only to the extent required under the Act or in compliance with the exercise of powers under Part IV (‘Investigation Procedures’).

It should be noted that the first type of limitation represents the opposite extreme of the safe harbour provided by the Ethiopian framework, parsing out the circumstances under which the provider incurs no liability. Furthermore, all three limitations ostensibly reach beyond the scope of cybercrime, excluding all types of civil and criminal liability and, in the last two instances, even liability under any other law. This provides a useful suggestion on how overlapping liability could be coordinated between different regimes.67 At the same time, it is worth pointing out that the second limitation is extremely vague and unclear, and the last condition has nothing to do with the legality of third party content. The obvious explanation for such an expanded understanding of the liability of intermediaries, matching equivalent provisions on data retention and interception in other cybercrime laws, is that the ultimate goal was the establishment of an infrastructure permitting the detection and investigation of crime, rather than a predictable environment for the business of content intermediation.

61  Law no. 5 of 2018, entered into force 30 May 2018 (Ken.).
62  ibid. s. 22.
63  ibid. s. 23.
64  ibid. s. 27.
65  ibid. s. 37.
66  ibid. s. 33.

4.4  South Africa

In 2017, South Africa introduced a Cybercrime Bill68 that aligns with the agenda set by the AU Convention. Its aims include, among others: to create offences and impose penalties which have a bearing on cybercrime; to criminalize the distribution of ‘harmful’ data messages; and to impose obligations on electronic communications service providers and financial institutions to assist in the investigation and reporting of cybercrimes.69 Prohibited acts in the Bill include the unlawful distribution via a computer system of a data message which incites damage to property or violence;70 the unlawful and intentional distribution of a data message containing an intimate image without consent;71 and the unlawful and intentional distribution of a harmful data message, which includes messages that could be reasonably regarded as harmful in that they threaten damage to property or violence, encourage self-harm or harm to others, or are inherently false in nature and aimed at causing mental, psychological, physical, or economic harm to a specific person or a group of persons.72 The Bill fails to clarify the significance of the characterization as ‘unlawful’ of the conduct proscribed in those offences, although the requirement of intentionality may effectively rule out the liability of intermediaries in the absence of knowledge. However, the Bill does not foresee a specific mechanism whereby providers are deemed to have obtained such knowledge, which creates a state of greater uncertainty compared to the existing scenario under ECTA.

In contrast, the Bill envisages the creation of a specific mechanism, to be defined by the cabinets of the minister responsible for policing and the minister responsible for the administration of justice, for electronic communications service providers and financial institutions to report to the South African police without undue delay (and in any event within seventy-two hours of becoming aware) the fact that their system is involved in the commission of any category or class of offence identified by those cabinets.73 Similar to ECTA, the Bill contains the caveat that it cannot be interpreted to impose on electronic communications service providers and financial institutions any obligation to monitor the transmitted or stored data or actively to seek facts or circumstances indicating unlawful activity.74 However, the obligation to report, and the parallel obligation to preserve any information that may be of assistance to law enforcement, are backed up with an offence and the imposition of a fine (R50,000) for failure to comply.75

Inevitably, the entry of the Bill into law would impact the regulatory environment concerning internet intermediaries. Yet it is important to acknowledge that perhaps even more serious consequences for intermediaries stem from hate speech and anti-discrimination legislation, particularly in the light of the prominent role they play in South African society. In fact, article 1 of the South African Constitution affirms that dignity, equality, and the advancement of human rights and freedoms are essential values on which the democratic Republic of South Africa is founded.

67  It is worth mentioning that Kenya’s Copyright Board had consulted in 2016 on specific amendments to the Copyright Act to provide web-blocking measures in cases of copyright infringements online, which introduced a detailed system of safe harbours for copyright infringement. However, the amendment was never adopted into law.
68  See Cybercrime and Cybersecurity Bill B6-2017, published in Government Gazette no. 40487 of 9 December 2016 (S. Africa).
69  ibid. Preamble.
70  ibid. s. 16.
71  ibid. s. 18.
72  ibid. s. 17.
Such high importance is attributed to the rights to equality and human dignity, contained in articles 9 and 10, that they are both considered non-derogable even under the exceptional circumstances of a state of emergency identified in section 37. The state has given specific implementation to the equality principle through the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA) of 2000. The Act, which attributes jurisdiction to every magistrates’ court and high court to serve as ‘equality courts’, prohibits race, gender, and disability-based discrimination, and provides further details on the more general notion of unfair discrimination.76 Furthermore, section 12 outlaws the publication and dissemination of any information that could be ‘reasonably construed to demonstrate a clear intention to unfairly discriminate’, subject to the proviso that this shall not prevent bona fide engagement in artistic creativity, academic and scientific inquiry, fair and accurate reporting in the public interest, or the exercise of freedom of expression in accordance with section 16 of the Constitution. It also addresses hate speech, by adopting a similar prohibition against the publication, propagation, advocacy, or communication of words based on one or more of the prohibited grounds ‘which could reasonably be construed to demonstrate a clear intention to (a) be hurtful; (b) be harmful or to incite harm; (c) promote or propagate hatred’. This definition goes beyond the narrow notion of hate speech carved out from the protection of free speech in the Constitution, consisting of ‘advocacy of hatred that is based on race, ethnicity, gender or religion, and that constitutes incitement to cause harm’.77 As written, the prohibition poses a potentially very significant challenge for content moderation by internet intermediaries, as a determination that a given form of expression is ‘hurtful’, or that it constitutes a bona fide engagement in one of those acts, necessarily requires a careful appreciation of the context of a communication.

The significance of this legislation is even greater considering that the anti-discrimination law does not only concern civil liability. According to section 10(2) of the Act, the equality courts retain discretion to forward the matter to the Director of Public Prosecutions for the institution of criminal proceedings. In addition, in 2000 the Ad Hoc Joint Committee on the Promotion of Equality and Prevention of Unfair Discrimination Bill adopted a resolution recommending the adoption of legislation in line with the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) that would criminalize hate speech offences.

73  ibid. s. 52(1) and (2).
74  ibid. s. 52(4).
75  ibid. s. 52(3). On 28 January 2019, that amount was equivalent to US$3,658.
76  See Promotion of Equality and Prevention of Unfair Discrimination Act no. 4 of 2000, s. 14 (S. Africa).
The subsequent Draft Prohibition on Hate Speech Bill, proposed in 2004, was never approved; however, in 2013 the Department of Justice and Constitutional Development introduced a Draft Policy Framework on Combating Hate Crimes, Hate Speech and Unfair Discrimination which, in the words of Minister of Justice and Constitutional Development Jeff Radebe ‘refine[s] the concept of hate speech in a way that reflects South Africa’s commitment to high standards of free expression at the same time as combatting hate speech’.78 The Framework resulted in the Prevention and Combating of Hate Crimes and Hate Speech Bill,79 which retains a prohibition identical to that contained in the Equality Act, but removes the reference to ‘hurtful’ and expands the list of exempted activities to include the bona fide interpretation and proselytizing or espousing of any religious tenet, belief, teaching, doctrine, or writings. The Bill also provides specific treatment for the mere propagation of hate speech by a person who ‘intentionally distributes or makes available an electronic communication which that person knows constitutes hate speech through an electronic communications system which is accessible by any member of the public or by a specific person who can be considered a victim of hate speech’. While the subordination of the offence to the existence of knowledge of the hateful nature of the speech is certainly welcome, one can expect that courts will construct the provision to include both actual and constructive knowledge. An expansive reading of this type of legislation would pose a threat to legal certainty for internet intermediaries, and potentially undermine the effectiveness of the liability limitations. Unfortunately, the constitutional protection afforded to the right to freedom of expression appears insufficient to avert those consequences, as measures adopted in relation to the categories of unprotected speech listed

77  ibid. s. 16. 78  Jeff Radebe, ‘Notes’ (Panel: Imagine a World Without Hate!, Maroela Room, Sandton Sun Hotel, 25 August 2013) . 79  Bill published in Government Gazette no. 41543 of 29 March 2018 (S. Africa).

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Intermediary Liability in Africa   231

in article 16(2) do not require the fulfilment of the stringent necessity test devised by section 36.80 In contrast, such a test would be applicable for other types of liability, for example for defamation that does not rise to the level of ‘hate speech’. Defamation is governed in South Africa by the common law notion of actio injuriarum, which requires the existence of wilful or negligent conduct that violates a personality interest, such as one’s good name and reputation.81 The common law initially relieved complainants of the need to prove intentionality (animus injurandi) in the case of the media, thereby imposing strict liability on newspaper owners, printers, publishers, and editors.82 However, later judgments established that such strict liability had a significant chilling effect on the free flow of information, and thus, in line with section 36, a valid claim of defamation would need to show at least negligence on the part of the media in order to trigger a presumption of animus injurandi.83 To overcome the presumption, the defendant needs to prove the non-existence of animus injurandi based on the circumstances of the case.84 The section 36 test also applies to the obligations imposed by the Film and Publication Act (FPA), a law passed in 1996 with the objective of regulating the creation, production, possession, and distribution of films, games, and certain publications in order to allow consumers to make informed viewing choices and protect children from exposure to disturbing and harmful material, as well as to make the use of children in and the exposure of children to pornography punishable.85 The FPA identifies a number of categories of harmful or potentially harmful material, and creates a regime of registration, classification, and authorization by the Film and Publication Board (FPB), which must be complied with by anyone intending to exhibit, distribute, publish, broadcast, or otherwise make available to
the public a ‘publication’.86

80  According to which: ‘The rights in the Bill of Rights may be limited only in terms of law of general application to the extent that the limitation is reasonable and justifiable in an open and democratic society based on human dignity, equality and freedom, taking into account all relevant factors, including: (a) the nature of the right; (b) the importance of the purpose of the limitation; (c) the nature and extent of the limitation; (d) the relation between the limitation and its purpose; and (e) less restrictive means to achieve the purpose.’ 81 See Van Zyl v Jonathan Ball Publishers (Pty) Ltd [1999] (4) SA 571 (W) (S. Africa); National Media Ltd v Bogoshi [1998] (4) SA 1196 (A) (S. Africa). 82  Pakendorf and Others v De Flamingh [1982] ZAENGTR 1 (31 March 1982) (S. Africa). 83  See e.g. National Media Ltd v Bogoshi [1998] 4 All SA 347 (A) (S. Africa). 84 See Khumalo & Others v Holomisa [2002] (5) SA 401 (CC) (S. Africa). The factors that courts will consider to determine whether the publication was reasonable are: (1) the nature and tone of the report; (2) the nature of information on which the allegations were based; (3) the steps taken to verify the information; (4) the steps taken to obtain the affected party’s response; (5) whether that response has been published; and (6) the need to publish. See also, most recently, Sayed v Editor, Cape Times [2004] (1) SA 58 (c) 73 (S. Africa). 85  See Film and Publication Act (FPA) (1996) s. 2 (S. Africa). 86  ibid. s. 1 (by the definition in the FPA, ‘publication’ refers to a wide range of material, including ‘any message or communication, including a visual presentation, placed on any distributed network’).


Section 36 provides solid ground for constitutional challenges to this legislation. Shortly after the introduction of the FPA, in De Reuck,87 the Constitutional Court initially upheld the authorization regime for child pornography in recognition of the importance of the accompanying exemptions. A few years later, however, in Print Media South Africa,88 the court reached the opposite conclusion with regard to the broader category of ‘publications containing sexual conduct’: the provision was deemed unconstitutional because the legislator could have imposed less severe restrictions on freedom of expression, or simply permitted the publishers to obtain an advisory opinion from the FPB without being penalized for failure to do so. By contrast, it is striking that no constitutional challenge has so far been made against the requirement for all ISPs (a category which is defined by reference to ‘the business of providing access to the Internet by any means’, which to date includes cyber-cafes) to register with the Board and take reasonable steps to prevent the use of their services for hosting or distributing child pornography.89 This is despite the fact that failure to comply constitutes an offence punishable by a fine and/or imprisonment of up to six months and five years, respectively.90

4.4.1  Liability limitations when you are also ‘cyber-police’: a complex terrain

The framework described previously complicates the picture initially painted for the South African intermediary liability regime. It was noted in Section 2 that the great advantage of this model based on guidelines-driven codes of conduct is that it allows a government to demand that intermediaries comply with specific principles and standards in exchange for liability limitations. However, the main problem with the implementation of that idea is that the Guidelines developed by the Ministry of Communication left ample discretion for IRBs in the design of such a procedure, resulting in a lack of certainty over whether the codes of conduct effectively fulfil the ECTA requirements: for example, while they list requirements concerning the observance of the consumer protection and privacy provisions of ECTA merely as optional (‘preferred requirements’, ss. 6.5 and 6.6), this contrasts with the explicit letter of Chapters 7 and 8 of ECTA and the obligations imposed by Chapters 3, 4, and 5 of PAIA. Of course, section 79 of ECTA makes clear that violations of other laws would not be shielded under the liability limitations offered by Chapter 11 of ECTA, which do not affect ‘any obligations founded on an agreement, licensing and regulatory obligations, and any court or legal obligations to remove, block or deny access to “data message” ’: this means that liability will not, for instance, be escaped for failure to remove, or wrongful takedown of, unreasonably discriminatory and indecent content. This provision suggests that, while it is possible for an IRB or an ISP to establish more demanding liability regimes, there is at the outset no coordination between ECTA and other laws.

87 See De Reuck v Director of Public Prosecutions [2003] (12) BCLR 1333 (S. Africa). 88 See Print Media South Africa v Minister of Home Affairs [2012] (6) SA 443 (CC) (S. Africa). 89  See FPA (n. 85) s. 27A(1) (S. Africa).
90  ibid. s. 27A(4).


Another example of incoherence is the restatement in the Guidelines’ principles of the ‘no monitoring’ principle laid out in section 78 of ECTA. This is followed by an attempt to provide more clarity by specifying that such a principle is not applicable with respect to the requirements set out by the FPA on child pornography,91 which seem to require ISPs to conduct packet inspection92 to verify the content of the communications. However, the enhanced clarity is only apparent insofar as the guidance fails to refer to the monitoring which is de facto required for ISPs to avoid liability under the Equality Act. In conclusion, the regime just described appears, on its face, to confer on ISPs substantial immunity from liability for the content produced by third parties. However, the regime unfortunately fails to provide the much-needed certainty, both because of its limited scope and because of the lack of clarity of some of its key features: first, unlike under many other regimes around the globe, ISPs may be subject to injunctions, as well as liable under criminal law, for the conduct of their users. Secondly, immunity from liability does not apply horizontally across the board, but leaves intact liability under specific legislation such as the FPA or the Equality Act. And, thirdly, the immunity conferred is deficient as ISPs could still be found liable if knowledge gathered outside the notice-and-takedown procedure is deemed sufficient under common law. Finally, as mentioned earlier (Section 2), the legal framework adopted conflicts with the principle of due process, as it permits the establishment of a violation without ensuring full respect for the right to be heard of the accused infringers—both users and ISPs. Similar problems were detected in the observation of intermediary liability regimes in other jurisdictions, particularly those emerging from the adoption of post-security cyber legislation.
In Malawi, it was shown that the hybrid nature of the Cybercrime Act led to a loophole in the protection of freedom of expression enshrined in intermediary liability protections, in particular when a restriction would enhance compliance with the requirements of any other written law. In Ethiopia, the insertion of liability limitations in the context of a cybercrime law was myopically focused on the criminal responsibilities arising from the offences established in that law, without sufficient consideration of the potentially huge realm of civil and administrative liabilities. However, Kenya provided a useful example of how liability limitations can be asserted in a proactive manner, preventing the attachment of liability under any contrasting law. This ‘non-obstante’ clause is not a novelty in intermediary liability regimes, having been used, for instance (not without its shortcomings), in Malawi’s Electronic Transactions Act and India’s Information Technology Act;93 yet, if designed carefully, it promises to be a valuable tool to prevent inconsistent frameworks both nationally and across the region.

91  It is also important to clarify that ECTA has wider coverage than the FPA, whose definition of an ISP as ‘any person whose business is to provide access to the Internet by any means’ is applicable only to internet access providers. 92  This could be a shallow packet inspection (e.g. packet fingerprinting) or on some occasions a deep packet inspection, the latter referring to a more invasive technique that allows monitoring at a lower layer. 93  See Information Technology Act no. 21 of 2000, s. 79 (India).



5.  Conclusion: The Role of the African Union and the Promise of the South African Model

This section concludes our brief overview of intermediary liability developments in the African territory. After a reflection in Section 2 on the peculiarities of the African context, Section 3 illustrated the success of South Africa in exporting its model throughout the continent in a first wave of diffusion of intermediary liability protections, lasting for about a decade from the early 2000s. Section 4 documented the emergence of a second wave (or second generation) of liability limitations in the wake of the AU Convention, characterized by an interweaving of protections and obligations for internet intermediaries. Section 4 also explained that the AU has not yet played the significant role that it could play in promoting regional harmonization of cyber-legislation, although the efforts undertaken for the establishment of the AU Convention have proven influential in the region. It is therefore natural to suggest that similar efforts could be made to create a shared notion of intermediary liability limitations, and to develop a consistent answer to the all-important question of their relationship to obligations imposed on intermediaries in other statutes. As an intellectual experiment, then, it is interesting to consider what could be the foundations for the establishment of an African baseline model. There is little doubt that intermediary liability falls within the competences of the AU, insofar as it concerns human rights, economic development, and socio-economic integration. On what grounds should this project be pursued, then?
One natural point of departure is the Joint Declaration on Freedom of Expression and the Internet adopted in 2011 by the Special Rapporteur on Freedom of Expression and Access to Information of the African Commission on Human and Peoples’ Rights (ACHPR) together with three other special rapporteurs.94 The Declaration stresses the importance of limiting the liability of conduit, search, and caching providers in the absence of intervention in the content;95 and calls for minimum safeguards with regard to other intermediaries, including the no-monitoring principle and takedown procedures offering sufficient protection for freedom of expression.96 Furthermore, article XI explicitly encourages self-regulation as an effective tool for redressing harmful speech.97 This can usefully be read in conjunction with section IX of the Declaration of Principles on Freedom of Expression in Africa,

94  See OSCE, ‘International Mechanism for Promoting Freedom of Expression: Joint Declaration on Freedom of Expression and the Internet by the United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information’ (2011) . 95  Or if they refuse to obey a court order. See ibid. s. 2(a). 96  ibid. s. 2(b). 97  ibid. XI.


which, despite recommending the institution of an independent administrative body receiving complaints for violations of freedom of expression in the media, goes on to state that ‘effective self-regulation is the best system for promoting high standards in the media’. The bottom line in these declarations is not that everything should be left to self-regulation. Rather, the suggestion is that the effectiveness of self-regulation can be harnessed when it is constrained by a limiting framework: in essence, it is a call for ‘co-regulation’.98 Co-regulation can be implemented in different forms, and the arrangement that has been in place since 2002 under South Africa’s ECTA certainly falls within that realm. Through ECTA and its Guidelines, the state imposed on intermediaries respect for certain principles through a powerful incentive (the liability limitations), yet left them a fair margin of manoeuvre in how respect for those principles is embedded in the design and enforcement of codes of conduct. Linking the safe harbours for intermediaries to the respect of minimum standards of protection in different domains, including cybercrime, goes a long way towards preventing a sacrifice of important values in the pursuit of effectiveness. However, when such responsibilities are passed on to private entities it is crucial not only to maintain close supervision of enforcement, but also to craft the overarching framework in a way that is consistent with the obligations contained in other legislation and ensures respect for fundamental rights. These are two of the problems observed in the case of ECTA and its Guidelines: they fail to address coordination with other legislation and causes of action, and neglect important fundamental rights (e.g. due process) that remain conspicuously absent from the code of conduct adopted by the only recognized IRB.
While these problems could be tempered by a flourishing of parallel IRBs competing on the quality of their codes of conduct, the state ultimately remains responsible for creating an environment conducive to such competition or otherwise for ensuring fair and effective enforcement of digital rights. In that light, this chapter has argued that the AU holds all the cards needed to steer the discussion, including framework agreements and model laws, and thereby to promote human rights and economic development across the region.

98  For an in-depth illustration of this concept, see Chris Marsden, Internet Co-Regulation: European Law, Regulatory Convergence and Legitimacy in Cyberspace (CUP 2011).


Chapter 12

The Liability of Australian Online Intermediaries

Kylie Pappalardo and Nicolas Suzor*

1.  Liability: Active Intermediaries and Recalcitrant Wrongdoers

Online intermediary liability law in Australia is a mess. The basis on which third parties are liable for the actions of individuals online is confusing and, viewed as a whole, largely incoherent. Australian law does not express a coherent or consistent articulation of when, exactly, an online intermediary will be liable for the actions of third parties. There are conflicting authorities both within and between separate bodies of law that impose separate standards of responsibility on online intermediaries. Courts are struggling to adapt the law to apply to new technological contexts in a way that adequately balances competing interests from within the confines of existing doctrines. The result is a great deal of uncertainty for both plaintiffs and intermediaries. Different doctrines in Australian law employ a range of different tests for determining when an actor will be liable for the actions of a third party. So far, these primarily include cases brought under the laws of defamation, racial vilification, misleading and deceptive conduct, contempt of court, and copyright. These bodies of law have all developed largely independently of each other. They are conceptually different and derive from different historical contexts, and the courts have generally applied them in isolation. The particular fault elements on which liability is based are all different and cannot easily be compared at a detailed level. In some of these doctrines, like copyright, there is a separate

*  An extended version of this chapter was originally published in the Sydney Law Review [2018] 40. Suzor is the recipient of an Australian Research Council DECRA Fellowship (project number DE160101542).

© Kylie Pappalardo and Nicolas Suzor 2020.


head of liability for secondary liability as distinguished from the underlying wrongful act; in others, the actions of intermediaries operating on behalf of another are assessed under the same tests of primary liability. It is useful, however, to take a broad view, and look at the common ways that the courts are struggling to deal with very similar issues under the weight of very different doctrinal traditions. This process of abstraction necessarily involves ‘throwing away detail, getting rid of particulars’, but with the intent to ‘produce the concepts we use to make explanatory generalizations, or that we analogize with across cases’.1 The easy cases are those most closely analogous to that of a mass media publisher who exercises editorial control over the content of communications. Where the intermediary moderates or selects the material to be published, courts have been able to draw a clear analogy with, for example, newspaper editors, and are able to find wrongdoing relatively easily. Under both defamation law and consumer protection law, for example, where the intermediary exercises some level of judgment and editorial control, courts have variously explained that the intermediary ‘accepts responsibility’2 or ‘consents to the publication’.3 This is a version of the ‘Good Samaritan’ problem, where intermediaries who voluntarily take on some responsibility to moderate have a greater legal risk of exposure than those who do not exercise any editorial control.4 The law is much more complicated where online intermediaries do not directly exercise a large degree of editorial control. Our common law has not yet developed a clear theory to determine when a service provider who creates a technology or system that enables wrongful behaviour will be liable. One of the basic organizing principles of our legal system is that there is usually no liability without fault.
With few exceptions,5 the common law does not impose obligations on institutions or individuals to protect the rights of another against harm caused by third parties. This notion is most commonly expressed in the rule that there is no general duty to rescue.6 This general rule reflects a

1  Kieran Healy, ‘Fuck Nuance’ [2016] Sociological Theory 5–6. 2  Visscher v Maritime Union of Australia (No. 6) NSWSC 350 [29]–[30] (Beech-Jones J) (Aus.); Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd (No. 2) (2011) 192 FCR 34 [33] (Finkelstein J) (Aus.). 3  Trkulja v Google Inc., Google Australia Pty Ltd VSC 533 [31] (Beach J) (Aus.). 4  Part of the protections of the Communications Decency Act 1996, s. 230 (US) were specifically designed to overturn Stratton Oakmont Inc. v Prodigy Services Co., 1995 WL 323710 (NY Sup. Ct. 1995) (US), which held that Prodigy was liable for the defamatory bulletin board postings of its users because it exercised editorial control through moderation and keyword-filtering software. See further, Paul Ehrlich, ‘Communications Decency Act § 230’ (2002) 17(1) BTLJ 401. 5  The first main exception at common law is where there is a non-delegable duty and a special relationship of control between the defendant and the third party, e.g. that between parents and children, school authorities and pupils, and prison wardens and prisoners. See Smith v Leurs (1945) 70 CLR 256 (Aus.); Commonwealth v Introvigne (1982) 150 CLR 258 (Aus.); Home Office v Dorset Yacht Co. Ltd [1970] AC 1004 (UK). The second main exception applies where the defendant has had some role in creating the risk from which the plaintiff needs rescuing. See G.H.L. Fridman, ‘Non-Vicarious Liability for the Acts of Others’ [1997] TLR 102, 115, discussing Smith v Littlewoods Organisation Ltd [1987] AC 241 [279]–[281] (UK). 6 See Dorset Yacht (n. 5) [1027] (Lord Reid); Frank Denton, ‘The Case Against a Duty to Rescue’ (1991) 4(1) Can. J. of L. & Jur.
101, 101–4; Les Haberfield, ‘Lowns v Wood and the Duty to Rescue’ [1998] TLR 56, 59, 65; John Goldberg and Benjamin Zipursky, Torts (OUP 2010) 118–19.


fundamental liberal commitment to autonomy:7 individuals are free to act as they choose so long as those actions do not harm others.8 Courts have consistently held that ‘the common law of private obligations does not impose affirmative duties simply on the basis of one party’s need and another’s capacity to fulfil that need’.9 The common law emphasizes personal responsibility; to require a person to help another simply because they have the capacity to do so would unhinge the law from its underlying objectives of promoting personal responsibility for one’s actions and deterring reckless or unreasonable behaviour.10 If the defendant is not personally responsible for causing the harm, then from this perspective, the threat of liability cannot act as an effective deterrent for wrongful behaviour.11 Thus, under the common law—and according to responsibility theory—a person will generally only be responsible for a harmful outcome where his or her actions caused the harm (causation) and where that person might have acted to avoid the harm but did not (fault).12 As Justice Mason has stated, the notion of fault within the law can act as a ‘control device’ to ensure that the burden to repair is proportional to the defendant’s responsible role in the occurrence of harm.13 These general principles, however, conflict with another basic principle: that for every wrong, the law provides a remedy.14 In cases brought against intermediaries in Australia and overseas, courts are often presented with a meritorious claim without a clear remedy, and face the difficult task of determining whether to extend the existing law to require intermediaries to take action to protect plaintiffs’ rights.
In two cases in 2012 and 2013, respectively, Roadshow v iiNet15 and Google v ACCC,16 the High Court rejected the extension of existing doctrine to impose liability on large, general-purpose intermediaries.17 Despite these two High Court decisions, the overall state of Australian intermediary liability law is still one of confusion, both within and across doctrines.

7  See Ernest Weinrib, ‘The Case for a Duty to Rescue’ (1980) 90(2) Yale L.J. 247, 279; Jane Stapleton, ‘Legal Cause: Cause-in-Fact and the Scope of Liability for Consequences’ (2001) 54(3) Vand. L. Rev. 941, 949 fn. 17. 8  See John Locke, Two Treatises of Government (CUP 1988) 106–9; John Stuart Mill, On Liberty (Start Publishing LLC 2012) Ch. IV: Of the Limits to the Authority of Society Over the Individual. 9  Denton (n. 6) 109. 10  See Jane Stapleton, ‘Duty of care: peripheral parties and alternative opportunities for deterrence’ (1995) 111 (Apr.) LQR 301, 312, 317; Denton (n. 6) 124. 11  See Stapleton (n. 10) 305, 310–12, 317. 12  See Stephen Perry, ‘The Moral Foundations of Tort Law’ (1992) 77 Iowa L. Rev. 449, 513; John Goldberg and Benjamin Zipursky, ‘Tort Law and Responsibility’ in John Oberdiek (ed.), Philosophical Foundations of the Law of Torts (OUP 2014) 17, 20–1; Emmanuel Voyiakis, ‘Rights, Social Justice and Responsibility in the Law of Tort’ (2012) 35(2) UNSWLJ 449, 458; Denton (n. 6) 127; Peter Cane, ‘Justice and Justifications for Tort Liability’ (1982) 2 OJLS 30, 53–4. 13  Justice Keith Mason, ‘Fault, causation and responsibility: Is tort law just an instrument of corrective justice?’ (2000) 19 Aust. Bar Rev. 201, 207–8. 14  See the maxim ‘ubi jus ibi remedium’: Herbert Broom, Selection of Legal Maxims, Classified and Illustrated (William S. Hein & Co. 1845) [91]. 15 See Roadshow Films Pty Ltd v iiNet Ltd (2012) 248 CLR 42 (Aus.). 16 See Google Inc. v Australian Competition and Consumer Commission (2013) 249 CLR 435 (Aus.).
17  See Robert Burrell and Kimberlee Weatherall, ‘Providing Services to Copyright Infringers: Roadshow Films Pty Ltd v iiNet Ltd’ (2011) 33 SLR 801 [829]–[830] (Aus.).


Across different fact scenarios in consumer protection, defamation, racial vilification, contempt of court, and copyright cases, mere knowledge of the content can lead to an inference that a third party publisher adopts or endorses its continual publication and is responsible for the harm that results. Across each of these doctrines, plaintiffs have sought to link an intermediary’s capacity to do something about wrongdoing with a normative proposition that they therefore ought to do it. We argue that, as a result, intermediary liability has expanded in a largely unprincipled way that has eroded the important connection between liability and responsibility.

1.1  Consumer Protection Law

In Google Inc. v ACCC, the High Court held that Google Inc. was not liable when it created a system to enable third parties to create advertisements that were reproduced on Google’s search results pages. The High Court was clear in finding that Google did not ‘endorse’ advertisements submitted by third parties and published on its own web pages, on the basis that the content of the material was wholly determined by the advertiser and published automatically by Google.18 Google ‘did not itself engage in misleading or deceptive conduct, or endorse or adopt the representations which it displayed on behalf of advertisers’.19 Liability for misleading and deceptive conduct is strict, but requires actual wrongful conduct on the part of the defendant that is likely to mislead or deceive—there is no separate secondary head of liability. Whether Google knew that the content was misleading was irrelevant.20 On its face, the High Court’s ruling is quite strong: even though Google was likely to know that the material was likely to mislead or deceive,21 it was not responsible for advertisements created by others.22

18 See Google Inc. v ACCC (2013) 249 CLR 435 [68] (French CJ, Crennan and Kiefel JJ) (Aus.) (noting that ‘each relevant aspect of a sponsored link is determined by the advertiser. The automated response which the Google search engine makes to a user’s search request by displaying a sponsored link is wholly determined by the keywords and other content of the sponsored link which the advertiser has chosen. Google does not create, in any authorial sense, the sponsored links that it publishes or displays’). 19  ibid [73] (French CJ, Crennan and Kiefel JJ) (emphasis added); Hayne J agreeing but warning that there can be no general rule that would immunize publishers ‘unless they endorsed or adopted the content’: [115]. See also [162] (Heydon J): ‘if a person repeats what someone else has said accurately, and does not adopt it, there is nothing misleading in that person’s conduct’. 20 See Google (n. 16) [68] (French CJ, Crennan and Kiefel JJ). 21  At first instance, Nicholas J would have found that Google was likely to have actual or constructive knowledge that the advertisements in question were using competitors’ brands. See Australian Competition and Consumer Commission v Trading Post Australia Pty Ltd [2011] FCA 1086 [242] (Aus.). Intention, however, is not a relevant element of an action for misleading or deceptive conduct. See Google (n. 16) [97]–[98] (Hayne J). 22  Importantly, however, the High Court’s decision does not foreclose the possibility of intermediaries being liable as accessories, nor does it necessarily prevent future development of the terms ‘adopt’ and ‘endorse’ on the facts in intermediary cases in the future: see Radhika Withana, ‘Neither Adopt nor Endorse: Liability for Misleading and Deceptive Conduct for Publication of Statements by Intermediaries or Conduits’ (2013) 21 AJCCL 152.


The decision in Google v ACCC must be contrasted with the earlier Federal Court decision in Allergy Pathway (No. 2), where the respondents were found to have breached their undertaking not to engage in misleading and deceptive conduct23 when they failed to remove comments posted by third parties on their Facebook page.24 Allergy Pathway was liable for contempt of court on the basis that it knew about the comments and failed to remove them. The Federal Court relied specifically on defamation precedent25 in reaching the conclusion that Allergy had ‘accepted responsibility for the publications when it knew of the publications and decided not to remove them’.26 Importantly, Google v ACCC was pleaded in a narrow way that alleged Google itself had made the misleading representations—not that it had misled the public by publishing false claims. An alternative approach in similar circumstances could have seen the ACCC allege that Google’s conduct as a whole in developing its AdWords system and publishing third party content was likely to mislead or deceive consumers. This broader argument could conceivably justify the imposition of liability on Google ‘for the economic harms produced by its industrial activities, centred on devising and operating systems used for trading information’.27 It is an argument to which at least some members of the High Court were apparently sympathetic,28 and it is possible that a differently pleaded case on similar facts could well turn out differently in the future.29

1.2 Defamation

In defamation, the word 'publish' extends liability to intermediaries who fail to remove defamatory material posted by others.30 Internet hosts that exercise some degree of control over the content they disseminate will be liable in the same way as newspaper publishers31 or broadcasters32 who carry content created by others. For others with a less active role, like the operators of discussion fora who provide the facilities for others to post comments, liability will accrue as a subordinate publisher

23 The undertaking was given in the context of litigation brought by the ACCC: Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd [2009] FCA 960 (27 August 2009) [5] (Aus.).
24 Note that the respondents were also found to have breached the undertakings through material that they themselves had posted: Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd (No. 2) (2011) 192 FCR 34 (Aus.).
25 ibid. [24].
26 ibid. [32]–[33] (Finkelstein J).
27 Megan Richardson, 'Why Policy Matters: Google Inc v Australian Competition and Consumer Commission' (2012) 34 Sydney L. Rev. 587, 594.
28 See Google (n. 16) [117] (Hayne J); see further Seb Tonkin, 'Google Inc. v. Australian Competition and Consumer Commission' (2013) 34 Adelaide L. Rev. 203, 205–7.
29 See Amanda Scardamaglia, 'Misleading and Deceptive Conduct and the Internet: Lessons and Loopholes in Google Inc v Australian Competition and Consumer Commission' (2013) 35(11) EIPR 707, 713; see also Peter Leonard, 'Internet Intermediary Liability: A Landmark Decision by the High Court of Australia in Google Inc v ACCC' (2013) 15(9) Internet Law Bulletin 158.
30 See Byrne v Deane [1937] 1 KB 818, 837 (UK).
31 See Wheeler v Federal Capital Press of Australia Ltd [1984] Aust Torts Rep. para. 80-640 (Aus.).
32 See Thompson v Australian Capital Television Pty Ltd (1996) 186 CLR 574, 589–90, 596 (Aus.).


The Liability of Australian Online Intermediaries

once they know that the content they carry is likely to be defamatory.33 By contrast, it is generally understood that an internet service provider (ISP) who merely provides a telecommunications service over which others can publish and access defamatory material is not likely to be liable for 'publishing' that content.34 In recent cases, however, courts have had much greater difficulty applying these principles to intermediaries who are more removed from the primary act of publication but have more than a purely facilitative role in making material available. For search engines and others who link to defamatory material, the limits to liability in defamation can be drawn from the combination of two notionally distinct principles. The first is the threshold question of whether an intermediary can properly be said to 'publish' the content at all, and the second is the defence of innocent dissemination, which applies where a secondary or subordinate publisher does not have actual or constructive knowledge of the content of defamatory material. Because the defence of innocent dissemination will not apply after the content is explicitly drawn to the attention of the intermediary by a complaint, in many cases the most crucial limiting factor is the question of whether an intermediary has actually published the content in the first place.35 The core issue in difficult suits brought against intermediaries turns on this elusive distinction between active publishing and passive facilitation. In one case, Yahoo!7 conceded that because at least one person had read an article, hosted on a third party website, by following a link presented through the Yahoo! search engine, Yahoo had 'published' that article.36 This case may be an outlier; there is emerging authority in the UK37 and Canada38 that suggests that more needs to be done to 'publish' a defamatory imputation.39 But where the line should be drawn is not clear.
The established law is that what might otherwise be a purely passive role in facilitating publication becomes an act of publication by omission if the secondary actor has 'consented to, or approved of, or adopted, or promoted, or in some way ratified, the continued presence of that

33 No Australian courts have specifically considered this point in detail, but see Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47 (HK). See also Godfrey v Demon Internet Ltd [2001] QB 201, 208–10 (UK). Under the defence of innocent dissemination, liability accrues from the point at which the intermediary acquires knowledge of defamatory material: see David Rolph, 'Publication, Innocent Dissemination and the Internet after Dow Jones & Co Inc v Gutnick' (2010) 33(2) UNSWLJ 562, 573–5.
34 See Bunt v Tilley [2007] 1 WLR 1243 [25]–[36] (UK).
35 See Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47 (HK).
36 See Trkulja v Yahoo! Inc. LLC [2012] VSC 88 [6]–[7] (Kaye J) (Aus.).
37 See e.g. Metropolitan International Schools Ltd v Designtechnica Corp. [2010] 3 All ER 528 (UK); Tamiz v Google Inc [2012] EWHC 449 (UK).
38 See Crookes v Newton [2011] 3 SCR 269 [42] (CA) (where a majority of the Canadian Supreme Court found that '[m]aking reference to the existence and/or location of content by hyperlink or otherwise, without more, is not publication of that content'). Note, however, that this was a narrow majority. Two members of the Court held that linking to defamatory content in a way that indicates agreement, adoption, or endorsement of that content may be sufficient to ground liability ([48] (McLachlin CJC and Fish J)), and Deschamps J would have found that linking directly to defamatory material would amount to publication: [111]–[112].
39 See Kim Gould, 'Hyperlinking and Defamatory Publication: A Question of "Trying to Fit a Square Archaic Peg into the Hexagonal Hole of Modernity"?' (2012) 36(2) Aust. Bar Rev. 137.


statement . . . in other words . . . [if there is] an acceptance by the defendant of a responsibility for the continued publication of that statement'.40 So, for example, in a recent Australian case, the defamatory imputation was found to have been endorsed by the defendant when it used the words 'read more' to imply that the content it linked to was a true account.41 These cases become even more difficult when, as in the case of search engines, an intermediary presents a preview or 'snippet' of the content of third party sites. For example, in 2015, Google was found liable in the South Australian Supreme Court for publishing defamatory material when its search engine presented links accompanied by an extract of text that carried defamatory imputations.42 It was also liable in 2012 when its image search results arranged images from third party pages in a way that gave rise to a defamatory imputation.43 In a case brought more recently on very similar facts, the Victorian Court of Appeal would likely have found that Google's search results amounted to a subordinate publication of potentially defamatory content, but the case was not pleaded in that way.44 The concept of publication is a relatively poor mechanism to delineate responsibility. The general principle in defamation law is that nearly everybody involved in the chain of publication is potentially responsible as a publisher. A conduit—an ISP, for example—that is 'passive' and 'merely facilitates' communications between users of its system is not likely to be liable for defamation. But the law on the distinction between 'active' 'publishing' and 'conduct that amounts only to the merely passive facilitation of disseminating defamatory matter' is still not well developed.45 Apart from ISPs, it is unclear what types of internet intermediaries may be beyond the scope of defamation law.
Liability probably does not extend to people who help design or host website infrastructure but have no substantive involvement with the content.46 Some Australian and UK courts have doubted whether search engines can be liable for the outputs of automated systems designed to identify third party content that matches search terms entered by

40 Urbanchich v Drummoyne Municipal Council (1991) Aust. Torts Rep. 81-127 [7] (Hunt J) (Aus.).
41 See Visscher v Maritime Union of Australia (No. 6) NSWSC 350 [30] (Aus.) (Beech-Jones J found that the defendant's description of the link 'amounted to, at the very least, an adoption or promotion of the content' of the linked article).
42 See Duffy v Google Inc. [2015] SASC 170 (Aus.). Upheld on appeal in Google Inc. v Duffy [2017] SASCFC 130 (Aus.).
43 See Trkulja v Google Inc., Google Australia Pty Ltd [2012] VSC 533 (Aus.).
44 See Google Inc. v Trkulja [2016] VSCA 333 (2016) [349], [357] (Aus.).
45 Rolph (n. 33) 580; Joachim Dietrich, 'Clarifying the Meaning of "Publication" of Defamatory Matter in the Age of the Internet' (2013) 18(2) MALR 88.
46 A decision in the WA Supreme Court considered whether an individual was liable in defamation when she 'assisted in creating the infrastructure which allowed this material to be displayed to the public'. See Douglas v McLernon [No. 3] [2016] WASC 319 (22 June 2016) [33] (Aus.). The case concerned a defendant that was very far removed from the wrongdoing, but was apparently put on notice of defamatory content on the site. In dismissing the action, Justice Martin turned to the established principles of tort to explain why someone so far removed ought not to be liable: 'I remain to be persuaded [that a party may be liable for] providing assistance in tort (or the encouraging, counselling or facilitating of the tort) on the basis of involvement, simply because the person does not, as it is here contended, then act to "pull the plug" on a website, or act to terminate the capacity of someone to use an acquired website': [41].


the user,47 but the decisions of the Victorian Court of Appeal48 and the South Australian Supreme Court49 referred to earlier explicitly reject this proposition, at least from the time the search engine is put on notice of the defamatory content.

1.3 Vilification

As with defamation, intermediaries who provide a forum for third party content can be liable under the Racial Discrimination Act 1975 (Cth) when that content amounts to vilification. Section 18C makes it unlawful to 'do an act'50 that is reasonably likely to 'offend, insult, humiliate or intimidate' a person or group where that act is motivated by 'race, colour or national or ethnic origin'. In the two decisions that have considered the provision in the context of an online forum, courts have come to somewhat conflicting conclusions as to when a secondary actor will be liable for providing the facilities for another to make vilifying comments. The uncertainty lies primarily in the intentional element of the provision. As in defamation, courts agree that providing the facilities to enable others to post comments and failing to remove them is sufficient to constitute an 'act' of publication of the substance of those comments, at least once the operator has knowledge of the comments.51 The conflict is whether the failure to remove comments is done 'because of the race, colour or national or ethnic origin' of the person or group. In Silberberg, Gyles J found that there was insufficient evidence to draw that conclusion; the respondent's failure to remove the offensive comments was 'just as easily explained by inattention or lack of diligence'.52 In Clarke v Nationwide News, by contrast, Barker J held that where the respondent 'actively solicits and moderates contributions from readers', the 'offence will be given as much by the respondent in publishing the offensive comment as by the original author in writing it'.53 The court was able to infer that one of the reasons for the news website's decision to publish the offensive comments was their racial connotations.54 Apart from the emphasis placed on the act of moderation in Clarke, there is no easy way to reconcile these two authorities.

47 See e.g. Bleyer v Google Inc. LLC [2014] 311 ALR 529 (Aus.); Metropolitan International Schools Ltd v Designtechnica Corp. [2009] EWHC 1765 (QB) (unreported) (UK).
48 See Google (n. 44) [352] (Ashley JA, Ferguson JA, McLeish JA).
49 See Duffy v Google Inc. [2015] 125 SASR 437 [204]–[205] (Blue J) (Aus.); Google (n. 42) [140], [151], [178] (Kourakis CJ), [536] (Peek J).
50 The reference to 'an act', in this case, specifically includes an omission. See Racial Discrimination Act 1975 (Cth), s. 3(3) ('refusing or failing to do an act shall be deemed to be the doing of an act and a reference to an act includes a reference to such a refusal or failure').
51 See Silberberg v Builders Collective of Australia Inc. (2007) 164 FCR 475, 485 (Aus.); Clarke v Nationwide News Pty Ltd (2012) 289 ALR 345 [110] (Aus.).
52 Silberberg (n. 51) 486.
53 Clarke (n. 51) [110].
54 ibid. [199] ('The act of publishing a comment which is objectively offensive because of race in such circumstances will give offence because of race as much as the public circulation of such a comment by the original author might have done').



1.4 Copyright

Under copyright, secondary liability arises when an actor 'authorizes' the infringing conduct of another.55 Unfortunately, there is little clear guidance as to the limits of authorization liability. For internet intermediaries, the difficult question is whether the developer of software that facilitates infringement, or the operator of a service that hosts or indexes internet content, will be taken to have authorized any resulting infringements. The limiting principle, articulated in relation to mass media in Nationwide News v CAL, is that 'a person does not authorise an infringement merely because he or she knows that another person might infringe the copyright and takes no step to prevent the infringement'.56 This principle has always been hard to apply in practice. The accepted legal meaning of 'authorize' is 'sanction, approve, countenance'.57 The case law explains that 'authorize' is broader than 'grant or purport to grant the right to do the infringing act'58 but narrower than the broadest dictionary definition of 'countenance'.59 There is a broad range between those two points and, unsurprisingly, there is therefore considerable uncertainty in Australian copyright law as to the precise meaning of 'authorization'.60 A central authority is UNSW v Moorhouse, where the university was liable when the photocopiers it provided in a library were used to infringe copyright. Different members of the High Court emphasized different reasons for this conclusion: UNSW was liable either on the basis that it had tacitly invited infringement61 or because it had some degree of control over the technology that facilitated infringement, in addition to knowledge that infringement was likely.62 The relatively few cases on authorization liability in the digital age do not clearly establish the bounds of the doctrine.
In Cooper,63 the operator of a website was liable for creating a system that allowed users to post hyperlinks to other websites hosting infringing MP3s for download. Justice Branson found that Cooper was liable in part because he could have chosen not to create and maintain the website.64 Cooper's liability ultimately

55 See Copyright Act 1968 (Cth), ss. 36(1), 101(1) (Aus.).
56 Nationwide News Pty Ltd v Copyright Agency Ltd (1996) 65 FCR 399, 422 (Aus.).
57 University of New South Wales v Moorhouse and Angus & Robertson (Publishers) Pty Ltd (1975) 6 ALR 193, 200 (Gibbs J), 207 (Jacobs J) (with McTiernan ACJ concurring) (Aus.).
58 Roadshow Films Pty Ltd v iiNet Ltd [2012] HCA 16 [126]–[127] (Gummow and Hayne JJ) (Aus.).
59 ibid. [68] (French CJ, Crennan and Kiefel JJ), [125] (Gummow and Hayne JJ).
60 See Rebecca Giblin, 'The Uncertainties, Baby: Hidden Perils of Australia's Authorisation Law' 20 AIPJ 148, 153; David Lindsay, 'ISP Liability for End-User Copyright Infringements: The High Court Decision in Roadshow Films v iiNet' (2012) 62(4) Telecomm. J. of Australia 53.1, 53.16, 53.18–53.19.
61 University of New South Wales v Moorhouse and Angus & Robertson (Publishers) Pty Ltd (1975) 133 CLR 1, 21 (Jacobs J) (Aus.).
62 ibid. 14 (Gibbs J), 20–1 (Jacobs J; McTiernan ACJ agreeing).
63 Cooper v Universal Music Australia Pty Ltd (2006) 237 ALR 714 (Aus.).
64 ibid. 723. A similar finding was reached in 2017 in the case of Pokémon Co. Int'l, where Redbubble was liable for authorizing copyright infringement (committed when users sold products via the Redbubble website that featured unlicensed images of Pokémon characters) in part because it had designed and operated the website that allowed those sales. See Pokémon Co. Int'l, Inc. v Redbubble Ltd [2017] FCA 1541 [58] (Pagone J) (Aus.).


rested on the finding that he had 'deliberately designed the website to facilitate infringing downloading'.65 Cooper's ISP, which provided practically free hosting for Cooper's website in exchange for advertising, was also liable for failing to take down Cooper's website despite knowing that it was facilitating infringement.66 In Sharman, the operators of the Kazaa peer-to-peer file-sharing network had less control over the decisions of users to share infringing files.67 The control that they did have was the ability to design the software differently, including developing warnings for users and interfering with searches for content that was possibly infringing. Sharman was held liable for the infringements of its users on the basis that it knew that infringement was prevalent on the system,68 it took active steps to encourage infringement,69 and it failed to do anything to limit infringement.70 In 2012, in iiNet, the High Court refused to extend liability to an ISP who, it found, had no obligation to take action to restrict copyright infringement by its subscribers.71 Unlike Sharman and Cooper, iiNet did nothing to encourage infringement in the way that it provided general purpose internet access services to its subscribers.72 Neither did iiNet have any real advance control over what its users did online—iiNet did not control the BitTorrent system and could not monitor how it was used.73 Much of the High Court's decision therefore focused on iiNet's level of knowledge about infringements after the fact. The High Court ultimately found that the notices alleging that iiNet's users had infringed did not provide sufficiently specific knowledge of individual infringement to found liability.74 It is nonetheless possible that iiNet could have been liable if the quality of the allegations made against its users by rightsholders had been better.

65 Cooper (n. 63) 720–1 (Branson J) (French J agreeing), 745 (Kenny J).
66 ibid. 725–6 (Branson J), 746 (Kenny J); Justice French concurred with both judgments.
67 See Universal Music Australia Pty Ltd v Sharman License Holdings Ltd [2005] 222 FCR 465 (Aus.); Pokémon (n. 64).
68 ibid. [404]: infringing file-sharing was 'a major, even the predominant, use of the Kazaa system'.
69 ibid. [405]–[406].
70 ibid. [411].
71 See Roadshow (n. 58) [77] (French CJ, Crennan and Kiefel JJ), [143] (Gummow and Hayne JJ).
72 ibid. [112] (Gummow and Hayne JJ) (noting that 'iiNet had no intention or desire to see any primary infringement of the appellants' copyrights').
73 ibid. [65] (French CJ, Crennan and Kiefel JJ).
74 The High Court emphasized that the notices provided by AFACT were insufficiently reliable to justify potential action by iiNet to suspend or ban subscribers. See ibid. [34], [74]–[75], [78] (French CJ, Crennan and Kiefel JJ) and [92], [96], [138], [146] (Gummow and Hayne JJ).
That is to say, the High Court left the way open for future cases to potentially base liability primarily on knowledge and some ability to mitigate the harm, even without the fault elements of encouragement or control.

2. Limiting Devices and their Flaws

There are few effective safe harbours for intermediaries in Australia. The copyright safe harbour, designed to mirror the US Digital Millennium Copyright Act, was drafted to


only apply to internet access providers ('carriage service providers'), and not the search engines and content hosts that would most need to rely on the protection that safe harbours provide.75 An amendment in 2018 extended the safe harbour to cover libraries, cultural and educational institutions, archives, and organizations assisting people with disabilities, but not general internet intermediaries.76 There is another set of copyright exceptions, designed to protect 'mere conduits', that has very little work to do, because as soon as a provider is alleged either to have taken a positive step or to have failed to act to restrain infringement, it is no longer 'merely' passive and loses protection.77 The High Court in Roadshow v iiNet held that these provisions offer protection 'where none is required'.78 Meanwhile, the safe harbour for other types of liability under state law—found in Schedule 5, clause 91 of the Broadcasting Services Act—disappears once the service provider is put on notice, and therefore provides little protection for intermediaries who are uncertain about the lawfulness of user-generated content that they host.79 The lack of effective safe harbours means that the limiting factors within each body of law are critical for establishing liability and mitigating risk for service providers. Despite doctrinal differences, the liability of intermediaries often appears to turn on the degree to which the intermediary is seen by the court to be an active participant in the wrong. The courts adopt complex factual tests based on analogies with the historical application of each doctrine in the mass-media era. Because each doctrine evolved distinctly, each includes a different test on which liability is based. Each doctrine, however, requires some active behaviour on the part of the intermediary to found liability. In cases where an intermediary is found to be liable, it is invariably viewed as an active wrongdoer.
These tests are often expressed through an overarching factor—often a single word, like 'authorize' or 'publish'—that purports to distinguish those intermediaries that actively participate in the wrong from those that merely provide the infrastructure that facilitates it. The Australian case law demonstrates that when courts attempt to distinguish between active and passive actors, they sometimes appear to be using intent, either actual or inferred, to determine whether the act of designing the system was morally wrongful. The result is that intermediaries that appear to be performing similar functions face quite disparate consequences. Where a court must choose to focus either on the initial positive act of designing a system or on the later passive act of merely facilitating an isolated instance of harm, there is a great deal of uncertainty in the doctrine. This problem becomes worse when the evaluation of whether an intermediary was 'passive' or 'active'

75 See Robert Burrell and Kimberlee Weatherall, 'Exporting Controversy—Reactions to the Copyright Provisions of the US-Australia Free Trade Agreement: Lessons for US Trade Policy' (2008) U. of Illinois J. of L. Tech. and Policy 259.
76 See Copyright Amendment (Service Providers) Act 2018 (Cth) (Aus.).
77 Copyright Act 1968 (n. 55) ss. 39B and 112E.
78 Roadshow Films Pty Ltd v iiNet Ltd (2012) 248 CLR 42, 55–6 [26] (French CJ, Crennan and Kiefel JJ).
79 See Peter Leonard, 'Safe Harbors in Choppy Waters—Building a Sensible Approach to Liability of Internet Intermediaries in Australia' (2010) 3 J. of Int'l Media & Ent. L. 221; Brian Fitzgerald and Cheryl Foong, 'Suppression Orders after Fairfax v Ibrahim: Implications for Internet Communications' (2013) 37 Aust. Bar Rev. 175.


depends on moral culpability at the time of designing the system. Intent is not an element of most of the causes of action that apply to intermediaries; liability requires some volitional act, but not intent to cause the harm. The result can be to effectively read intention as an element of secondary liability where it does not otherwise exist. The other limiting device deployed by courts is closely linked to knowledge. A passive facilitator can be transformed into an active wrongdoer once it knows of the harm but fails to respond appropriately. Liability in these types of cases comes in one of two ways. The first is through editorial control. Where an intermediary actively moderates content on its site, it is often taken to have assumed responsibility for that content, and is accordingly liable. This is the most straightforward application of intermediary liability; unless there is some statutory immunity, it is usually the case that principles of liability applied to broadcast and print media transfer relatively easily to internet intermediaries who exercise direct editorial control over (and therefore assume responsibility for) posts made by others. The second way that intermediaries gain knowledge that triggers liability is when the existence of content is drawn to their attention—usually by the plaintiff. The danger with making knowledge central to liability is that the ambiguity that exists within the traditional fault elements of each doctrine may sometimes effectively be replaced with the simpler proposition that knowledge of unlawful content or behaviour, coupled with some ability to limit its impact, is sufficient to found liability.
The big challenge, across all of these cases, is that knowledge, without a clearly defined concept of fault, actually does little to ground liability.80 An intermediary's knowledge of wrongdoing has only a minimal relationship to the question of whether its technology, service, or actions actually cause or contribute to the wrong.81 Actual or constructive knowledge is being used to try to separate out 'bad actors', but this does not easily fit within the doctrinal history of each of the causes of action.82 Nor does it fit generally with the overarching assumption of the Australian legal system that liability only follows fault.83 Certainly, but for the intermediary's actions, no harm would be suffered. This is true in the broad sense of the 'but for' test in common law, which throws up all the relevant conditions that can be said to be 'causes in fact' of the harm.84 Thus, it is possible to argue

80 See Joachim Dietrich, 'Authorisation as Accessorial Liability: The Overlooked Role of Knowledge' (2014) 24 AIPJ 146.
81 See Rebecca Giblin-Chen, 'On Sony, Streamcast, and Smoking Guns' (2007) 29(6) EIPR 215, 224; Peter Cane, 'Mens Rea in Tort Law' (2000) 20 OJLS 533; Avihay Dorfman and Assaf Jacob, 'Copyright as Tort' (2011) 12(1) Theoretical Inquiries in Law 59.
82 See e.g. Kylie Pappalardo, 'Duty and Control in Intermediary Copyright Liability: An Australian Perspective' (2014) 4(1) IP Theory 9.
83 See Stephen Perry, 'The Moral Foundations of Tort Law' (1992) 77 Iowa L. Rev. 449, 513; Goldberg and Zipursky (n. 12) 21; Cane (n. 12) 53–4; Bernard Weiner, Judgements of Responsibility: A Foundation for a Theory of Social Conduct (Guilford Press 1995) 7–8.
84 See Chappel v Hart (1998) 195 CLR 232, 283–4 (Hayne J) (Aus.); March v E. & M.H. Stramare Pty Ltd (1991) 171 CLR 506, 532 (Deane J) (Aus.); Richard Wright, 'The Grounds and Extent of Legal Responsibility' (2003) 40 San Diego L. Rev. 1425, 1494; Jane Stapleton, 'Choosing what we mean by "Causation" in the Law' (2008) 73 Missouri L. Rev. 434, 471–4; H.L.A. Hart and Tony Honoré, Causation in the Law (OUP 1985) 106, 114; David Hamer, '"Factual causation" and "scope of liability": What's the difference?' (2014) 77(2) Modern L. Rev. 155, 170–1; Stapleton, 'Legal Cause: Cause-in-Fact and the Scope of Liability for Consequences' (n. 7) 961; Richard Epstein, 'A Theory of Strict Liability' (1973) 2(1) J. of Legal Studies 151, 190–1.


that without access to the internet, service, or platform, the user would not have been able to post the content that has infringed copyright, defamed another, or otherwise caused harm. But merely providing the facilities is said not to be enough to ground liability. Something more is always required, but the way in which the case law is developing makes it very difficult to identify what exactly that means. What the courts seem to be doing in these cases is confusing or merging the assessment of whether an intermediary ought to act in response to the risk of harm with the standard of care that might be expected of the intermediary once that duty to act is established. Under general tort law principles, after a duty is established, courts ask: 'What is the standard of care that a reasonable person in the defendant's position would exercise in the circumstances?' This question sets a benchmark against which to determine whether the defendant's conduct falls short. The standard of care exhibited by a reasonable person will take into account any special skills or knowledge that a person in the defendant's position would have.85 Across a range of intermediary liability cases, by contrast, instead of treating knowledge as a factor that informs the standard of care, the courts are treating knowledge (or allegations of harm) as a factor that informs the imposition of a duty, like reasonable foreseeability. This is a lot of work for the concept of knowledge to do, and it creates a great deal of uncertainty around the extent of liability when intermediaries create systems that enable others to post content or communicate. Knowledge is a poor limiting device in intermediary liability law. Whether an intermediary has sufficient knowledge of potential harms to be liable is often very difficult to ascertain.
Where knowledge is imputed at the design stage on the basis that the system is likely to be used to cause harm, the court must come to some determination of what degree of harm, or likelihood of harm, is sufficient.86 Meanwhile, when courts infer knowledge from editorial control, they actively discourage moderation in a way that encourages, rather than limits, risky behaviour. When knowledge is provided on notice, it is often imputed on the plaintiff's assertion of wrongdoing and nothing more.87 'Knowledge', in a substantive sense, requires more than mere awareness of potentially problematic content—it requires intermediaries to make a judgment about whether the material falls within the ambit of the relevant law. At the time that an intermediary is put on notice, it is usually only through an allegation of harm, and it is sometimes difficult for an intermediary to evaluate whether a claim is likely to be made out. In defamation, for example, this may require an evaluation of whether evidence of the truth of an

85 See Imbree v McNeilly (2008) 236 CLR 510 [69] (Aus.); Heydon v NRMA Ltd (2000) 51 NSWLR 1, 117 (Aus.); Roe v Minister of Health [1954] 2 QB 66 (UK); H v Royal Alexandra Hospital for Children (1990) Aust. Torts Rep. 81-000 (Aus.); Amanda Stickley, Australian Torts Law (Lexis Nexis Butterworths 2013) 226, 228–9; Donal Nolan, 'Varying the Standard of Care in Negligence' (2013) 72 CLJ 651, 656.
86 See e.g. Universal Music Australia Pty Ltd v Cooper (2005) 150 FCR 1 [84] (Aus.); Universal Music Australia Pty Ltd v Cooper (2006) 156 FCR 380 [149] (Aus.); Universal Music Australia Pty Ltd v Sharman License Holdings Ltd (2005) 222 FCR 465 [181]–[184] (Aus.).
87 See e.g. Pokémon (n. 64) [54] (Pagone J).

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

The Liability of Australian Online Intermediaries   249

imputation can be gathered; in copyright, the existence of a fair dealing defence or a licence88 can be a difficult question of fact and law. If the notice relates to the transitory communications of users, an intermediary may have no ability to evaluate whether the past conduct is actually wrongful.89

3. Conclusions

This chapter has explored the principles by which online intermediaries are held liable for third party actions across a range of legal areas in Australia: defamation, vilification, copyright, and content regulation. Modern intermediary liability law is not simply an admonishment against consciously helping others to commit legal wrongs; it is an expectation that in appropriate circumstances intermediaries will proactively prevent wrongdoing by others, sometimes by designing systems that seek to prevent wrongful behaviour. In many of the legal areas canvassed in this chapter, courts and legislators ask intermediaries such as ISPs, search engines, website hosts, and technology developers to take some responsibility for the acts of users that occur over their networks and services. These questions are fundamentally about responsibility. However, the ways in which legal rules and principles have developed to ascribe responsibility to online intermediaries have not always been clear or coherent. In many ways, the push for greater online enforcement and intermediary regulation has not been based on responsibility at all, but on capacity—the capacity to do something when faced with knowledge that harm may otherwise result. We argue that much of the uncertainty at the heart of intermediary liability law stems from the merger and confusion of the concepts of capacity and responsibility. Australia’s current laws lack clear mechanisms for disentangling these concepts and distinguishing those intermediaries that are closely involved in their users’ wrongful acts from those that are not. Many of the areas of intermediary liability covered here have their origins in tort law.
Responsibility theory in tort law tells us that a person will be responsible for a harmful outcome where his or her actions caused or contributed to the harm (causation) and where harm was the foreseeable result of those actions such that the person might have acted to avoid the harm but did not (fault).90 This is more than liability based on knowledge of wrongdoing and a failure to act. It requires more active involvement than

88  See e.g. Viacom Int’l Inc. v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) (US), where some of the allegations of infringement turned out to be authorized; see further Zahavah Levine, ‘Broadcast Yourself’.
89  In some cases, e.g. transitory communications, an intermediary has no way of evaluating an allegation of infringement: see Nicolas Suzor and Brian Fitzgerald, ‘The Legitimacy of Graduated Response Schemes in Copyright Law’ (2011) 34(1) UNSWLJ 1.
90  See Perry (n. 12) 513; Goldberg and Zipursky (n. 12) 20–21; Voyiakis (n. 12) 458; Denton (n. 6) 127; Cane (n. 12) 53–4.


that—normally, a clear and direct contribution to the resulting wrong. Our review of Australian intermediary liability law across different doctrines reveals that often, in asking whether intermediaries are liable, courts have been asking what intermediaries can do to prevent harm. But, in most cases, courts have not closely examined the intermediary’s causal role in the wrong to determine whether the intermediary indeed ought to be held responsible. The result is that Australian law has sometimes ascribed liability without first establishing fault.


Chapter 13

From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia

Kyung-Sin Park

Germany’s social media law sounds innocent to the extent that it obligates social media companies to take down only ‘unlawful content’ and holds them liable for failure to do so.1 However, the lessons from Asia, where liability was imposed for failure to take down notified unlawful content, speak otherwise. Such an innocent-sounding rule has profound repercussions. It is no wonder that Germany’s impact on other countries was discussed at a 2018 conference on ‘whether online speech regulation laws in democratic countries could lead to a damaging precedent for illiberal states to also regulate online speech in more drastic manners’.2 Unless we want to paralyse the freedom of unapproved uploading and viewing, and therefore the power of the internet, an intermediary that cannot possibly know who posts what content should not be held responsible for defamation or copyright infringements committed via third party content hosted on its services. If intermediaries

1 See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.).
2  ‘Beyond Intermediary Liability: The Future of Information Platforms’, Yale Law School Information Society Project (13 February 2018).

© Kyung-Sin Park 2020.


are held liable for this unknown content, they will have to protect themselves either by constantly monitoring what is posted on their services (i.e. general monitoring obligations) or by approving all content prior to its being uploaded. If that occurs, it can be said that when a posting remains online, it remains online with the acknowledgement and tacit approval of an intermediary that was aware of the posting and yet did not block it. The power of the internet—the unapproved freedom to post and download—will be lost. The United States made headway by stipulating that no ‘interactive computer service’ should be considered the speaker or publisher of third party content.3 Some think that went too far because it shielded intermediaries from liability even for content they themselves clearly knew to be unlawful. In response, a ‘safe harbour’ regime could have been set up to exempt from liability only content not known about. The EU did just that, although it added a requirement to act on such knowledge in order to obtain immunity,4 while the US Digital Millennium Copyright Act (DMCA)—perhaps heeding calls from Hollywood, the biggest rightholder group in the world—went further by limiting immunity to cases where intermediaries take down all content on notification, regardless of whether the content is illegal or the intermediaries are aware of that fact.5 The DMCA, which incentivizes intermediaries into automatic takedowns, is often criticized,6 whereas the EU model makes immunity available to intermediaries who receive notification but are not convinced of its substance.

3  See the Communications Decency Act [1996] 47 USC § 230 (US). 4  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (e-Commerce Directive) [2000] OJ L178/1, Art. 14. 5  See the Digital Millennium Copyright Act of 1998, 17 USC § 512(c) (US). Importantly, the notice-and-takedown safe harbour is not applicable to content where intermediaries have actual knowledge of its illegality even before and without notice being given by a rightholder or any other person.
In any event, notice-and-takedown safe harbours have spread.7 In this chapter, we will compare the self-proclaimed ‘safe harbours’8 of six major countries in Asia—China, India, Japan, South Korea, Indonesia, and Malaysia—against the EU- and US-style approaches. These countries were selected for their religious diversity ranging from Islam to Confucianism, a combined population of 3.4 billion out of the 4.5 billion total on the continent,9 and extremely high internet-penetration rates.10 Confusion abounds.

6  See Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act: Summary Report’, Electronic Frontier Foundation, Takedown Hall of Shame; Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24 Harv. J of L. & Tech. 171. 7  See Daniel Seng, Comparative Analysis of the National Approaches to the Liability of Internet Intermediaries (WIPO 2010). 8  ibid. (noting e.g. that ‘the [DMCA] safe harbor provisions served as the template for the enactment of similar defenses in the European Union, the People’s Republic of China, and India’). 9 See World Population Review, Asia. 10  See Internet World Stats.


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   253

Contrary to the term ‘safe harbour’, some civil society organizations have characterized China’s system as ‘strict liability’.11 Maybe the truth lies in between, as we shall discover.

1. China

1.1  Basic Laws and Regulations

As in other countries, the default pre-safe-harbour position of Chinese law is that intermediaries who have the requisite knowledge of infringing activity on their services will be held jointly and severally liable. In addition to that general rule, Article 36 of the Law of Tort Liabilities (LTL) controls intermediary liability in China.12 Article 36 paragraph 3 of the LTL restates the above default position, and paragraph 2 stipulates that when a person whose rights are being infringed sends notification of that fact to the intermediary, which then fails expeditiously to take the necessary measures on receiving that notification, the intermediary is jointly and severally liable with the direct infringer to the extent that any further damages have been caused by its inaction.13 These provisions are considered by Chinese commentators to have been ‘borrowed’14 from the Regulation on the Protection of the Right of Communication Through Information Network of the People’s Republic of China,15 which was in turn a Chinese attempt to adapt the DMCA safe harbour.16

11  See Article 19, ‘Internet Intermediaries: Dilemma of Liability’ (2019); Center for Democracy and Technology, ‘Shielding the Messengers: Protecting Platforms of Expression and Innovation’ (December 2012). 12  Promulgated by the National People’s Congress Standing Committee (NPCSC) on 26 December 2009, effective 1 July 2010, in 2010 Standing Comm. Nat’l People’s Cong. Gaz. 4 (Ch.). 13  ibid. Art. 36 (stating: . . . Paragraph 2. Where a user engages in online infringing activity, a right holder so harmed has a right to notify the corresponding service provider, requesting the latter to take necessary measures, such as deleting, screening, or removing references or links to the online infringing material or activity.
Where the service provider upon receipt of the notification fails to take prompt measures, the service provider shall be jointly liable for the harm resulting from this failure. Paragraph 3. When the service provider knows that a user injures another person’s rights and interests and does not take necessary measures, the service provider shall be jointly liable with the user). 14 See Huaiwe He, ‘Online Intermediary Liability for Defamation under Chinese Laws’ (2013) 7. 15  See Regulation on the Protection of the Right of Communication Through Information Network of the People’s Republic of China, promulgated by the State Council on 10 May 2006, effective 1 July 2006 in St Council Gaz. no. 468, amended 16 January 2013, effective 1 March 2013 in St Council Gaz. no. 634 (Ch.) (hereafter RCIN Regulation). Regulations issued by the executive branch State Council of the PRC are lower in standing than laws passed by the National People’s Congress (NPC) or its Standing Committee, the NPCSC. What is unique is that the Regulations are binding on all courts, provided that courts are able to disregard them in the event that they find the Regulations contrary to higher laws. 16  cf. RCIN Regulation (n. 15) Art. 14 (‘Where a right owner believes that a work, performance, or sound or video recording involved in the service of a network service provider who provides information


However, what is glaringly absent from the text of LTL 36 is a clause stating that intermediaries ‘shall be exempt’ from liability in certain circumstances, which is the central predicate of section 512 of the DMCA. This may come as a surprise to Chinese commentators, who typically reason as follows:

Upon receiving notification, if the ISP expeditiously takes measures to disable the allegedly infringing online material, it may be exempt from liability. For instance, in Gengxin Chen et al v Baidu (Beijing) Network Tech Co Ltd,17 after receiving a notification from the plaintiff, Baidu removed the defaming hyperlink from search results. While finding that the plaintiff’s reputation was damaged, the court held that the damage was not caused by Baidu and thus Baidu was not liable.18

Notice the phrase ‘may be exempt’. Although the entire paragraph seems to conform with a normal description of DMCA-like notice and takedown, the truth is that there is no phrase ‘may be exempt’ in the relevant statute, nor a phrase ‘shall be exempt’. There is not even an attempt to create an elective safe harbour, let alone a mandatory one. In contrast, the only exemption stated in the statute works in the opposite direction. According to the Supreme Court’s Judicial Interpretation of the LTL, the intermediary ‘shall be’ exempt from liability for removing content on notification to the user who posted the content.19 On the request of the poster, the intermediary is required to provide the content of the notification so that the poster is able to bring a lawsuit against the complainant to restore the removed material or to seek damages for its removal.20 However, there is no obligation to inform the poster that the posting has been removed. Although the RCIN itself is properly phrased after the liability-exempting language of the DMCA, down to emulating the rule whereby the exemption from liability to the poster is bartered with restoration on the poster’s counter-notification,21 RCIN 14–17 is often interpreted

storage space or provides searching or linking service has infringed on the right owner’s right of communication through information network, or that the right owner’s electronic rights management information attached to such work, performance, or sound or video recording has been removed or altered, the right owner may deliver a written notification to the network service provider, requesting it to remove the work, performance, or sound or video recording, or disconnect the link to such work, performance, or sound or video recording.
The written notification shall contain the following particulars: (1) the name, contact means and address of the right owner; (2) the title and network address of the infringing work, performance, or sound or video recording which is requested to be removed or to which the link is requested to be disconnected; and (3) the material constituting preliminary proof of infringement. The right owner shall be responsible for the authenticity of the written notification’). 17  See Jiangsu Province Nanjing City Interim Ct Gengxin Chen et al. v Baidu (Beijing) Network Tech Co. Ltd (2014) (Ch.) (emphasis added). 18  He (n. 14) 9. 19  Judiciary Interpretation for Statutory Application in Adjudicating Torts to Personal Rights Through Information Networks promulgated by the Supreme People’s Court on 23 June 2014, effective 10 October 2014, in Sup. People’s Ct Gaz. no. 12, Art. 7(1) (Ch.) (Judiciary Interpretation for Online Torts). 20  ibid. Arts 7(2) and 8. 21  See RCIN Regulation (n. 15) Art. 16 (a network service provider is not liable for damages by reason of storage of works, performance, sound recordings/video recordings (collectively called ‘material’) at the direction of a user who makes the material available to the public through the information network, if the service


as a liability-imposing rule (or at least one under which a failure to obtain the safe harbour directly translates into liability), as you will see in cases such as Fanya and Pomoho. This only adds to the deep divergence between section 512 of the DMCA and LTL 36.

1.2  Liability-Imposing v Liability-Exempting

Overall, the belief of some commentators that the Chinese LTL resembles the DMCA’s notice and takedown is misplaced, although it is less so for the RCIN. The DMCA was not enacted to specify when intermediaries would be held liable for third party content; it was enacted to specify when intermediaries would not be held liable. Therefore, the fact that an intermediary does not take corrective action under the DMCA is in itself not a cause for liability.22 Its failure to take corrective action under the DMCA only starts a substantive analysis of joint and several liability under general tort law. Intermediaries are not required to take corrective action under the DMCA but, if and when they choose to do so, they receive the legal benefit of exemption from joint and several liability for the infringing content. In contrast, LTL 36 specifies when intermediaries will be held liable when it states ‘where the service provider on receipt of the notification fails to take prompt measures, the service provider shall be jointly liable for the harm resulting from this failure’. Such a transformation of a liability-exempting regime into a liability-imposing regime has tremendous consequences for intermediaries’ liability exposure and therefore their behaviour. First, once a liability-imposing regime has been created—depending on how broadly intermediaries’ obligations to respond under the notice-and-takedown regime are interpreted—intermediaries can be held liable for unknown content.
The Supreme People’s Court issued the Judiciary Interpretation for Statutory Application in Adjudicating Torts to Personal Rights Through Information Networks, of which Article 5 requires an

provider meets the following conditions: (1) clearly represents itself as a provider of storage services and posts its name, the contact person, and its web address; (2) does not modify the stored material; (3) does not know and has no reasonable ground to know that the stored material is infringing; (4) does not obtain financial benefit directly from the user who makes the stored material available to the public; and (5) upon receiving notification from a copyright owner, deletes the stored material according to the Regulation); ibid. Art. 17 (‘[u]pon receiving a written explanatory statement delivered by a service recipient, a network service provider shall promptly replace the removed work, performance, or sound or video recording, or may replace the disconnected link to such work, performance, or sound or video recording and, at the same time, transfer the written explanatory statement delivered by the service recipient to the right owner. The right owner shall not notify the network service provider anew to remove the work, performance, or sound or video recording, or to disconnect the link to such work, performance, or sound or video recording’). Translations excerpted from He (n. 14). Some commentators believe that Chinese judicial practice as well as these provisions on copyright have lived up to their promise of delivering a liability-exempting regime. See Jie Wang, ‘Regulating Hosting ISPs’ Responsibilities for Copyright Infringement: The Freedom to Operate in the US, EU and China’, Ph.D. Dissertation at Maastricht University, October 2016. 22  See DMCA, s. 512(l).


effective notification to include the following: (1) the identity of the complainant; (2) the web address which is the target of remedial measures or the information necessary to identify the infringing material; and (3) the grounds for removing the target material.23 One company’s complaint stating that ‘we learned that there are comments posted on the Yudao Search Engine stating that our water is of poor quality’, without specifying the comments, was not considered sufficient to trigger the notice-and-takedown obligations.24 However, when Baidu refused to respond to a copyright holder’s (Fanya) request to address illegal copies of ‘about 700 songs’ without any specified URL, the Supreme Court still held Baidu responsible for not requesting Fanya to produce the web addresses.25 Although Baidu v Fanya was an RCIN 16 case, intermediaries’ obligations are considered no more lenient for personal rights than for copyright, and it can easily be inferred that an intermediary receiving an LTL 36 notice is under a similar obligation affirmatively to require the complainant to supplement its first defective notice. Indeed, intermediaries’ obligations under LTL 36 may be even more stringent than under RCIN 16. In one instance, even though an intermediary, on receiving an incomplete notice, did request a URL,26 the court said that the intermediary should have taken earlier proactive measures to remove the insulting comments even without the web address having been notified. The impact of Fanya-like cases is clear: once intermediaries are held liable for content not effectively notified, intermediaries will start to take down all notified content, regardless of the substance or procedural efficacy of the notice. When intermediary liability rules are liability-exempting, intermediaries’ compliance with the intermediary liability regime gives certainty that they will not be held liable under general tort law.
In contrast, a liability-imposing regime only adds another layer of liability on top of general tort liability. Intermediaries not liable under general tort law (e.g. for lack of previous knowledge) may still be liable for not acting on notification and, conversely, compliance with notice and takedown does not guarantee that an intermediary will not be held liable. Under this regime, intermediaries are bound to over-enforce. Secondly, LTL 36 imposes liability not only in cases of non-response but also in cases of knowledge, and does so explicitly: ‘when the service provider knows that a user injures another person’s rights and interests and does not take the necessary measures, the service provider shall be jointly liable with the user’. Although section 512 of the DMCA also has a red-flag component whereby exemption for infringing content is not recognized when the infringement is ‘apparent’ to the intermediary, such knowledge only disarms the exemption. It does not, in itself, become a basis for liability as in the

23  See Judiciary Interpretation for Online Torts (n. 19) Art. 5. 24  See Shanghai No. 2 Interim People’s Ct Michun (Shanghai) Beverage & Food Co. Ltd v Qihu (Beijing) Tech. Co. Ltd et al. [2015] [pkulaw.cn] CLI.C.4275913 (Ch.). 25  See Sup. People’s Ct Fanya e-Commerce Co. Ltd v Baidu (Beijing) Network Tech. Co. Ltd [2009] [pkulaw.cn] CLI.C.1766439 (Ch.). 26  See Nanjing City Gulou Dist Ct Chen Tangfa v Blog Information Tech. (Hangzhou) Co. Ltd [2006] [pkulaw.cn] CLI.C.1436674 (Ch.).


case of LTL 36. It is in this sense that LTL 36 is a liability rule while section 512 of the DMCA is an exemption rule. The liability rule may be a restatement of general tort liability but, as such, the end result brings us back to face the perils of the pre-safe-harbour days: no matter how cautious courts claim to be in finding constructive knowledge,27 counter-examples abound. For instance, a video platform was held liable under the RCIN for not proactively looking out for the release of a famous film,28 and Baidu was held liable under the LTL for not proactively disabling nude photographs of a popular celebrity alleged to be having an affair,29 all without any notification from the rightholders. It is a very serious threat if liability can be incurred for infringing someone’s right even if no one cries foul, especially where so much information is automatically delivered to so many people without anyone’s knowledge. Intermediaries will be tempted into general monitoring or pre-approval. It is difficult to maintain a happy medium since, once some level of editing begins, it is deemed that there is more capacity for editing, and eventually liability will be incurred for third party content. Article 9 of the Judiciary Interpretation of Online Torts instructs courts deciding on knowledge issues to consider ‘the technological capabilities of the internet service provider [ISP] to take measures to prevent infringement and what the ISP in fact did’.30 The more measures are taken to prevent unlawful content, the more content an ISP will be held responsible for. Hence a liability trap.
Thirdly, even if LTL 36 clearly states that intermediaries will be held responsible only for unlawful content, and even if the requirements of knowledge and notification are carefully guarded, it is very difficult for intermediaries to know in advance whether certain content is unlawful, and they will seek shelter by taking down even lawful content.

27  See Zhejiang Province Shaoxing City Dist. Ct Ma Weiying v Weimeng Kechuang (Beijing) Network Tech. Co. Ltd [2012] [pkulaw.cn] CLI.C.2680424 (Ch.) (finding that knowledge on the part of the ISP must be assessed according to the cognizant capability of a normal reasonable person. In the present case, the disputed blog did not include any obvious ‘vilifying, insulting, denigrating or tarnishing words . . . the defendant could not learn that the disputed blog infringed any private rights’). Translations excerpted from He (n. 14). 28 See Beijing no. 2 Interim People’s Ct Huayi Bros Media Group v Pomoho [2009] [pkulaw.cn] CLI.C.188931 (Ch.). 29  See Shanghai no. 2 Interim People’s Ct Yin Hong v Baidu (Beijing) Network Information Co. Ltd [2009] [pkulaw.cn] CLI.C.1765615 (Ch.). 30  See Judiciary Interpretation of Right of Communication through Information Networks, Art. 9 (Ch.)
(‘Courts should consider the following factors in assessing whether an ISP should know an Internet user was using its service to infringe another’s right of communication through information networks: (1) The nature of the Internet service involved, its probabilities of being used to infringe the right, and the capabilities of the ISP to police the service; (2) The nature and publicity of the disputed work of authorship, performance or video/audio recording, and whether they can be easily characterized as infringing materials; (3) Whether the ISP selected, edited, modified or recommended the disputed work of authorship, performance or video/audio recording; (4) Whether the ISP took reasonable measures to prevent infringement; (5) Whether the ISP designated convenient procedures to receive notifications of infringement and to take expeditious measures to check infringement upon notification; (6) Whether the ISP took reasonable measures towards repeated infringing activities by the same Internet user’). Translations excerpted from He (n. 14).


Now, one may wonder about the practical consequences of this result. Even under the DMCA, a great deal of lawful content is taken down because intermediaries have chosen to benefit from the exemption provision. Under the LTL, they are also incentivized to take down lawful content because they do not know in advance whether the content is lawful or unlawful. In both scenarios, the intermediaries are incentivized to take down content: the key difference is that they choose to do so under the DMCA while they are effectively coerced into doing so to avoid liability under the LTL. The difference manifests itself on put-back. If the author of content attempts to prove its legitimacy, intermediaries under an exemption rule will have the leeway to reinstate the content because losing the exemption is not directly equivalent to being liable. However, intermediaries under a liability rule will not have that leeway because being wrong about the lawfulness of the content directly translates into liability.

1.3 Conclusions

The Chinese intermediary liability regime for defamation is neither strict liability nor a true safe harbour. It is limited liability in the sense that intermediaries are held liable only for (i) unlawful content that (ii) they are aware of. However, instead of formulating an exemption rule (e.g. ‘shall not be liable for unknown or lawful content’), China created a liability rule that imposes liability (a) for known unlawful content or (b) for unlawful content notified by the rightholder. Such a liability-imposing rule suffers from the interpretation of the courts gravitating towards broad conceptions of knowledge and effective notification, unfairly holding intermediaries liable and naturally incentivizing them into proactive censorship. Also, even if knowledge and effective notification are strictly interpreted, intermediaries are likely to act on lawful content as well, erring on the safe side of deleting the notified content.

2. India

2.1  Basic Laws and Regulations

India’s intermediary safe harbour provisions are in the Information Technology (IT) Act as amended in 2008. Under section 79 of the IT Act, intermediaries can avail themselves of a safe harbour as long as they did not have ‘actual knowledge’ of the third party content and they have complied with the various due diligence requirements promulgated by the government in 2011 (hereafter Intermediary Guidelines 2011).31 Intermediaries are required to take down such content within thirty-six hours of receiving ‘actual knowledge’

31  See IT Act, s. 79 (India) (the amendment in 2008 provided immunity for intermediaries which did not initiate or interfere with the transmission, stipulated that the government would provide guidelines


(including a complaint from a user). In the landmark judgment of Shreya Singhal, the Supreme Court interpreted the requirement of ‘actual knowledge’ to mean receipt of a court order by the intermediary.32 The court clearly stated that intermediaries should not be made to take down content until they receive a judicial order requiring them to do so.33 It is an anomaly to require knowledge to be established by a court order. We shall see in the following how this transpired. Some commentators seem to think that immunity and liability are co-extensive (i.e. that the minimum requirements for obtaining immunity are identical to the minimum requirements for avoiding liability), and therefore that losing immunity is equivalent to establishing liability. Hence, ‘[w]hether intermediaries might be exposed to secondary liability in the context of online defamation depends on whether or not they qualify for immunity under the IT Act.’34 However, a close reading suggests otherwise.

Exemption from liability of intermediary in certain cases—(1) Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-section (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him. (2) The provisions of sub-section (1) shall apply if—(a) the function of the intermediary is limited to providing access to a communication system over which information made available by third parties is transmitted or temporarily stored or hosted; or (b) the intermediary does not—(i) initiate the transmission, (ii) select the receiver of the transmission, and (iii) select or modify the information contained in the transmission; (c) the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf.
(3) The provisions of sub-section (1) shall not apply if—(a) the intermediary has conspired or abetted or aided or induced, whether by threats or promise or otherwise in the commission of the unlawful act; (b) upon receiving actual knowledge, or on being notified by the appropriate Government or its agency that any information, data or communication link residing in or connected to a computer resource controlled by the intermediary is being used to commit the unlawful act, the intermediary fails to expeditiously remove or disable access to that material on that resource without vitiating the evidence in any manner.35

As can be seen, the text shows that the safe harbour is liability-exempting, not liability-imposing. It does not set out when intermediaries will be held liable but when intermediaries will not be held liable. It is therefore not the case that an intermediary’s liability for third party content depends on whether it has obtained immunity under section 79 of the IT Act, because there can be situations where the intermediary fails to obtain immunity and yet is not held liable.

that would apply to intermediaries as well as requiring intermediaries to expeditiously take down content once they received ‘actual knowledge’).
32 Shreya Singhal (decided May 2015) 12 SCC 73, s. 117 (Ind.).
33 Sunita Tripathy, Vasudev Devadasan, and Bani Brar, ‘Understanding the Recent Developments in the Law Regulating Cyber Defamation and the Role of the Online Intermediary in India’ (undated).
34 ibid.
35 IT Act, s. 79 (Ind.).

2.2 Liability-Imposing v Liability-Exempting

The problem arises from the interpretation of the Intermediary Guidelines 2011, which state in paragraph 3 as follows:

Due diligence to be observed by intermediary—The intermediary shall observe following due diligence while discharging his duties, namely: . . . (4) The intermediary, on whose computer system the information is stored or hosted or published, upon obtaining knowledge by itself or been brought to actual knowledge by an affected person in writing or through email signed with electronic signature about any such information as mentioned in sub-rule (2) above, shall act within thirty six hours and where applicable, work with user or owner of such information to disable such information that is in contravention of sub-rule (2). Further the intermediary shall preserve such information and associated records for at least ninety days for investigation purposes.

Now, what makes this provision confusing is that section 79 of the IT Act merely seems to state the requirements of immunity, not of liability. One such requirement is the observance of due diligence. On this reading, due diligence is not an obligation but a choice, relevant only if intermediaries wish to claim immunity. However, the Intermediary Guidelines 2011 may have their own binding force. One civil society commentator seems to think so when describing the Intermediary Guidelines 2011 as ‘the rule that mandates the intermediary to disable the content’ and as ‘endowing an adjudicating role to the intermediary in deciding questions of fact and law, which can only be done by a competent court’.36

It is noticeable that the applicability of immunity does not decide whether intermediaries will be held liable. The inapplicability of immunity does not in itself mean that intermediaries will be held liable, since the claim must still go through the regular pre-safe-harbour tort analysis. The facts that disqualify an intermediary from immunity may also help to prove its liability. For instance, failure to take action after receiving notification will be a factor contributing to its liability. However, failure to take action will be only one of the factors deciding the liability question. Also, the language ‘endowing an adjudicating role’ makes sense only if intermediaries are held liable for getting it wrong (e.g. retaining unlawful content after mistaking it for being lawful). The Intermediary Guidelines 2011 do not impose a blanket requirement that intermediaries must remove all unlawful content on knowledge of the same and must do so correctly; they simply grant the immunity of section 79 of the IT Act only to intermediaries who remove all unlawful content once they have knowledge. Intermediaries getting it wrong are not immediately held liable for getting it wrong. Nonetheless, the prevailing view of the Intermediary Guidelines 2011 is that they are liability-imposing, not liability-exempting.

36 Software Freedom Law Center, ‘Intermediaries, users and the law—Analysing intermediary liability and the IT Rules’ (2012).

2.3 Dialectical Turn of Singhal

In an unexpected turn, the court in Singhal accepted the liability view of section 79 of the IT Act and the Intermediary Guidelines 2011, and dissolved what it believed to be a problem in the only relevant paragraph of the case as follows:

Section 79(3)(b) has to be read down to mean that the intermediary upon receiving actual knowledge that a court order has been passed asking it to expeditiously remove or disable access to certain material must then fail to expeditiously remove or disable access to that material. This is for the reason that otherwise it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not. We have been informed that in other countries worldwide this view has gained acceptance, Argentina being in the forefront.37 Also, the Court order and/or the notification by the appropriate Government or its agency must strictly conform to the subject matters laid down in Article 19(2). Unlawful acts beyond what is laid down in Article 19(2) obviously cannot form any part of Section 79. With these two caveats, we refrain from striking down Section 79(3).38

Section 512 of the DMCA does not require takedown notices to come from courts. The whole idea of the DMCA safe harbour is that intermediaries do not have to decide correctly on the substantive legitimacy of takedown requests. As long as they comply with them (and comply with restoration requests with the same level of automation), they will not be held responsible either for the unlawfulness of any third party content or for its removal. Section 79 of the IT Act is also an exemption rule on its surface: as long as intermediaries comply with the due diligence rule in the Intermediary Guidelines 2011 (i.e. take down on notification), they will not be held liable for third party content. If we take this exemption view of section 79 of the IT Act, the Supreme Court’s holding does not make sense. Under a true safe harbour like section 512 of the DMCA, Facebook and Google are not even supposed to try to decide whether the notified content is lawful. They are expected to claim exemption for all content simply by complying with takedown requests (and with restoration requests). Likewise, intermediaries under section 79 do not have to decide whether notified content is unlawful: they can claim exemption for all content simply by removing content they come to know to be unlawful. They do not have to make any effort to be accurate, because being inaccurate does not mean that they will be held liable. The Intermediary Guidelines 2011 state that intermediaries ‘shall act within 36 hours’ but only if they wish to claim exemption—that is, if the exemption-rule view of section 79 is correct.

The Singhal court may instead have taken a liability-rule view of section 79: that this provision holds intermediaries liable if they do not comply with due diligence rules under the Intermediary Guidelines 2011. If so, it may have been right to worry about private, non-judicial notifications pulling intermediaries into liability. Such a liability-rule view is, however, contrary to section 512 of the DMCA, which explicitly states that failure of immunity does not mean liability39 and whose optional character has been established by the legislators.40

There are obviously differences between section 512 of the DMCA and section 79 of the IT Act. One such difference that may have concerned the Singhal court is that section 79 provides an incentive in only one direction, towards taking down as much content as possible, while section 512 of the DMCA also provides incentives for reinstating content. The Singhal court may have been worried that too much content would be taken down if intermediaries operating under this unidirectional incentive faced difficulty in deciding on the legitimacy of content. In the end, what is important is that the Singhal court tried to undo what it believed to be a problem and created one of the strongest safe harbours, whereby intermediaries are shielded from liability even if they refuse to take any action on non-judicial takedown requests.

37 See Argentinean Supreme Court, Rodriguez M. Belen v Google y Otro s/daños y perjuicios (29 October 2014) R.522.XLIX (Arg.). One can only think that this case is what the Indian Supreme Court had in mind, but the Argentinean decision requires a court order as a prerequisite for imposing liability, not for exempting it.
38 Shreya Singhal (n. 32) 117.
Indeed, stories abound, in relation to section 230 of the US Communications Decency Act (CDA), of platforms refusing to take down content and not being held liable until ordered by a court to do so.

2.4 Conclusions

India’s section 79 ‘safe harbour’ is properly configured as an exemption rule. However, the enforcement rule of section 79 and the Intermediary Guidelines 2011 impose affirmative obligations to take down prohibited content on notification. The Singhal court may have been worried about the censorship-incentivizing effects of section 79, or may have taken the liability-rule view of the Intermediary Guidelines 2011, and decided to restrict the action-triggering notifications to court decisions only. The end result is one of the ‘safest’ harbours in the world, where intermediaries do not have to take down anything unless the courts find the content to be unlawful. This is similar to section 230 of the CDA, under which intermediaries do not have to take down anything unless ordered by a court to do so.

39 Falling outside the safe harbours does not incur liability for infringement. See 17 USC § 512(l) (‘[t]he failure of a service provider’s conduct to qualify for limitation of liability under this section shall not bear adversely upon the consideration of a defense by the service provider that the service provider’s conduct is not infringing under this title or any other defense’).
40 See Senate Report 105-190—The Digital Millennium Copyright Act of 1998, 55 (‘[n]ew section 512 does not define what is actionable copyright infringement in the online environment, and does not create any new exceptions to the exclusive rights under copyright law. . . . Even if a service provider’s activities fall outside the limitations on liability specified in the bill, the service provider is not necessarily an infringer; liability in these circumstances would be adjudicated based on the doctrines of direct, vicarious or contributory liability for infringement as they are articulated in the Copyright Act and in the court decisions interpreting and applying that statute, which are unchanged by section 512. In the event that a service provider does not qualify for the limitation on liability, it still may claim all of the defenses available to it under current law’).

3. Japan

3.1 Basic Laws and Regulations

The Diet passed the Provider Liability Law in 2001, which covers both copyright and non-copyright liability.41 According to section 3(1), when someone’s rights are infringed by a flow of information, intermediaries ‘shall not be held liable for the damage caused unless it is technically feasible to take measure to prevent the transmission of infringing information to unspecified persons’ and either: (1) the intermediary knew the fact of the infringement; or (2) the intermediary knew of the existence of the relevant information and there were ‘reasonable grounds’ for the intermediary to know the fact of the infringement. This is similar to the EU’s e-Commerce Directive, under which the intermediary is exempt for unknown content even if it is unlawful, as long as the intermediary has neither actual nor apparent knowledge, and, once it does have that knowledge, takes down the content.42 (The comparison to the e-Commerce Directive is especially appropriate since both laws are horizontal; that is, they cover all fields of liability, although we are comparing all safe harbours across national and sectoral lines anyway.) Japanese commentators are not satisfied:

[t]he ISP is faced with a delicate and hard choice . . . When the ISP received a complaint that some statements posted on its bulletin board was defamatory, however, it had no means to know whether the defendant was negligent or whether the defendant could assert immunity. As a result, for the fear that the ISP might be held liable if it refuses to remove defamatory statements, it is likely to remove them even if the defendant could not be held liable. Such an attitude of the ISP would wipe out many protected speech from the Internet. The Provider Liability Law is, I believe, unconstitutional so long as it subjects the ISP to liability when there is a reasonable ground to believe that it should have known that the information it was carrying invaded someone’s rights . . . I therefore believe that if someone wants the ISP to remove the defamatory statements from its bulletin boards, he or she should go to the court first seeking order to remove them and the court must review the claim and decide whether the defendant would be likely to be held liable before issuing an order to remove them.43

41 See Act on the Limitation of Liability for Damages of Specified Telecommunications Service Providers and the Right to Demand Disclosure of Identification Information of the Senders, Act no. 137 of 2001, amended by Act no. 10 of 2013 (Jap.) (English translation) (hereafter ‘Provider Liability Limitation Act’ or PLL).
42 To ensure that it is appropriate to compare an EU transnational law to a Japanese national law, it may be advisable to look at how Member State laws were shaped by the e-Commerce Directive, resulting in national laws such as Germany’s, which amended the Act on Utilization of Teleservices as follows: ‘Section 8 General Principles . . . (2) Providers as defined under Sections 9 to 11 shall not be obliged to supervise information they have transmitted or stored or to research to determine circumstances that indicate an illegal activity. Obligations to remove or block the use of information under binding law shall remain unaffected even if the provider is not responsible pursuant to Sections 9 to 11. . . . Section 11 Storage of Information (Hosting). Providers shall not be responsible for third-party information that they store for a user if (1) they have no actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent, or (2) acts expeditiously to remove or to disable access to the information as soon as they become aware of such circumstances’.

However, it is not the case that the Provider Liability Limitation Act ‘subjects [any] ISP to liability’ on the basis of the existence of ‘reasonable ground’ for an intermediary’s constructive knowledge. The Act merely states that an intermediary will not be held liable if it had neither actual knowledge nor constructive knowledge. The statement ‘if no notice, no liability’ is different from the statement ‘if notice, then liability’. The enforcement regulations confirm this: ‘[c]onsequently, liability for damage shall not always be incurred if the response is not made in accordance with these Guidelines. Conversely, the provider, etc. shall not be absolved from liability for damages even if the provider, etc. responds in accordance with these Guidelines.’44

3.2 Comparison With Other Safe Harbours

There may be concern that even a liability-exempting safe harbour could unduly favour takedown over retention,45 but such a critique takes on the entire class of safe harbour regimes, including Article 14 of the EU’s e-Commerce Directive and section 512 of the DMCA in the United States, albeit to differing degrees. The idea behind a safe harbour is to strike a balance away from section 230 of the CDA in order to make intermediaries do something about known unlawful content, or at least to make intermediaries do something without worrying that their actions will be used as evidence against them (e.g. evidence of their ability to take down content, which would justify penalizing their inaction). Any such system is bound to invite some criticism because it is designed to increase the censorship activities of intermediaries beyond doing nothing.

The three safe harbours (Art. 14 of the e-Commerce Directive, s. 512 of the DMCA, and s. 3 of Japan’s Provider Liability Limitation Act (PLL)) vary in the degree to which they incentivize intermediaries into censorial activities. Section 512 of the DMCA conditionally46 requires taking down all notified content expeditiously, while Article 14 of the e-Commerce Directive conditionally requires intermediaries to take down only known content expeditiously. On its surface, section 3 of the PLL seems similar: the ‘technical feasibility’ of removal also disqualifies intermediaries, which probably means that intermediaries should remove unlawful content as soon as they have knowledge (or constructive knowledge). On closer inspection, however, the Japanese safe harbour seems narrower than the EU’s in the following hair-splitting sense: broad constructive knowledge (e.g. ‘reasonable grounds’ for knowledge of illegality) is sufficient to disqualify an intermediary from the safe harbour, while in the EU only actual knowledge of illegality or apparent illegality is sufficient for disqualification. What is interesting is that the intermediary is also given exemption for removing unknown content.

43 Shigenori Matsui, ‘The ISP’s Liability for Defamation on the Internet—Japan’ (Seoul, South Korea, May 2015) (on file with the author) (emphasis added).
44 Provider Liability Limitation Act Guidelines Review Council, ‘Provider Liability Limitation Act Guidelines Relating to Defamation and Privacy’ (May 2002, amended December 2014) (English translation).
45 See Mark Lemley, ‘Rationalizing Internet Safe Harbors’, Stanford Public Law Working Paper 979836 (10 April 2007).
At the same time, according to section 3(2), the intermediary shall not be held liable to the originator of the information for taking measures to prevent the circulation of infringing information if: (1) such measures are reasonably necessary; or (2) the intermediary received a notification of infringement, relayed it to the originator, and did not receive any notice of non-consent within seven days of doing so. The reality is that it is unusual for intermediaries to be held liable for removing content in any event, with or without the PLL, so there is little incentive to give the originator notice of the proposed takedown. This is unlike the Canadian notice-and-notice system, under which notification to the author of a post is mandatory.47

3.3 Conclusions

Japan has a proper exemption-type safe harbour, but it is narrower in scope than the EU e-Commerce Directive because even ‘reasonable grounds’ short of apparent illegality become a basis for disqualifying exemption. However, it is broader in scope than the DMCA’s safe harbour in the sense that failure to act on notification does not in itself disqualify intermediaries from exemption; only failure to act on knowledge does so, although knowledge itself is interpreted broadly to include ‘constructive knowledge’.

46 i.e. if intermediaries want exemption.
47 See Copyright Act, s. 31.1 (Can.) (providing that OSPs are ‘exempt from liability when they act strictly as intermediaries in communication, caching and hosting activities’). Canada has already established section 230-type exemption for intermediaries in the area of copyright. In addition to and independently of this safe harbour, OSPs must comply with the notice-and-notice system (ss 41.25–41.26), whereby, on receipt of a notice of infringement, the intermediary service provider does not have to remove the allegedly infringing content but must forward the notice to the alleged infringer (s. 41.26(1)(a)) on penalty of an administrative fine.

4. Indonesia

On 30 December 2016, the Ministry of Communication and Informatics released Circular Letter 5 of 2016 on the Limitations and Responsibilities of Trade Platform Providers and Merchants through Electronic Commerce Systems in the Form of User-Generated Content, its first ever attempt at an intermediary safe harbour in any field of law.48 The ministry dubbed the circular Indonesia’s safe harbour policy for e-commerce platforms; accordingly, it does not apply to all intermediaries, only to those mediating the sale of goods and services. The ministry intends to follow the circular with the Ministerial Regulation on Safe Harbor Policy for User Generated Content, which is expected to contain more detail than the 2016 circular. However, the Regulation has not yet been released. The Ministry of Communications and Informatics has indicated that it will hold public consultations to collect input prior to the Regulation’s release.

The circular aims to clearly distinguish the roles and responsibilities of e-commerce platforms, users, and all other parties in the e-commerce ecosystem. It aims to establish a safety and reporting protocol for e-commerce platforms, as well as to define restricted content for both users and platforms, which includes (but is not limited to): negative content (e.g. pornography, gambling, violence, and goods/services deemed illegal by other legislation); intimidating content (e.g. goods/services depicting gore, blood, horrific accidents, and torture); violation of intellectual property rights; hacking and illegal access to electronic systems; provision of and/or access to drugs, addictive substances, and hallucinogenic substances; illegal weapons; trafficking of people and organs; and protected flora and fauna.49 The circular also obligates e-commerce platforms to include a mechanism which allows users to report discovered illegal goods and services.
When a platform is alerted to illegal goods or services, it must take them down within one, seven, or fourteen days, depending on the severity of the content. Content deemed harmful to national security and human health must be taken down within one day, pornography within seven days, and goods/services that infringe intellectual property rights within fourteen days.50

48  See Indonesian Ministry of Communication, Circular Letter 5 of 2016 on the Limitations and Responsibilities of Trade Platform Providers and Merchants through Electronic Commerce Systems in the Form of User-Generated Content (2016) (Indonesian only) (hereafter Circular Letter). 49 ibid. 50 ibid.


E-commerce platform providers are also obligated under the circular to provide clear information on which items are illegal and forbidden from being traded on their systems.51 Users and merchants must enter into an agreement (agreeing on terms and conditions) with the platform provider prior to the use of its services. Concerns about overuse or criminalization under the Circular Letter’s definition of illegal content appear to be somewhat limited, considering that it does not set any sanctions or penalties for non-compliance, and e-commerce platform providers are not obligated to report illegal activity to law enforcers. However, it is very likely that illegal activities by merchants and/or users will be dealt with through the criminal sanctions detailed in the Information and Electronic Transactions Law.52

One anomaly is that intermediaries are obliged to actively evaluate and monitor merchants’ activities on their platforms. It is not clear whether the obligation to ‘actively evaluate and monitor merchants’ activities in their platforms’ will require providers to establish an advanced content-filtering system in their platforms.53 That said, the Indonesian safe harbour also suffers from confusion over the distinction between a liability rule and an exemption rule. It is unclear what is meant by section V of the Circular Letter, titled ‘Limitation and responsibility of Platform Providers or Electronic System Providers and Merchants in Trade Through Electronic Systems (Electronic Commerce) in the Form of User Generated Content’, which states:

C. Obligations and Responsibilities of UGC [User-Generated Content] Platform Providers
1. The obligations of the UGC Platform Provider include:
a. Presenting the terms and conditions for using the UGC Platform, which at least contain the following: 1) obligations and rights of Merchants or Users to use the UGC Platform services; 2) the obligations and rights of the Platform Provider in carrying out the UGC Platform business activities; 3) provisions regarding accountability for uploaded content.
b. Providing Reporting Facilities that can be used to submit complaints regarding Prohibited Content on the UGC Platform it manages, obtaining at least the following information: 1) specific links leading to Prohibited Content; 2) reasons/basis for reports of Prohibited Content; 3) supporting evidence of the report, such as screenshots, statements, brand certificates, power of attorney.
c. Acting on complaints or reporting on content, including: 1) conduct the examination of the truth of the report and ask the reporter to complete the requirements and/or include other additional information related to the complaint and/or reporting in the event that is needed; 2) take action to remove and/or block the prohibited content; 3) give notification to the Merchant that the content uploaded is Prohibited Content; 4) provide a means for Merchants to argue that the content they upload is not Prohibited Content; 5) reject complaints and/or reporting if the reported content is not prohibited content.
d. Pay attention to the period of removal and/or blocking of reported Prohibited Content: 1) For Prohibited Content that is urgent, no later than 1 (one) calendar day from the report received by the UGC Platform Provider. Urgent Prohibited Content includes, but is not limited to: i) products of goods or services that are harmful to health; ii) products/services that threaten state security; iii) human trafficking and/or human organs; iv) terrorism; and/or v) other content determined by the laws and regulations. 2) Prohibited Content as stated in Roman Letter V Letter B, other than urgent Prohibited Content, no later than 7 (seven) calendar days from the report received by the UGC Platform Provider; 3) Prohibited Content as mentioned in Roman Letter V Letter B number 1 letter e, that is, content related to goods and/or services that violate intellectual property rights, no later than 14 (fourteen) calendar days from when the complaint and/or reporting is received by the UGC Platform Provider, with supporting evidence required.
e. Evaluate and/or actively monitor the activities of the Merchants in the UGC Platform.
f. Comply with other obligations established under the provisions of the legislation.
2. The responsibilities of the UGC Platform Provider include:
a. being responsible for implementing electronic systems and content in a platform that is reliable, safe, and responsible.
b. The provisions of letter (a) above do not apply if there is an error and/or negligence of the merchant or platform user.54

51 ibid.
52 See Law no. 11 of 2008, Information and Electronic Transactions Law (Indon.).
53 See Kristo Molina, ‘Indonesia Implements a Safe Harbor Policy for E-Commerce (Marketplace) Platforms’ (White and Case blog, 13 March 2017).

The impact of paragraph V.C.2 of the Circular Letter is unclear. Paragraph V.C.1 states the ‘obligations’ of the platform provider, while paragraph V.C.2 states its ‘responsibilities’, which seem to refer to the technical reliability of the platform. If so, the notice-and-takedown regime discussed in paragraph V.C.1, standing alone, reads as though it is liability-imposing, meaning that platform operators must comply with the notice-and-takedown process and that their failure to remove prohibited content on notification will directly translate into liability. This is evinced by the requirement that the platform ‘conduct examination of the truth of the complaint . . . and reject the complaint if it finds the content not prohibited’.55 It thus requires the platform provider to ‘get it right’. Again, the risks of a liability-imposing regime discussed earlier in relation to China also apply to the Indonesian legal framework.

5. Malaysia

The Copyright Act of 1987, as amended in 2012, provides a safe harbour for internet intermediary liability for copyright infringement in Part VIB, ‘Limitation of Liabilities of the Service Provider’, sections 43B to 43I.56 The provisions are similar to the DMCA in the United States, whereby ISPs and content aggregators are provided with immunity from liability for copyright infringement if they protect copyright owners by removing or disabling access to infringing content.

54 Circular Letter s. V.C.1–2 (emphasis added).
55 ibid. s. V.C.1.c(1)–(5).
56 See Copyright Act of 2012, Part VIB ss 43B–I (Malay.).


43E. Storage and information location tools
(1) The service provider shall not be held liable for storing infringing work at the direction of a user of its primary network, or for linking a user via a hyperlink, directory, or search engine to an online location which makes an infringing work available, on the condition that the service provider: (i) does not have actual knowledge that the electronic copy of the work or activity is infringing and is not aware of the facts or circumstances from which the infringing activity is apparent, which is the red flag requirement (contributory liability); (ii) does not receive any financial benefit directly attributable to the infringement of the copyright and does not have the right and ability to control the infringing activity (vicarious liability); (iii) upon receipt of a notification of any infringement, responds within the time specified to remove or disable access to the material. If no notification is given the service provider shall not be held liable.
(2) A test is given for vicarious liability, which includes taking into account any industry practice in relation to charging of services by a service provider.57

Malaysian commentators seem content:58 pointing to the exemption for removal (s. 43F), the penalty for bad faith (s. 43I), and the counter-notification procedure (s. 43H(3)–(5)), they are confident that the Malaysian safe harbour effectively implements section 512 of the DMCA. Section 43H provides:

Notification by copyright owner and its effect
(1) The copyright holder can notify the service provider to remove or disable any access to copyright infringing content, provided that the copyright owner shall compensate the service provider or any other person against any damages, loss or liability arising from the compliance by the service provider with such notification.
(2) The service provider shall remove or disable any access to work that is infringing copyright within 48 hours after he received the notification.59

However, if the provision above (s. 43H) is read carefully, it implements a liability-imposing framework. It does not conditionally require removal of the ‘infringing work’ but requires it absolutely. It also reads like an obligation separate from the duty under section 43I(1)(iii) to respond to a ‘notification of infringement’, since there is a substantive difference between an ‘infringement’ and a ‘notification of infringement’.60 Section 43H is therefore able to create all the problems that the Chinese liability-imposing system has generated. In the area of defamation, there is no separate liability safe harbour, but the case of Kho Whai Phiaw v Chong Chieng Jen [2009] 4 MLJ 103 discussed the liability of bloggers for third party content, and ruled that bloggers are not to be considered publishers of the

57  ibid. Part VIB s. 43E. 58  See Ida Madieha Abdul Ghani Azmi, Suzi Fadhilah Ismail, and Mahyuddin Daud, ‘Internet Service Providers Liability for Third Party Content: Freedom to Operate?’, Conference Paper DOI: 10.1109/CITSM.2017.8089226 (August 2017). 59  Copyright Act of 2012 (n. 56) Part VIB s. 43H. 60  ibid. s. 43(I)(1)(iii).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

defamatory comments where there is no specific knowledge of the comments, even if the bloggers retain the ability to edit and control access to the blog space. However, the new section 114A of the Malaysian Evidence Act 1950, which came into force in July 2012, deems all persons who act as owners, hosts, administrators, editors, or sub-editors, or who facilitate the publication or republication of any publication, to be presumed publishers under the law unless otherwise stated. The potential impact of this law on intermediary liability is tremendous, but it has not yet materialized in actual cases.

6. South Korea

6.1 Introduction: Basic Laws and Regulations

We began with China’s limited liability rule and contrasted it with the true safe harbours of Japan and India, which, albeit often misunderstood, threw into relief the risks of the Indonesian and Malaysian legal frameworks: despite claiming to implement ‘safe harbours’, these are in fact liability-imposing regimes by nature. In this section we look at South Korea, where a liability-imposing rule was established earlier, and examine its impact on intermediary behaviour. The theory to be tested runs as follows. Pre-safe-harbour, intermediaries are liable for content that they (1) know to exist and (2) know to be unlawful. A liability-imposing rule applies to a subset of these knowledge-laden cases: those where the intermediary has received a notice of infringement and therefore knows of the existence of the allegedly unlawful material. Technically, when it receives a notification, all the intermediary has is knowledge of the existence of some controversial material, which is not equal to knowledge of unlawful material. The liability-imposing rule nonetheless holds it immediately liable if it decides wrongly on the illegality. Cornered by fear of liability, intermediaries tend to take down clearly lawful content. Is this also the case in South Korea? Article 44-2 (Request to Delete Information) of the Act Regarding Promotion of Use of Information Communication Networks and Protection of Information reads:

Anyone whose rights have been violated through invasion of privacy, defamation, etc., by information offered for disclosure to the general public through an information communication network may request the information communication service provider handling that information to delete the information or publish a rebuttal thereto by certifying the fact of the violations. Paragraph 2.
The information communication service provider, upon receiving the request set forth in Section 1 shall immediately delete or temporarily blind, or take other necessary measures on, the information and immediately inform the author of the information and the applicant for deleting that information. The service provider shall inform the users of the fact of having taken the necessary measures by posting on the related bulletin board. [omitted] Paragraph 4. In spite of the request set forth in Section 1, if the service provider finds


it difficult to decide whether the rights have been violated or anticipates a dispute among the interested parties, the service provider may take a measure temporarily blocking access to the information (‘temporary measure’, hereinafter), which may last up to 30 days [omitted] Paragraph 6. The service provider may reduce or exempt the damage liability by taking necessary actions set forth in Paragraph 2.61

As is immediately apparent, the provision is structured not with phrases such as ‘the service provider shall not be liable when it removes . . .’ but starts out with the phrase ‘the service provider shall remove . . .’. Paragraph 6, referring to the ‘exemption from or reduction of liability in the event of compliance with the aforesaid duties’, makes a feeble attempt to turn the provision into an exemption provision, but the exemption here is not mandatory. This means that intermediaries will treat the paragraph 2 obligations as mandatory, not conditional. Historically, the predecessors of Article 44-2 simply required the service provider to take down content on the request of a party injured by that content and did not provide any exemption.62 The law was amended in 2007 to create in Article 44-2 a ‘temporary (blind) measure’ for borderline content, to which the service provider can now resort in fulfilling its responsibility under the previous law.63 The central idea that remained was that the intermediary must take action (temporary or not) on infringing content upon notification. Again, the general idea of holding intermediaries liable for identified infringing content seems innocuous, but the South Korean cases show compellingly why it should be abandoned.64 As will be seen below, intermediaries respond by removing even lawful content, and courts impose liability where the illegality is apparent only in hindsight, reinforcing the censorial tendencies of intermediaries. Now, this failure to set up a liability-exempting regime does not mean that the law directly pushes intermediaries towards a general monitoring obligation or prior approval for uploading. It may just mean maintaining the pre-safe-harbour status quo based on general tort joint liability. However, the reality in Korea shows that such a half-baked attempt may worsen the situation.

6.2 Proof: Intermediary Behaviour and Courts’ Expansionist Interpretation

Politicians and government officials often make takedown requests for clearly lawful postings critical of their policy decisions, such as postings critical of a Seoul City

61  Act Regarding Promotion of Use of Information Communication Networks and Protection of Information, Art. 44-2 para. 1 (Kor.). South Korean legislation can be found at . 62  See Law no. 6360 of 16 July 2001, Network Act, Art. 44(1)–(2) (Kor.). 63  See Law no. 8289 of 27 July 2007 (Kor.). 64  See Woo Ji-Suk, ‘A Critical Analysis of the Practical Applicability and Implication of the Korean Supreme Court Case on the ISP Liability of Defamation’ (2009) 5(4) Law & Tech. 78, 78–98.


mayor’s ban on assemblies in Seoul Square;65 a posting critical of a legislator’s drinking habits;66 clips of a television news report on the Seoul police chief’s brother who allegedly ran an illegal brothel hotel;67 a posting critical of politicians’ pejorative remarks on the recent deaths of squatters and police officers in a redevelopment dispute;68 a posting calling for immunity from criminal prosecution and civil damage suits for labour strikes;69 and a posting by an opposition party legislator questioning a conservative media executive’s involvement in a sex exploitation scandal relating to an actress and her suicide.70 Some of these requests were accepted by the intermediaries, while those submitted to an independent progressive intermediary were not, and other postings were restored after public scrutiny intensified. MP Choi Moon-soon obtained the relevant data from the top three content host intermediaries through the Korea Communications Commission and revealed that they took down 60,098 postings in 2008, 96,503 in 2009, and an estimated 100,000 in 2010 (as of November of that year).71 MP Nam Kyung-pil obtained similar data showing the top two content hosts, Naver and Kakao, taking down 209,610 postings in 2011, and MP Shin Yong-Hyun reported 1,643,528 takedowns by Naver and 442,330 takedowns by Kakao between January 2012 and June 2017.72 Note the possibility that the announced availability of legal recourse may have encouraged more people to submit takedown requests, many of which intermediaries blindly comply with just to be safe. The courts have worsened the situation by taking expansionist approaches in the same way as the Chinese courts, which might be expected since both China and South Korea are liability-imposing regimes.
In a crushing judgment in 2009,73 the South Korean Supreme Court held Naver, Daum, SK Communications, and Yahoo Korea liable for the defamation of a plaintiff when user postings on those sites accused him of deserting his girlfriend during her second pregnancy after he had talked her into aborting the first, after which the girlfriend committed suicide. The court upheld judgments of 10 million won, 7 million won, 8 million won, and 5 million won, respectively, against those services, stating that:

[i]ntermediary shall be liable for illegal contents . . . when (1) the illegality of the content is clear; (2) the provider was aware of the content; and (3) it is technically

65  See (Korean only). 66  The original posting now taken down was at (Korean only). 67  See (Korean only). 68  See (Korean only). 69  See (Korean only). 70  The original posting now taken down was at (Korean only). See the following news report on the takedowns: ‘NHN-Daum took down 298 postings related to Chang Ja-Yeon’ (ZD Net, 15 April 2009). 71  See (Korean only). 72  See , (Korean only). 73  See Supreme Court 2008Da53812 (16 April 2009) (Kor.).


and financially possible to control the contents . . . The Court will find the provider’s requisite awareness under (2) above: (a) when the victim has requested specifically and individually the takedown of the content; (b) when, even without such request, the provider was concretely aware of how and why the content was posted; or (c) when, even without request, it was apparently clear that the provider could have been aware of that content.74

The intermediaries here could not investigate the plaintiff’s affairs with his girlfriend and yet were held liable for not removing the postings upon notification. Only the courts, armed with hindsight, could comfortably find the comments defamatory, which means that intermediaries will be forced to err on the side of deleting. On top of that, the most renowned part of the judgment concerned what intermediaries must do with content for which no notification is given at all. The conclusion of the court was that the intermediary will be absolutely liable for a posting later found to be ‘clearly’ defamatory if ‘it was apparently clear that the provider could have been aware of that content’, even if the victim did not notify the intermediary of the existence of the content. This sets up probably one of the strictest intermediary liability regimes, because it imposes liability for ‘unknown but could-have-known’ content. Note the parallel that can be drawn with the Chinese Fanya case discussed earlier. Anupam Chander plainly describes this ruling as stating that a web service ‘must delete slanderous posts or block searches of offending posts, even if not requested to do so by the victim’.75 Of course, the DMCA’s notice-and-takedown immunity76 also does not apply to content where the OSP had ‘actual knowledge’ of its infringing nature or an ‘awareness of facts or circumstances from which infringing activity [was] apparent’. However, the DMCA is a safe harbour provision. It merely says that the safe harbour will not apply in the event of ‘actual knowledge’ or ‘awareness’. It does not say that the OSP will be held liable in the case of such knowledge or awareness. Furthermore, in 2012 the Constitutional Court went so far as to interpret Article 44-2 of the Network Act as requiring the takedown of lawful as well as unlawful content.
The court stated: ‘if the prerequisites are met, the service provider must without hesitation take the temporary measure’.77 Now, the prerequisites do not include the illegality of the content, as shown in the following passage asserting that requiring intermediaries to blind lawful content is constitutional:

[t]he instant provisions are purported to prevent indiscriminate circulation of the information defaming or infringing privacy and other rights of another . . . Temporary blocking of the circulation or diffusion of the information that has the possibility of such infringement is an appropriate means to accomplish the purpose.78

74 ibid. 75  Anupam Chander, ‘How Law Made Silicon Valley’ (2014) 63 Emory L.J. 639. 76  See DMCA (n. 5) ss. 512(c)(1)(A)(ii) and 512(d)(1)(A). 77  Constitutional Court 31 May 2012 Decision 2010 Hun-ma 88 (Kor.). 78 ibid.


274   Kyung-Sin Park

6.3 Origins: Syntactical Error in Adopting Section 512 of the DMCA?

Interestingly, this gratuitous and possibly unconstitutional censorship obligation on lawful content is often justified by reference to section 512 of the DMCA. Table 13.1 shows a cross-jurisdictional comparison of the relevant provisions relating to third party postings against which a takedown notice was sent to the hosting intermediary (hereinafter ‘noticed posting’). As can be seen from Table 13.1, South Korea clearly tried to adopt, in Article 102 of its Copyright Act, a provision approximating the safe harbour provisions of the EU, the United States, and Japan. The problem is the existence of Article 103 of the Copyright Act. Other countries’ laws consist of just one provision corresponding to South Korea’s Article 102, but South Korea adds the superfluous Article 103, which might

Table 13.1  Takedown notices for third party postings

| Provision | Condition | Liability exemption for ‘noticed posting’ | Liability |
| --- | --- | --- | --- |
| Europe: e-Commerce Dir., Art. 14(1) | On obtaining knowledge or awareness of the infringing information | On condition that the provider acts expeditiously to take down the content, the service provider is not liable for the information stored | N/A |
| United States: DMCA, s. 512(c) | On obtaining knowledge or awareness of facts and circumstances of infringing material or activity; or on notification of claimed infringement | If the service acts expeditiously to take down the material claimed to be infringing, the service provider shall not be liable | N/A |
| Japan: Provider Liability Law, Art. 3(1) | When the relevant service provider knew or there were reasonable grounds for the provider to know | Unless it is technically possible to take down the infringing content, the service provider shall not be liable for any loss incurred from such infringement | N/A |
| South Korea: Copyright Act, Arts 102 and 103 | If an OSP actually knows of, or has received a takedown notice and thereby learned of, the fact or circumstances of an infringement | If an OSP immediately takes down the noticed posting, the intermediary shall not be liable for any infringement | In the event of a takedown request, the OSP must immediately take down (Art. 103(2)), in order to limit its liability (Art. 103(5)) |


be read as an on-demand obligation for online intermediaries to take down even lawful content. That was clearly not the legislative intent, as Article 103(5) shows by stating that any OSP engaging in notice and takedown will reduce its liability. The intent was to replicate section 512 of the DMCA, but in doing so the South Korean legislators broke down the one sentence, ‘If notice is given and intermediaries take down, the intermediaries will be exempt from liability’, into two:

(1) When notice is given, intermediaries must take down. (2) If intermediaries do so, they will be exempt from liability.
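The logical consequence of this split can be sketched in code. This is purely an illustrative model, not drawn from the statutes themselves: the function names and outcome labels are invented for exposition.

```python
# Illustrative sketch only: the two rule structures discussed in the text,
# modelled as functions. Outcome labels are invented for exposition.

def safe_harbour_rule(notice_given: bool, took_down: bool) -> str:
    """DMCA-style: takedown is a condition for immunity, not a duty."""
    if notice_given and took_down:
        return "exempt from liability"
    # No new duty arises; background tort law decides the rest.
    return "assessed under general tort law"

def split_rule(notice_given: bool, took_down: bool) -> str:
    """Korean-style split: a freestanding duty (cf. Art. 103(2)) plus a
    non-mandatory reduction of liability (cf. Art. 103(5))."""
    if notice_given and not took_down:
        return "in breach of takedown duty"  # regardless of the content's legality
    if notice_given and took_down:
        return "liability may be reduced"    # exemption is not guaranteed
    return "assessed under general tort law"

# Same facts, different outcomes when an intermediary declines to remove
# noticed content that it believes to be lawful:
print(safe_harbour_rule(True, False))  # assessed under general tort law
print(split_rule(True, False))         # in breach of takedown duty
```

The sketch makes the chapter’s point concrete: under the split formulation, an intermediary that correctly judges noticed content to be lawful and leaves it up is already in breach of a duty, whereas under the conditional formulation it has merely forgone a defence.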

Standing alone, Article 103(2) reads as if intermediaries have an independent mandatory obligation to take down even lawful content, whereas the DMCA, e-Commerce Directive, and PLL obligations are all conditional (i.e. applicable only if intermediaries seek exemption). The fact that the exemption-giving language of Article 103(5) is non-committal (i.e. ‘may reduce’) does not help to prevent the misreading. See Table 13.2. More problematic is that many in South Korea overlooked Article 103 and believed the Copyright Act to be a sound adaptation of international norms, especially section 512 of the DMCA, and a similar mandatory notice-and-takedown system was replicated in other areas such as defamation and privacy infringement, hence Article 44-2 of the Network Act. This is highly relevant to the present discussion because section 43H of the Malaysian Copyright Act and Indonesia’s Safe Harbor Circular V.C.1.c carry the extraordinary risk of reading like stand-alone obligations of on-demand takedown, and thus resemble the very strict liability regime in South Korea. These provisions textually apply only to unlawful content, but so does South Korea’s INCA 44-2.

Table 13.2  Takedown obligations

| Jurisdiction | When notice is given, intermediaries shall not be liable if they expeditiously take down | When notice is given, intermediaries must take down; if intermediaries do so, they may reduce their liability |
| --- | --- | --- |
| Europe (all claims) | Applicable | N/A |
| United States (copyright) | Applicable | N/A |
| Japan (all) | Applicable | N/A |
| South Korea (copyright) | Applicable | Applicable |
| South Korea (defamation) | N/A | Applicable |



6.4 Conclusions

As with its Chinese counterpart, the South Korean safe harbour for defamation is not a true safe harbour but a limited liability regime, which tends to suffer from expansionist judicial interpretations of ‘knowledge’ and ‘notification’. In turn, this leads to intermediaries’ strategic behaviour of erring on the side of takedown.

7. Epilogue

The Chinese and South Korean ‘safe harbour’ provisions for defamation, the Malaysian copyright ‘safe harbour’, and Indonesia’s all-purpose ‘safe harbour’ all have the structure of a liability-imposing regime despite their claims to create a safe harbour. Although they do not explicitly impose intermediary liability for unknown content (the main evil to be prevented by safe harbour efforts, in the form of general monitoring obligations or prior approval), the problem of content known to exist but not yet known to be illegal remains, and incentivizes intermediaries to delete lawful content through fear of liability. This state of affairs is in a sense worse than pre-safe-harbour general tort liability, because the existence of a procedure invites more people to submit takedown requests. In the light of this, the operation of the true safe harbours built into section 512 of the US DMCA, Article 14 of the EU e-Commerce Directive, and section 3 of the Japanese PLL (and s. 79 of the Indian IT Act, which has grown stronger than its original mandate) should be carefully examined in order to avoid aggravating intermediaries’ liability through inaccurate processes of legal cross-pollination. This will be beneficial not only for informing the discussion in Asia but also for discussions in Europe, such as that around Germany’s new Network Enforcement Act, which is clearly based on the belief that intermediaries should be held liable for not removing unlawful content once they are notified of its existence. Korea’s case shows how intermediaries may end up responding: with almost automatic on-demand takedowns.
Alternatively, intermediaries may respond by investing in the human resources to review all notifications and trying to make decisions as accurate as possible,79 but such a response becoming the standard would not be satisfactory: platforms not afforded such resources will struggle under the legal liability forming around that new standard, perpetuating the dominance of the current global giants and eroding the promise of the Internet in another abysmal way.

79  Facebook has ‘150,000 content moderators reviewing notifications around the world, taking down more than 10 million postings each year’. Remark by a Facebook representative at a workshop ‘Addressing Terrorist and Violent Extremist Content’ at 14th Internet Governance Forum (Berlin), November 2019.


Chapter 14

China’s IP Regulation and Omniscient Intermediaries: Oscillating from Safe Harbour to Liability

Danny Friedmann

There are opposing forces that influence intermediary liability (IL) regulation in the People’s Republic of China (China). China’s draft E-Commerce Law1 raises the knowledge standard to be met before infringing information might be removed, while the many laws and regulations that make up China’s Great Firewall, which regulates the internet domestically, exclude the possibility of ignorance.2 Moreover, developments in big data3 and artificial intelligence4 have caught up with discussions about the desirability of safe

1  On 26 December 2016, the Standing Committee of the National People’s Congress (NPC) issued the first draft of China’s first E-Commerce Law. The public consultation period ended on 26 January 2017. No drastic changes from the first draft are expected. See ‘Legislation to regulate the market order is imminent’ (NPC, 27 December 2016). 2  China’s Great Firewall effectively censors information the government deems unfit for a harmonious society. The regulations are partly overlapping and fragmented in relation to the organizational layers of the internet. Examples include Art. 57 of the Telecommunications Regulation and Art. 15 of the Measures on the Administration of Internet Information Services. Subsequently, online service providers will receive lists of words that need to be immediately censored. To be able to abide by the comprehensive censorship requirements, online platforms have put human review systems in place. See Qian Tao, ‘The knowledge standard for the Internet Intermediary Liability in China’ (2011) 20(1) IJLIT 11. 3  See Tom Brennan, ‘Alibaba Launches “Big Data Anti-Counterfeiting Alliance”’ (Alizila, 16 January 2017). 4  See Cade Metz, ‘Google’s AI Wins Fifth And Final Game Against Go Genius Lee Sedol’ (Wired, 15 March 2016). See also Sarah Zhang, ‘China’s Artificial-Intelligence Boom’ (The Atlantic, 16 February 2016).

© Danny Friedmann 2020.


harbours5 and the degree of filtering requirements.6 On the other hand, there is case law, codified in 2016 in guidelines for the Beijing courts,7 which reinforces the duties of care. This chapter applies a holistic approach, analysing these individual forces to assess their influence on IL case law. Since a growing part of the copyrighted works and trade-marked goods consumed in China is accessed via network service providers,8 the regulation of IL has become ever more relevant. In July 2016, it was estimated that about 52 per cent of the population of China (721 million people) had access to the internet.9 Goods are traded between businesses and consumers, between consumers, and between businesses, on enormous online market platforms, such as those of the Alibaba Group.10 To give an indication of the magnitude of online sales in China: on 11 November 2016, ‘Singles’ Day’, the Alibaba Group sold US$18 billion of goods in one day.11 China took notice of the introduction of the Digital Millennium Copyright Act (DMCA)12 in the United States and the e-Commerce Directive13 in the EU, in 1998 and 1999 respectively. China’s IL regulation has evolved from broad and granular to more specific and sophisticated. The regulation of IL for copyright infringement led the way, followed by case law on trade mark infringement, which applied the copyright rules by analogy. The big leap forward that refined and codified China’s experience with IL for copyright infringement was the promulgation of the Regulations for the Protection of the Right of Communication through the Information Network in 2006 (Regulations 2006).14

5  See Danny Friedmann, ‘Sinking the Safe Harbour with the Legal Certainty of Strict Liability in Sight’ (2014) 9(2) JIPLP 148–55. 6  Online platforms use different kinds of digital fingerprinting systems to help content creators manage and enforce their copyrighted works, and provide systems to submit complaints of copyright and trade mark infringement.
Untitled (AliProtect, undated) . 7  See Beijing High People’s Court Guidelines on Trial of IP Cases involving Networks (13 April 2016) (Ch.) (hereafter Beijing Guidelines). 8  Most western intermediaries for user-generated content are blocked in China. Therefore, national champions have developed in China which, after first having imitated their western counterparts, went on to emulate some of them by combining different functionalities. Danny Friedmann, ‘Rise and demise of U.S. social media in China: A touchstone of WTO and BIT regulations’ in Paolo Farah and Elina Cima (eds), China’s Influence on Non-Trade Concerns in International Economic Law (Routledge 2016). 9  Internet Live Stats; elaboration of data by the International Telecommunication Union, World Bank, and United Nations Population Division. 10  Taobao Marketplace is an online marketplace for consumer-to-consumer business; Taobao Mall (TMall) for business-to-business, and Alibaba for business-to-business. They are all part of the Alibaba Group in Hangzhou, Zhejiang. 11  ‘Singles Day: Alibaba breaks record sales total’ (BBC News, 11 November 2016) . 12  17 USC § 512, 112 Stat. 2860 Pub. L. 105-304, 28 October 1998 (US). 13  Council Directive 2000/31/EC of the European Parliament and the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1. 14  Regulations for the Protection of the Right of Communication through the Information Network promulgated by decree of the State Council no. 468, adopted at the 135th Executive Meeting of the State Council on 10 May 2006 and coming into effect on 1 July 2006 (Ch.) (hereafter Regulations 2006).


The Regulations 2006 clarify which kinds of network service provider are eligible for safe harbours, and make clear when immunities are lifted. In the case of trade mark law, China for the first time issued an E-Commerce Law, in 2017.15 Copyright law,16 which touches on the expression of ideas, has always been politicized in China. With filter technology and monitoring obligations, the Chinese authorities have focused on unwelcome expressions deemed to endanger social harmony.17 Trade mark law,18 by contrast, was considered apolitical, and the filtering of counterfeit products more or less a private affair.19 However, convergence between IL regulation for copyright and for trade mark infringement has been taking place. With the emergence of the comprehensive and extremely ambitious Social Credit system, which rewards moral and punishes immoral conduct from a Chinese socialist perspective, the realm of trade mark law also becomes political.20 The success of the Social Credit system, and the ability to monitor online conduct, depend on real-name registration. Although the Chinese authorities have attempted real-name registration since 2012, with little success,21 overcoming this challenge would have far-reaching consequences for identifying direct infringers. This redirection of liability away from intermediaries will probably be coupled with increased monitoring obligations for intermediaries and foreshadows the end of safe harbours.22 Although the doctrine of internet sovereignty rules supreme in China, the country does not operate in a vacuum. China acceded to the 1967 Stockholm Act of the Paris Convention in 1984, and to the 1971 Paris Act of the Berne Convention in 1992.23 These World Intellectual Property Organization (WIPO) conventions were negotiated before
16  See Copyright Law amended up to the Decision of 26 February 2010 by the Standing Committee of the National People’s Congress on Amending the Copyright Law (Ch.). 17  See Danny Friedmann, ‘Paradoxes, Google and China’ in Aurelio Lopez-Tarruella (ed.), Google and the Law: IT and the Law (TMC Asser 2012) 15. 18  See Trademark Law (as amended up to Decision of 30 August 2013, of the Standing Committee of National People’s Congress on Amendments to the Trademark Law). 19  Although counterfeit products, threatening health and safety, can cause serious social upheaval. An example: ‘China “fake milk” scandal deepens’ (BBC News, 22 April 2004) . 20 See State Council Guiding Opinions concerning Establishing and Perfecting Incentives for Promise-Keeping and Joint Punishment Systems for Trust-Breaking, and Accelerating the Construction of Social Sincerity, State Law no. (2016)33 of 30 May 2016 (Ch.), original and translation at China Copyright and Media . See also Catherine Lai, ‘China announces details of social credit system plan, deemed “Orwellian” by critics’ (Hong Kong Free Press, 3 January 2016) . 21  See Catherine Shu, ‘China attempts to reinforce real-name registration for Internet users’ (TechCrunch, 1 June 2016) . 22  See Friedmann (n. 5). 23  See Berne Convention for the Protection of Literary and Artistic Works of 1886, last amended in 1979.


the digital revolution, and remain silent about the internet and the relationships between network service providers, rightholders, and internet users. In 2001, with the digital revolution in full swing, China became a member of the World Trade Organization (WTO). It thereby accepted the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs),24 which largely incorporated the Paris and Berne Conventions, again without specifically taking the internet into account. WIPO, however, had already filled the gap in 1996 by introducing the WIPO Internet Treaties (the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty). Section 512 of the DMCA in the United States in 1998, and Article 14 of the e-Commerce Directive in the EU, which implemented the WIPO Internet Treaties in 1999, have heavily influenced China’s policy on IL for copyright and trade mark infringement. In 2006, the State Council of China promulgated the Regulations 2006,25 which prepared China to become a member of the WIPO Internet Treaties in 2007. Although IL regulation for trade mark infringement in China followed the copyright regulation, this chapter will discuss trade marks first, since IL regulation for trade mark infringement provides an indicator of the convergence of IL regulation in China. In doing so, Section 2 deals first with the E-Commerce Law 2017 and then covers the historical development of IL for trade mark infringement. Section 3 deals with IL legislation and case law for copyright infringement; Section 4 provides the conclusion. Due to limited space, this chapter will not deal with the obligations of intermediaries to disclose the identity of direct infringers, and the consequences if they do not do so, such as withdrawal of their immunity.

1.  Intermediary Liability for Trade mark Infringement

Counterfeiters in China make extensive use of online market platforms, infringing intellectual property rights (IPRs)26 and endangering the safety and health of the public with their supply of fake and substandard goods. By sending only one or two packages at a time, counterfeiters remain under the criminal thresholds,27 which makes the enforcement of online trade mark infringement particularly challenging.

24  See Marrakesh Agreement Establishing the World Trade Organization, Annex 1C: Agreement on Trade-Related Aspects of Intellectual Property Rights (15 April 1994) 1869 UNTS 299, 33 ILM 1197.
25  See Regulations 2006 (n. 14).
26  IL in the case of patent infringement is another important issue. See e.g. Zaigle v TMall, where the Zhejiang High People’s Court held TMall jointly liable for patent infringement in 2015. See also ‘SBZL Awarded a Top-Ten China IP Case of 2015’ (SBZL Intellectual Property, undated).
27  The threshold for criminal liability is RMB 50,000, pursuant to Art. 140 of the Criminal Law, adopted by the Second Session of the Fifth National People’s Congress on 1 July 1979 and amended by the Fifth Session of the Eighth National People’s Congress on 14 March 1997 (Ch.).


CHINA’S IP REGULATION AND OMNISCIENT INTERMEDIARIES   281

The Alibaba Group, which includes Taobao, TMall, and Alibaba, was identified in the United States Trade Representative’s (USTR) Special 301 Reports of 2008, 2009, 2010, and 2011 for facilitating the sale of counterfeit goods to consumers and businesses. Assurances and measures taken against counterfeit products were the reason that Taobao was removed from the Lists of Notorious Markets in 2012, 2013, 2014, and 2015. However, in 2016 Taobao returned to the list. Not only did the USTR complain that Taobao was taking insufficient measures against counterfeit products; the State Administration for Industry and Commerce (SAIC), China’s authority responsible for trade mark registration and administration nationwide, also complained in a 2015 White Paper.28 Subsequently, in 2016, the head of the SAIC, Zhang Mao, did so again in a TV interview, saying that the executive chairman of the Alibaba Group, Jack Ma, was not outside the law and should take responsibility for counterfeit products being traded on online market platforms.29 On 13 April 2016, Alibaba became the first online market platform to become a member of the International Anti-Counterfeiting Coalition (IACC). However, a month later the IACC suspended Alibaba’s membership.30 Alibaba vowed ‘to keep fighting fakes’,31 but a month later Ma was harshly criticized after he asserted that the fakes offered online are often of better quality and better priced than the genuine products.32 During the ‘Two Sessions’ of the National People’s Congress and China People’s Political Consultative Conference in March 2017, Ma urged legislators in an open letter to increase the punishments for counterfeiters.33 Besides deflecting attention away from the online market platforms that facilitate counterfeiting towards direct infringers,34 online trading platforms were also successful in lobbying to raise the knowledge standard in China’s first E-Commerce Law draft.
In this E-Commerce Law draft,35 the emphasis seems to be

28  The White Paper was retracted on 30 January 2015, two days after being issued. See Megha Rajagopalan, John Ruwitch, and Edwin Chan, ‘Alibaba meets with China regulator, controversial report retracted’ (Reuters, 30 January 2015).
29  See Scott Cendrowski, ‘Chinese Regulator Again Calls Out Alibaba for Counterfeit Goods’ (Fortune, 11 August 2016).
30  Several brands had complained of its non-compliance; another reason was that there was a conflict of interest for the IACC president, who did not disclose that he had stocks in the Alibaba Group. See Rishiki Sadam, ‘Anti-Counterfeiting group suspends Alibaba’s membership’ (Reuters, 13 May 2016).
31  See Haze Fan, ‘Alibaba vows to keep fighting fakes despite IACC snub’ (CNBC, 16 May 2016).
32  See David Ramli and Lulu Yilun Chen, ‘Alibaba’s Jack Ma: Better-Than-Ever Fakes Worsen Piracy War’ (Bloomberg Technology, 14 June 2016).
33  See Meng Jing, ‘Alibaba’s Jack Ma calls for laws on counterfeiting to be “as tough as those on drunk driving” ’ (SCMP, 7 March 2017).
34  However, Art. 54 of the E-Commerce Law draft protects vendors against abuse of IP rights, via a system of counter-notifications and imposing liability on the rightholder if he causes losses to a vendor who was not selling counterfeit goods. See n. 1.
35  See Mark Cohen, ‘E-Commerce Law Up for Public Comment’ (China IPR, 30 December 2016).


more on facilitating e-commerce than on regulating it. The draft can be perceived as a way of trying to ‘dilute duties of care already applicable to online trade platforms’.36 Any obligations to take preventive measures have been removed. The draft also leaves uncertainty about the role of administrative enforcement authorities such as the SAIC.37 Article 88 of the E-Commerce Law draft provides that ‘[i]f a third-party platform for e-commerce violates the provisions of Article 5338 and clearly knows or has actual knowledge that the operator of the platform does not take the necessary measures for infringement of IPRs, the relevant departments of the people’s governments at various levels shall order it to make corrections within a prescribed time limit.’39 By contrast, other Chinese laws adopt the lower standard of ‘knowledge’.40 Judicial decisions also make reference to a standard of deemed knowledge (‘knew or should have known’) that creates a duty for intermediaries to intervene against infringements. This proactive measures standard was already accepted in Article 27 of the Beijing Guidelines.41 Although formally this opinion guides only Beijing courts, it can be seen as the accumulation and codification of authoritative case law, and therefore has persuasive power. The Guidelines work horizontally and are relevant to IL for both trade mark and copyright infringement.
They provide eight factors that indicate when a platform service provider knows that a network vendor is infringing IPRs:42
(1) if the alleged infringing information is located on the front page of the website, on the front page of a section, or in other obvious visible locations;
(2) if the platform service provider initiated the editing, selection, sorting, ranking, recommendation, or modification of the alleged infringing information;
(3) if there is notification by the rightholder enabling the platform service provider to know that the alleged infringing information or transaction is transmitted or implemented through its network service;
(4) if the platform service provider did not take appropriate reasonable measures even though the same network vendors repeated the infringement;

36  SIPS Asia, ‘China: Trade Marks, Draft Ecommerce Law issue’ (Managing Intellectual Property Magazine, March 2017).
37  They have legal authority ex officio and/or inter partes and can request evidence held by platforms and other parties, e.g. payment services and transport companies. See ‘Draft Ecommerce Law Issued for Public Comment’ (SIPS Asia, 28 February 2017).
38  See E-Commerce Law draft (n. 1) Art. 53 (providing that ‘[i]f the e-commerce operator infringes the IP rights within the platform, the online trade platform shall take the necessary measures such as deleting, shielding, breaking the link, terminating the transaction and service according to the law’). Whether the scope of the provision includes social media or search engines, or domestic and international purchasers, is not yet known.
39  ibid. Art. 88.
40  See Tort Liability Law, adopted at the 12th Meeting of the Standing Committee of the Eleventh National People’s Congress on 26 December 2009, in effect as of 1 July 2010, Art. 36 (Ch.); Trademark Law (n. 18) Art. 52.
41  See Beijing Guidelines (n. 7) Art. 27.
42  ibid.


(5) if there exists information that the network vendor acknowledged that it infringed rights;
(6) if the sale or offering of goods or services is at a price that is clearly unreasonable;
(7) if the platform service provider directly obtains an economic benefit from the spread or transaction of the accused infringing information;
(8) if the platform service provider knows of the existence of an alleged infringement from infringing behaviour on other trade mark rights.

A ninth factor, found in the case law but not in the Guidelines (although it is probably implied there), is that network service providers have to develop and implement methods for notification and takedown. Before the E-Commerce Law draft, there was no particular nationwide regulation for IL in the case of trade mark infringement. In its absence, the horizontal43 General Principles of Civil Law 198644 and the Tort Liability Law 200945 could serve as the basis for an action against online trade mark infringement. Article 118 of the General Principles of the Civil Law 1986 states that IPR holders have the right to demand that infringement by plagiarism, alteration, or imitation be stopped, that its ill effects be eliminated, and that damages be compensated.46 This Article, together with Article 130 of the same law, which imposes joint liability on two or more persons if they jointly infringe another person’s rights and cause that person damage, served as a basis for IL in the case of trade mark infringement.47 The legislation and case law regarding IL for online copyright infringement were applied by analogy in the case of online trade mark infringement. This was especially the case with the notice-and-takedown and safe harbour provisions of Regulations 200648 and the Tort Liability Law 2009.
Article 36(1) of the Tort Liability Law 2009 imposes tort liability on a network user or network service provider who infringes the civil rights or interests of another person through the network.49 Therefore, if the network service provider directly infringes a trade mark, it falls within the provision. Article 36(2) of the Tort Liability Law 2009 provides that the network service provider will be jointly and severally liable if it fails to take the necessary measures in a timely manner, such as deletion, blocking, or disconnection, after being notified.50 The following case law answers the question of what the courts consider ‘necessary’ and what they understand to be ‘timely’. The Provisions on Relevant Issues Related to the Trial of Civil Cases Involving Disputes over Infringement of the Right of Dissemination through Information

43  cf. the e-Commerce Directive, which works across all sorts of IPRs.
44  See General Principles of Civil Law, adopted at the Fourth Session of the Sixth National People’s Congress on 12 April 1986 and promulgated by Order no. 37 of the President on 12 April 1986 (Ch.).
45  See Tort Liability Law (n. 40).
46  See General Principles of Civil Law (n. 44) Art. 118.
47  See Du Ying, ‘Secondary Liability for Trademark Infringement Online: Legislation and Judicial Decisions in China’ (2014) 37 Colum. J. of L. & Arts 541.
48  In Section 3, the Regulations 2006 will be dealt with more comprehensively.
49  See Tort Liability Law (n. 40) Art. 36(2).
50  ibid.


Networks of 2012, a judicial interpretation by the Supreme People’s Court,51 sets out a range of factors for determining the liability of online service providers. Although these are intended to address online copyright infringements, judges of the Supreme People’s Court have indicated that those principles may also be applied to online infringements of other IPRs, including counterfeiting.52

1.1  Establish Internal Procedures and Enforce Accordingly

Chinese courts have defined criteria for the enforcement obligations of online platforms in multiple cases of trade mark infringement brought to their attention. In turn, this has led platforms to establish internal procedures to comply with the courts’ requests. In Aktieselskabet AF v eBay Shanghai in 2006, the establishment of a procedure to notify the online market platform of trade mark infringement by a vendor was sufficient because, according to the judge, not all advertised goods can be checked.53 In 2010, E-land Fashion sued Taobao for contributory liability on the ground that it had complained five times about Xu’s offering for sale and sale on Taobao of goods bearing the E-LAND trade mark, licensed to the plaintiff.54 However, the court held that Taobao had fulfilled its reasonable duty of care as a network service provider.55 In 2011, E-Land Fashion sued Taobao again, this time with more success. The Shanghai First Intermediate People’s Court held an online platform jointly liable with the counterfeiter for the first time in China.56 The court decided that Taobao knew or should have known that its user was selling counterfeit goods but nevertheless had not adopted effective measures to end the illegal activity. The Shanghai Pudong New Area People’s Court held that although Taobao might qualify for immunity from liability—because it had deleted the infringing links—it was aware of Du Guofa’s acts of infringement after reviewing relevant complaints

51  On 17 December 2012 the Supreme People’s Court promulgated the Provisions on Relevant Issues Related to the Trial of Civil Cases involving Disputes over Infringement of the Right of Dissemination through Information Networks.
52  See Joe Simone, ‘Comments on the PRC Trademark Law Amendments’ (China IPR, 27 January 2013).
53  See Shanghai First Intermediate People’s Court, Aktieselskabet AF v eBay Network Info. Services (Shanghai) Co. [21 August 2006] 2005 no. 371 (Ch.) (deciding that it would be sufficient for defendants to establish a reporting system to stop online IP violations. The court argued that online market platforms cannot investigate every piece of merchandise sold and even if the defendants had checked the products advertised online, which might be genuine, then fake products could still be delivered offline).
54  See Shanghai Huangpu District People’s Court E-land Fashion (Shanghai) Trade Co. v Taobao Network Co. & Xu [10 September 2010] (Ch.).
55  ibid.
56  See Shanghai First Intermediate People’s Court E-Land Fashion (Shanghai) Trade Co. v Taobao Network Co. & Du Guofa [25 April 2011] 2011 no. 40 (Ch.) affirming Shanghai Pudong New Area People’s Court [17 January 2011] 2010 no. 426 (Ch.).


and notice-and-takedown requests from the plaintiff. Despite this knowledge, the defendant did not adhere to its own rules.57

1.2  Appropriate Reasonable Measures in the Case of Repeat Infringement

In 2010, after seven complaints, E-Land Fashion sued Du Guofa and Taobao for trade mark infringement at the Shanghai Pudong New Area People’s Court,58 even though Taobao had removed the infringing links. The defendants, however, did not take any further measures. The case was decided in favour of E-Land Fashion because Taobao had knowledge of further infringements and intentionally continued to provide services to the infringer, thus becoming guilty of contributory infringement according to Article 50(2) of the Implementing Rules of the Trademark Law 2001. Taobao appealed to the Shanghai First Intermediate People’s Court. The court considered that E-Land Fashion and its licensee, Yinian, had filed 131,261 notice-and-takedown requests (with the relevant explanations and justifications) to Taobao concerning links on its online platform that offered counterfeit goods from September to November 2009. Taobao deleted 117,861 infringing links, and from February to April 2010 E-land Fashion filed another 153,277 complaints, which resulted in 124,742 deletions. The sheer number of complaints and deletions, after investigation by Taobao, demonstrated that Taobao was aware of the high frequency of infringements; therefore the defendants should have put in place appropriate reasonable measures against repeat infringement.59 In 2012, in Descente v Today Beijing City Info Tech, the Beijing Higher People’s Court held that because the defendant was running a website for group purchasing it should apply not only reactive but also proactive measures.60 In other words, depending on its involvement in the services offered and the process used, its duty of care would increase or decrease.

57  See ‘Rules of Taobao.com Governing the Management of User Behaviors (non-mall)’ (adopted by Taobao on 15 September 2009) (providing that if a Taobao user committed any act of infringement, Taobao could prevent the accused vendor from releasing the goods, withdraw the disputed goods, make public announcements about the penalty, deduct points from the accused vendor, freeze the accused’s account on Taobao.com, cancel the account concerned, etc.).
58  See Shanghai Pudong New Area People’s Court E-Land Fashion v Taobao and Du Guofa [19 July 2010] (Ch.) (where E-Land Fashion claimed compensation for its losses and a public apology in newspapers from the defendants).
59  ibid. It certainly did not help Du Guofa’s defence that he had explicitly stated on his Taobao vendor’s site that some of his goods were high-quality counterfeits. Moreover, Du Guofa did not react with any counter-notifications to Taobao’s notifications that they would remove infringing links. In the eyes of the court, that was another indication that Du Guofa was an infringer.
60  See Beijing Second Intermediate People’s Court Descente Ltd v Today Beijing City Info. Tech. Co. [25 April 2012] 2011 no. 11699 (Ch.) affirming Beijing Higher People’s Court [19 December 2012] 2012 no. 3969 (Ch.).



1.3 Inferences

In 2005, Taobao was sued by the Swiss watch brands Longines, Omega, and Rado. The Swiss watch brands argued that Taobao should have known of counterfeits because of the large price differences between the genuine and counterfeit watches offered on its online platform. The parties settled.61 In 2013, the Trademark Law was amended. Article 57(6) of the Trademark Law 2013 became: ‘Providing, intentionally, convenience for such acts that infringe upon others’ exclusive right of trademark use, to facilitate others to commit infringement on the exclusive right of trademark use.’62 Compare this with the previous provision in Article 52(5) of the Trademark Law 2001, which did not include the condition ‘intentionally’ for infringement.63 Prior to 2013, language that included the intentional requirement only appeared in Article 50(2) of the Regulations Implementing the Trademark Law 2002.64 The inclusion of the intentionality language in the Trademark Law 2013 demonstrates the importance that the National People’s Congress’ drafters attached to the issue. Trade mark holders expressed concern that the term ‘intentionally’ excluded constructive knowledge and suggested use of the phrase ‘knew or should have known’. The concern was picked up by the legislators, since Article 75 of the Implementing Regulations of the Trademark Law 2014 does not include the word ‘intentionally’.65

2.  Intermediary Liability in the Case of Copyright Infringement

Before the e-Commerce Directive and the DMCA were able to inspire China’s IL regulation, Article 130 of the General Principles of the Civil Law 1987 could be applied to network service providers which, together with a direct infringer, infringed on a copyright

61  ‘Taobao sued for selling fakes’ (Global Times, 22 July 2011).
62  Trademark Law (n. 18) Art. 57(6).
63  See Trademark Law 2001 (n. 18) Art. 52(5) (stating that ‘impairing by other means another person’s exclusive right to the use of its registered trademark shall constitute an infringement on the exclusive rights to the use of a registered trademark’).
64  See Regulations for the Implementation of the Trademark Law, promulgated by State Council Decree no. 358 of 3 August 2002, and effective as of 15 September 2002, Art. 50(2) (Ch.) (defining infringement as ‘intentionally providing facilities such as storage, transport, mailing, concealing, etc. for the purpose of infringing another person’s exclusive right to use a registered trademark’).
65  See Regulations for the Implementation of the Trademark Law, promulgated by State Council Decree no. 358 of 3 August 2002, revised and promulgated by State Council Decree no. 651 of 29 April 2014, and effective as of 1 May 2014, Art. 75 (Ch.) (‘[a]n act of providing such facilities as storage, transport, mailing, printing, concealing, business premises, or an online goods trading platform for infringing upon another person’s exclusive right to use a registered trademark constitutes an act of providing convenience prescribed in subparagraph (6) of Article 57 of the Trademark Law’).


holder’s rights and caused that party damage, in which case they should bear joint liability.66 This fault-based liability doctrine was applied in Music Copyright Society of China v Netease & Mobile Communications in 2002.67 In that case, a distinction emerged between primary and secondary liability. Allegedly, Netease would be primarily liable for direct infringement of the right of communication to the public of the works, in that instance ringtone files. Mobile Communications, instead, would be secondarily liable for alleged negligence in its duty of care to examine the works it was disseminating or, after being informed by the copyright holder, to stop the transmission of the infringing works. However, the Beijing Second Intermediate People’s Court accepted Mobile Communications’ defence that it was merely providing a technical and passive service of network dissemination. It only received ringtones from Netease and forwarded them to its subscribers. According to the court, Mobile Communications was unable to select, examine, or delete the infringing ringtone files it transmitted, and was not at fault. This doctrine is also called the passive conduit or transmission doctrine.68 Article 20 of Regulations 2006 would later codify this conduit or transmission defence.69

2.1 Non-Interference

The same non-interference principle can be observed in Go East Entertainment v Beijing Century Technology in 2004.70 Here, the Beijing High People’s Court held that ChinaMP3.com, by selecting and organizing various links to infringing third-party sources, demonstrated that it could discriminate between licensed and unlicensed recordings; this showed that the defendant was negligent in its own duties and had intentionally participated in the illegal dissemination of unlicensed recordings. This made ChinaMP3.com jointly liable with the third-party websites under Article 130 of the General Principles of the Civil Law 1987.71 According to the court, it was irrelevant that the plaintiff had not sent any notice-and-takedown requests to enable the defendant to take the necessary measures to remove the infringing links.

66  See Opinions of the Supreme People’s Court on Several Issues concerning the Implementation of the General Principles of Civil Law, Art. 148 (stating that ‘a person who instigates or assists others to perform a tortuous act is joint tortfeasor, and shall bear civil liabilities jointly’).
67  See Beijing Second Intermediate People’s Court Music Copyright Society of China v Netease.com Inc. & Mobile Communications Corp. [20 September 2002] 2002 no. 3119 (Ch.).
68  cf. Transitory Digital Network Communications in 17 USC § 512(a) (US).
69  See Regulations 2006 (n. 14) (under the conditions that the network service provider should not choose or alter the transmitted works, and that the transmitted works be offered only to its subscribers).
70  See Beijing High People’s Court, Go East Entertainment Co. Ltd (HK) v Beijing Century Technology Co. Ltd [2 December 2004] no. 713 (Ch.).
71  Note that Art. 4 of the Interpretation of the Supreme People’s Court on Several Issues Concerning the Application of Law in the Trial of Cases Involving Copyright Disputes over Computer Network 2003 was the basis for the case. The 2006 version of the Interpretation supersedes the previous one and the provision can be found in Art. 3.


More specific regulation in regard to IL for online infringements was promulgated in 2000. This included the Regulation on Internet Information Services, which deals with providing information services to online subscribers.72 Article 15 of the Regulation prohibits the production, copying, publication, or distribution of information containing content forbidden by laws and regulations.73 Article 16 requires network service providers to immediately terminate a transmission, keep a record of it, and report it to the relevant authorities when they find unlawful information.74 The Interpretation of Several Issues Relating to Adjudication of and Application to Cases of Copyright Disputes on a Computer Network was enacted by the Supreme People’s Court in 2000 to clarify IL for online copyright infringement.75 Rule 4 imposes joint liability on the direct infringer and the intermediary for aiding and abetting copyright infringement. Rule 5 also imposes joint liability if the intermediary had clear knowledge of, or was warned by a rightholder based on solid evidence of, copyright infringement. After the above-mentioned Regulation on Internet Information Services, the copyright statutes were updated in 2001. However, the copyright law reform of 2001 only prepared China’s accession to TRIPs and did not deal with IL.76 Then, in 2006, the most important specific piece of legislation for IL for copyright infringement was promulgated: Regulations 2006.77 These Regulations distinguish between the following categories of network service providers: (1) those who provide automatic access to their subscribers;78 (2) those who provide automatic storage of works, performances, and audiovisual recordings from other network service providers to their subscribers;79 (3) those providing subscribers with storage space to make works, performances, and

72  Regulation on Internet Information Services, promulgated by State Council Decree no. 292 of 20 September 2000, Art. 2 (Ch.).
73  ibid.
74  ibid.
75  See Interpretation of Several Issues Relating to Adjudication of and Application to Cases of Copyright Disputes on Computer Network, adopted at the 1144th Meeting of the Adjudication Commission of the Supreme People’s Court on 21 December 2000 and in effect on 21 December 2000.
76  See Yong Wan, ‘Safe Harbors from Copyright Infringement Liability in China’ (2012–13) 60 J. of Copyright Soc’y USA 635.
77  See Regulations 2006 (n. 14). The Regulations were amended on 30 January 2013, by Order No. 634 of the State Council. However, for IL the changes are insignificant. Although the most important laws in China are promulgated by the National People’s Congress or sometimes the Standing Committee of the National People’s Congress, Regulations 2006 and its 2013 amendment were promulgated by the State Council.
78  See Regulations 2006 (n. 14) Art. 20. Cf. 17 USC § 512(a): Transitory Digital Network Communications.
79  See Regulations 2006 (n. 14) Art. 21 (providing that a network service provider caching works, performances, and audiovisual recordings from another network service provider ‘for the purpose of elevating the efficiency of network transmission’ will not be held liable for compensating the rightholder in damages, if it did not alter any of the automatically cached materials, did not affect the originating network service provider’s ability to obtain information about use of the cached materials, and automatically revises, deletes, or disables access to the materials where the originating network service provider does the same). Caching links falls outside the scope of this safe harbour, as Flyasia v Baidu makes clear. See Beijing High People’s Court Zhejiang Flyasia E-Business Co. Ltd v Baidu Inc. [2007] no. 1201 (Ch.) (where Baidu provided on its own initiative access to an archived copy and modified that copy). Cf. 17 USC § 512(b): System Caching.


audiovisual recordings available to the public;80 and (4) those providing searching or linking services to their subscribers.81 Network service providers in the first two categories cannot be held liable if they did not interfere with their automatic access or storage processes. For the latter two categories, network service providers have to comply with the following rules to benefit from the protection of the safe harbour.

2.2  Removal after a Notice-and-Takedown Request

Article 15 of Regulations 2006 clarifies that network service providers that provide storage space or searching or linking services for their subscribers should, after they receive a notice-and-takedown request82 from a rightholder, promptly remove or disconnect the link to the work, performance, or audiovisual recording that is thought to be infringing.83 The case law is inconsistent on whether one notice can concern multiple works. The court in Guangdong Mengtong Culture Development v Baidu in 2007 confirmed that this was possible.84 Other courts, however, have demanded that each work should receive its own notification.85 Article 14 of Regulations 200686 states that when rightholders file a notice-and-takedown request they must detail their contact information, information regarding the work, and how it can be located, as well as preliminary evidence of the infringement. After notification, the storage-space provider should promptly87 remove the alleged infringing

80  See Regulations 2006 (n. 14) Art. 22. Cf. 17 USC § 512(c): Information Residing on Systems or Networks At Direction of Users.
81  See Regulations 2006 (n. 14) Art. 23. In 2006, the Beijing District High People’s Court held that search engines are prima facie not liable for referring to infringing resources because their indexing engines cannot predict, distinguish, or control the content of the unrestricted websites they search. See Beijing High People’s Court EMI Group Hong Kong Ltd v Beijing Baidu Network Technology Co. Ltd [17 November 2006] no. 593 (Ch.). However, the safe harbour based on the referrer’s defence is lifted where search engines fail to remove, or insufficiently remove, the infringing links after notification by the copyright holder. This was decided by the Beijing High People’s Court in 2007, holding the search engine Alibaba liable for removing only fifteen out of twenty-six allegedly infringing recordings, pursuant to takedown notices. After receiving from the copyright holder a request to remove twenty-six links, the court held that Alibaba should have known that its search engine contained infringing links to those recordings, and was thus negligent in not terminating all the links pointed out. See Beijing High People’s Court Go East Entertainment Co. Ltd (HK) v Beijing Alibaba Technology Co. Ltd [20 December 2007] no. 02627 (Ch.). Compare this with 17 USC § 512(d): Information Location Tools.
82  See Regulations 2006 (n. 14) Art. 22(1) (stating that the network service provider that provides storage space to subscribers should make this clear and indicate its name, contact person, and network address; otherwise it will be held liable).
83  See Regulations 2006 (n. 14) Art. 15.
84  See Guangdong Mengtong Culture Dev. Co. Ltd v Baidu Inc. [2007] no. 17776 (Ch.).
85  See e.g. Warner Music Hong Kong Ltd v Alibaba Info. Tech. Co. Ltd [2007] no. 02630 (Ch.).
86  Regulations 2006 (n. 14) Art. 14.
87  Art. 14 of the Draft of the Regulations of September 2005 defined ‘promptly’ as within five days. However, in the final Regulations 2006, that definition was removed.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

290   Danny Friedmann

work, performance, or audiovisual recording; otherwise it will be liable.88 If the network service provider removes or disconnects a link to a work, performance, or audiovisual recording that turns out to be non-infringing, the rightholder who requested the removal will be held liable.89 Network service providers must replace or restore a link to the work, performance, or audiovisual recording90 on receipt of a counter-notification from the subscriber presenting evidence of non-infringement.91

2.3  Necessary Measures

Article 36(2) of the Tort Liability Law 200992 imposes an obligation on the network service provider to take necessary measures, such as deletion, blocking, or disconnection, after notification by a rightholder or if it knows that a network user is infringing the civil rights or interests of another person through its network services.93 If the network service provider fails to take the necessary measures in a timely manner, it will be jointly and severally liable for any additional harm along with the network user. Qiao has highlighted the debate about the exact scope of the knowledge standard: in the first and second drafts of the Tort Liability Law the term ‘actually knew’ was used, followed by ‘knew’ in the third draft, ‘knew or should have known’ in the fourth draft, and back to ‘knew’ in the fifth and final version.94 The Tort Liability Law makes a distinction between direct tort liability95 and indirect liability.96

2.4  No Financial Benefits

Article 22(4) of Regulations 200697 states that the safe harbour is not available to network service providers providing storage space that receive financial benefits from copyright infringements. However, unlike section 512(c) of the DMCA, it does not mention ‘without the ability to control’, which is the other condition for the US safe harbour. The logic of the US rule is that a network service provider that has no control over the infringement should not be held liable. In China, by contrast, a network service provider without control can in principle still be held liable. In practice, the specificity of the connection between the

88  See Regulations 2006 (n. 14) Art. 22(5).
89  ibid. Art. 24.
90  ibid. Art. 17.
91  ibid. Art. 16.
92  See Tort Liability Law (n. 40) Art. 36(2).
93  ibid. Art. 36(3) (noting that the infringement of the civil rights or interests of another person could concern copyright, trade mark, or patent infringement).
94  Qiao Tao, ‘The Knowledge Standard for the Internet Intermediary Liability in China’ (2012) 20 IJLIT 3.
95  See Tort Liability Law (n. 40) Art. 36(1).
96  ibid. Art. 36(2).
97  See Regulations 2006 (n. 14) Art. 22(4).


CHINA’S IP REGULATION AND OMNISCIENT INTERMEDIARIES   291

infringed work and the financial benefit for the network service provider determines whether it will lose the immunity provided by the safe harbour.98

2.5  Inducement and Contributory Liability

Tortious liability consists of inducement or contributory liability, both of which are based on encouragement of or aid to the wrongdoer.99 Tortious liability arises where the network service provider has, or should have, knowledge of the illegal nature of the subscriber’s activity that it has instigated or assisted.100 Article 22(3) of Regulations 2006 states that a network service provider that provides storage space is not liable if it does not know, and has no reasonable grounds to know, that the works, performances, or audiovisual recordings provided by a subscriber infringe another person’s rights.101 In other words, the network service provider will be held liable if it has actual knowledge—knowledge of the infringement itself—or should have deduced it from the circumstances, as any reasonable person would do.102 This standard of knowledge is noticeably lower than apparent knowledge, where a network service provider deliberately proceeds despite ‘red flags’ to that effect,103 that is, it is wilfully blind to the infringement. A network service provider therefore becomes liable if it has actual, constructive, or apparent knowledge.104

98  Guiding Opinions of the Beijing Higher People’s Court of 2010, s. 25 (noting that ‘[g]enerally, the advertising fees charged by an ISP for the information storage space service provided shall not be determined as the directly gained economic interests; the advertisements added by an ISP to specific works, performances or sound or video recordings may be taken into account in the determination of the fault of the ISP as the case may be’).
99  Before Metro-Goldwyn-Mayer Studios Inc. v Grokster Ltd, 545 US 913 (2005) (US), contributory liability could be used (defendant has knowledge of infringement by another and materially contributed to the infringement) as well as vicarious liability (defendant had control over another’s infringement and had a direct financial interest in it) as actions in the case of copyright infringement. Grokster added intentional inducement (defendant acts with the object of promoting infringement by others) to the possible actions. See Alfred Yen, ‘Torts and the Construction of Inducement and Contributory Liability in Amazon and Visa’ (2009) 32 Colum. J. of L. & Arts 513.
100  Qiao Tao (n. 94) 5.
101  See Regulations 2006 (n. 14) Art. 22(3).
102  See Yong Wan, ‘Safe Harbors from Copyright Infringement Liability in China’ (2012) 60 J. of Copyright Soc’y USA 646.
103  Shanghai First People’s Court Zhongkai Co. v Poco.com [2008] (Ch.) (noting ‘[a]s a professional video website, in its daily operation of the website, it must have known, or at least should have known that the movie concerned was unauthorized by viewing the poster and the introduction of the movies by the users’). See also Shanghai First Intermediate People’s Court Sohu v Tudou [2010] (Ch.).
104  Guiding Opinions of the Beijing Higher People’s Court of 2010, s. 19(1)–(4) (listing as indicia: content placed on a prominent page; infringing content on a prominent page; infringing content recommended or ranked (apparent knowledge); and any selection, organization, or classification of the alleged infringing material uploaded by service receivers (constructive knowledge)).



2.6  Non-Interference

A few norms provide general principles of non-interference. Article 22(2) of Regulations 2006 asserts that works, performances, or audiovisual recordings cannot be altered by a network service provider that provides storage space; otherwise liability will be incurred.105 On 17 December 2012, the Supreme People’s Court promulgated the Provisions on Relevant Issues Related to the Trial of Civil Cases involving Disputes over Infringement of the Right of Dissemination through Information Networks.106 Article 10 of these Provisions107 clarifies that if a network service provider, in providing web services, recommends the latest film and television programmes that can be downloaded or browsed or are otherwise accessible by the public on its web pages, by establishing charts, catalogues, indexes, descriptive paragraphs, or brief introductions, or in other ways, that is an indication to the court that the network service provider should have known that its users were infringing.108 Article 12 of the same Provisions states that the court may determine that the network service provider should know that uploaded content is infringing if popular films or TV shows are posted on the homepage or other prominent web pages, or if the hosting service provider actively selected, edited, arranged, or recommended the topic of the work, or produced specific ‘popularity lists’.109 The court will assume that the infringing nature of the uploaded work could not have escaped the attention of the hosting service provider, which nevertheless took no reasonable measures to stop the infringement.

3. Conclusions

In a complex interplay of successive laws, regulations, case law, judicial interpretations, and guidelines, a convergence has evolved in the case law of China in regard to IL for

105  See Regulations 2006 (n. 14) Art. 22(2).
106  See Provisions on Several Issues Concerning Application of Law in Civil Dispute Cases of Infringing the Right of Dissemination via Information Network, Judicial Interpretation 2012 no. 20 (passed on 26 November 2012 at the Trial Committee of Supreme People’s Court Conference no. 1561) (hereafter Provisions 2012). See also Gabriela Kennedy and Zhen Feng, ‘China’s highest court clarifies copyright liability of network service providers’ (Hogan Lovells, 5 February 2013).
107  See Provisions 2012 (n. 106) Art. 10.
108  The Supreme People’s Court issued five model cases decided by lower courts on 23 June 2014, which included a civil case concerning copyright infringement: CCTV International v Shanghai TuDou Network Tech. Co. Ltd. See Susan Finder, ‘A model copyright infringement case—‘A Bite of China’’ (Supreme People’s Court Monitor, 26 June 2014).
109  See Provisions 2012 (n. 106) Art. 12.


copyright and trade mark infringement. As a result, the Beijing Guidelines 2016110 have successfully codified many trends in the IL case law. These guidelines have mandatory authority only over Beijing courts; however, their persuasive authority can also reach other courts. Eight factors are indicative of knowledge on the part of the platform service provider:

(1) prominent placement of the infringing information on the site of the platform service provider;
(2) the platform initiated the editing, selection, sorting, ranking, recommendation, or modification of the alleged infringing information, indicating its interference;
(3) notification by the rightholder is sufficient to establish knowledge by the platform service provider of the alleged infringement;
(4) repeat infringement by the network vendor is an indication that the platform service provider has not taken appropriate reasonable measures;
(5) the existence of information that the network vendor acknowledged that it had infringed rights;
(6) the sale or offering of goods or services at a price that is unreasonably low;
(7) the platform service provider obtains a direct economic benefit from the spreading or transaction of the accused infringing information (this seems harsh without a corresponding condition that the platform service provider should have control; however, with advanced technology full control becomes a reality);
(8) the platform service provider knows of the existence of the alleged infringement from infringing behaviour concerning other trade mark rights (the platform service provider should make logical connections; here too technology, especially deep learning and predictive analytics, will be able to offer solutions).

A ninth factor can be gleaned from the case law and is implied in the Guidelines: that the platform has not set up and implemented a system to enforce against infringements via notice and takedown.
China, with its multi-layered censorship regime, extensively uses human reviewers to find and remove information that the government deems contrary to socialist values or a danger to state security and social harmony. However, technology has passed a critical threshold whereby cost-effective computing power can be used to filter servers proactively for infringements. Platforms such as Alibaba and Baidu, with their wealth of data, are best positioned to continuously teach their machines via deep learning. This can lead to ongoing improvement in the platforms’ predictive analytics, which in turn allows them to detect and identify infringements. In short, artificial intelligence makes each platform omniscient, which will deal a fatal blow to the already weak safe harbour protection in China. China’s duty of care is shifting from an obligation to take appropriate reasonable measures in the case of specific knowledge to an obligation to take reasonable measures in the case of general knowledge. The implication is

110  See Beijing Guidelines 2016 (n. 7).


that the filtering standard for platform service providers will be continuously intensified and that they will gradually have to proactively monitor and cleanse their servers. Despite the censorship system and technological advancements, the pendulum has swung back in the direction of weakening the duties of care in the E-Commerce Law draft. This is probably caused by regulatory capture by the powerful platforms, a Confucian reflex of the Chinese government to prefer self-regulation and alternative dispute-resolution mechanisms, including mediation and arbitration, so as not to stifle intermediaries’ innovative power, and confidence that direct infringers can be detected and identified. However, platforms, especially in China, which are now mature and financially powerful, are best positioned to make full use of technological advancements to filter what is on their servers. Until the impunity of direct infringers evaporates, it can be expected that pressure on platforms to take on more responsibility will increase.


Chapter 15

A New Liability Regime for Illegal Content in the Digital Single Market Strategy

Maria Lillà Montagnani

Within the European Digital Single Market (DSM) strategy,1 online intermediaries, often categorized as the gatekeepers of information,2 have become major protagonists of a variety of policy and legislative actions. Indeed, over the last few years the European Commission has issued a great number of documents that touch on illegal content online and platforms’ liability.3 Newly adopted rules, or those currently under adoption, contribute to the formation of an emerging liability regime for unlawful content that differs profoundly from the regime of conditional liability exemption envisaged in the e-Commerce Directive (ECD).4 Although the safe harbours adopted within the ECD

1  See European Commission, ‘A Digital Single Market Strategy for Europe’ COM(2015) 192 final (6 May 2015).
2  See Jonathan Zittrain, ‘A History of Online Gatekeeping’ (2006) 19(2) Harv. J. of L. and Tech. 253.
3  In this chapter, the terms ‘intermediaries’ and ‘online platforms’ are used interchangeably. Although the focus is on illegal content hosted by online platforms, the term ‘intermediaries’—which is an umbrella term—will still be used when considering both the existing liability regime under the e-Commerce Directive and the one that emerges as a result of the legislative initiatives adopted within the DSM strategy. On the difference between intermediaries and online platforms, see Rodriguez de las Heras Ballell, ‘The Legal Autonomy of Electronic Platforms: A Prior Study to Assess the Need of a Law of Platforms in the EU’ [2017] 3 Italian L.J. 149, 156–60.
4  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.

© Maria Lillà Montagnani 2020.


remain in principle intact,5 the context in which they apply is now enriched by a new Directive on Copyright in the Digital Single Market (the DSM Directive),6 a revised version of the Audiovisual Media Services Directive (the AVMSD),7 the extension to commercial platforms of the obligations provided by the Unfair Commercial Practices Directive (the UCPD),8 as well as many communications and recommendations dealing with the issue of illegal content online.9 All these initiatives introduce an enhanced liability regime for online intermediaries for third parties’ unlawful content,10 which should complement the current safe harbour regime but which results in a framework that lacks coherence and consistency. In fact, although the ECD exemptions continue to be a key element of the European system, there are further layers of obligations on online platforms, such as monitoring obligations, duties of diligence, reporting and flagging, demands to cooperate with rightholders, requirements to conclude licensing agreements, and the request to adopt appropriate measures to prevent the upload of unlawful content and to design platforms in a way that prevents consumers from being misled. The conditional liability regime and its exemptions were introduced by the ECD as a means for the development of online services and for enabling the information society to flourish.11 This system has so far horizontally shielded internet intermediaries—or, in the ECD’s terminology, ‘information society service providers’12—performing mere conduit, caching, and hosting from liability for the unlawful conduct of third parties as long as they are in no way involved with the information transmitted or, in the case of hosting services, do not have knowledge or awareness of the illegal activities and, once such knowledge is acquired, promptly act to stop them.
In a different way, the liability regime that emerges as a result of the new hard and soft law provisions adopted within the DSM strategy

5  See European Commission, ‘Online platforms and the Digital Single Market—Opportunities and Challenges for Europe’ (25 May 2016) COM(2016) 288/2, 9.
6  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92.
7  See Directive 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities [2018] OJ L303/69.
8  See Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market [2005] OJ L149/22 (UCPD). See also European Commission, ‘Commission Staff Working Document—Guidance on the implementation/application of Directive 2005/29/EC on Unfair Commercial Practices’ (25 May 2016) SWD(2016) 163 final.
9  See European Commission, ‘Tackling Illegal Content Online—Towards an enhanced responsibility of online platforms’ (28 September 2017) COM(2017) 555 final; European Commission, ‘Commission Recommendation on measures to effectively tackle illegal content online’ (1 March 2018) C(2018) 1177 final.
10  ibid.
11  See Carsten Ullrich, ‘Standards for Duty of Care? Debating Intermediary Liability from a Sectoral Perspective’ [2017] 8(2) JIPITEC 111.
12  On the lack of a coherent definition of internet intermediaries in the EU legal framework, see Hosuk Lee-Makiyama, ‘The Economic Impact of Online Intermediaries’ in Mariarosaria Taddeo and Luciano Floridi (eds), The Responsibilities of Online Service Providers (Springer 2017) 325.


A New Liability Regime for Illegal Content in the DSM Strategy   297

shields intermediaries on the basis of the actions that they take in relation to the nature of the content at stake, thus introducing a vertical exemption—one that is no longer linked to knowledge or awareness of the infringing nature of the content but relies instead on the organization that the intermediaries have adopted. At the same time, the regime goes beyond a merely vertical liability, as it anchors liability to the organization that a platform should have adopted for each kind of content at stake. Liability appears in fact linked to the structure that platforms should adopt in order to prevent illegal content from being uploaded or re-uploaded. In order to show this shift from a conditional liability regime to an organizational one13—that is, a regime where liability depends on the organization adopted by the undertaking in relation to the diverse types of content—in what follows I unpack the category of illegal content (copyright-infringing content, harmful content, and misleading content) and detail for each type the obligations that have been envisaged by the European institutions. This is necessary to establish the boundaries of the online intermediaries’ liability regime for illegal content that is emerging within the DSM strategy, analyse its main features, and address its coherence.

1.  The Issue of Illegal Content within the DSM Strategy

In one of its first documents on the creation of a European Digital Single Market,14 the EU Commission noted how the digital environment, while bringing about opportunities for innovation, growth, and jobs, also raises regulatory challenges that ought to be properly faced and overcome. To this end, the DSM strategy places special emphasis on three key pillars: (1) access for consumers and businesses to online goods and services across Europe; (2) creating the right conditions for digital networks and services to flourish; and (3) maximizing the growth potential of the digital economy. The first two pillars are those touching particularly on the issue of liability for unlawful content. Part of the first pillar is the aim of proposing EU legislation which will reduce the differences between national copyright laws and allow for wider online access to works. The Communication launching the DSM strategy clearly stated that the ‘rules on the activities of intermediaries in relation to copyright-protected content’ were to be clarified.15 Similarly, in the context of optimizing digital networks and services, the Commission stressed the market power of online platforms, which can potentially impact other participants in the marketplace.16 From that power also derives the need to guarantee that minors are protected from harmful content and that internet users are

13  See Herman Aguinis, ‘Organizational Responsibility: Doing Good and Doing Well’ in Sheldon Zedeck (ed.), APA Handbook of Industrial and Organizational Psychology (APA 2011) 855.
14  See DSM strategy (n. 1) 2.
15  ibid. 8.
16  ibid. 9.


protected from hate speech and misleading content. Such an enhanced responsibility was thus deemed to have to be reflected in a review of copyright rules and of the other provisions regulating content online.17 Following the statements of principle mentioned earlier, a public consultation was held in 2015 to carry out a comprehensive assessment of the role of online platforms. Among several issues, the consultation also covered how best to tackle illegal content on the internet, including copyright-infringing content, hate speech, and terrorism-related content.18 Although the issues of online intermediaries’ liability and tackling illegal content online were given a section of their own, the responses did not reflect one uniform view, owing to the vast differences between the types of respondents. For example, on the question of the fitness of the liability regime under the ECD, most individual users, content uploaders, and intermediaries considered it fit for purpose, while rightholders, their associations, and notice providers identified gaps and were unsatisfied with its effectiveness.19 It is in the subsequent Communication, ‘Online platforms and digital single market, opportunities and challenges for Europe’, adopted in May 2016, that the Commission sets out its official position on supporting the further development of online platforms in Europe and lays down the basis for a problem-driven approach to regulation, which has also been followed in relation to unlawful content.20 Having noted that online platforms ‘come in various shapes and sizes and continue to evolve at a pace not seen in any other sector of the economy’,21 the Commission stresses their rising importance as well as the need for them to operate in a balanced regulatory framework.22 In that framework, given their role in providing access to information and content, they are to bear more responsibility.
Even here it is not completely clear how balanced a framework can be where the safe harbour system remains untouched yet the problem-driven approach adopted requires that platforms behave—according to the nature of the content—more responsibly. The same principles mentioned previously are confirmed in the following mid-term review assessment of the progress towards the implementation of the DSM,23 where the issue of illegal content online is tackled from a practical point of view, for example in relation to the mechanisms and technical solutions necessary for its removal, which in turn must be effective and, at the same time, fully respectful of fundamental rights.24

17  ibid. 11.
18  See European Commission, ‘Public consultation on the regulatory environment for platforms, online intermediaries, data and cloud computing and the collaborative economy’ (24 September 2015).
19  See European Commission, ‘Synopsis report on the public consultation on the regulatory environment for platforms, online intermediaries and the collaborative economy’ (25 May 2016) 15.
20  See European Commission (n. 5) 9.
21  ibid. 2.
22  ibid. 4.
23  See European Commission, ‘Mid-Term Review on the implementation of the Digital Single Market Strategy’, COM(2017) 228 final (10 May 2017).
24  ibid. 9.


This last, more practical, issue is addressed in the subsequent Commission Communication on tackling illegal online content,25 adopted in September 2017, which focuses on online platforms’ liability in a somewhat broader context. In fact, all kinds of content—ranging from incitement to terrorism, illegal hate speech, and child sexual abuse material to infringements of intellectual property rights (IPRs)—are deemed to require an enhanced liability regime for intermediaries, which, by mediating access to content for most internet users, carry a significant societal responsibility. Interestingly, the Commission notes that online platforms should ‘adopt effective proactive measures to detect and remove illegal online content and not only limit themselves to reacting to notices which they receive’.26 Whatever content platforms would be tackling, the Communication provides a brief discussion of the principles to follow in order to comply with fundamental rights and to implement the safeguards needed to limit the risk of removing legal online content.27 In its discourse, the Commission emphasizes the importance both of transparency and of adopting appropriate counter-notice mechanisms, leading potentially to restoring without undue delay content that has been removed.28 Still, the use and development of automatic technologies to prevent the reappearance of illegal content online continues to be strongly encouraged.29 As a follow-up to the Communication mentioned earlier, in March 2018 the Commission issued a Recommendation on measures to effectively tackle illegal content online.30 Here too the Commission, on the one hand, details the notice and action procedure and calls for the adoption of proactive measures for all types of illegal content.
On the other hand, it emphasizes the need to implement such procedures in a diligent and proportionate manner.31 In particular, when automation in removing illegal content is involved, effective and appropriate safeguards should be established to ensure that decisions are accurate and well founded.32 Human oversight and verification should be in place whenever a detailed assessment of the content is needed to establish its illegal nature.33 In comparison to the previous instruments, the Recommendation puts more emphasis on safeguards and transparency principles, by explicitly referring to takedown policies and reports and by stating under which conditions automation can be used. It nonetheless mentions the need to respect fundamental rights only in the recitals.34 Notwithstanding this mention, both the Communication on tackling illegal content and the related Recommendation strongly insist on the necessity of adopting automatic filtering systems and encourage online platforms to invest in automatic detection technologies, thereby generating a profound conflict with the ECD, in particular Articles 14 and 15, which concern, respectively, the safe harbour for hosting activity and the ban on statutorily imposing filtering systems on intermediaries. The use of detection and filtering systems has an impact on the distinction between active and passive hosting providers elaborated

25  See European Commission, ‘Tackling Illegal Content Online’ (n. 9).
26  ibid. 10–11.
27  ibid. 14–16.
28  ibid. 17.
29  ibid. 19.
30  See European Commission, ‘Commission Recommendation’ (n. 9) points 18–20.
31  ibid. point 19.
32  ibid. point 20.
33  ibid.
34  European Commission, ‘Commission Recommendation’ (n. 9), recitals 13 and 40.


by the Court of Justice of the European Union (CJEU). Over the years, case law has progressively linked the requirements of knowledge and awareness to the active role that an intermediary performs, which goes beyond the neutral provision of services and the mere technical and automatic processing of data provided by users,35 to include cases in which the provider operates in a way that triggers knowledge or control.36 In other words, adopting proactive measures implying control may cause hosting providers to fall outside the shield developed for (passive) hosting providers and into the liability envisaged for (active) hosting providers. The inconsistency holds true even if the Commission, in its Communication on tackling illegal content online, states that such measures, when taken as part of the application of the terms of service of the online platform, would not render those service providers active ones, in particular when they process large volumes of material through automatic means.37 Moreover, Article 15 of the ECD expressly prohibits Member States from imposing on intermediaries monitoring obligations of a general nature, a principle also well established in the CJEU case law,38 from which the newly adopted rules seem to depart, if not expressly, then at least in the ex ante nature of the obligations that they impose on online intermediaries.

2.  Copyright-Infringing Content and the Copyright in the DSM Directive

To provide an instrument that established a copyright market which would be efficient for all parties involved and provide the right incentives for investment in, and dissemination of, creative content in the online environment,39 in May 2016 the European Commission presented, as part of the copyright legislative package, a draft Directive on Copyright in the DSM.40 Among the goals of the proposal was also that of reinforcing the position of rightholders to negotiate and be remunerated for the online exploitation of their content by online services storing and giving access to content uploaded by their users,41 an initiative needed to fill the so-called ‘value gap’.42 The rationale behind this

35  C-324/09 L’Oreal v eBay [2011] ECLI:EU:C:2011:474, para. 113.
36  See Enrico Bonadio, ‘Trade marks in online marketplaces: the CJEU’s stance in L’Oréal v eBay’ (2012) 18(2) Computer and Telecommunications L. Rev. 37; Eleonora Rosati, ‘Why a Reform of Hosting Providers’ Safe Harbour is Unnecessary Under EU Copyright Law’, CREATe Working Paper 2016/11, 9 (2016).
37  See European Commission, ‘Tackling Illegal Content Online’ (n. 9) 13.
38  ibid.
39  ibid.
40  For further information on this policy, see European Commission, ‘Modernisation of the EU copyright rules’.
41  See European Commission Communication, ‘Promoting a fair, efficient and competitive European copyright-based economy in the Digital Single Market’ COM(2016) 592 final.
42  See IFPI, ‘Rewarding creativity—fixing the value gap’.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

A New Liability Regime for Illegal Content in the DSM Strategy   301

was the assumption that certain intermediaries, operating on an ad-funded as opposed to a subscription-based business model, do not obtain licences from rightholders for the works they store on their platforms and that this deprives rightholders of the revenues to which they are entitled. While theoretically the value gap may sound like a convincing narrative, in practice it is questionable whether sound empirical evidence exists to back it up.43 In order to tackle the issue of copyright-infringing content, the proposal encompassed the strongly debated Article 13,44 now Article 17, which carved out a special regime for intermediaries hosting a large amount of works and required them either to conclude agreements with rightholders and implement measures ensuring the functioning of such agreements or, in any case, to prevent the availability of copyright-infringing content. In practice, Article 13 introduced a twofold duty on such providers: to conclude licensing agreements with rightholders for the use of their works and to (make best efforts to) prevent the availability on their services of works identified by rightholders through cooperation with the service providers.45 In effect, Article 13 established a monitoring obligation to actively filter third parties’ content in order to identify and prevent copyright infringement, to be implemented via appropriate and proportionate content-recognition technologies. After two years of negotiations, in September 2018 the Parliament approved its version of amendments to the Commission’s proposal, which was then sent to the trilogue between Parliament, Commission, and Council.
The version resulting from the trilogue was finally approved by the Parliament in March 2019.46 While licensing remains the main obligation for the addressees of Article 17, the provision presents at least three relevant changes in comparison with the Commission’s version. First, it clearly states in paragraph 1 that ‘online content sharing service providers perform an act of communication to the public’, which is the reason why they should conclude fair and appropriate licensing agreements with rightholders, which cover ‘the

43  See Giuseppe Colangelo and Mariateresa Maggiolino, ‘ISPs’ copyright liability in the EU digital single market strategy’ (2018) 26(2) IJLIT 156–9.
44  See Open Letter #2 from European Research Centres to Members of the European Parliament and the Council of the European Union, ‘The Copyright Directive is Failing’ (26 April 2018); Reto Hilty and Andrea Bauer, ‘Position Statement of the Max Planck Institute for Innovation and Competition on the Proposed Modernisation of European Copyright Rules’ (1 March 2017); Open letter to the European Commission by 40 academics (30 September 2017); Martin Senftleben and others, ‘The Recommendation on Measures to Safeguard Fundamental Rights and the Open Internet in the Framework of the EU Copyright Reform’ (2018) 40(3) EIPR 149, 149–63; Mathias Leistner and Axel Metzger, ‘The EU Copyright Package: A Way Out of the Dilemma in Two Stages’ (2017) 48(4) IIC 381; Giancarlo Frosio, ‘To Filter or Not to Filter? That Is the Question in EU Copyright Reform’ (2018) 32(6) Cardozo Arts & Entertainment L.J. 331.
45  See Art. 13 of the Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market, COM/2016/0593 final.
46  See also Chapter 28.


liability for works uploaded by the users of such online content sharing services in line with the terms and conditions set out in the licensing agreement’.47 Alternatively, according to paragraph 4, letters (a) and (b) of Article 17, when rightholders do not wish to conclude licensing agreements, online content-sharing service providers shall, in order to avoid liability, demonstrate that they have ‘made best efforts to obtain an authorisation, and . . . to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information’. The introduction of a form of direct liability on platforms for unlawful content posted by their users was prompted by the CJEU’s Filmspeler48 and Ziggo49 cases, both decided in 2017. In these judgments, in the absence of the application of the safe harbour, the intermediaries in question were indeed held directly liable, even though the content was uploaded by their users.50 Moreover, the Parliament’s version mirrors the negotiating mandate agreed on 25 May 2018 by the Council’s Committee of Permanent Representatives (COREPER),51 in which clarification was sought as to the conditions under which intermediaries engage in an act of communication or making available to the public under Article 3 of Directive 2001/29/EC.52 In fact, Article 13 of the previously mentioned Council version clearly affirmed that online content-sharing service providers are liable for communication or making available to the public, unless they employ ‘best efforts’ to prevent the availability of specific works via effective and proportionate measures and provided that those services act expeditiously to remove or disable access to the works upon notification by rightholders.
In this respect, the final version of Article 17 goes beyond the above by affirming intermediaries’ direct liability and disregarding the knowledge requirement, actual or constructive, that has always been a pivotal element of the conditional liability exemption under Article 14 of the ECD.53

47  Directive 2019/790/EU (n. 6) Art. 17(1)–(2).
48  See C-527/15 Stichting Brein v Filmspeler [2017] ECLI:EU:C:2017:300.
49  See C-610/15 Stichting Brein v Ziggo BV [2017] ECLI:EU:C:2017:456.
50  See Eleonora Rosati, ‘The CJEU Pirate Bay Judgment and Its Impact on the Liability of Online Platforms’ (2017) 39(12) EIPR 737; Jane Ginsburg, ‘The Court of Justice of the European Union Creates an EU Law of Liability for Facilitation of Copyright Infringement: Observations on Brein v. Filmspeler [C-527/15] and Brein v. Ziggo [C-610/15]’, Columbia Law and Economics Working Paper no. 572 (2017); Jane Ginsburg and Jake Ali Budiardjo, ‘Liability for Providing Hyperlinks to Copyright-Infringing Content: International and Comparative Law Perspectives’ (2018) 41(2) Colum. J. of L. & Arts 153; João Pedro Quintais, ‘Untangling the Hyperlinking Web: In Search of the Online Right of Communication to the Public’ (2018) 21(5)–(6) J. of World Intellectual Property 385, 385–420.
51  See Council of the European Union, ‘Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market—agreed negotiating mandate’ (25 May 2018) 2016/0289 (COD) (hereafter Agreed Negotiating Mandate).
52  See Directive 2001/29/EC of 22 June 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10, Art. 3 (Information Society Directive).
53  For a complete analysis of the knowledge requirement, see Christina Angelopoulos, ‘On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market’, SSRN Research Paper no. 2947800, 8–12 (6 April 2017).


Secondly, the final version of the Directive redefines the platforms to which Article 17 applies. It still refers to ‘online content sharing service providers storing and giving access to large amounts of works’—which is only slightly different from the previous ‘information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users’—but it does not include ‘providers of services, such as not-for-profit online encyclopedias, not-for-profit educational and scientific repositories, open source software-developing and -sharing platforms, providers of electronic communications services as defined in Directive (EU) 2018/1972, online marketplaces, business-to-business cloud services and cloud services that allow users to upload content for their own use’.54 Here too the mandate position is embraced.55 However, despite the exclusion of non-profit service platforms, the question as to how large the amount of content stored by the service providers must be for them to fall within the scope of Article 17 remains unclear. Thirdly, while the Commission focused—in both Article 13 and its corresponding recitals 38 and 39—on effective content-recognition technologies and referred, more generally, to technological measures to be adopted by platforms in order to prevent infringing content from being uploaded, Article 17 does not make any explicit reference to technology. While collaboration between platforms and rightholders remains an important means for tackling illegal online content, any mention of envisaging technological measures or content-recognition technologies is dropped.
However, even though not explicitly, Article 17 also points in the direction of monitoring systems, as paragraph 4, letter (c) clearly requires that online content-sharing service providers, on receiving notice from rightholders, act expeditiously not only ‘to disable access to, or to remove from, their websites’ the signalled content, but also to prevent that content from being re-uploaded. As a matter of fact, such an obligation can only be complied with through a system that filters the content that users try to upload.56 Despite—or perhaps because of—these changes, the issue of making sense of this new regime in light of both Article 15 of the ECD and, more generally, the safe harbour regime remains. On the one hand, in fact, it is likely that when licensing agreements are not available, in order to avoid direct liability for violation of the right of communication to the public, platforms will adopt—when they have not already done so57—mechanisms that do monitor third parties’ content. On the other hand, there is a general lack of coordination between Article 17 and the safe harbour regime as interpreted in the CJEU’s case law on active and passive hosting providers. Specifically, platforms that do not comply with the obligations mentioned above will lose the benefit of Article 14 of the

54  See Directive 2019/790/EU (n. 6) Art. 2(6).
55  Agreed Negotiating Mandate (n. 51), recital 37a and b and Art. 2(5).
56  For the trend towards algorithmic content regulation and enforcement, see Maria Lillà Montagnani, ‘Virtues and Perils of Content Regulation in the EU—A Toolkit for a Balanced Algorithmic Copyright Enforcement’ forthcoming in (2019–2020) 11(1) Case Western J. of L., Tech. & the Internet.
57  The Content ID system developed by Google falls into this category. Regarding its functioning, see Christina Angelopoulos and others, ‘Study of Fundamental Rights Limitations for Online Enforcement through Self-regulation’, Institute for Information Law (IViR) (2016) 63–70.


ECD, and no mention is made of the need to assess the role of the intermediary.58 Instead, recital 38 of the Commission’s proposal still clarified that, had such a scenario occurred, a case-by-case assessment would be needed, and it expressly referred to the need to verify whether the online platforms played an active role before making them liable.59 This raises the question as to how to reconcile Article 17 with the general safe harbour regime set out under the ECD and judicially developed by the CJEU’s case law on active and passive hosting providers. More generally, if, in the absence of a compulsory licensing scheme,60 the way to avoid direct liability is by adopting a filtering system, there is a concrete risk that certain uses of protected content which are permitted as exceptions and limitations to copyright would no longer be available to their beneficiaries.61 This would therefore significantly limit the already narrow exception regime of copyright law in the digital environment.62 Some argue that this would be mainly due to the fact that platforms would resort to cheaper filtering systems.63 However, it is highly doubtful whether any filtering system currently in use, expensive or cheap, is sophisticated enough to draw the line adequately between, for instance, a parody and an infringing use, or to handle any of the other exceptions provided in Article 5 of Directive 2001/29/EC or in the diverse Member States’ copyright laws.64 What exacerbates the issue is the lack of EU-level harmonization of copyright exceptions and limitations, which requires that, for filtering systems to be effective, they ascertain on a case-by-case basis the infringing nature of the content in a geographically sound manner that takes into account the diverse existing national exception regimes.

58  See Directive 2019/790/EU (n. 6) Art. 17(3).
59  See Maria Lillà Montagnani and Alina Yordanova Trapova, ‘Safe Harbours in Deep Waters: a New Emerging Liability Regime for Internet Intermediaries in the Digital Single Market’ (2018) 26(4) IJLIT 299–302.
60  See Christina Angelopoulos and others, ‘The Copyright Directive: Misinformation and Independent Enquiry’ (29 June 2018) 3.
61  See Information Society Directive (n. 52) Art. 5.
62  See Guido Westkamp, ‘The Three-Step Test and Copyright Limitations in Europe: European Copyright Law between Approximation and National Decision Making’ (2008) 1(1) J. of Copyright Soc’y USA 1.
63  See Angelopoulos and others (n. 60) 3.
64  See Directive 2001/29/EC (n. 52) Art. 5. On the fragmentation of the exceptions regime, see also Bernt Hugenholtz, ‘Why the Copyright Directive is Unimportant, and Possibly Invalid’ (2000) 11 EIPR 501.

3. Harmful Content within the Reformed Audiovisual Media Services Directive

Between 6 July 2015 and 30 September 2015, a public consultation on the review of the AVMSD was also undertaken, with the objective of making the European audiovisual


media landscape fit for purpose and aligned with the twenty-first-century media framework. In particular, the aim was to establish a level playing field among operators and align the rules valid for both traditional broadcasters and new video-on-demand services.65 As a result of the consultation, in May 2016 the Commission proposed a revision of the AVMSD in which new provisions focusing on combating hate speech and the dissemination of content harmful to minors were adopted. This introduced a new set of obligations for AVMSD operators, as they are involved first-hand in tackling unlawful content—in particular, content which is harmful to minors, hate speech, and terrorism-related content. These obligations must be complied with regardless of the nature of the service offered by AVMS operators—hosting or broadcasting—in line with the urgent need to make online platforms more responsible and to introduce an enhanced liability regime.66 On 2 October 2018, the European Parliament plenary session approved the revised AVMSD as part of the DSM strategy.67 The revision was adopted by the European Parliament and the Council on 14 November 2018, to be implemented by the Member States within the following twenty-one months. To illustrate the new liability regime, it is first necessary to understand the definition of ‘video-sharing platforms’ as, besides now being included within the scope of the AVMSD, they are also the specific addressees of several new obligations. Video-sharing platforms are defined as commercial services addressed to the public, where the principal purpose of the service (or an essential functionality of it) is devoted to providing programmes and user-generated videos to the general public.
The scope of such a service must then be that of informing, entertaining, or educating; the means adopted should be electronic communications networks; and, finally, the content should be organized in a way determined by the provider of the service, in particular by displaying, tagging, and sequencing.68 The above-defined platforms fall within the regulation introduced by the newly adopted Chapter IXa, entitled ‘Provisions applicable to Video-Sharing Platform Services’, which is much more articulated than that initially proposed by the Commission. In particular, Article 28b69 brings in an obligation for Member States to ensure that appropriate measures are taken by video-sharing platforms to protect not only—as already stated in the Commission’s proposal70—minors from content which may impair their physical, mental, or moral development and the general public from content containing incitement to violence or hate speech,71 but also the latter from content the dissemination of

65  See Andrej Savin, ‘Regulating Internet Platforms in the EU—the Emergence of the “Level Playing Field” ’ (2018) 34(6) Computer L. & Security Rev. 1215–31.
66  ibid.
67  See European Parliament, ‘Legislative resolution on the proposal for a directive of the European Parliament and of the Council amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services in view of changing market realities’ (2 October 2018) COM(2016)0287–C8-0193/2016–2016/0151(COD).
68  See Directive 2018/1808 (n. 7) Art. 1.
69  Whereas in the Commission’s proposal it was Art. 28a which encompassed the new obligations.
70  See Directive 2018/1808 (n. 7) Art. 28b(3).
71  ibid. Art. 28b(1)(a)–(b).


which constitutes a criminal offence, namely ‘public provocation to commit a terrorist offence’, ‘child pornography’, and ‘racism and xenophobia’.72 In addition, under the revised Directive, video-sharing platforms also have to respect certain obligations regarding the commercial communications for which they are responsible and be transparent about commercial communications that are declared by users when uploading content of a commercial nature. These duties—which are without prejudice to Articles 12 to 15 of the ECD (while in the Commission’s proposal the reference was only to Art. 14 ECD)—must be observed through the adoption of measures that are appropriate ‘in light of the nature of the content in question, the harm it may cause, the characteristics of the category of persons to be protected as well as the rights and legitimate interests at stake’.73 Similarly, such measures should also be ‘practicable and proportionate, taking into account the size of the video-sharing platform service and the nature of the service that is provided’,74 and, more importantly, should not lead to any ex ante control or upload-filtering of content insofar as this would not comply with Article 15 of the ECD. In particular, appropriate measures include reporting and flagging systems; systems through which video-sharing platform providers explain to their users what effect has been given to reports and flags; age verification systems; content-rating systems; parental control systems; media literacy measures and tools, together with raising users’ awareness of those measures and tools; and, finally, easy-to-use and effective procedures for handling and resolving users’ complaints to the video-sharing platform provider in relation to the implementation of the measures.75 Compared to the DSM Directive, the proposal to amend the AVMSD has prompted less debate.
However, even Article 28b—and previously Article 28a as initially proposed by the Commission—raises some issues in relation to its interaction with the ECD provisions. In principle, the revised Directive does not amend the scope of intermediaries’ liability as set by the ECD and is in line with the problem-driven approach declared by the Commission in the initial Communication on the DSM strategy,76 targeting specific problems such as the protection of minors and hate speech. Similarly, still in principle, the ‘appropriate measures’ may seem—in comparison with the Commission’s proposal—less at risk of violating the ban on general monitoring obligations pursuant to Article 15 of the ECD,77 since Article 28b clearly states that the adopted measures ‘shall not lead to any ex-ante control measures or upload-filtering of content which do not comply with Article 15 of Directive 2000/31/EC’. However, the issue of how to reconcile, in practice, the new duties with the ban imposed by Article 15 remains critical. Even more confusing is paragraph 6 of Article 28b, according to which ‘Member States may impose on video-sharing platform providers measures that are more detailed or stricter than the measures’ mentioned earlier. Yet, at the same time, ‘when adopting such measures, [they] shall comply with the requirements set out by . . . Articles 12 to 15

72  ibid. Art. 28a(1)(c).
73  ibid. Art. 28(b)(3).
74  ibid.
75  ibid. Art. 28b(3).
76  See European Commission, ‘Online Platforms and the Digital Single Market’ (n. 5) 9.
77  See Ullrich (n. 11) 117.


of Directive 2000/31/EC’. Such phrasing raises the question as to what measures could be stricter than a notice-and-takedown procedure, if not an ex ante filtering obligation—which, however, is clearly banned as a violation of Article 15 of the ECD. This, once again, shows how critical it is to determine what is meant by ‘appropriate measures’ in order to establish a coherent framework.

4. Misleading Content and the Unfair Commercial Practices Directive

Again with a view to increasing consumer confidence in the digital market and offering more certainty, in May 2016 the Commission adopted a Communication on cross-border e-Commerce, outlining the strategies to be undertaken to stimulate and promote it.78 The Communication is accompanied by a set of guidelines on the application and implementation of the Unfair Commercial Practices Directive (UCPD Guidance),79 which clarifies some concepts and provisions of the Directive itself and illustrates the relationship between it and other European rules—an exercise made necessary by the growing body of principles deriving from the decisions of the CJEU and several national courts. More importantly, the UCPD Guidance clarifies that the rules on unfair commercial practices (UCP) are also applicable to the new business models that develop in a digital environment,80 for example online commercial platforms.81 Although this should not, in principle, clash with the safe harbour established by the ECD, in practice the coordination between the obligations that compliance with the UCP rules requires and the liability regime—in particular Articles 14 and 15—is not easy to realize. In the first place, a platform must observe the UCPD when it qualifies as a ‘trader’ under Article 2(b) of the UCPD, on a case-by-case assessment. For example, a platform that, while acting for purposes relating to its business, charges a commission on transactions between suppliers and users, provides additional paid services, or draws revenue from targeted advertising can certainly be deemed a trader falling within the scope of the UCPD.
It follows from this qualification that, pursuant to Article 5(2) of the Directive, platforms must comply with the rules of professional diligence and exercise the normal degree of special competence and attention that it is reasonable to expect from a professional towards its consumers.82 Moreover, online platforms have to comply with the transparency requirements set out in Articles 6 and 7 of the UCPD, which entail that they refrain

78  See European Commission, ‘A comprehensive approach to stimulating cross-border e-Commerce for Europe’s citizens and businesses’ COM(2016) 320 final (25 May 2016).
79  See European Commission, ‘UCPD Guidance’ (n. 8).
80  ibid. 5.
81  ibid. 127.
82  ibid. 128.


from misleading actions and omissions whenever intermediating the promotion, sale, or supply of a product to consumers. These professional diligence duties, as well as the transparency requirements by which online platforms must abide, must be coordinated with the exemption regime established by the ECD for content-hosting activities. On the one hand, Article 14 is often invoked by platforms claiming that, as mere ‘hosts’, they are not to be held liable for the information hosted even if it violates consumer protection and constitutes an unfair commercial practice.83 On the other hand, Article 1(3) of the ECD states that the Directive ‘complements Community law applicable to information society services without prejudice to the level of protection for, in particular, public health and consumer interests, as established by Community acts and national legislation implementing them in so far as this does not restrict the freedom to provide information society services’. While in principle the ECD and the relevant EU consumer acquis communautaire should apply in a complementary manner,84 in practice that is not always the case. The UCPD Guidance, by expressly requiring that platforms comply with the UCPD’s obligations, increases the number of clashes between platforms invoking the Article 14 safe harbour for their hosting activity and parties that—harmed by the misleading content hosted—complain of a lack of compliance with the UCPD provisions. This is because platforms which are considered ‘traders’ should take appropriate measures to enable relevant third party traders to comply with EU consumer and marketing law requirements and help users to clearly understand with whom they are concluding contracts.
Such measures could imply, for example, enabling relevant third party traders to indicate that they act as traders, and letting all platform users know that they will only benefit from protection under EU consumer and marketing laws in their relations with those suppliers who are traders. Most importantly, platforms should design the structure of their websites in a manner that allows professional third parties to present information to users of the platform in compliance with Union rules on commercial and consumer law—in particular, when the information at stake amounts to the invitations to purchase referred to in Article 7(4) of the Directive.85 Now, to comply with such professional diligence requirements, online platforms adopt measures such as those described earlier, which make them intervene on the content. This turns them into active hosting providers and means that they are no longer able to invoke the intermediary liability exemption under the ECD. In addition, this kind of compliance also runs the risk of violating Article 15 of the same Directive. It seems, thus, that instead of being complementary, the UCPD and the ECD are alternatives, if not—as in the case of certain appropriate measures—conflicting.

83  See Christoph Busch and others, ‘The Rise of the Platform Economy: A New Challenge for EU Consumer Law?’ (2016) 5(3) J. of European Consumer and Market L. 3.
84  See European Commission, ‘UCPD Guidance’ (n. 8) 132.
85  See Lupiáñez-Villanueva and others, ‘Behavioural Study on the Transparency of Online Platforms’ (European Commission, 2018).


In other words, it is questionable how the liability exemption contained in Article 14 of the ECD can be reconciled with the duty of due diligence contained in Article 5 of the UCPD, since from the latter emerges a ‘duty of activation’ which could find concretization in a general obligation to monitor or to carry out fact-finding. This, on the one hand, makes the platform take on an active role and, on the other, violates Article 15 of the ECD. This is also the case for those platforms that merely provide ratings of goods and services sold or supplied by other parties, for which it may be difficult to invoke the liability exemption granted by the ECD. Insofar as they are to be considered ‘traders’, they also have a general duty to actively seek facts or circumstances indicating that users have uploaded misleading reviews; a duty that is clearly in conflict with the prohibition of actively and generally monitoring users’ uploaded content under Article 15 of the ECD.86

5. Final Remarks

From the analysis of the new provisions touching on intermediaries’ liability within the DSM strategy—in particular the Copyright in the DSM Directive, the AVMSD, and the UCPD Guidance—a new liability regime for online platforms emerges in relation to unlawful content uploaded by third parties. This emerging system—amounting to an ‘enhanced liability regime’87—is quite complex and, although in principle it does not modify the safe harbours, it does introduce a new set of obligations and duties of care whose coexistence with the ECD exemption regime raises significant doubts.88 The first and main complexity of the emerging liability regime is the scope of the new duties of care that weigh on online platforms. In line with the problem-driven approach adopted in the Communication on the DSM,89 each of the hard and soft law instruments analysed introduces obligations for online intermediaries targeted at content of a specific nature. This approach generates a regime that lacks coherence both internally—in terms of the consistency of the diverse obligations that a platform must implement simultaneously—and externally, when these same obligations are to be coordinated with the safe harbour regime under the ECD and, more generally, with the overall acquis. As far as the internal dimension is concerned, while the introduction of a duty of care is not novel in this context,90 a lack of certainty as to its scope and limits—as

87  See European Commission, ‘Tackling Illegal Content Online’ (n. 9).
88  See Reto Hilty, ‘Use of Protected Content on Online Platforms’ in Reto Hilty and Valentina Moscon, ‘Modernisation of the EU Copyright Rules. Position Statement of the Max Planck Institute for Innovation and Competition’, Max Planck Institute for Innovation and Competition Research Paper no. 17-12/2017 (2017) 99.
89  See European Commission, ‘Tackling Illegal Content Online’ (n. 9).
90  Carsten Ullrich, ‘A Risk-based Approach Towards Infringement Prevention on the Internet: Adopting the Anti-Money Laundering Framework to Online Platforms’ (2018) 26(3) IJLIT 226, 234.


well as to the diverse degrees of ‘care’ required depending on the nature of the content at stake—prompts intermediaries to be proactive in the fight against illegal content online. Indeed, the lack of detail on intermediaries’ obligations leaves market operators in charge of self-regulating their actions within the general duties of care. The several instruments adopted by the EU institutions often refer to the need for cooperation and information sharing between platforms and rightholders, and encourage best practice sharing between the parties91 and the creation of codes of conduct—which in the specific case of the AVMSD can also be the result of co-regulatory processes.92 In particular, in the area of hate speech and terrorist content, regulatory efforts already rest on a (non-binding) code of conduct between major social media platform operators, which is now in its third round of monitoring.93 This call for more self-regulation generates a paradoxical scenario. On the one hand, the EU institutions are aware that intermediaries should be involved in the fight against and prevention of illegal online content and that their activity—which should take place in a regulated environment—is a form of enforcement.94 To this end, they have not entirely delegated to platforms the fight against and prevention of illegal online content and, within the DSM, have imposed statutory duties of care on them. On the other hand, however, the vagueness of such duties of care requires intermediaries to resort to self-regulation to implement them—recourse that is also recommended by the EU institutions themselves. As a result, intermediaries end up once again taking the online enforcement of the EU rules against illegal content into their own hands: this time not to shield themselves from liability—as it was when the ECD was adopted—but to comply with the newly adopted obligations.
As for the external dimension, namely coordination between those duties of care and the other provisions of the acquis—in particular the safe harbour regime and the related established case law of the CJEU—a profound lack of consistency emerges. This is the case with the appropriate measures that are to be taken according to the Copyright in the DSM Directive, the AVMSD, and the UCPD, on the one hand, and Articles 14 and 15 of the ECD, on the other. In particular, within the enhanced liability regime a new interpretation is attached to ‘active hosting providers’. It is sufficient simply to organize activity, even via automatic means, to trigger the duty to remove illegal content,95 and the mere fact of not having adopted such effective measures could make intermediaries liable. Similarly, Article 15’s prohibition of monitoring obligations of a general nature constitutes, in principle, a real obstacle to imposing duties of care on intermediaries within the DSM framework. Member States and national courts continue to experience difficulties in deciding when a specific infringement-prevention obligation becomes general, and in deciding whether a content-verification process—like those already adopted by several platforms—actually amounts to an act of general monitoring.96

91  See Directive 2019/790/EU (n. 6) Art. 17(10).
92  See Directive 2018/1808 (n. 7) Art. 4a(1).
93  See European Commission, ‘Results of Commission’s last round of monitoring of the Code of Conduct against online hate speech’.
94  See Angelopoulos and others (n. 57).
95  See Montagnani and Trapova (n. 59) 309.
96  See Ullrich (n. 90) 231.


The emerging liability regime introduced within the DSM strategy raises further concerns in relation to the way in which online intermediaries will behave in order to comply with this ‘new’ liability regime. It is likely that, to be on the safe side, and given their nature as economic agents, they will design their platforms in a way that makes them ex ante compliant with all the obligations at issue. Besides modifying the nature of online intermediary liability itself—which mutates from ‘conditional’, as set out in the ECD, into ‘organizational’, that is, dependent on the structure of the platform97—the new regime also risks exacerbating the role of ‘cyber-regulators’ or ‘cyber-police’ that online intermediaries have already begun to assume.98

97  See Natali Helberger and others, ‘Governing online platforms: From contested to cooperative responsibility’ (2017) 34(1) The Information Society 11.
98  See Luca Belli and Cristiana Sappa, ‘The Intermediary Conundrum: Cyber-Regulators, Cyber-Police or Both’ (2017) 8 JIPITEC 189.


PART IV

A SUBJECT MATTER-SPECIFIC OVERVIEW


Chapter 16

Harmonizing Intermediary Copyright Liability in the EU: A Summary

Christina Angelopoulos

Intermediaries are ubiquitous in the modern digital environment. Everything an individual does on the web requires, in one way or another, the involvement of an internet intermediary. Inevitably, intermediaries are also used to infringe copyright online. This raises an obvious question: what is their responsibility for such infringements? The topic is a vexed and multi-dimensional one. Traditionally, different national jurisdictions in Europe have approached it in different ways. To address this fragmentation, harmonizing momentum has picked up at the EU level. Over the past decade, the Court of Justice of the European Union (CJEU) has issued a series of judgments covering the liability of intermediaries for the infringements of their end-users. More recently, the European legislator has also taken action. In spring 2019, the European Parliament and the Council of the EU approved a new Directive on Copyright in the Digital Single Market.1 Article 17 of this Directive is dedicated to the liability of certain internet service providers for uploads of infringing material by end-users. The co-decision process leading up to the adoption of the Directive was trailed by heated political debate. As lawmakers grapple with the economic and policy stakes of intermediary liability, it is also necessary for legal theorists to consider its doctrinal implications: a legal problem deserves, first and foremost, a legal solution. In research conducted from 2011 to 2015 at the Institute for Information Law (IViR) of the University of Amsterdam, I examined

1  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92.

© Christina Angelopoulos 2020.


the lessons of European tort law for intermediary liability in copyright.2 The objective was to plot a path to the European harmonization of the area. After all, national divergences in legislation do not make sense in the transnational environment of the internet. Given the difficulties encountered in the Member States in constructing a coherent legal edifice for intermediary liability in copyright, and in view of copyright’s essential nature as a subcategory of tort law, a return to the tort basics presents itself as the most cogent avenue towards the doctrinal reconstruction of this fraught field of law—and the emergence of a more stable and comprehensive analytical structure. This chapter provides a brief summary of that research.

1.  The Current Incomplete EU Framework for Intermediary Liability in Copyright

Existing EU law offers the natural starting point for any European harmonization of intermediary liability in copyright. Undoing work already done would create confusion. Properly understanding the current state of harmonization in the area is therefore essential. The centrepiece of the current EU framework on intermediary liability can be found in the e-Commerce Directive (ECD). Here, the EU legislator has set out a specialized immunity regime for internet intermediaries—the so-called ‘safe harbours’. These shield providers from liability in the provision of three types of service: mere conduit, caching, and hosting.3 The immunities are horizontal, meaning that they apply across a variety of different areas of law, including copyright. Article 15 of the Directive supplements the safe harbours with a general monitoring prohibition. This bars Member States from requiring that intermediaries protected by an immunity be subjected to general obligations to monitor the information which they transmit or store or actively to seek facts or circumstances indicating illegal activity.4 In the specific area of copyright, these provisions are complemented by Article 8(3) of the Information Society Directive (ISD).5 This requires that copyright owners be able,

2  For more detail please see Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016).
3  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on Electronic Commerce) [2000] OJ L178/1, Arts 12–14.
4  Clarification on what is meant by the term ‘general monitoring obligation’ has been requested of the CJEU in pending case C-18/18 Facebook Ireland.
5  See Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the ­harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10.


under national law, to apply for an injunction against intermediaries whose services are used by third parties to infringe copyright. A significant division exists between the intermediaries that are eligible for the ECD’s safe harbours and those that are subject to the ISD’s demands on injunctions. The CJEU has interpreted the safe harbours as requiring that intermediaries be ‘neutral’.6 This condition finds its basis in recital 42 of the ECD, according to which, to enjoy the safe harbours, the service provided by the intermediary must be ‘of a mere technical, automatic and passive nature’. This means that the provider may have ‘neither knowledge of nor control over the information which is transmitted or stored’. On the other hand, recital 59 of the ISD establishes that Article 8(3) applies to any intermediary that is optimally placed to bring the infringement to an end.7 This immediately indicates that, under the EU rules, different conditions govern the applicability of different remedies. The bifurcation occurs along the lines of intermediary culpability: injunctions may be issued against any intermediary, but substantive liability (i.e. for all other available remedies) requires non-neutrality—in other words, active involvement that results in knowledge or control. While the ECD and ISD provisions are of course useful, they do not provide a complete harmonized framework for intermediary liability. The safe harbours are only negatively stated, telling Member States when they cannot impose liability on intermediaries, not when they should. Article 8(3) of the ISD only deals with injunctions. In its case law, the CJEU has sought to bridge this gap by directing its attention beyond the secondary law of the Directives towards European primary law—in particular, the EU’s Charter of Fundamental Rights (EU Charter).
The relevant rulings rest heavily on the need for a ‘fair balance’ between conflicting fundamental rights: according to the CJEU, copyright, as a fundamental right protected under Article 17(2) of the EU Charter, must be weighed against other fundamental rights of equal normative value.8 In cases of intermediary liability, the Court has found these to include the right of the intermediary to conduct its business (Art. 16 EU Charter) and the rights of its users to their freedom of expression (Art. 11 EU Charter) and the protection of their privacy and personal data (Arts 7 and 8 EU Charter). This development has elevated the discussion on intermediary liability to hierarchically higher legal planes, while also providing a legal basis for harmonized answers on issues of intermediary liability not covered in the Directives. As

6  See C-236–238/08 Google France [2010] ECLI:EU:C:2010:159, paras 114–15 and C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474, para. 113. The concept of ‘neutrality’ is due to be clarified by the CJEU in pending case C-500/19 Puls 4 TV GmbH & Co. KG v YouTube LLC and Google Austria GmbH, request for a preliminary ruling from the Oberster Gerichtshof [Austrian Supreme Court] lodged on 1 July 2019.
7  The conditions for injunctions under Art. 8(3) ISD will be discussed by the CJEU in pending cases C-500/19 (n. 6); C-682/18 LF v Google LLC, YouTube Inc., YouTube LLC, Google Germany GmbH, request for a preliminary ruling from the Bundesgerichtshof (Germany) lodged on 6 November 2018; and C-683/18 Elsevier Inc. v Cyando AG, request for a preliminary ruling from the Bundesgerichtshof (Germany) lodged on 6 November 2018.
8  See e.g. C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771, paras 43–4.


a result, fundamental rights have emerged as a driving force behind the European harmonization of intermediary liability. The effect of the ‘fair balance’ doctrine on intermediary liability was first explored by the Court in the area of injunctions. The CJEU established that broad filtering obligations may not be imposed by courts on hosting providers for the prevention of copyright infringement.9 By contrast, blocking injunctions may be issued against internet access providers, as long as end-users are not unnecessarily deprived of lawful access to information.10 Similarly, while filtering and the termination of the service could not be required of wi-fi providers, the Court concluded that an order to password-protect a wi-fi network would be possible.11 Finally, in a case concerning trade marks, the CJEU found that orders may be issued to suspend the primary infringers, as well as to take measures that make it easier to identify them.12 To date, no reliable method of assessing when a ‘fair balance’ has been struck by a given injunctive order has been developed in EU law.13 More recently, the CJEU has also examined the consequences of fundamental rights for substantive liability. In GS Media, the Court was called upon to assess the liability of the providers of hyperlinks to infringing content.14 Absent harmonized rules on accessory liability in copyright, the Court approached the issue through the vehicle of communication to the public, one of the rightholders’ exclusive economic rights, harmonized by the ISD.15 Recital 27 of the ISD makes clear that the mere provision of physical facilities for enabling or making a communication does not in itself amount to a communication. At the same time, exactly where the dividing line should be positioned is not clear from the text of the Directive. The Court noted that hyperlinks are essential for the proper operation of the internet and therefore for users’ freedom of expression.
Pursuing a ‘fair balance’, it then made the providers’ liability depend on their actual or constructive knowledge. It concluded that liability could be imposed only where the provider ‘knew or ought to have known’ that the link led to content placed online without the authorization of the rightholder.16 Subsequently, the same knowledge-based standard was applied to the assessment of the liability of other promoters of third party copyright infringements.17

9  See ibid. and C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85.
10  See C-314/12 UPC Telekabel v Constantin Film and Wega [2014] ECLI:EU:C:2014:192.
11  See C-484/14 Tobias Mc Fadden v Sony Music Entertainment Germany GmbH [2016] ECLI:EU:C:2016:689.
12  See C-324/09 (n. 6).
13  Some indications of the CJEU’s tendencies in this regard may be discerned in Telekabel, see Christina Angelopoulos, ‘Sketching the Outline of a Ghost: The Fair Balance between Copyright and Fundamental Rights in Intermediary Third Party Liability’ (2015) 17 Info. 87.
14  See C-160/15 GS Media BV v Sanoma Media Netherlands BV and Others [2015] ECLI:EU:C:2016:644, para. 24.
15  See Directive 2001/29/EC (n. 5) Art. 3.
16  C-160/15 (n. 14) para. 49.
17  See C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300 and C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.


The GS Media approach straddles the fence between primary and accessory liability. While primary liability arises where a person engages in a copyright infringement themselves, accessory liability concerns the participation of one person in the copyright infringement of another. Liability for the first has traditionally been understood to be strict. By contrast, the second incorporates a mental element. In a creative twist, the CJEU’s solution imposes primary liability, yet requires a condition of knowledge.18 This sleight of hand may be doctrinally challenging, but it dovetails nicely with the Court’s previous findings on ‘neutrality’: the new case law confirms that where neutrality-destroying knowledge is present, not only will immunity be denied (as per the ECD), but substantive liability may in fact be imposed. Interestingly, the Court’s knowledge-infused language also moves intermediary liability towards fault liability and accordingly the general principles of tort law. Despite these strides, questions remain: what type of involvement in the infringement of another is sufficient for substantive liability? Is the necessary mental element the same in all cases? How does intermediary liability operate in relation to exclusive rights other than communication to the public? What are the precise benchmarks that determine the appropriate injunctive relief? A complete harmonized framework governing intermediary liability for third party copyright infringement remains absent. Rather than slowly tease out the EU system through case law, a better approach would be to lay it out in legislation. For this purpose, dismounting the EU rules on intermediary copyright liability from the lofty heights of fundamental rights and reintegrating them within the tort law rules, which naturally govern legal relationships between private parties, can ensure a more stable basis.
Indeed, this is the approach often taken in the national systems of the Member States, in cases where the copyright provisions reach their limits.

2.  The National Regimes on Intermediary Liability in Copyright

In the absence of a complete EU framework for intermediary accessory copyright liability, the Member States have been forced to rely on home-grown solutions. In this section, the approaches taken in three national jurisdictions are briefly examined: those of the UK, France, and Germany.19 These were selected as corresponding to each of the three major tort law traditions of Europe.20 From their examination, three cross-jurisdictional approaches to intermediary liability emerge.

18  Further information on the role of knowledge and intent in accessory liability in copyright at the EU level is expected in C-682/18 (n. 7) and C-683/18 (n. 7).
19  For more detail see Angelopoulos (n. 2) Ch. 3.
20  See Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016) section 1.5.3. See also Cees van Dam, European Tort Law (OUP 2013) 102-2.



2.1  Intra-Copyright Solutions

All three national systems have experimented to a certain extent with intra-copyright solutions to intermediary liability. These may take the form of rules of either primary or accessory liability. In the UK, the courts’ first tool of choice for the resolution of cases of intermediary liability in copyright has been the doctrine of authorization. Authorization is embedded in the Copyright, Designs and Patents Act (CDPA) 1988. According to section 16(2), not only the doing, but also the authorizing of another to do, an act restricted by copyright amounts to an infringement. The courts have interpreted authorization as covering the ‘granting or purporting to grant to a third person the right to do’ an act restricted by copyright.21 Foreshadowing the ISD’s instructions on the ‘mere provision of physical facilities’, the House of Lords has been clear in its case law that ‘mere facilitation’ may not amount to authorization: a distinction is thus drawn between granting somebody the right to copy and granting them the power to copy.22 As modern intermediaries merely enable their end-users, but rarely ‘purport to grant them the right’ to engage in restricted acts, the notion must therefore be stretched beyond its natural meaning if it is to be applied to them.23 This has, nevertheless, not impeded the UK courts from using it for this purpose.24 Anticipating the CJEU’s eventual repurposing of the right of communication to the public as a tool of accessory liability,25 the UK’s lower courts have also found intermediaries directly liable for communication to the public,26 provided certain conditions are met.27 The ‘primary liability’ approach is also the one favoured by the French courts.
The French courts apply the general provisions of the French intellectual property code (Code de la propriété intellectuelle, CPI) on exclusive rights to intermediaries, holding that primary liability encompasses the ‘taking over’ of infringements performed by others or even the ‘provision of the means’ to infringe.28 In this way, the French system allows for liability in cases that the UK would label ‘mere facilitation’. It is worth noting

21  CBS Songs v Amstrad [1988] UKHL 15 [604] (UK). See also Bankes and Atkin LJJ in Falcon v Famous Players Film Co. Ltd [1926] 2 KB 474 (UK), referenced by Lord Templeman in Amstrad.
22  See Amstrad [1988] RPC 567 [603]–[604] (HL) (UK).
23  See Eddy Ventose and Javier Forrester, ‘Authorisation and Infringement of Copyright on the Internet’ (2010) 14(6) J. of Internet L. 3.
24  See e.g. Newzbin (No. 1) [2010] FSR 21 [90] (UK) and Dramatico [2012] RPC 27 [81] (UK).
25  Although not the requirement of knowledge introduced by that case law.
26  See e.g. Newzbin (n. 24) [125].
27  These are the following: ‘the nature of the relationship between the alleged authoriser and the primary infringer, whether the equipment or other material supplied constitutes the means used to infringe, whether it is inevitable it will be used to infringe, the degree of control which the supplier retains and whether he has taken any steps to prevent infringement.’ See ibid. [90].
28  See Cour de cassation [Supreme Court] La société Google France v la société Bach films (L’affaire Clearstream) [12 July 2012] decision no. 831, 11-13669 (Fra.); Cour de cassation La société Google France v La société Bac films (Les dissimulateurs) [12 July 2012] decision no. 828, 11-13666 (Fra.). See also Jane Ginsburg and Yves Gaubiac, ‘Contrefaçon, fourniture de moyens et faute: perspectives dans les systèmes


that this has been criticized by French copyright scholars, who note that the strict liability rule that governs primary infringement is inappropriate for mere contributors.29 As a result of its expansive interpretation of direct infringement, the French case law on intermediary liability has relied heavily on the conditions of the safe harbours.30 Initially, the courts were strikingly demanding on this front, holding that providers that failed to ensure that infringing content was not reposted on their platforms after it had been brought to their attention would lose safe harbour protection and become liable.31 This so-called ‘notice-and-stay-down’ regime was eventually overturned by the French Supreme Court (Cour de cassation), which concluded that it amounted to the imposition of a prohibited general monitoring obligation.32 Contrary to both the UK and France, in Germany a finding of direct liability against an online host is very difficult to substantiate. So, for example, in a 2015 decision, the Court of Appeal of Hamburg found that YouTube could not be held directly liable for copyright infringements appearing on its website.33 As the court explained, the relevant act of making available to the public was performed not by the platform but by its users. A notion of ‘adoption’ (Zu-Eigen-Machen) of third party content does exist in German law, following the German Bundesgerichtshof’s (BGH) decision in marions-kochbuch.de.34 This held that ‘adopting’ another’s infringement is equivalent to doing it oneself.35 While this is reminiscent of the French ‘take-over’ approach, the doctrine of ‘Zu-Eigen-Machen’ has not had much further success before the German courts in copyright.

de common law et civilistes à la suite des arrêts Grokster et Kazaa’ (2006) 228 Revue Internationale du Droit d’Auteur 3, 48.
29  Andrè Lucas, Agnès Lucas-Schloetter, and Carine Bernault, Traité de la Propriété Littéraire et Artistique (LexisNexis 2017) 992–3; Tristan Azzi, ‘La responsabilité des nouvelles plates-formes: éditeurs, hébergeurs ou autre voie?’ in Colloque de l’IPRI, Contrefaçon sur internet: les enjeux du droit d’auteur sur le WEB 2.0 (Litec 2009) 59–75.
30  These have been transposed into French law with art. 6 of Loi no. 2004-575 du 21 juin 2004 pour la confiance dans l’économie numérique [Act no. 2004-575 of 21 June 2004 on Confidence in the Digital Economy].
31  See e.g. Cour d’appel [Court of Appeal] (CA) Paris (Pôle 5, chambre 2) Google Inc. v Compagnie des phares et balises [14 January 2011] (Fra.); CA Paris (Pôle 5, chambre 2) Google Inc. v Bac Films, The Factory [14 January 2011]; CA Paris (Pôle 5, chambre 2) Google Inc. v Bac Films, The Factory, Canal+ [14 January 2011]; CA Paris (Pôle 5, chambre 2) Google Inc. v Les Films de la Croisade, Goatworks Films [14 January 2011].
32  See L’affaire Clearstream (n. 28); Les dissimulateurs (n. 28).
33  Oberlandesgericht [Higher Court] (OLG) Hamburg [1 July 2015] 5 U 87/12 (Ger.). The case is now pending before the BGH, which has submitted a request for a preliminary ruling to the CJEU (C-682/18 (n. 7)).
34  Bundesgerichtshof [Supreme Court] (BGH) marions-kochbuch.de [12 November 2009] I ZR 166/07 (Ger.).
35  See also the subsequent case of RSS-Feed on the personality right, where the court explained that an intermediary will ‘make third party content its own when it identifies itself with it and inserts it into his own train of thought so that it appears as its own’, BGH [17 December 2013] VI ZR 211/12, para. 19.



2.2  Tort-Based Solutions

To supplement the intra-copyright solutions, the courts in all three countries have turned to their general tort law provisions. Two main tort-based approaches can be discerned. These can be labelled the ‘residual liability’ or ‘single fault’ model and the ‘concurrent liability’ or ‘multiple faults’ model. The prime proponent of the ‘single fault’ model is the UK. In England and Wales, the common law doctrine of joint tortfeasance holds multiple persons liable where they are involved in the same wrongdoing. To substantiate liability, each joint tortfeasor must be ‘so involved in the commission of the tort as to make the infringing act their own’.36 This can occur through the establishment of a ‘participation link’37 connecting the primary and accessory wrongdoers. In the context of copyright, two such links are generally acknowledged: inducement or procurement, and combination. Inducement requires that the defendant intentionally incites or persuades another to engage in an infringement. For combination to be found, a common design is necessary between the defendant and the primary infringer, aimed at securing the infringement. As with authorization, mere facilitation will not suffice to establish joint tortfeasance.38 The courts have generally been happy to find joint tortfeasance in cases where services are offered that are used by others to infringe.39 France prefers a ‘multiple faults’ approach. While in the single-fault system the accessory is held liable for the same tort as the primary infringer, in the multiple-faults model she will be understood to have performed a different, though connected, tortious act. In practice, the basic tort principles of articles 1382 and 1383 of the French civil code will be employed for this purpose. These set out the general standard of conduct that requires that all legal subjects avoid causing harm to others.
Significantly, no distinction in principle is made between positive and negative violations of this standard—it instead stretches and bends to accommodate both misfeasance (violation through activity) and non-feasance (violation through omission). Specifically in the area of intermediary liability, the applicable standard is determined by reference to the ‘prestataire diligent et avisé’; that is, the diligent and prudent intermediary. According to the case law, such an intermediary will be expected to undertake all the reasonable measures (diligences appropriées) that a prudent professional would take to avoid having its services used by

36  SABAF SpA v MFI Furniture [2003] RPC 14 [59] (UK).
37  Hazel Carty, ‘Joint Tortfeasance and Assistance Liability’ (1999) 10 LS 489.
38  Lionel Bently and Brad Sherman, Intellectual Property Law (OUP 2018) 1287–8. See also Nicholas McBride and Roderick Bagshaw, Tort Law (Pearson 2018) 824–8.
39  See Dramatico Entertainment v British Sky Broadcasting [2012] EWHC 268 (Ch) [83] (UK); EMI Records v British Sky Broadcasting [2013] EWHC 379 (Ch) [71]–[74] (UK); The Football Association Premier League v British Sky Broadcasting [2013] EWHC 2058 (Ch) [43] (UK); Paramount Home Entertainment International v British Sky Broadcasting [2013] EWHC 3479 (Ch) [35] (UK); 1967 Ltd v British Sky Broadcasting [2014] EWHC 3444 (Ch) [23] (UK); Twentieth Century Fox Film Corp. v Sky UK [2015] EWHC 1082 (Ch) [56] (UK); Football Association Premier League v British Telecommunications [2017] EWHC 480 (Ch) [39] (UK).


third parties to infringe the rights of others.40 The parallel with the CJEU’s concept of a ‘diligent economic operator’ is obvious. Given the ease with which direct copyright infringement can be established in France, French court rulings relying on articles 1382 and 1383 in copyright are scant. Relevant decisions date from before the introduction of the safe harbours and concern other areas of law.41 It should be noted that the ‘multiple faults’ approach is theoretically possible under the English system as well, through the vehicle of the tort of negligence. Yet English tort law has traditionally taken a much more restrictive approach to liability for omissions. It has thus attempted to keep narrowly defined the circumstances under which an affirmative duty of care to take action to protect the interests of others and prevent wrongdoing by third parties may be found.42 This makes the application of the tort to intermediaries in copyright difficult.43 A mixed system is found in Germany. Article 823(1) of the German Civil Code (Bürgerliches Gesetzbuch, BGB) imposes liability on any person who, intentionally or negligently, unlawfully injures a legal right of another. At first sight, this resembles the French approach and appears to enable the application of a ‘concurrent liability’ model. Yet the requirement of unlawfulness (Rechtswidrigkeit) sets the German system apart. A positive act that breaches the right of another is presumed to be unlawful unless a justification applies. Things are different in the area of omissions. These require either intention or failure to satisfy the standard of care demanded by society; that is, the violation of a duty of care (Verkehrspflicht). As intermediary liability will generally centre around omissions, the same reluctance encountered in the UK thus re-emerges in Germany. An alternative is offered by Article 830(1) of the BGB.
According to this, if more than one person has caused damage by a jointly committed tort, then each of them is responsible for the damage. Article 830(2) of the BGB makes clear that instigators and accessories are to be treated as joint tortfeasors. A ‘residual liability’ model is thus embraced. At least contingent intent is necessary for a finding of liability under this heading; that is, the participant must have seriously considered the risk of infringement and approvingly accepted it. Knowledge of the objective circumstances that form the main offence and

40  See Michel Vivant, ‘Conditions de la responsabilité civile des fournisseurs d’hébergement d’un site sur le réseau Internet’ (2000) 13 La Semaine Juridique Édition Générale 577.
41  See e.g. CA Versailles, SA Multimania Production v Madame Lynda L [8 June 2000]. For an analysis, see Agathe Lepage, Libertés et droits fondamentaux à l’épreuve de l’internet (Litec 2003) 283–6.
42  This is usually considered to be the case where: (1) the defendant created the risk of harm, including through interference to prevent help; (2) the defendant has assumed responsibility for the plaintiff’s welfare; (3) the defendant is in a special position of control over the source of the damage; and (4) the defendant has a special relationship with the third party that caused the damage. See Nicholas McBride and Roderick Bagshaw, Tort Law (Pearson 2012) 207–43.
43  See e.g. McBride and Bagshaw (n. 38) 101–2. For an argument in favour of replacing duty of care with a reasonable person test, see Basil Markesinis, ‘Negligence, Nuisance and Affirmative Duties of Action’ [1989] LQR 104.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

awareness of their unlawfulness are accordingly required.44 In Germany, this condition has not usually been understood to be met by online intermediaries. Having said that, the German lower courts have, on occasion, extended substantive liability as a participant (Gehilfe) to cases of gross and insistent breach of the obligation to examine a notification of alleged infringements. Hence, a defendant may also be held liable if she consistently violates duties of care over a longer period of time.45 This kind of conduct is considered ‘infringement by forbearance’, with the forbearance essentially amounting to an indication of intent.46 As the previous analysis makes clear, the main difference between the ‘residual’ and ‘concurrent liability’ models lies in the fault standard: while residual liability requires that the joint tortfeasors have acted with the intention of causing the infringement, the concurrent liability model is laxer, being satisfied with mere negligence.

2.3  Injunction-Based Solutions

A final option is injunctive relief. Given that Article 8(3) of the Copyright Directive is positively stated, making active demands on national legal systems, it is unsurprising that this solution is employed in all three examined jurisdictions. The UK has implemented Article 8(3) with section 97A of the CDPA 1988. Litigation in the UK has focused heavily on injunctions. These have usually been targeted at internet access providers, requiring them to block websites carrying infringing content.47 In France, article 6-I-8 of the LCEN48 enables injunctions against internet access providers and host service providers. Article L.336-2 of the CPI further expands the reach of injunctions to encompass any person capable of contributing to the prevention or termination of a copyright infringement. The broad terms of the latter provision go far beyond the requirements of Article 8(3) of the ISD. As in the UK, the main recipients of injunctive orders have been internet access providers,49 although orders have also been issued against, for example, Google Search’s auto-complete function.50 44  See Haimo Schack, ‘Täter und Störer: Zur Erweiterung und Begrenzung der Verantwortlichkeit durch Verkehrspflichten im Wettbewerbs- und Immaterialgüterrecht’ in Festschrift für Dieter Reuter zum 70 Geburtstag am 16 Oktober 2010 (De Gruyter 2010) 1167. 45  See e.g. OLG Munich, 11 January 2006, 21 O 2793/05, (2006) 3 ZUM 255; OLG Hamburg, 13 May 2013, 5 W 41/13, (2013) 8 MMR 533—Hörspiel; LG Frankfurt a. M., 5 February 2014, 2-06 O 319/13, (2015) 2 ZUM 160—File-Hosting. See also Michael Gruenberger and Adolf Dietz, ‘Germany’ in Lionel Bently (ed.), International Copyright Law and Practice (Matthew Bender 2017), para. 8[1][c][i]. 46  See Thomas Hoeren and Silviya Yankova, ‘The Liability of Internet Intermediaries—The German Perspective’ [2012] IIC 501.
47  See João Pedro Quintais, ‘Global Online Piracy Study—Legal Background Report’ (July 2018) (under Questionnaire United Kingdom). 48  Loi no. 2004-575 du 21 juin 2004 pour la confiance dans l’économie numérique [Law no. 2004-575 of 21 June 2004 on confidence in the digital economy]. 49  See e.g. Tribunal de grande instance [High Court] (TGI) Paris Association des producteurs de cinema (APC) et autres v Auchan Telecom et autres [28 November 2013] (Fra.) and TGI Paris La société civile des producteurs phonographiques (SCPP) v la société Orange [4 December 2014] (Fra.). 50  See Cour de cassation Syndicat national de l’édition phonographique v la société Google France [12 July 2012] no. 11-20.358 (Fra.).


The German system comes into its own in the area of injunctions. Germany mainly approaches intermediary liability through its distinctive Störerhaftung doctrine. Often translated as ‘disturber’ or ‘interferer’ liability, this allows for the issuing of injunctions against persons who causally contribute to an infringement in violation of a reasonable duty to review (Prüfungspflicht). Importantly, it is not possible to claim damages against a Störer (interferer). While, to date, the German courts have been reluctant to find Störerhaftung in cases involving internet access providers,51 they have imposed an exceptionally strict interpretation of the doctrine on host service providers. A duty to review is considered to be unreasonable if it would unduly impair the business of the alleged disturber.52 At the same time, under the so-called ‘Kerntheorie’ (core theory), once notification of an infringement has been sent, the duty to review extends to all other infringements similar in their core (Kern), including future infringements of an essentially similar nature.53 This has allowed the German courts to adopt the ‘notice-and-stay-down’ regime that was rejected in France.54 The German courts view this approach as compatible with the general monitoring prohibition, on the logic that the monitoring is targeted at infringements of pre-identified works, making it ‘specific’.55

3.  Intermediary Liability and Tort Law

The examination of the national approaches to intermediary liability reveals both overlaps and divergences. In order to help organize the messages emerging from the comparative analysis, it is useful to relate the three systems back to the main building blocks of tortious liability: fault and causation. Through associating the national intermediary liability rules with the underlying tort norms of each respective jurisdiction, the forces shaping those rules can be identified. As the research showed, the affinity of English and German law for the concept of joint tortfeasance can be explained through the concern both jurisdictions traditionally display for controlling against the overexpansion of fault liability. This has led both countries to set strict limits to liability for omissions.56 As shown previously, English 51  See e.g. BGH GEMA v Deutsche Telekom [26 November 2015] I ZR 3/14 (Ger.) and BGH Universal Music v O2 Deutschland [26 November 2015] I ZR 174/14 (Ger.). 52  See Thomas Hoeren and Viola Bensinger, Haftung im Internet—Die Neue Rechtslage (De Gruyter 2014) 366. 53  Joachim Bornkamm, ‘E-Commerce Directive vs. IP Rights Enforcement—Legal Balance Achieved?’ [2007] GRUR Int. 642. 54  See e.g. BGH Rapidshare I [12 July 2012] I ZR 18/11 (Ger.); BGH Rapidshare III [15 August 2013] I ZR 80/12 (Ger.). See also OLG Hamburg [1 July 2015] 5 U 87/12 (Ger.). 55  See Bornkamm (n. 53). Whether this argument is convincing is questionable. According to the CJEU, obligations to monitor all of the information transmitted by an access provider would be incompatible with the prohibition on general monitoring obligations. A similar interpretation could also apply to hosting providers. The issue is due to be explored by the CJEU in C-18/18 Facebook Ireland. 56  See Angelopoulos (n. 2) section 4.3.


tort law achieves this objective through requiring that ‘duties of care’ be violated before a person be held liable for causing another damage through negligence, and taking a restrictive approach to the finding of affirmative duties of care. Along similar lines, Germany relies on the concepts of unlawfulness and Verkehrspflichten to achieve a similar effect.57 These solutions are supported by the conservative theories the two countries apply to causation. English law focuses on the concept of ‘reasonable foreseeability’ (i.e. the idea that damage is too remote to be attributed to the defendant if it is of such a kind that a reasonable person could not have foreseen it). German law has elaborated complicated theories around whether a cause is ‘adequate’ (i.e. generally apt to cause the harm or at least to significantly increase the chance that it will happen) or whether the damage falls within the ‘scope of the rule’ (i.e. within the ‘scope of protection’ of the rule allegedly violated).58 The effect of these interpretations is to restrict substantially the possibility of accessory liability based on negligence. Instead, as seen in Section 2, the English and German systems show a preference for a ‘residual liability’ approach, which relies on findings of intention to overcome the causation gap and substantiate the liability of third parties.59 France stands apart, taking an expansive, unitary approach to both fault and causation. Although in practice the French legal system also connects the violation of the standard of reasonableness to the existence of an obligation to take action before liability for an omission will be found, French legal theory in this area is much more flexible. Similarly, the French courts are satisfied with a mere showing of conditio sine qua non60 in order to accept that a causative link binds the defendant’s behaviour to the damage.
As a result, the French rules on liability have no need for complex joint tortfeasance theories. Instead, they can rely readily on the basic rules of negligence to support accessory liability. While these discrepancies might, at first sight, appear to doom the construction of a harmonized European regime in intermediary liability, reasons for optimism exist. Importantly, while the theoretical clashes between the European national systems might be significant, in practice the outcomes do not differ greatly. Indicatively, while France does not embed liability-restricting norms into its tort law, in the area of intermediary liability, the ECD safe harbours have been applied to achieve roughly the same effect as in the UK and Germany.61 EU influence additionally means that all three 57  Christian von Bar, The Common European Law of Torts (Clarendon Press 2000), Vol. II, 211–12. 58  See Angelopoulos (n. 2) section 4.4.1. 59  In Germany, this approach is termed ‘psychisch vermittelte Kausalität’, i.e. ‘psychological causation’, whereby the third party is connected to the act of the primary infringer through their psychological identification with it, see Walter van Gerven, Jeremy Lever, and Pierre Larouche, Tort Law (Hart 2000) 430. 60  Otherwise known as the ‘but for’ test, this requires that, in order for the defendant’s behaviour to be considered as a cause of the plaintiff’s injury, it must be demonstrated that the injury would not have occurred but for that behaviour. 61  Christina Angelopoulos, ‘Study on Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market’ (February 2017) 24–5 .


systems are pivoting towards a greater emphasis on injunctive relief. Thus, structure, rather than substance, is what diverges.62 It is structure that should also be the focus of EU harmonization proposals. Promisingly, existing projects on the harmonization of European tort law show how such structural harmonization can be achieved. The Principles of European Tort Law (PETL), developed by the European Group on Tort Law,63 and the Draft Common Frame of Reference (DCFR), compiled by the Study Group on a European Civil Code and the Acquis Group, are particularly helpful in this regard.64 The two projects provide useful harmonizing ‘glue’. Through its application, the niche area of European intermediary accessory copyright liability can be grounded in strong theory derived from the general principles for European tort law. Under the guidance of these projects, the building blocks of a truly European intermediary copyright liability were investigated in the research. It was concluded that what is necessary is that both fault on the part of the intermediary and a causal link to the copyright infringement be shown. Comparative European tort law teaches that fault may consist of either intent or negligence. Intent is straightforward.
It will be found where the defendant either meant to cause copyright infringement or knew that its behaviour could result in such infringement and meant to engage in that behaviour.65 Where intent exists, the importance of a causative link between the intermediary’s behaviour and the damage is diminished.66 Negligence depends on an assessment of reasonableness.67 In European tort law, reasonableness is found at the intersection between risk and care: if a defendant creates a risk of a sufficiently grave and probable harm, she will be obliged to take corresponding measures of care.68 Two elements relating to the person of the defendant must also be considered: the knowledge of the defendant regarding the risk of damage and her ability to avoid it. The defendant will only be unreasonable if she knew the risk or ought to have known it and if she could have avoided the risk or ought to have been able to avoid it.69 62  This is a known phenomenon in comparative law, see Zweigert and Kötz’s principle of praesumptio similitudinis (presumption of similarity). As they argue, this approach ‘rests on what every comparatist learns, namely that the legal system of every society faces essentially the same problems, and solves these problems by quite different means though very often with similar results’. See Konrad Zweigert and Hein Kötz, An Introduction to Comparative Law (Clarendon Press 1998) 34. 63  Available at . See also European Group on Tort Law, Principles of European Tort Law—Text and Commentary (Springer 2005). 64  See Christian von Bar, Eric Clive, and Hans Schulte-Nölke (eds), Principles, Definitions and Model Rules of European Private Law: Draft Common Frame of Reference (DCFR), prepared by the Study Group on a European Civil Code and the Research Group on EC private law (Acquis Group), Outline edn (Sellier 2009). 65  See DCFR Art. VI.-3:101. See also Angelopoulos (n. 2) section 4.2.1. 66  See ibid. section 4.2.1.
See also Pierre Widmer, ‘Comparative Report on Fault as a Basis of Liability and Criterion of Imputation (Attribution)’ in Pierre Widmer (ed.), Unification of Tort Law: Fault (Kluwer Law Int’l 2005) 337–8 and Roderick Bagshaw, ‘Causing the Behaviour of Others and Other Causal Mixtures’ in Richard Goldberg (ed.), Perspectives on Causation (Hart 2011) 361. 67  Consider e.g. DCFR Art. VI.-3:102 and PETL Art. 4:102(1). See also Angelopoulos (n. 2) section 4.2.2.1. 68  See e.g. van Dam (n. 20) 805. 69  See van Dam (n. 20) 805-2.


As this demonstrates, the assessment of liability in European tort law turns on a balancing exercise. Promisingly, this provides a connector with the fundamental rights-based ‘fair balance’ jurisprudence of the CJEU. Particular attention is necessary in cases of omissions. Omissions are central to accessory liability—indeed, liability for others has been said to be the ‘prototype’ of liability for omissions.70 Although the examined jurisdictions differ in the ease with which they find liability for omissions, all agree that, to be objectionable, an omission must violate an affirmative duty to act. Such duties may exist in a variety of situations. While the DCFR folds the issue of omissions into causation, making no distinction between positive actions and omissions,71 the PETL lists the circumstances under which a duty to act positively to protect others from damage may exist. For the purposes of intermediary liability, the most relevant of these concern the creation and control of a source of danger and the ease of addressing the danger.72 The PETL and DCFR also take different approaches to causation. This is illustrative in itself. The DCFR is the more cautious of the two, making causation depend on whether the damage can be regarded as a ‘consequence’ of the defendant’s conduct.73 This allows it to incorporate a set of ‘causation controlling’ factors into its causation standard.74 To overcome the obstacle in which this results for handling accessories, the DCFR then introduces ‘residual liability’ rules along the lines of those found in the English and German systems.75 As a result, it uses intent to justify liability where a weaker causative connection exists.76 On the other hand, the PETL follows the principle of conditio sine qua non.77 This allows it to embrace an approach closer to France’s ‘multiple faults’ solution.78

4.  Building a Complete Framework for European Intermediary Liability in Copyright

On the basis of the previous analysis, the research on which this chapter is based formulated a proposal for a harmonized European framework for intermediary liability in copyright.

70  Christian von Bar, The Common European Law of Torts (Clarendon Press 2000), Vol. II, 203. 71  See von Bar, Clive, and Schulte-Nölke (n. 64) 3423. 72  See PETL Art. 4:103. According to this, ‘a duty to act positively to protect others from damage may exist if law so provides, or if the actor creates or controls a dangerous situation, or when there is a special relationship between parties or when the seriousness of the harm on the one side and the ease of avoiding the damage on the other side point towards such a duty.’ 73  See DCFR Art. VI.-4:101. 74  See von Bar, Clive, and Schulte-Nölke (n. 64) 3425–6. 75  DCFR Art. VI.-4:102. See also Gerhard Wagner, ‘The Law of Torts in the DCFR’ in Gerhard Wagner (ed.), The Common Frame of Reference—A View from Law and Economics (Sellier 2009) 254. 76  See von Bar, Clive, and Schulte-Nölke (n. 64) 3445. 77  See PETL Art. 3:101. 78  See ibid. Art. 9:101(1)(a).


The proposal opted for a negligence-based approach along the ‘multiple faults’ paradigm. It thus relies on a concept of a ‘Basic Norm’ that prohibits harming others.79 Under this scheme, intermediary liability is understood not as participation in the infringement of another, but as the violation of a prohibition against behaviour supportive of such infringements. The approach is well suited to EU harmonization, both because of its simplicity and due to the fact that all examined European systems include the necessary legal theoretical equipment for its application, even if they do not all choose to make use of it in the area of intermediary liability for copyright infringement. Moreover, it enables considerations of carelessness and intent to be taken into account. It is thus capable of delivering nuanced solutions appropriate for complex legal problems. Pursuant to this approach, the investigation of whether an intermediary should be liable for its users’ copyright infringements is connected to the extent of its fault; that is, how its behaviour compares with that of a ‘reasonable person’. The central question of the analysis accordingly becomes ‘what would a reasonable intermediary do?’ The answer depends on two fundamental elements: a conduct element, which examines the extent to which the intermediary caused the infringement, and a mental element, which considers whether the intermediary demonstrated the mindset of a reasonable intermediary that found itself in the same circumstances. An optional third pillar of duties of care becomes relevant depending on the severity of the mental element involved. The conduct element is determined by the rules of causation. The proposal took an expansive approach to the conduct element inspired by the straightforward French conditio sine qua non solution.
It thus embraced any non-minimal causal participation in the copyright infringement of another party.80 Under this model, ‘mere facilitation’ is accepted as sufficient to satisfy the conduct element. This approach was favoured in order to minimize the effect of subjective characterizations of the conduct. Terms such as ‘authorization’, ‘instigation’, or ‘common design’, employed by the English and German joint tortfeasance theories, imply an inherent mental dimension.81 Yet this should not be subsumed within the conduct element, but addressed directly through the analysis of the mental element. As all internet intermediaries will meet the requirement of causal participation as soon as their services are used by another to commit a copyright infringement, it follows that it is on the mental element that liability will turn. The mental element can be subdivided into two types: intent and knowledge. If an intermediary intended an infringement, its behaviour must be understood to be unreasonable by definition. Liability should therefore automatically ensue. As Yen observes, ‘[i]t is one thing to distribute technology that could be used to infringe in the hope that others will use it legitimately. It is something else to distribute the same technology in the hope that others will use it to infringe.’82 In this way, pleasingly to the fair-minded comparative lawyer, where intent exists the system reverts from the ‘multiple faults’ approach to the ‘residual liability’ or ‘single fault’ model: if an intermediary intended an 79  For examples of a model ‘Basic Norm’ or ‘Basic Rule’, see ibid. Art. 1:101 and DCFR Art. 1:101. 80  For more details on this analysis, see Angelopoulos (n. 2) sections 5.1.1 and 5.2. 81  See Philip Sales, ‘The Tort of Conspiracy and Civil Secondary Liability’ (1990) 49 CLJ 491. 82  Alfred Yen, ‘Third Party Copyright after Grokster’ (2007) 16(3) Information & Communications Tech. L. 233.


infringement, that infringement should be seen as its own and its liability should be no different to that of the primary infringer. The second type of mental element is knowledge. In line with CJEU case law,83 the threshold of knowledge that was opted for in the proposal was a low one, permitting both general and specific knowledge. It was similarly suggested that both actual and constructive knowledge should suffice. Following the example of Article 14 of the ECD and the ‘ought to have known’ standard set by the CJEU, the proposal interprets ‘constructive knowledge’ in a hybrid objective/subjective manner: the intermediary will only be considered to have knowledge if it was aware of facts or circumstances which would have made the infringement apparent to a ‘diligent economic operator’—or, in the language of tort law, the ‘reasonable intermediary’.84 If the intermediary has no intent, but does have knowledge of the infringement, the possibility of negligence liability must be explored. This raises the question of identifying the duties of care that burden the reasonable intermediary. While listing duties of care leads to greater legal certainty, flexibility is necessary for law that can remain resilient in the face of changing technologies. Laying out the process for determining the duties of care is thus a preferable model. This, again, allows for an amalgamation of the examined national approaches. As noted previously, the behaviour of a reasonable intermediary (and thus the standard of care) will depend on the interaction between the risk created by the intermediary’s actions and the care necessary to avoid that risk. In other words, a balancing exercise is required.
This follows both from the principles of European tort law85 and from the Charter-based approach taken by the CJEU.86 The research into European tort law revealed that four fundamental ‘criteria of care’ would be appropriate to guide the balancing between risk and care.87 In particular, the existence of a risk will depend on: (1) the seriousness of the damage; and (2) the probability of damage.88 The height of the necessary care is assessed by reference to: (3) the benefit of the conduct; and (4) the burden of the precautionary measures.89 The latter two criteria should be examined in view of the following: (a) the interests of the intermediary; (b) the interests of the intermediary’s users; and (c) the general interest.90 83  See C-610/15 (n. 17) para. 45. 84  See C-324/09 (n. 6) paras 120–4. 85  See e.g. van Dam (n. 20) 805. 86  See Section 1. 87  See van Dam (n. 20) 805. See also Stijn Smet, ‘Resolving Conflicts between Human Rights: A Legal Theoretical Analysis in the Context of the ECHR’, unpublished PhD thesis, Ghent University (2014) 192–3. The factor-infused approach to balancing taken in the case law of the European Court of Human Rights on intermediary liability is also worth examining, esp.: Delfi v Estonia [GC] App. no. 64569/09 (ECtHR, 16 June 2015); Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App. no. 22947/13 (ECtHR, 2 February 2016); Rolf Anders Daniel Pihl v Sweden App. no. 74742/14 (ECtHR, 9 March 2017). 88  See van Dam (n. 20) 805-2. 89  ibid. 805-2. 90  See PETL Art. 2:102(6): ‘In determining the scope of protection, the interests of the actor, especially in liberty of action and in exercising his rights, as well as public interests also have to be taken into


In addition to the above, European tort law teaches that two elements relating to the person of the defendant must also be considered: (5) the knowledge of the defendant regarding the risk of damage; and (6) the defendant’s ability to avoid it.91 Finally, as the duties of care in question are affirmative, an extra criterion must be added in the form of: (7) the existence of a ‘special reason’ justifying an obligation to act. The final three criteria can be digested into a single heading of the defendant’s ‘responsibility’ to act. As a result, ‘responsibility’ emerges as an additional consideration to the two key notions of risk and care. A responsibility can arise from: (a) the type of knowledge of the risk that the intermediary has (i.e. the foreseeability of the risk); (b) the intermediary’s skills with regard to the measure in question (i.e. the avoidability of the risk); and (c) the existence of a special duty to take affirmative care (generally resulting from the creation and control of the source of danger, see the PETL noted earlier). The resultant system can be represented as shown in Figure 16.1:

(1) The RISK of harm, including:
(a) the seriousness of the damage
(b) the probability of the damage

The CARE needed to avoid the harm:
(2) The benefit of the conduct, including:
(a) the interests of the intermediary
(b) the interests of the intermediary’s users
(c) the general interest
(3) The burden of the measures of care, including:
(a) the interests of the intermediary
(b) the interests of the intermediary’s users
(c) the general interest

(4) The RESPONSIBILITY of the intermediary, including:
(a) the foreseeability of the risk
(b) the avoidability of the risk
(c) the existence of a special duty

Figure 16.1  The criteria of balancing

Ultimately, the ‘criteria of care’ can be organized into two sets of antipodal factors: those pushing in favour of liability (the risk and the responsibility of the defendant) and those pushing against (the benefit of the conduct and the burden of the measures of care) (see Figure 16.2).

risk + responsibility  |  benefit + burden

Figure 16.2  Antipodal balancing criteria Source: image taken from http://www.publicdomainpictures.net, © Karen Arnold.

Happily, analyses of balancing in the area of human rights appear to move in a strikingly similar direction.92 In a final analytical step, the research applied the ‘criteria of care’ to six measures that current case law indicates as candidate duties of care. This benchmarking exercise concluded that notification to the authorities, as well as obligations to issue general warnings to end-users to avoid copyright infringement, should always be deemed proportionate. Similarly, the blocking or removal of infringements and the suspension of the perpetrator of the infringement from the intermediary’s services will usually be justifiable as well, although a court order may be necessary. Duties to identify primary infringers should require a court order. Duties to filter content or otherwise generally monitor it should invariably be considered out of bounds, even to courts.93 While the reliance on balancing may appear to make intermediary copyright liability a moving target, the flexibility with which it imbues the framework should be understood to be a feature rather than a bug. The provision of clear and appropriate factors creates structure, which can help guide effective and targeted judicial reasoning. This allows for a future-proof regime that can provide greater long-term legal certainty. Lastly, the proposal suggested a proportionate liability approach. While solidary (i.e. joint and several) liability has deep roots in European tort law,94 extracting equal

consideration.’ In the area of intermediary liability, the case law of the CJEU requires adding the interests of the intermediary’s users to this list. 91  See van Dam (n. 20) 805-2. 92  See Stijn Smet, ‘Resolving Conflicts between Human Rights: A Theoretical Analysis in the Context of the ECHR’, unpublished PhD thesis, Ghent University (2014) 192–3. See also Angelopoulos (n. 2) section 5.4.2. 93  For more details on this analysis, see Angelopoulos ibid. section 5.5. 94  See European Group on Tort Law (n. 63) 138–41.


liability from both a primary tortfeasor and an accessory will not always lead to a fair result. Instead, a proportionate distribution of liability should be considered, dependent on the extent of each party’s contribution to the infringement.95 On this reasoning, the proposal suggested that where the intermediary acts with intent, solidary liability should apply. However, where there is mere negligence, restraint is necessary. Under the influence of the current EU framework’s remedy-based division between neutral and non-neutral intermediaries, and inspired by Germany’s pragmatic Störerhaftung regime, the proposal recommends that, in such cases, only injunctive relief should be available.

5.  Closing Remarks

Once a peripheral issue in copyright law, in recent years the liability of intermediaries for the copyright infringements of their users has burst on to centre stage. The topic is a high-stakes one, affecting the competing interests of high-powered stakeholders, as well as those of the general public. Traditional answers developed in the Member States have been stretched by modern fact patterns, while for a long time harmonization at the EU level rested solely on the negatively stated immunities and the injunction-focused Article 8(3) of the ISD. The CJEU has set the ball rolling towards a more comprehensive European solution. In its case law, the Court has identified the heart of intermediary liability in the doctrine of a ‘fair balance’ between fundamental rights.96 More recently, the Court’s judgments have embraced a knowledge-infused solution that evokes the general principles of fault liability. This approach appears appropriate. The challenging topic of intermediary liability touches on the very fundaments of our legal system. As a result, any harmonized framework must be grounded in the fundamental rights that form the foundation of the EU legal order. At the same time, a substantive solution cannot be found in the vague edicts of primary law alone. Instead, the governance of horizontal relationships between individuals is best dealt with through the rules of tort law that have been developed for precisely that purpose. Properly crafted, these should represent the mirror image of fundamental rights, reflected from the other side of the private/public law divide. Inspired by these starting points, the research on which this chapter is based sought to develop a proposal for a harmonized EU framework for intermediary liability that takes appropriate account of both areas of law. The objective was the development of 95  For a broader discussion on proportionate liability, see W.V.H. Rogers, Winfield & Jolowicz on Tort (Sweet & Maxwell 2010) 988–9; Kit Barker and Jenny Steele, ‘Drifting towards Proportionate Liability: Ethics and Pragmatics’ (2015) 74 CLJ 49. 96  See e.g. C-160/15 (n. 14) para. 31.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

334   Christina Angelopoulos a well-grounded, principled solution that is informed by existing EU and national law on copyright, tort, and fundamental rights. By learning from, incorporating, and sim­ ultaneously improving on these sources, a sensitive and comprehensive system that allows for fine distinctions and delivers clear definitions can be built. Instead of pursuing piecemeal and reactive proposals, future attempts at harmonization by the EU legislator should follow such a principled approach.


Chapter 17

The Direct Liability of Intermediaries

Eleonora Rosati*

One of the most interesting and relevant developments in respect of intermediaries concerns their direct (primary)—rather than just secondary (accessory)—liability in relation to user activities, including user-uploaded content (UUC). The Court of Justice of the European Union (CJEU) expressly envisaged the possibility of direct liability for copyright infringement in the context of its increasingly expansive case law on the right of communication to the public in Article 3(1) of Directive 2001/29 (the Information Society Directive),1 including the 2017 decision in C-610/15 Stichting Brein (The Pirate Bay case).2 This chapter explains how the CJEU has come to consider the possibility of direct liability of intermediaries in relation to user activities and reflects on the implications of that approach, including the possibility of extending the reasoning in Stichting Brein to less egregious scenarios than the Pirate Bay.

1.  The Right of Communication to the Public as Construed through Case Law

The right of communication to the public in Article 3(1) of the Information Society Directive has been subject to a significant number of referrals since the first ruling in 2006 in C-306/05 SGAE.3 By relying on international sources and a purpose-driven interpretation of the Information Society Directive, the CJEU has construed this exclusive right broadly and in such a way as to encompass, under certain conditions, different types of acts, including the making available of TV sets in certain contexts, linking to protected content, the provision of certain types of set-top boxes, indexing activities by a platform, and cloud-based recording services.4 At the international level, the right of communication to the public received its first formulation in Article 11bis of the Berne Convention, as adopted in 1928 and later revised with the Brussels Act 1948.5 The World Intellectual Property Organization (WIPO) Copyright Treaty supplemented the Berne Convention6 and introduced the concept of ‘making available to the public’.7 The wording of Article 3(1) of the Information Society Directive is derived from Article 8 of the WIPO Copyright Treaty.8 However, Article 3(1) of the Information Society Directive does not define the concept of ‘communication to the public’.

*  This chapter is partly derived from Chapter 3 of my monograph Copyright and the Court of Justice of the European Union (OUP 2019).
1  See Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10.
2  See C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.
3  They are (in chronological order): C-306/05 SGAE v Rafael Hoteles [2006] ECLI:EU:C:2006:764; C-136/09 Organismos Sillogikis Diacheirisis Dimiourgon Theatrikon kai Optikoakoustikon Ergon v Divani Akropolis Anonimi Xenodocheiaki kai Touristiki Etaireai [2010] ECLI:EU:C:2010:151; C-283/10 Circul Globus Bucureşti v UCMR—ADA [2011] ECLI:EU:C:2011:772; C-403 and 429/08 Football Association Premier League and Others v QC Leisure [2011] ECLI:EU:C:2011:631; C-431/09 Airfield and Canal Digitaal v SABAM [2010] ECLI:EU:C:2011:648; C-135/10 Società Consortile Fonografici (SCF) v Marco Del Corso [2012] ECLI:EU:C:2012:140; C-162/10 Phonographic Performance (Ireland) Ltd v Ireland and Attorney General [2012] ECLI:EU:C:2012:141; C-607/11 ITV Broadcasting v TVCatchup [2012] ECLI:EU:C:2013:147; C-466/12 Nils Svensson and others v Retriever Sverige AB [2014] ECLI:EU:C:2014:76; C-351/12 OSA [2014] ECLI:EU:C:2014:110; C-348/13 BestWater International GmbH v Michael Mebes and Stefan Potsch [2014] ECLI:EU:C:2014:2315; C-279/13 C More Entertainment AB v Linus Sandberg [2014] ECLI:EU:C:2015:199; C-151/15 Sociedade Portuguesa de Autores CRL v Ministério Público and Others [2015] ECLI:EU:C:2015:468; C-325/14 SBS Belgium v SABAM [2015] ECLI:EU:C:2015:764; C‑117/15 Reha Training v GEMA [2016] ECLI:EU:C:2016:379; C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644; C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300; C-138/16 AKM v Zürs.net Betriebs GmbH [2017] ECLI:EU:C:2017:218; C-610/15 (n. 2); C-265/16 VCAST v RTI [2017] ECLI:EU:C:2017:913; and C-161/17 Land Nordrhein-Westfalen v Dirk Renckhoff [2018] ECLI:EU:C:2018:634.
4  According to some commentators, rather than a unified concept of communication to the public, in its case law the CJEU has created specific sui generis groups of communication to the public cases. See Birgit Clark and Sabrina Tozzi, ‘“Communication to the Public” under EU Copyright Law: an Increasingly Delphic Concept or Intentional Fragmentation?’ (2016) 38(12) EIPR 715, 717.
5  See World Intellectual Property Organization, Guide to Copyright and Related Rights Treaties Administered by WIPO and Glossary of Copyright and Related Rights Terms (2003) BC-11bis.1.
6  Art. 1(4) WIPO Copyright Treaty (WCT) mandates compliance with Arts 1–21 of and the Appendix to the Berne Convention.
7  On the concept of making available within Art. 8 WCT, see Michel Walter, ‘Article 3 Right of Communication to the Public of Works and Right of Making Available to the Public of Other Subject Matter’ in Michel Walter and Silke von Lewinski, European Copyright Law—A Commentary (OUP 2010) 975–80.
8  It may be interesting to contrast EU lawmaking (and subsequent expansive interpretations of the CJEU) with the United States, which took the position that the existing rights of distribution and public performance under the US Act were sufficient to comply with the WCT’s making available right and no changes to the statute were needed in the light of its new international obligations. See United States Copyright Office, The Making Available Right in the United States—A Report of the Register of Copyrights (February 2016) 15–18.

© Eleonora Rosati 2020.

This provision, in fact, only states that EU

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Member States shall provide authors with the exclusive right to authorise or prohibit any communication to the public of their works, by wire or wireless means, including the making available to the public of their works in such a way that members of the public may access them from a place and at a time individually chosen by them.

Lacking a definition of the notion of ‘communication to the public’, the CJEU has sought to determine the meaning and scope of this concept in the light of the objectives pursued by the Information Society Directive, notably that of ensuring a high level of protection of intellectual property and for authors (recitals 4 and 9). In its rich body of case law on Article 3(1) of the Information Society Directive, the CJEU has consistently stated that the essential requirements of Article 3(1) are an ‘act of communication’, directed to a ‘public’. In addition, the CJEU has also highlighted the importance of considering additional criteria, which are not autonomous and are interdependent, and may—in different situations—be present to widely varying degrees. Such criteria must be applied both individually and in their interaction with one another.9 Starting from ‘public’, this is a concept that has not been straightforward to comprehend, also because the relevant understanding may change depending on the context.10 In general terms, the notion of ‘public’ is that of an indeterminate and fairly large (above de minimis) number of people.11 In the case of a communication concerning the same works as those covered by the initial communication and made by the same technical means (e.g. the internet), the communication must be directed to a ‘new’ public. Derived from the interpretation given by the 1978 WIPO Guide to the Berne Convention of Article 11bis(1)(iii) of the Berne Convention as first employed by Advocate General La Pergola in his Opinion in C-293/98 EGEDA,12 the ‘new public’ that is relevant to the establishment of Article 3(1) applicability is the public which was not taken into account by the relevant rightholder when it authorized the initial communication to the public.13

9  See C-135/10 (n. 3) para. 79; C-162/10 (n. 3) para. 30; C‑117/15 (n. 3) para. 35; C-160/15 (n. 3) para. 34; C-527/15 (n. 3) para. 30; C-610/15 (n. 2) para. 25.
10  See Stavroula Karapapa, ‘The requirement for a “new public” in EU copyright law’ (2017) 42(1) E.L. Rev. 63, 66.
11  See C-306/05 (n. 3) para. 38; C-135/10 (n. 3) para. 84; C-162/10 (n. 3) para. 33; C-607/11 (n. 3) para. 32; C-466/12 (n. 3) para. 21; C-351/12 (n. 3) para. 27; C-151/15 (n. 3) para. 19; C-325/14 (n. 3) para. 21; C-160/15 (n. 3) para. 36; C-527/15 (n. 3) para. 45; C-138/16 (n. 3) para. 24; C-610/15 (n. 3) paras 27 and 42.
12  C-293/98 EGEDA [1999] ECLI:EU:C:1999:403, Opinion of AG La Pergola, para. 20. See further Bernt Hugenholtz and Sam Van Velze, ‘Communication to a New Public? Three Reasons Why EU Copyright Law Can Do Without a “New Public”’ (2016) 47(7) IIC 797, 802–3.
13  See C-306/05 (n. 3) paras 40 and 42; C-136/09 (n. 3) para. 39; C-403 and 429/08 (n. 3) para. 197; C-431/09 (n. 3) para. 72; C-466/12 (n. 3) para. 24; C-351/12 (n. 3) para. 31; C‑117/15 (n. 3) para. 45; C-160/15 (n. 3) para. 37; C-527/15 (n. 3) para. 47; C-610/15 (n. 3) para. 28; C-161/17 (n. 3) para. 24. But cf. C-138/16 (n. 3) paras 26–7, suggesting that consideration of whether the communication in question is addressed to a ‘new public’ is also required when the specific technical means used is different. On whether terms and conditions of use of a certain website might be relevant to determine whether the public targeted by the defendant’s link is ‘new’, see (arguing in the negative) Pauline McBride, ‘The “New Public” Criterion after Svensson: the (Ir)relevance of Website Terms and Conditions’ (2017) 2017/3 IPQ 262, 275–7.

With regard to the notion of ‘act of communication’, case law now appears solidly oriented in the sense of requiring the mere making available of a copyright work—not

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

also its actual transmission14—in such a way that the persons forming the public may access it, irrespective of whether they avail themselves of such opportunity.15 In cases where the CJEU has held the making available of a work to be sufficient, the Court has however indicated the need to consider whether there is a necessary and deliberate intervention on the side of the user/defendant, without which third parties could not access the work at issue. More specifically, the user performs an act of communication when it intervenes—in full knowledge of the consequences of their action—to give access to a protected work to its customers, and does so, in particular, where, in the absence of that intervention, their customers would not, in principle, be able to enjoy the work.16 In this sense, the intervention of the user/defendant must result from a role that is ‘incontournable’; that is, an essential/indispensable role.17 With particular regard to the notion of essentiality/indispensability of one’s own intervention, the Court has recently clarified that an intervention which facilitates access to unlicensed content that would otherwise be more difficult to locate qualifies as an essential/indispensable intervention. Over time, the CJEU has dismissed attempts to interpret this criterion narrowly. A clear example is C-160/15 GS Media. In his Opinion in that case, Advocate General Wathelet had excluded tout court that the unauthorized provision of a link to a copyright work—whether published with the consent of the rightholder or not—could be classified as an act of communication to the public. This would be so on consideration that, to establish an act of communication, the intervention of the ‘hyperlinker’ must be vital or indispensable in order to benefit from or enjoy the relevant copyright work. Hyperlinks posted on a website that direct to copyright works freely accessible on another website cannot be classified as an ‘act of communication’: the intervention of the operator of the website that posts the hyperlinks is not indispensable to the making available to users of the works in question.18

14  This appeared to be the case in: C-283/10 (n. 3) para. 40; C-403 and 429/08 (n. 3) paras 190, 193, and 207; C-351/12 (n. 3) para. 25; C-325/14 (n. 3) para. 16; and C‑117/15 (n. 3) para. 38.
15  See C-306/05 (n. 3) para. 43; C-466/12 (n. 3) para. 19; C-160/15 (n. 3) para. 27; C-527/15 (n. 3) para. 36; C-138/16 (n. 3) para. 20; C-610/15 (n. 3) para. 19; C-161/17 (n. 3) para. 20. On the accessibility criterion, see (critically) Justine Koo, ‘Away we Ziggo: the Latest Chapter in the EU Communication to the Public Story’ (2018) 13(7) JIPLP 542, 545–6.
16  See C-306/05 (n. 3) para. 42; C-403 and 429/08 (n. 3) paras 194 and 195; C-431/09 (n. 3) para. 79; C-135/10 (n. 3) para. 82; C-162/10 (n. 3) para. 31; C‑117/15 (n. 3) para. 46; C-160/15 (n. 3) para. 35; C-527/15 (n. 3) para. 31; C-610/15 (n. 3) para. 26.
17  While the original language versions (French) of relevant judgments use the adjective ‘incontournable’ consistently, in the English versions that is not always the case: e.g. in C‑117/15 (n. 3) para. 46 and C-160/15 (n. 3) para. 35 the adjective used is ‘indispensable’, while in C-527/15 (n. 3) para. 31 and C-610/15 (n. 2) para. 26 the adjective ‘essential’ is employed. See, however, Giancarlo Frosio, ‘To Filter or not to Filter? That is the Question in EU Copyright Reform’ (2018) 36(2) AELJ 101, 114, suggesting that there would instead be a difference between the standards of ‘indispensability’ and ‘essentiality’ of one’s own role. Recently, see also the Opinion of Advocate General Maciej Szpunar in STIM and SAMI, C-753/18, EU:C:2020:4, suggesting a narrower construction of the notion of intervention ‘incontournable’.
18  Opinion of AG Melchior Wathelet in C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:221, paras 57–60.

Another criterion considered by the CJEU is whether or not the user/defendant merely provides physical facilities. While the mere provision of physical facilities does not amount to an act of communication to the public (recital 27), the installation of such

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

facilities may make public access to copyright works technically possible, and thus fall within the scope of Article 3(1) of the Information Society Directive.19 In addition to the requirements of an act of communication directed to a public, the Court has also considered—from time to time—other non-autonomous and interdependent criteria (having no clear textual basis), necessary to undertake an individual assessment of the case at hand. Such criteria may, in different situations, be present to widely varying degrees. They must be applied both individually and in their interaction with one another.20 In C-160/15 GS Media, the Court, among other things, relied in particular on the ‘profit-making’ character of the communication at issue to determine the potential liability of the ‘hyperlinker’ for the posting of links to unlicensed content. Prior to C-160/15 GS Media, the profit-making character of the communication in question had not been given the centrality that it acquired in that case: in C-117/15 Reha Training, for instance, the Grand Chamber of the CJEU considered that this criterion, while not irrelevant, would not however be decisive.21 In GS Media, instead, the Court adopted a rebuttable presumption that ‘when the posting of hyperlinks is carried out for profit, it can be expected that the person who posted such a link carries out the necessary checks to ensure that the work concerned is not illegally published on the website to which those hyperlinks lead, so that it must be presumed that that posting has occurred with the full knowledge of the protected nature of that work and the possible lack of consent to publication on the internet by the copyright holder’.22 Overall, in the context of communication to the public by linking, the Court deemed it necessary to move towards an assessment in which the subjective element is decisive to determine prima facie liability.23 The operation of this presumption was confirmed in the subsequent ruling in C-527/15 Stichting Brein.24 As discussed in detail elsewhere, it might not be self-evident whether the presence of a profit-making intention should be assessed in relation to the specific act of communication at hand, or the broader context in which such an act is performed. Although both alternatives may be plausible, consideration of the context in which the relevant link is provided is more in line with existing CJEU case law, both preceding and following GS Media.25

19  See C-306/05 (n. 3) paras 45–7.
20  See C-160/15 (n. 3) para. 34, referring to C-135/10 (n. 3) para. 79; C-162/10 (n. 3) para. 30 and C‑117/15 (n. 3) para. 35.
21  See C‑117/15 (n. 3) para. 49, referring to C‑607/11 (n. 3) para. 43 and C‑403 and 429/08 (n. 3) para. 204. Commenting favourably on the consideration of the profit-making character of the communication at issue, see Poorna Mysoor, ‘Unpacking the right of communication to the public: a closer look at international and EU copyright law’ (2013) 2013/2 IPQ 166, 182.
22  C-160/15 (n. 3) para. 51.
23  On this, see (critically) Tatiana Synodinou, ‘Decoding the Kodi Box: to Link or not to Link?’ (2017) 39(12) EIPR 733, 735.
24  See C-527/15 (n. 3) paras 49 and 51.
25  See Eleonora Rosati, ‘GS Media and its Implications for the Construction of the Right of Communication to the Public within EU Copyright Architecture’ (2017) 54(4) CML Rev. 1221, 1237–8. In a similar sense, see also Birgit Clark and Julia Dickinson, ‘Theseus and the Labyrinth? An Overview of “Communication to the Public” under EU Copyright Law: After Reha Training and GS Media Where Are We Now and Where Do We Go From Here?’ (2017) 39(5) EIPR 265, 269–70. Submitting instead that the

In C-306/05 SGAE, C-403 and 429/08 Football


Association Premier League and Others, and C-117/15 Reha Training, in fact, the Court considered that the profit-making nature of the communication would be apparent from the fact that the defendants transmitted the relevant works in their own establishment (hotels, a public house, and a rehabilitation centre, respectively) in order to benefit therefrom and attract customers to whom the works transmitted were of interest.26 In C-527/15 Stichting Brein, the CJEU identified the profit-making intention of the defendant in the circumstance that the relevant multimedia player ‘is supplied with a view to making a profit, the price for the multimedia player being paid in particular to obtain direct access to protected works available on streaming websites without the consent of the copyright holders’.27

2.  Liability of Platform Operators for the Making of Acts of Communication to the Public: The Pirate Bay Case

In its 2017 judgment in C-610/15 Stichting Brein, the CJEU further developed its construction of the right of communication to the public in Article 3(1) of the Information Society Directive, and clarified under what conditions the operators of an unlicensed online platform are potentially liable for copyright infringement. The operators of a platform that makes available to the public third party uploaded copyright content and provides functions such as indexing, categorization, deletion, and filtering of content may be liable for copyright infringement, jointly with the users. For a finding of liability, it is not necessary for the operator to possess actual knowledge of the infringing character of the content uploaded by users.

This reference for a preliminary ruling from the Dutch Supreme Court arose in the context of litigation between the Dutch anti-piracy foundation BREIN and two internet access providers regarding the application, by the former, for an order that would require the latter to block access for their customers to the website of the Pirate Bay. An engine for peer-to-peer (P2P) file-sharing, the Pirate Bay does not host any protected works. However, it operates a system by means of which metadata on protected works which is present on users’ computers is indexed and categorized for users, thereby allowing them to trace, upload, and download the protected works. It is estimated that the near totality (90–5 per cent) of the files shared on the network of the Pirate Bay

profit-making intention of the ‘hyperlinker’ is to be appreciated with regard to the particular act of hyperlinking, see Tito Rendas, ‘How Playboy Photos Compromised EU Copyright Law: the GS Media Judgment’ (2017) J. of Internet L. 11, 14.
26  See C-306/05 (n. 3) para. 44; C-403 and 429/08 (n. 3) paras 205–6; C‑117/15 (n. 3) paras 63–4.
27  C-527/15 (n. 3) para. 51.


contain copyright works distributed unlawfully.28 Despite several attempts to prevent access to the Pirate Bay, including blocking injunctions against internet service providers (ISPs) in several jurisdictions, the platform—also by using different domain names—remains easily accessible. The Dutch Supreme Court sought guidance from the CJEU on whether the operators of a website such as the Pirate Bay are to be regarded as doing acts of communication to the public within the meaning of Article 3(1). To answer this question, the CJEU noted that the right of communication to the public, on the one hand, has a preventive character and must be interpreted broadly and, on the other hand, requires an individual assessment that depends on the circumstances of the case.29 The Court agreed with Advocate General Szpunar that in the case at hand there could be no dispute that acts of communication to the public were being performed,30 and were directed to a ‘public’ (a ‘new public’).31 The point was, however, to determine whether the platform operators were responsible for them. Considering the first requirement in Article 3(1)—that is, the need for an ‘act of communication’—the Court acknowledged that the works made available to the users of the Pirate Bay were placed online on the platform not by the platform operators but by the users. However, by making the platform available and managing the platform, its operators provided users with access to the works concerned. They could, therefore, be regarded as playing an essential role in making the works in question available. As regards the requirement of full knowledge of the relevant facts, this was satisfied by consideration of how the Pirate Bay operators indexed torrent files so as to allow users of the platform to locate those works and share them within the context of a P2P network. Without such intervention, it would not be possible, or it would be more difficult, for users to share the works. The Court also dismissed the argument that the Pirate Bay operators could be regarded as providing mere physical facilities for enabling or making a communication, thus falling outside the scope of Article 3(1). The undertaking by the Pirate Bay operators of indexing, categorization, deleting, or filtering activities ruled out any assimilation to the mere provision of facilities within the meaning of recital 27. The making available and management of an online sharing platform must therefore be considered an act of communication for the purposes of Article 3(1).32 Turning to the requirement that the communication must be directed to a ‘new public’—that is, a public not taken into account by the copyright holders when they authorized the initial communication—the CJEU concluded that this requirement would also be met. The Court referred to the fact that the Pirate Bay operators were informed that their platform provided access to works published without the authorization of the relevant rightholders.33

28  C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:99, Opinion of AG Szpunar, para. 23.
29  See C-610/15 (n. 3) para. 22.
30  ibid. para. 35.
31  ibid. paras 40–4.
32  ibid. paras 36–9.
33  ibid. para. 45.

However, the CJEU did not limit liability to situations


of actual knowledge (as the Advocate General had done): it also included constructive knowledge (‘could not be unaware’) and arguably more. In relation to constructive knowledge, the Court observed how the Pirate Bay operators:

could not be unaware that this platform provides access to works published without the consent of the rightholders, given that, as expressly highlighted by the referring court, a very large number of torrent files on the online sharing platform [the Pirate Bay] relate to works published without the consent of the rightholders. In those circumstances, it must be held that there is communication to a ‘new public’.34

Although the Court did not mention it, liability based on ‘constructive’ knowledge echoes the reasoning in the decision in C-324/09 L’Oréal and Others, notably the part in which the CJEU suggested that the safe harbour in Article 14 of the e-Commerce Directive35 would not apply to an information society service which is aware of facts or circumstances on the basis of which a diligent economic operator should have identified the illegality in question and acted in accordance with Article 14(1)(b) of that Directive.36 The Court could have limited liability to situations of actual or constructive knowledge (as per the ‘diligent economic operator’ criterion). However, if that were the case, it would be difficult to understand the meaning of paragraphs 46 and 47 of the judgment, in which the CJEU referred to the profit-making intention of the defendants and seemingly linked that to a finding of prima facie liability:

[46] Furthermore, there can be no dispute that the making available and management of an online sharing platform, such as that at issue in the main proceedings, is carried out with the purpose of obtaining profit therefrom, it being clear from the observations submitted to the Court that that platform generates considerable advertising revenues.

[47] Therefore, it must be held that the making available and management of an online sharing platform, such as that at issue in the main proceedings, constitutes a ‘communication to the public’, within the meaning of Article 3(1) of Directive 2001/29.

34  ibid. (emphasis added).
35  Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.
36  See C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474, para. 120. See further, Eleonora Rosati, ‘The CJEU Pirate Bay Judgment and its Impact on the Liability of Online Platforms’ (2017) 39(12) EIPR 737, 743–4. For a discussion of the nature of ‘limitation’ or ‘exemption’ of the safe harbour within Art. 14 of the e-Commerce Directive, see further Eleonora Rosati, ‘Why a Reform of Hosting Providers’ Safe Harbour is Unnecessary under EU Copyright Law’ (2016) 38(11) EIPR 668, 671–2.

Although it did not refer explicitly to it, the Court had C-160/15 GS Media in mind (the Judge-Rapporteur was the same in both cases: Marko Ilešič), when it appeared to link together the making available and management of an online sharing platform, the


profit-making intention of their operators, and prima facie liability under Article 3(1). In particular, the relevant part of that judgment is paragraphs 47 to 54. As in that case, in Stichting Brein the CJEU implied that the operator of an online platform that does so ‘with the purpose of obtaining profit therefrom’37 can be expected to have undertaken all the necessary checks to ensure that the work concerned is not illegally published on the website to which those hyperlinks lead, so that it must be presumed that the posting has occurred with the full knowledge of the protected nature of that work and the possible lack of consent to publication on the internet by the copyright holder. In such circumstances, so far as that rebuttable presumption is not rebutted, the act of posting a hyperlink to a work which was illegally placed on the internet constitutes a ‘communication to the public’ within the meaning of Article 3(1) of the Information Society Directive.38 This interpretation finds support in two additional considerations. The first is that the reasoning of the Court follows Stichting Brein extensively (also here the Judge-Rapporteur was Ilešič). In particular, the Court referred with approval to paragraph 50 of that judgment, in which the CJEU had concluded that both the indispensable intervention of the defendant/user and its profit-making intention would lead to a finding of liability under Article 3(1). As mentioned earlier, in Stichting Brein the CJEU confirmed the validity and application of the GS Media presumption of knowledge. The second consideration is that ‘knowledge’ must not be understood in a subjective sense, that is as actual awareness of third party infringements by the platform operators, but rather—in line with earlier CJEU case law—as knowledge and acceptance of the possible consequences of one’s own conduct. Hence, it is not convincing to suggest that Stichting Brein is silent regarding the treatment of situations in which the operators of an online platform that makes available third party uploaded content have no actual knowledge of the unlawful character of the content thus made available, but nonetheless pursue a profit. On the contrary, the decision follows the same reasoning of the earlier CJEU decisions in GS Media and Stichting Brein: a profit-making intention on the part of the defendant may be sufficient to trigger a rebuttable presumption of knowledge, by the defendant, of the character—licensed or otherwise—of the content communicated through its platform.39

37  C-610/15 (n. 2), para. 46.
38  C-160/15 (n. 3), para. 51.
39  See contra, Christina Angelopoulos, ‘Communication to the Public and Accessory Copyright Infringement’ (2017) 76 CLJ 496, 498.

3.  Applicability of C-610/15 Stichting Brein to Less Egregious Scenarios

A long time after the decision in Stichting Brein, it still remains uncertain to what extent the conclusion reached may be applicable to less egregious scenarios than the Pirate Bay. According to the CJEU, an ‘intervention’ for the purpose of determining what amounts


344   Eleonora Rosati to an act of communication merely requires, in fact, the making of acts of indexing, categorization, deleting, or filtering of content. It is not relevant whether such activities are carried out manually or automatically, for example algorithmically: it is sufficient that a system is put in place to perform those activities. How many platforms would be caught within such a broad understanding of intervention as incontournable? National case law has begun emerging, although the issue remains controversial. This is also due to the fact that it is still uncertain whether the safe harbour for hosting providers in Article 14 of the e-Commerce Directive is available.40 A court in Austria (in the context of interim proceedings) ruled in 2018 that YouTube performs acts of communication to the public and may therefore be liable, on a primary basis, for the making available of infringing UUC.41 The Regional Court of Hamburg ruled that the Usenet provider UseNeXT would be liable if it promoted third party unauthorized making available and sharing of protected content.42 Germany’s Federal Court of Justice in Germany is also expected to rule on whether YouTube might be regarded as primarily responsible (and liable) for acts of communication to the public.43 The claimant in the latter case is a music producer who sued Google/ YouTube over the unauthorized making available, on the defendants’ platform, of videos containing musical works from the repertoire of a soprano. The claimant signed an exclusive contract with the singer in 2006, allowing him to exploit recordings of her performances. In 2008, unauthorized videos featuring such performances were made available on YouTube. Following a takedown request, the videos were removed but infringing material was made available once again shortly afterwards. 
In 2010, the first instance court sided with the claimant in respect of three songs and dismissed the action for the remaining claims.44 Both the producer and Google/YouTube appealed the decision, and in 2015 the appellate court only partly sided with the producer. Most importantly, it rejected the idea that YouTube could be regarded as primarily liable for the making available of infringing content, although it found that liability would subsist under the ‘Störerhaftung’ doctrine (a form of accessory liability) under section 97(1) of the Act on Copyright and Related Rights (Urheberrechtsgesetz, UrhG).45 In September 2018, the German court decided to stay the proceedings and make a reference for a preliminary ruling to the CJEU.46 The referral seeks guidance on the question whether the operator of an online video platform on which users make available to the public copyright-protected content without the right owners’ consent performs acts of communication to the public within the meaning of Article 3(1) when:

40  For a discussion of selected national experiences (in the EU: France, Germany, Netherlands, Poland, Spain, Sweden, and the UK), see further the questionnaires in João Pedro Quintais, ‘Global Online Piracy Study—Legal Background Report’ (July 2018).
41  See Handelsgericht [Commercial Court] Vienna [2018] 11Cg65/14t/56 (Aust.).
42  See Landgericht [District Court] (LG) Hamburg [2018] 308 O 314/16 (Ger.).
43  See Bundesgerichtshof [Supreme Court] (BGH) Haftung von YouTube für Urheberrechtsverletzungen [2018] I ZR 140/15 (Ger.).
44  See LG Hamburg [2010] 308 O 27/09 (Ger.).
45  See Oberlandesgericht [Higher Court] (OLG) Hamburg [2015] 5 U 175/10 (Ger.).
46  C-682/18 YouTube, pending.


• the platform makes revenue from advertisements;
• the upload of content is an automated process, without any control or checks by the platform before the content goes online;
• the platform receives (according to the terms of service) a worldwide, non-exclusive, and free licence for the uploaded videos for the duration the video is online;
• the platform reminds users in the terms of service and during the upload process that uploading content that infringes third parties’ copyrights is prohibited;
• the platform provides rights owners with tools to remove infringing content; and
• the platform sorts videos into categories, lists them by ranking, and suggests further videos to registered users according to videos previously watched;

provided the platform does not have actual knowledge of illegal activity or information or, on obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.47

4.  Other Implications: Primary/Secondary Liability and Safe Harbours

The decision in Stichting Brein also affected the divide between primary and secondary liability, by embracing an autonomous (EU) concept of liability through a process that, according to some commentators, was initiated as early as C-466/12 Svensson and Others.48 While the EU legislature has harmonized the conditions for primary liability, the existence of and conditions for a finding of liability as a secondary infringer have been left to the legal systems of individual Member States.49 By introducing a knowledge requirement in the

47  At the time of writing the referral had not yet been assigned a case number, nor had the questions referred been translated into English. For an explanation of the background to the question referred, see Mirko Brüß, ‘BREAKING: FCJ Refers Case Regarding YouTube’s Liability for Damages to the CJEU’ (The IPKat, 13 September 2018).
48  See Ansgar Ohly, ‘The Broad Concept of “Communication to the Public” in Recent CJEU Judgments and the Liability of Intermediaries: Primary, Secondary or Unitary Liability?’ (2018) 13(8) JIPLP 664, 670–1. In the same sense, see also, with regard to the impact on German law, Jan Nordemann, ‘Recent CJEU Case Law on Communication to the Public and its Application in Germany: A New EU Concept of Liability’ (2018) 13(9) JIPLP 744, 745; and, with regard to UK law, Neville Cordell and Beverley Potts, ‘Communication to the Public or Accessory Liability? Is the CJEU Using Communication to the Public to Harmonise Accessory Liability Across the EU?’ (2018) 40(5) EIPR 289, 293.
49  Giancarlo Frosio, ‘From Horizontal to Vertical: An Intermediary Liability Earthquake in Europe’ (2017) 12(7) JIPLP 565, 570, recalls that in the majority of EU Member States secondary liability is subject to highly demanding conditions that are derived from miscellaneous doctrines of tort law, e.g.
the doctrines of joint tortfeasance, authorization, inducement, common design, contributory liability, vicarious liability, or extra-contractual liability. See also Matthias Leistner, ‘Structural Aspects of Secondary (provider) Liability in Europe’ (2014) 9(1) JIPLP 75, 87–90, addressing the question whether common principles of secondary liability can be discerned.


scope of primary liability, the CJEU has blurred the distinction between what has traditionally been regarded as a strict liability tort (primary infringement) and liability informed by the defendant’s subjective state of actual or constructive knowledge (secondary infringement).50 All this is likely to result in practical uncertainties for those EU jurisdictions with a secondary liability regime, notably liability by authorization.51

The decision in Stichting Brein also raises the question whether a platform that is primarily liable for unauthorized acts of communication to the public can nonetheless invoke the safe harbour regime available to hosting providers under Article 14 of the e-Commerce Directive. As regards the relationship between liability under the Information Society Directive and the applicability of the e-Commerce Directive safe harbours, the former is without prejudice to the provisions of the latter (recitals 16 and 20 of the Information Society Directive). Nonetheless, confirmation that the operators of an online platform may be jointly liable with users for copyright infringement would indeed have an impact on the applicability of Articles 12 to 14 of the e-Commerce Directive: in proposing the adoption of the e-Commerce Directive, the European Commission sought to clarify the responsibility of providers for transmitting and storing information at the request of third parties; that is, when providers act as mere intermediaries.

Although outside the scope of this chapter, a similar trend towards greater accountability of providers may also be found in some recent decisions of the European Court of Human Rights (ECtHR), for example Delfi v Estonia52 and Magyar Tartalomszolgáltatók v Hungary,53 which suggest that in certain situations the mere provision of a notice-and-takedown system may be insufficient.
It appears that the insulation54 provided by the safe harbour regime does not apply to providers that go beyond the passive role of an intermediary. This means that a provider found liable for making unauthorized acts of communication to the public would also be likely to be regarded as playing an ‘active role’ (in the sense clarified by the CJEU in C-324/09 L’Oréal and Others) and would, as such, be ineligible for the protection offered under Article 14 of the e-Commerce Directive.55 This conclusion, which remains open to discussion,56 is supported by both textual references to the wording of the e-Commerce

50  See Christina Angelopoulos, ‘CJEU Decision on Ziggo: The Pirate Bay Communicates Works to the Public’ (Kluwer Copyright Blog, 30 June 2017).
51  See Graeme Dinwoodie, ‘A Comparative Analysis of the Secondary Liability of Online Service Providers’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017) 8 (noting that the concept of ‘authorization’ in this context is such as to establish ‘an act of nominally primary liability that clearly maps in substance to conventional forms of secondary or joint tortfeasor liability’).
52  See Delfi AS v Estonia App. no. 64569/09 (ECtHR, 16 June 2015).
53  See Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App. no. 22947/13 (ECtHR, 2 February 2016).
54  See Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016) para. 12.11.
55  In the same sense, see also Jan Nordemann, ‘Liability of Online Service Providers for Copyrighted Content—Regulatory Action Needed?’, Directorate General for Internal Policies—Policy Department A: Economic and Scientific Policy, IP/A/IMCO/2017-08-PE 614.207 (2018), 23.
56  Arguing that the safe harbour protection would be available in cases of primary and secondary infringements alike, see Martin Husovec, Injunctions Against Intermediaries in the European Union:


Directive and CJEU case law.57 In C-236–238/08 Google France and Google, the CJEU held that the exemptions from liability established in the e-Commerce Directive cover only cases in which the activity of the information society service provider is ‘of a mere technical, automatic and passive nature’, which implies that the service provider ‘has neither knowledge of nor control over the information which is transmitted or stored’.58

Further clarity on this point may be required. A possible solution, however, may be to interpret the presumption imposed by the CJEU in GS Media as part of a broader obligation to conform to the behaviour of a ‘diligent economic operator’. In this sense, operators of platforms with a profit-making intention would have an ex ante reasonable duty of care and be subject to an ex post notice-and-takedown system,59 which would also include an obligation to prevent infringements of the same kind, for example by means of re-uploads of the same content. Albeit in the different context of intermediary injunctions, the CJEU has already clarified that requiring a provider to take measures which contribute not just to bringing existing infringements to an end, but also to preventing further infringements of that type, is compatible with Article 15(1) of the e-Commerce Directive, as long as the relevant order is effective, proportionate, and dissuasive, and does not create barriers to legitimate trade.60

5. Conclusion

Over time, the CJEU—prompted by a significant number of preliminary referrals—has envisaged a broad construction of the right of communication to the public. One of the most significant developments has been the holding that the operators of a platform that permits UUC may, under certain conditions, be deemed to be making acts relevant to Article 3(1) of the Information Society Directive. The resulting far-reaching issues, which are yet to be fully worked out, are: first, whether the findings in Stichting Brein can also be applied to platforms other than those whose core business is piracy; secondly, whether a distinction between (harmonized) primary and (unharmonized) secondary liability still makes sense; and thirdly, whether the safe harbour protection is even in principle available to platforms that are deemed to make acts of communication to the public.

Accountable but not Liable? (CUP 2017) 56, also referring for support to C-291/13 Papasavvas [2014] ECLI:EU:C:2014:2209 and C-324/09 (n. 36); Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-based Analysis (Wolters Kluwer 2017) 68; Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016) paras 12.01, 12.11, and 12.37.
57  See C-324/09 (n. 36) para. 113, referring to C-236–238/08 Google France and Google [2010] ECLI:EU:C:2010:159, paras 114 and 120.
58  See C-236–238/08 (n. 57) para. 113.
59  In this sense, Matthias Leistner, ‘Closing the Book on the Hyperlinks: Brief Outline of the CJEU’s Case Law and Proposal for European Legislative Reform’ (2017) 39(6) EIPR 327, 331.
60  See C-324/09 (n. 36) paras 139 and 144.


While the second point mandates further reflection, an answer to the first and third questions seems to have been provided by Article 17 of the Directive on Copyright in the Digital Single Market. That provision, in both the original proposal61 and the final version adopted by the Council and the European Parliament, proceeds from the assumption that a platform that gives access to UUC directly performs acts of communication to the public.62 With regard to safe harbour availability, while the EU Commission’s original proposal also envisaged the applicability of Article 14 of the e-Commerce Directive to platforms potentially liable under Article 3(1) of the Information Society Directive, the final text of the Directive does not regard liability under Article 3(1) of the Information Society Directive and enjoyment of the insulation provided by Article 14 of the e-Commerce Directive as complementary or even compatible.63

All of this demonstrates that the path towards increased accountability and enhanced liability of intermediaries is already well underway, and that judicial and policy discourse are proceeding along similar—if not the same—paths.64 What remains to be seen is whether the approach taken in Article 17 of the Digital Single Market Directive is to be regarded as a clarification of the law as it already existed under the Information Society Directive,65 or whether, instead, it is an innovation that provides a new liability regime for a particular type of hosting provider; that is, what the Directive defines as ‘online content sharing service providers’. It appears that the former is the correct answer, and hopefully the CJEU will agree.

61  See Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market, COM(2016) 593.
62  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17(1). See also, for a discussion of Art. 17 of the reform, Chapter 28.
63  ibid. Art. 17(3).
64  I made this point in Eleonora Rosati, ‘The CJEU Pirate Bay Judgment and its Impact on the Liability of Online Platforms’ (2017) 39(12) EIPR 737, 746–8.
65  See, in this sense, Directive 2019/790/EU (n. 62) recital 64 (‘It is appropriate to clarify in this Directive that online content-sharing service providers perform an act of communication to the public or of making available to the public when they give the public access to copyright-protected works or other protected subject matter uploaded by their users. Consequently, online content-sharing service providers should obtain an authorisation, including via a licensing agreement, from the relevant rightholders. This does not affect the concept of communication to the public or of making available to the public elsewhere under Union law, nor does it affect the possible application of Article 3(1) and (2) of Directive 2001/29/EC to other service providers using copyright-protected content’) (emphasis added).


Chapter 18

Secondary Copyright Infringement Liability and User-Generated Content in the United States

Jack Lerner*

In the United States, the question of whether and when online service providers can be held liable for copyright infringement committed by their users has been one of the most heavily litigated controversies of the digital age. The answer to this question begins with the common law doctrine of secondary copyright infringement, as articulated by leading US Supreme Court opinions in Sony Corp. of America v Universal City Studios, Inc. and MGM v Grokster.1 In 1998, Congress enacted a statutory ‘safe harbour’ as part of the Digital Millennium Copyright Act (DMCA).2 Section 512 of the DMCA limits secondary copyright infringement liability for various types of online intermediaries. Over the next two decades, the entertainment and technology industries waged epic court battles over the meaning of section 512 and the shape of the secondary infringement doctrine in common law—most famously in Viacom International Inc. v YouTube, Inc., in which the parties spent over $100 million in attorney’s fees.3 Though controversies

*  The author wishes to thank Shaia Araghi and Kyoolee Park for their research assistance, Giancarlo Frosio for his patient editing, and Andrew Bridges and Eric Goldman for their comments on this topic.
1  See Sony Corp. v Universal City Studios Inc., 464 US 417 (1984) (US); MGM Studios Inc. v Grokster Ltd, 545 US 913 (2005) (US).
2  See Digital Millennium Copyright Act, Pub. L. no. 105-304, § 103, 112 Stat. 2860 (1998) (US).
3  See Liz Shannon Miller, ‘Google’s Viacom Suit Legal Fees: $100 Million’ (Gigaom, 15 July 2010). See generally Viacom Intern Inc. v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) (US).

© Jack Lerner 2020.


around new business models and market entrants continue to arise4 and will do so for the foreseeable future, and though some uncertainty remains—particularly with respect to the wilful blindness doctrine5—over the last ten years the dust has mostly settled in the law for intermediaries that host user-generated content (UGC). In order for such intermediaries to avail themselves of the section 512 safe harbour, they must comply with section 512’s many requirements—such as terminating the accounts of repeat infringers—but they may also wish to take more active steps to identify and take down infringing content posted on their platforms. To that end, private ordering solutions such as YouTube’s Content ID and Copyright Strikes programs abound.6 There have been multiple attempts at reform, including 2012’s Stop Online Piracy Act and companion bills, but none has been fruitful.7 It remains an open question whether the law strikes the right balance between preventing copyright infringement and enabling free expression and innovation—and, if not, what reforms are warranted.

1.  Secondary Copyright Infringement in Common Law

The Copyright Act does not expressly hold anyone liable for infringement committed by another, but the courts have developed two basic doctrines of secondary liability: vicarious infringement and contributory infringement.

The first, vicarious liability, is based on the concept of respondeat superior, under which one with significant supervisory authority over another can be held liable for the subordinate’s actions. The elements are that the defendant (1) has the right and ability to control the infringer’s conduct; and (2) receives a direct financial benefit from the infringement.8 The classic example is a dance hall or swap meet, in which the proprietor controls sales, receives attendance fees, and enjoys increased revenue with more infringement.9 Compare that example with that of a landlord who merely rents a space, does not know about infringing conduct, and exerts no other control; generally, the former would result in liability while the latter would not.10

4  See e.g. Capitol Records LLC v ReDigi Inc., 934 F.Supp.2d 640 (SDNY 2013) (US). In addition, infringement suits against service providers continue. See e.g. BMG Rights Mgmt (US) LLC v Cox Commc’ns Inc., 881 F.3d 293 (4th Cir. 2018) (US); American Chemical Society v Researchgate GBMH, no. 8:18-cv-03019-GJH (D Md. 2018) (US).
5  See the discussion of wilful blindness in Section 2.2.
6  See the discussion of technical measures in Section 3.1.
7  In fact, some attempts at reform (e.g. the introduction of the Stop Online Piracy Act and Protect IP Act) generated a significant backlash. See Stephanie Condon, ‘SOPA, PIPA: What You Need to Know’ (CBS News, 18 January 2012). See later for discussion of attempts at reform.
8  See Fonovisa Inc. v Cherry Auction Inc., 76 F.3d 259, 262 (9th Cir. 1996) (US) (holding swap meet organizer liable where organizer had right and ability to control sales, received attendance fees, and attendance increased with infringing vendors).
9  ibid.
10  ibid.


The second doctrine, contributory liability, has traditionally applied to ‘[o]ne who, with knowledge of the infringing activity, induces, causes or materially contributes to the infringing conduct of another’.11

Sony Corp. of America v Universal City Studios, Inc. (colloquially called the ‘Betamax case’ after the brand of the technology at issue), decided in 1984, remains a pivotal case on secondary copyright liability as it pertains to technology.12 Universal Studios, Inc. and others brought suit against Sony for manufacturing video tape recorders (VTRs) that some consumers used to record a programme in order to view it at a later time (called ‘time-shifting’ by the Court). Drawing on patent law’s ‘staple article of commerce’ doctrine, the Court held that ‘the sale of copying equipment, like the sale of other articles of commerce, does not constitute contributory infringement if the product is widely used for legitimate, unobjectionable purposes. Indeed, it need merely be capable of substantial noninfringing uses.’13 The Court went on to determine that most copyright holders that license their works for broadcast would not object to time-shifting by private viewers, and that time-shifting constituted fair use. Therefore, held the Court, the Betamax was capable of substantial non-infringing uses and Sony’s sale of it did not constitute contributory infringement.14

The Betamax case is of towering importance in the law of secondary copyright liability—so much so that it has been referred to as the ‘Magna Carta of the digital era’.15 In the eyes of many in the digital technology industry, the holding in the case essentially paved the way for future technological innovation,16 as innovators felt free to introduce new products and services to the market as long as their creations had ‘substantial non-infringing uses’.
That assumption would be tested by the invention of peer-to-peer file-sharing applications, litigation over which would lead to the Supreme Court’s next major pronouncement on secondary liability. In 1999, Napster, Inc. launched the first major peer-to-peer file-sharing application. Its MusicShare technology made music files stored on individual computers available

11  Fonovisa (n. 8). As discussed later, the Supreme Court’s opinion in MGM v Grokster arguably reformulated the contributory infringement standard to apply when a party is ‘actively encouraging (or inducing) infringement through specific acts . . . or on distributing a product distributees use to infringe copyrights, if the product is not capable of “substantial” or “commercially significant” noninfringing uses.’ MGM v Grokster (n. 1) 943 (Ginsburg, J., concurring).
12  Sony (n. 1).
13  ibid. 442.
14  When analysing whether a system is ‘capable of commercially significant noninfringing uses’, the court must assess both current and future non-infringing uses. A&M Records Inc. v Napster Inc., 239 F.3d 1004, 1021 (9th Cir. 2001) (US).
15  Lawrence Hurley and Liana Baker, ‘Echo of 1984 Betamax Landmark in U.S. High Court Aereo TV Fight’ (Reuters, 20 April 2014); Bruce Boyden, ‘The Most Important Supreme Court Case in Copyright Law: Sony Corp. v. Universal Studios (1984)’ (Marquette University Law School Faculty Blog, 1 November 2010).
16  See Eduardo Porter, ‘Copyright Ruling Rings With Echo of Betamax’ (New York Times, 26 March 2013).


to other users; indexed the contents of those files on Napster’s servers and enabled searching of those indexes; and enabled users to transfer copies of other users’ music files directly to one another via the internet.17 This and subsequent file-sharing applications were found to be secondarily liable under both vicarious and contributory liability theories, based on both the control element in vicarious liability—the courts distinguished the overall, and sometimes close, control of a digital system from the consumer products at issue in the Betamax case18—and the knowledge element in contributory liability.19

The question of secondary liability for peer-to-peer technologies came to a head in 2005 with Metro-Goldwyn-Mayer Studios, Inc. v Grokster, Ltd.20 Grokster distributed software that allowed computer users to share files through peer-to-peer networks, but unlike Napster it did not operate a central indexing server, and it claimed not to be able to tell whether its users were distributing unauthorized copies of copyrighted music and video files. As the case made its way to the US Supreme Court, parties and observers alike anticipated that the Court would revisit the Betamax holding. Instead, the Court left Betamax undisturbed and articulated a new theory of contributory infringement liability: active inducement.
Again drawing from patent law, the Court held that ‘one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties’.21 Examples of such steps include ‘advertising an infringing use or instructing how to engage in an infringing use, [which] shows an affirmative intent that the product be used to infringe’.22 The Court made clear, however, that ‘mere knowledge of infringing potential or of actual infringing uses would not be enough here to subject a distributor to liability. Nor would ordinary acts incident to product distribution, such as offering customers technical support or product updates, support liability in themselves.’ Rather, the Court held, liability would be premised on ‘purposeful, culpable expression and conduct’. Like Aimster and Napster before it, Grokster was found liable for secondary copyright infringement.23

It is not clear whether the traditional ‘knowledge plus material contribution’ standard for contributory infringement survives Grokster; the opinion of the Court simply defined contributory infringement as ‘intentionally inducing or encouraging direct infringement’,24 and construed Betamax to mean that if the product or service in question is not capable of substantial non-infringing uses, such intent will be presumed. ‘In sum, where an article is “good for nothing else” but infringement, there is no legitimate public interest in its unlicensed availability, and there is no injustice in presuming or imputing an intent to infringe.’25 Justice Ginsburg endorsed this formulation in her concurrence, declaring, ‘Liability under our jurisprudence may be predicated on actively encouraging (or inducing) infringement through specific acts (as the Court’s

17  See A&M Records Inc. v Napster Inc., 239 F.3d 1004 (9th Cir. 2001) (US).
18  See In re Aimster Copyright Litigation, 334 F.3d 643, 648 (7th Cir. 2003) (US).
19  ibid. 650; A&M Records (n. 17) 1020.
20  See MGM Studios (n. 1).
21  ibid. 936–7.
22  ibid. 936.
23  See MGM Studios (n. 1); In re Aimster (n. 18); A&M Records (n. 17).
24  See MGM Studios (n. 1) 930.
25  ibid. 932.


opinion develops) or on distributing a product distributees use to infringe copyrights, if the product is not capable of “substantial” or “commercially significant” noninfringing uses.’26 Nevertheless, numerous circuits of the Courts of Appeals continue to apply the ‘knowledge plus material contribution’ standard.27

The Grokster and Betamax cases remain the twin pillars of common law secondary copyright liability jurisprudence in the United States. Thus, for example, the Ninth Circuit relied on Grokster and Betamax in 2007 when it held that a credit card processing company could not be held liable for processing payments to websites that infringed the copyright of a magazine publisher,28 and in 2013 when it affirmed a permanent injunction against a group of websites that collected and organized files facilitating downloads using the BitTorrent protocol.29

26  ibid. 942. See also Average Joe’s Entm’t Grp LLC v SoundCloud Ltd, no. 3:16-cv-3294-JPM-JB, 2018 WL 6582829, at *3 (MD Tenn., 17 October 2018) (US) (applying this formulation and declining to apply the earlier standard).
27  In Perfect 10 Inc. v Amazon.com Inc., 508 F.3d 1146, 1170–3 (9th Cir. 2007) (US), the court attempted to square the Grokster test with the ‘knowledge plus contribution’ test, holding that a computer system operator can be held contributorily liable if it ‘has actual knowledge that specific infringing material is available using its system, and can take simple measures to prevent further damage to copyrighted works, yet continues to provide access to infringing works’ (emphasis in original; internal citations and quotation marks omitted). ibid. 1172.
28  See Perfect 10 Inc. v Visa Int’l Serv. Ass’n, 494 F.3d 788 (9th Cir. 2007) (US).
29  See Columbia Pictures Indus Inc. v Fung, 710 F.3d 1020 (9th Cir. 2013) (US).

2.  The Digital Millennium Copyright Act

In 1998, Congress passed the Digital Millennium Copyright Act, which among other provisions created a safe harbour for certain types of online intermediaries.30 In passing the Online Copyright Liability Limitation Act, which became Title II of the DMCA, Congress expressed a fear that ‘without clarification of their liability, service providers may hesitate to make the necessary investment in the expansion of the speed and capacity of the Internet. In the ordinary course of their operations service providers must engage in all kinds of acts that expose them to potential copyright infringement liability’.31 The Senate Committee on the Judiciary cited several recent cases involving online service providers that involved claims of contributory and vicarious copyright liability, and explained that, ‘[r]ather than embarking upon a wholesale clarification of

30  See S. Rep. no. 105-190, 8 (1998) (US).
Congress also enacted special legal protections against, and remedies for, circumventing technological measures such as encryption and password protection, codified at 17 USC §§ 1201 ff, citing ‘the ease with which digital works can be copied and distributed worldwide virtually instantaneously’ and the fear that ‘copyright owners will hesitate to make their works readily available on the Internet without reasonable assurance that they will be protected against massive piracy’.
31  ibid. 8.


these doctrines, the Committee decided to leave current law in its evolving state and, instead, to create a series of “safe harbors”, for certain common activities of service providers’, in which the service provider is protected from monetary damages and most forms of injunctive relief if it qualifies.32 In that sense, the DMCA abrogates secondary copyright infringement liability when it comes to certain activities, but it does not replace either the vicarious liability or the contributory liability doctrine.

Section 512 establishes four categories of intermediary activity that are eligible for the safe harbour:
(a) transitory digital network communications, or the ‘transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider’;
(b) system caching, or the ‘intermediate and temporary storage of material on a system or network controlled or operated by or for the service provider’;
(c) information residing on systems or networks at the direction of users, or ‘the storage at the direction of a user of material that resides on a system or network controlled or operated by or for the service provider’—essentially, UGC sites; and
(d) information location tools, or ‘referring or linking users to an online location containing infringing material or infringing activity, by using information location tools, including a directory, index, reference, pointer, or hypertext link’—in other words, search engines and web indexes.

In order to qualify for each safe harbour, numerous conditions must be met.
As a threshold matter, a party must be a ‘service provider’, which the statute defines as ‘a provider of online services or network access, or the operator of facilities therefor’,33 and each service provider must establish a policy for terminating the accounts of repeat infringers.34 In the case of ‘information residing on systems or networks at direction of users’, for example UGC sites, the service provider must designate an agent to receive takedown notifications and file the agent’s contact information with the Register of Copyrights.35 The statute also sets forth a detailed notice-and-takedown regime in which the service provider must remove infringing content on receiving a notification from the rightholder, and then must put the content back up on receiving a counternotification from the user; the content then stays up unless the rightholder returns with notice that it has ‘filed an action seeking a court order to restrain the subscriber from engaging in infringing activity’.36 32  ibid. 19. 33  17 USC § 512(k)(1)(B). 34  See 17 USC § 512(i)(1)(A). All service providers must also accommodate and refrain from interfering with ‘standard technical measures,’ § 512(i)(1)(B), defined as ‘technical measures that are used by copyright owners to identify or protect copyrighted works’ and are developed as an industry-wide standard, § 512(i)(2). No such standard has ever been promulgated for UGC sites that offer music or video. 35  See 17 USC § 512(c)(2). 36  ibid. § 512(c), (g).
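The notice, takedown, counter-notice sequence just described can be sketched as a small state machine. This is purely an illustrative model of the statutory workflow, not legal advice or a complete statement of section 512(g): the class and state names are invented for this sketch, and the statute imposes further timing and formality requirements that are omitted here.

```python
# Illustrative sketch of the s. 512(c)/(g) notice-and-takedown lifecycle.
# The Content class and state names are invented for this example; the
# statute imposes additional timing and formality requirements.

class Content:
    def __init__(self):
        self.state = "UP"

    def receive_takedown_notice(self):
        # s. 512(c)(1)(C): remove expeditiously on a substantially
        # compliant notification from the rightholder
        if self.state == "UP":
            self.state = "DOWN"

    def receive_counter_notification(self):
        # s. 512(g): on a counter-notification from the user, the material
        # is slated for restoration unless the rightholder goes to court
        if self.state == "DOWN":
            self.state = "PENDING_RESTORE"

    def rightholder_files_action(self):
        # notice that the rightholder has 'filed an action seeking a court
        # order to restrain the subscriber' keeps the material down
        if self.state == "PENDING_RESTORE":
            self.state = "DOWN"

    def restore_window_elapses(self):
        # absent such notice, the material goes back up
        if self.state == "PENDING_RESTORE":
            self.state = "UP"
```

As the sketch shows, the provider's obligations are purely reactive: each transition is triggered by a communication from the rightholder or the user, never by the provider's own monitoring.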


Secondary Copyright Infringement Liability in the US   355

Congress also made clear that service providers have no obligation to monitor their service or 'affirmatively seek . . . facts indicating infringing activity'.37 In addition, the statute provides with respect to 'information residing on systems or networks at direction of users' that a notification that fails to comply substantially with the elements set forth in section 512(c)(3)—such as signature and contact information—shall not constitute actual or 'red flag' knowledge under section 512(c)(1)(A).38 The most vexing requirement for courts and litigants applies to categories (c) and (d) in the previous list—UGC sites and search engines, respectively. For these activities, the service provider can only qualify for the safe harbour if it: (A)(i) does not have actual knowledge that the material or an activity using the material on the system or network is infringing; (ii) in the absence of such actual knowledge, is not aware of facts or circumstances from which infringing activity is apparent; or (iii) upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material; [and] (B) does not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity.39
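The structure of the quoted conditions can be restated, purely as an illustration, as a boolean predicate. The function and parameter names below are invented for this sketch; each parameter is a shorthand for a statutory term whose meaning, as the following discussion shows, has itself been heavily litigated.

```python
def qualifies_512c(actual_knowledge: bool,
                   red_flag_awareness: bool,
                   acted_expeditiously_to_remove: bool,
                   direct_financial_benefit: bool,
                   right_and_ability_to_control: bool) -> bool:
    """Illustrative paraphrase of 17 USC s. 512(c)(1)(A)-(B).

    Parameter names are invented shorthand for the statutory terms
    quoted in the text; this is a sketch, not a legal test.
    """
    # (A): no actual knowledge and no 'red flag' awareness, OR, upon
    # obtaining such knowledge or awareness, expeditious removal
    condition_a = ((not actual_knowledge and not red_flag_awareness)
                   or acted_expeditiously_to_remove)
    # (B): no financial benefit directly attributable to the infringing
    # activity where the provider has the right and ability to control it
    condition_b = not (direct_financial_benefit
                       and right_and_ability_to_control)
    return condition_a and condition_b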

Hundreds of millions of dollars in legal fees have been spent litigating the meaning of these two provisions. With respect to subpart (A), what level of knowledge constitutes actual knowledge and awareness of ‘facts or circumstances from which infringing activity is apparent,’ often called ‘red flag’ knowledge? Does generalized knowledge count, or must the service provider know of specific instances of infringement? With respect to subpart (B), the provision closely resembles the elements for vicarious liability,40 leading to confusion as to the difference between the common law doctrine and the statutory safe harbour, and the meaning of the terms ‘financial benefit’ and ‘right and ability to control’. For example, does the ability to take content down and terminate the accounts of repeat infringers constitute the ‘right and ability to control’? In considering these questions, it is important to keep in mind that notifications of claimed infringement are generally not enough to constitute knowledge of infringing activity, given that service providers regularly receive faulty notices.41 For UGC sites, answers to these questions were largely determined by two landmark appellate cases decided in 2012 and 2013. In Viacom International, Inc. v YouTube, Inc., Viacom and a coalition of sports leagues, film studios, television networks, and music companies filed suit against the operator of the UGC video-sharing site YouTube in 2008 for direct and secondary copyright infringement arising from the presence of clips 37  ibid. § 512(m)(1). 38  ibid. § 512(c)(3)(B)(i). 39  ibid. § 512(c), (d). 40 See Fonovisa (n. 7). 41 Jennifer  M.  Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown in Everyday Practice’, UC Berkeley Public Law Research Paper No. 2755628 (22 March 2017), available at SSRN: .


on YouTube that originated from Viacom and other copyright holders.42 YouTube claimed the protection of the section 512 safe harbour. The US Court of Appeals for the Second Circuit affirmed the district court's interpretation of section 512, while remanding for further fact-finding.43 In UMG Recordings Inc. v Shelter Capital Partners, Universal Music Group and other music companies brought suit against Veoh Networks, which operated Veoh.com, a UGC video-sharing site.44 There, the US Court of Appeals for the Ninth Circuit affirmed the lower court's grant of summary judgment, holding that the defendants were protected by the section 512(c) safe harbour.45

2.1  Actual and 'Red Flag' Knowledge

Both courts held that section 512(c)(1) requires knowledge or awareness of specific infringing activity, not generalized knowledge that infringing content is present on the site. In both cases, the plaintiffs submitted evidence showing that the sites' operators knew that a significant amount of infringing content had been uploaded to the site, and argued that this constituted either actual knowledge or 'red flag' knowledge. The Second Circuit in Viacom rejected this argument, reasoning that the argument would have rendered section 512 internally inconsistent given that section 512(c)(1)(A)(iii) requires that the service provider must 'act . . . expeditiously to remove, or disable access to' infringing material of which it has been made aware.46 That obligation contemplates that the service provider has knowledge or awareness of specific infringing material, the court explained, 'because expeditious removal is possible only if the service provider knows with particularity which items to remove' and because the statute requires action to remove '"the material" at issue'.47 The court then confronted the question of how actual knowledge differs from red flag knowledge if both require knowledge of specific instances of infringement. The difference, the court held, is not between specific and generalized knowledge, but instead between a subjective and an objective standard:

In other words, the actual knowledge provision turns on whether the provider actually or 'subjectively' knew of specific infringement, while the red flag provision turns on whether the provider was subjectively aware of facts that would have made the specific infringement 'objectively' obvious to a reasonable person. The red flag provision, because it incorporates an objective standard, is not swallowed up by the actual knowledge provision under our construction of the § 512(c) safe harbor.
Both provisions do independent work, and both apply only to specific instances of infringement.48 42 See Viacom Int’l Inc. v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) (US). 43 ibid. 44 See UMG Recordings Inc. v Shelter Capital Partners LLC, 718 F.3d 1006 (9th Cir. 2013) (US). 45 ibid. 46 See Viacom (n. 42) 30. 47  ibid. 31 (quoting 17 USC § 512(c)(1)(A)(iii)) (emphasis in original). 48  ibid. 31. In UMG Recordings, the Ninth Circuit embraced this approach to the difference between actual and ‘red flag’ knowledge. See UMG Recordings (n. 44) 1025.


The Second Circuit also observed that 'no court has embraced the contrary proposition—urged by the plaintiffs—that the red flag provision "requires less specificity" than the actual knowledge provision'.49 While affirming the district court's interpretation of the statute, the court vacated the order granting summary judgment in the light of evidence showing website surveys and various data which revealed, amongst other examples, that 75–80 per cent of YouTube's streams contained copyrighted material, and that emails between the YouTube co-founders showed some awareness of infringing activity on the site.50 The case subsequently settled.51 In UMG Recordings, the Ninth Circuit likewise held that section 512(c)(1)(A) requires knowledge of specific infringing activity. With respect to the actual knowledge requirement, the court relied on its earlier decision in A&M Records v Napster Inc., in which it held that 'absent any specific information which identifies infringing activity, a computer system operator cannot be liable for contributory infringement merely because the structure of the system allows for the exchange of copyrighted material'.52 Drawing on the legislative history, the court observed that copyright holders are better positioned than service providers to identify infringing copies, as service providers 'cannot readily ascertain what material is copyrighted and what is not'.53 As further support for its conclusion, the court also pointed out that the statute prohibits substantially deficient notices from being used to determine actual or red flag knowledge, and specifies that section 512 does not create an obligation to monitor or to affirmatively seek facts indicating infringing activity.54 The court reached a similar conclusion with respect to the 'red flag' knowledge provision in section 512(c)(1)(A)(ii): if general knowledge that infringing materials are on the site were enough to constitute red flag knowledge, 'the notice and takedown procedures would make little sense and the safe harbors would be effectively nullified'.55

2.2  Wilful Blindness

Both courts also addressed arguments that the service providers were 'wilfully blind' to infringing activity. Though the statute does not mention wilful blindness, the Second Circuit examined this doctrine, and held that it may be applied, when appropriate, to show knowledge or awareness of specific infringing activity;56 the court reasoned that, although Congress addressed knowledge in the statute, it did not '"speak directly" to the willful blindness doctrine'.57 The court defined wilful blindness as follows: 'A person is 49  ibid. 32. 50  ibid. 32–5. 51  See Jonathan Stempel, 'Google, Viacom Settle Landmark YouTube Lawsuit' (Reuters, 14 March 2014) . 52  UMG Recordings (n. 44) 1021 (quoting A&M Records (n. 17)). 53  ibid. 1022. 54  ibid. 1022 (citing 17 USC §§ 512(c)(3)(B)(i) and 512(m)). 55  ibid. 1024. 56  ibid. 1023; Viacom (n. 42) 34–5. 57  ibid.


willfully blind or engages in conscious avoidance amounting to knowledge where the person was aware of a high probability of the fact in dispute and consciously avoided confirming that fact.'58 Further, it held, section 512(m) (which makes clear that service providers have no affirmative duty to monitor for infringement) 'limits—but does not abrogate—the doctrine'. Therefore, held the court, 'the willful blindness doctrine may be applied, in appropriate circumstances, to demonstrate knowledge or awareness of specific instances of infringement under the DMCA'.59 The Ninth Circuit addressed this question more briefly, remarking that '[o]f course, a service provider cannot willfully bury its head in the sand to avoid obtaining such specific knowledge'.60 The Second Circuit's discussion of wilful blindness was the first major treatment of this doctrine in the context of the DMCA safe harbour, and it has created some confusion. Given that Congress does 'speak . . . directly' to a knowledge requirement in the language of the statute, where does wilful blindness fit in? The interpretation most consistent with section 512 is that, if the safe harbour requirements are otherwise met, wilful blindness can be found only where the defendant knowingly commits affirmative acts to avoid learning of specific infringing content. (Or, in the language of the Second and Ninth Circuits, the knowledge the provider is knowingly acting to avoid must be enough to establish subjective or objective knowledge of specific instances of infringement.) Recall that both Viacom and UMG Recordings held that actual knowledge of infringing content is not enough to overcome the safe harbour if it does not direct the provider to specific instances. In the light of that directive, it seems clear that wilful avoidance of general knowledge of infringement, even if that infringement is substantial, cannot be enough for liability in the section 512 context.

2.3  Right and Ability to Control

Section 512(c)(1)(B) provides that in order for the safe harbour to apply, the service provider cannot 'receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity'. In both Viacom and UMG Recordings, the plaintiffs argued that these terms should be interpreted in the same way as the common law doctrine of vicarious liability, and that because the defendants had the ability to take down infringing material, they had the 'right and ability to control' infringing activity. Both courts rejected this argument. In UMG Recordings, the Ninth Circuit observed that the plaintiffs' interpretation would have rendered much of section 512(c) a 'confusing, self-contradictory catch-22 situation'. Indeed, the court found, 'Congress could not have intended for courts to hold that a service provider loses immunity under the safe harbor provision of the DMCA because it engages in acts that are specifically required by [section 512(c)(1)(C) of] the

58  ibid. 35 (internal quotations omitted). 59  ibid. 60  UMG Recordings (n. 44) 1023.


DMCA to obtain safe harbor protection'.61 Instead, the court held, after discussing the legislative history, that Congress had made clear that it intended to 'carv[e] out permanent safe harbors to vicarious liability even while the common law standards continue to evolve'.62 Therefore, concluded the court, section 512(c) requires 'something more' than the general ability to locate infringing material and terminate access to it.63 The Second Circuit in Viacom reached the same conclusion, holding that 'the control provision dictates a departure from the common law vicarious liability standard' and requires 'something more' than the ability to remove or block access to infringing materials.64 What constitutes that 'something more'? In grappling with this question, the Second Circuit in Viacom observed that until the time of its decision, only one court had found that a service provider had the right and ability to control infringing activity under section 512(c)(1)(B).65 In Perfect 10 Inc. v Cybernet Ventures Inc., the US District Court for the Central District of California found control for the purposes of the section where a credit card verification service pre-screened participating sites, gave them 'extensive advice' and 'detailed instructions regard[ing] issues of layout, appearance, and content', prohibited certain types of images, prevented the proliferation of identical sites, and monitored participating sites for infringing content.66 The Second Circuit discussed this case, and surmised that inducement of copyright infringement67 might also constitute control under section 512(c)(1)(B) because the inducement standard 'premises liability on purposeful, culpable expression and conduct'.68 In the end, both courts held that the service provider must 'exert substantial influence on the activities of users'.69 The Second Circuit also distinguished the right and ability to control requirement in section 512(c)(1)(B) from the knowledge requirements in section 512(c)(1)(A): the service provider may, indeed, exert substantial influence 'without necessarily—or even frequently—acquiring knowledge of specific infringing activity'.70 After remand, YouTube settled with the plaintiffs; reportedly, the settlement did not require monetary payments by either party.71 And though Veoh was vindicated

61  ibid. 1029 (citing Ellison v Robertson, 189 F.Supp.2d 1051, 1061 (CD Cal. 2002), aff'd in part and rev'd in part on different grounds, 357 F.3d 1072 (9th Cir. 2004)); see also ibid. 1027 (citing UMG Recordings Inc. v Veoh Networks Inc., 665 F.Supp.2d 1099, 1113). 62  ibid. 1028. 63  ibid. 64  Viacom (n. 42) 38. 65  ibid. 37. 66  Perfect 10 Inc. v Cybernet Ventures Inc., 213 F.Supp.2d 1146, 1173, 1182 (CD Cal. 2002) (US). Since Viacom and UMG Recordings were decided, one other court has found the right and ability to control in the s. 512 context: SunFrog LLC involved an online marketplace where third party sellers uploaded designs and logos onto clothing and other items and sold them. The court denied a motion to dismiss based in part on a finding that the defendant had the s. 512 right and ability to control, finding that the marketplace was 'intimately—indeed indispensably—involved in transactions involving infringing goods', unlike other marketplaces such as eBay that do not print, ship, inspect, and promote the goods they sell. See H-D USA LLC v SunFrog LLC, 282 F.Supp.3d 1055, 1062 (ED Wis. 2017) (US) citing Hendrickson v eBay Inc., 165 F.Supp.2d 1082, 1094 (CD Cal. 2001). 67  See MGM Studios (n. 1). 68  Viacom (n. 42) 38 quoting MGM Studios (n. 1) 937. 69  ibid.; UMG Recordings (n. 44) 1030 (quoting Viacom). 70  Viacom (n. 42) 38. 71  See Stempel (n. 51).


by the decision in UMG Recordings, the litigation bankrupted the company and its assets were sold off.72

3.  User-Generated Content in the Shadow of the DMCA and Case Law

Though new secondary copyright infringement cases continue to be filed on a regular basis, little new law on section 512 has been made since Viacom and UMG Recordings. In this sense, the dust has largely settled in this area—at least in the United States. New UGC entrants, however, still face significant uncertainty. First, there is still some question as to what constitutes wilful blindness in the section 512 context and how the doctrine should be applied to UGC sites.73 This uncertainty could make litigation prohibitively expensive for many new entrants—an important consideration given how much the legal victories cost YouTube and Veoh. Another continuing problem is that section 512 simply contains so many requirements that it is arguably difficult to remain in compliance. For example, section 512(c)(2) requires that UGC service providers designate an agent to receive notifications of claimed infringement and register the agent with the US Copyright Office. In 2016, the Copyright Office began a switch from paper records to an online registration system and announced that registrations would be terminated for any providers who had not re-registered by 31 December 2017; service providers also must re-register every three years.74 The statute's complexity also acts as a deterrent to defendants seeking to use the section 512 safe harbour at trial, because the sheer number of requirements can be confusing to a jury.
Indeed, for some defendants, invoking the DMCA safe harbour may be more trouble than it is worth; in several recent cases, defendants have defeated secondary liability claims at the summary judgment stage without invoking section 512.75 Meanwhile, rightholders argue that the DMCA is insufficient to stem copyright infringement, and claim that infringement is still rampant online: users 'rip' from streams for personal, offline use, view illegal streams from 'cyberlocker' sites, and 72  See Eliot Van Buskirk, 'Veoh Files for Bankruptcy After Fending Off Infringement Charges' (Wired, 12 February 2010) . 73 See Eric Goldman, 'It's Really Hard to Win a Motion to Dismiss Based on 512(c)–Myeress v. Buzzfeed' (Technology and Marketing Law Blog, 5 March 2019) (arguing that the wilful blindness and active inducement doctrines are sufficiently uncertain to survive most motions to dismiss); see also Eric Goldman, 'How the DMCA's Online Copyright Safe Harbor Failed' (2014) 3 NTUT J. of Intell. Prop. L. & Mgmt 195. 74 See 37 CFR Part 201 . 75 See Average Joe's Entm't Grp; Perfect 10, Inc. v Giganews, Inc., 847 F.3d 657 (9th Cir. 2017).


download via peer-to-peer networks such as BitTorrent.76 In addition, rightholders devote substantial resources to finding and removing content on UGC sites. According to Google, in 2017 rightholders sent YouTube 'over 2.5 million takedown requests from over 300,000 copyright claimants, requesting the removal of more than 7 million videos'.77 This represents just 2 per cent of removal requests; the other 98 per cent came to YouTube via its Content ID program.78

3.1  Technological Measures

In addition to the measures required by the DMCA, such as terminating the accounts of repeat infringers, today many UGC sites go further by using technological measures to reduce infringement. YouTube's suite of rights management tools, including its Content ID and copyright claim system, is the most prominent of these, and due to YouTube's dominance among video-streaming sites79 it has been the subject of great debate. For works that are part of the Content ID program, an algorithm scans an uploaded video to compare its audio content to other uploads on the site.80 Rightholders need to apply for the program, which requires evidence that the rightholder owns the works in question.81 When the system registers an infringing video, the user receives either a Content ID claim or a copyright strike. A copyright strike means that the video has been removed, and it carries a series of consequences for users including temporary suspension of some privileges and eventual account termination.82 A Content ID claim does not carry the same severe penalties as a copyright strike; when such a claim is issued, rightholders may choose to divert the advertising revenue to themselves, mute the audio of the file in question, block it on certain platforms, or block the video entirely or in certain regions.83 The Content ID and copyright claim system has been the subject of criticism from uploaders, who complain of wrongful claims and unresponsiveness from YouTube. For example, in 2016 YouTube user TheFatRat uploaded a track he created titled 'The 76  See Laura Snapes and Ben Beaumont-Thomas, 'More than one third of music consumers still pirate music' (The Guardian, 9 October 2018) ; Digital Citizens Alliance, 'Good Money Still Going Bad: Digital Thieves and the Hijacking of the Online Ad Business' (May 2015). 77  Google.com, 'How Google Fights Piracy' (2018) 30.
A 2017 study by Urban, Karaganis, and Schofield identified numerous problems with the takedown system including incomplete or erroneous notices, and concluded that ‘the notice and takedown system is important, under strain, and that there is no “one size fits all” approach to improving it’. See Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown in Everyday Practice’, UC Berkeley Public Law Research Paper no. 2755628 (22 March 2017) . 78  Google.com (n. 77) 23. 79  See Alexa . 80 See ‘How Content ID works’ ; ‘YouTube Content ID: An Explainer for Musicians’ (Audiosocket, 1 May 2018) . 81  See ‘YouTube Content Verification Program’ . 82  See ‘Copyright Strike Basics’ . 83  See ‘What is a Content ID claim?’ .


362   Jack Lerner Calling’ that received nearly 50 million views.84 Over a year later, a music company representing a musician that had also uploaded a song titled ‘The Calling’ placed a content claim on TheFatRat’s song.85 Although TheFatRat appealed this claim, he reportedly lost some monetization of the video.86 Many content creators have expressed frustration at the arbitrary nature of these copyright claims, and have called for penalties on in­di­vid­uals who file fraudulent copyright infringement claims against other YouTube creators.87 Other sites have implemented technological tools for detecting copyright infringement, although rightholders complain that many sites have faulty flagging mechanisms. Facebook’s help centre offers a form to report copyrighted content, but users claim that Facebook’s method of identifying and removing copyrighted content is not quick enough and does not implement enough punitive action.88 In 2014, Vimeo began a Copyright Match system in order to prevent copyright infringement. Similar to YouTube’s Content ID system, Vimeo’s system uses a database to compare an uploaded video to other videos to find a potential match for copyright infringement. Vimeo’s system is less stringent than YouTube’s in that it uses a ‘tiered process’ rather than strict removal of videos that are allegedly infringing.89

3.2  Government Enforcement Efforts

The US government conducts substantial copyright enforcement efforts directed at online intermediaries. A major instance of government seizure occurred on 19 January 2012, when the US Federal Bureau of Investigation (FBI) seized and closed Megaupload, a popular file-storage and file-sharing site that was alleged to have facilitated massive copyright infringement costing copyright holders more than $500 million in revenue.90 The owners of Megaupload were criminally indicted on several counts. 84  See Josh Katzowitz, 'YouTube Stars say Unfair Copyright Claims are Making Their Lives Hell' (The Daily Dot, 18 December 2018) . 85 ibid. 86 ibid. 87  See Charlotte Hassan, 'What About All That Copyright Takedown Abuse, YouTube?' (Digital Music News, 29 February 2016) . 88 See Rob Price, 'Facebook's New Video Business is Awash with Copyright Infringement and Celebrities are Some of the Biggest Offenders' (Business Insider, 6 May 2015) ; Facebook, 'Reporting Copyright Infringements' . 89  See Andrew Flanagan, 'Vimeo to Launch Music Copyright ID System (Exclusive)' (Billboard Biz, 21 May 2014) ; Vimeo, 'How do Vimeo moderators decide if a video qualifies as fair use?' . 90  See Nick Perry, 'Popular file-sharing website Megaupload shut down' (USA Today, 20 January 2012) .


The government also conducts domain name seizures from time to time. For example, the US domain names of Rojadirecta, a Spanish website, were seized in January 2011 based on allegations that the site contained links to pirated streams and copyrighted videos. After nineteen months, the government dropped its claim against the site and a federal court required that the domain names be returned to the site's owner.91

4.  Policy Activity

There have been numerous attempts at legislative reform and other policymaking activity since the DMCA was passed in 1998, though Congress has made no major changes to the statute. This section presents a brief, non-exhaustive92 overview of legislative and executive activity towards reform.

4.1  Stop Online Piracy Act and Companion Bills

In 2011, the US Congress considered legislation aimed at improving enforcement of copyright online. The Senate's Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act (PIPA) and the House's Stop Online Piracy Act (SOPA) would have expanded government and private enforcement actions against websites containing substantial copyright infringement.93 The bills would have authorized the Attorney General to force internet access providers and domain name server operators to block access to infringing sites and redirect users to an Attorney General notice. The bills included various provisions towards that goal, such as court orders ending monetary payments to those websites from external financial providers such as credit card processors and advertisement platforms94 and court orders requiring that internet
93  See Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act of 2011 (PROTECT IP Act) s. 968, 112th Cong. (2011) (US); Stop Online Piracy Act (SOPA) HR 3261, 112th Cong. (2011) (US). 94 ibid.


service providers (ISPs) block access to those types of websites.95 The bills would also have imposed stringent penalties for the unauthorized streaming of copyrighted works. Opponents of the bills argued that they would stifle innovation, undermine the First Amendment's guarantee of freedom of speech, and essentially lead to widespread internet censorship.96 In response to the bills, on 18 January 2012 thousands of websites, including major sites such as Google, Wikipedia, and Reddit, coordinated an unprecedented protest against the bills in the form of a service blackout.97 Millions of people signed petitions in protest, and thousands called or emailed members of Congress to indicate their opposition to the bills. After the protest, many members of Congress rescinded their support for the bills, and they were withdrawn.98

4.2  US Department of Commerce Internet Policy Task Force

In 2010, the Department of Commerce formed the Internet Policy Task Force, comprising several bureaus in the department, including the US Patent and Trademark Office and the National Telecommunications and Information Administration, to 'identify leading public policy and operational challenges in the digital economy'.99 Over the next several years, the Task Force conducted a fact-finding process involving listening sessions, a symposium, and numerous stakeholder comments, aimed at 'assessing current policy related to copyright and the Internet, identifying important issues that are being addressed by the courts and those that are ripe for further discussion and development of solutions'. In 2013, the Task Force published 'Copyright Policy, Creativity, and Innovation in the Digital Economy', a comprehensive 'green paper' that made various findings and recommendations pertinent to secondary copyright infringement liability online.100 Among other suggestions, the Task Force recommended improvement of enforcement tools to lessen the frequency of online infringement. These included an increase in penalties for the streaming of copyrighted works, similar to that which exists for criminal reproduction and distribution; further study of statutory damages in the 95  See Amy Goodman, 'The Sopa blackout protest makes history' (The Guardian, 18 January 2012) . 96  See Larry Magid, 'What Are SOPA and PIPA and Why All the Fuss?' (Forbes, 18 January 2012) . 97  See Condon (n. 7). 98  See Jonathan Weisman, 'After an Online Firestorm, Congress Shelves Antipiracy Bills' (New York Times, 20 January 2012) . 99  USPTO Internet Policy Task Force, 'Copyright Policy, Creativity, and Innovation in the Digital Economy' (July 2013) Green Paper ('ITPF Green Paper') . 100 ibid.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Secondary Copyright Infringement Liability in the US   365

context of individual file-sharers, and secondary liability for large-scale online infringement; improvement of the Copyright Office’s database of designated DMCA agents; voluntary private sector initiatives to improve online enforcement; and public education and outreach efforts to inform consumers about both rights and exceptions and to encourage the use of legitimate online services.101 The Task Force also recommended a multistakeholder forum to develop best practices for service providers aimed at improving the operation of the notice-and-takedown system under the DMCA.102 The work culminated in 2015 with a seven-page ‘List of good, bad, and situational practices’ aimed at improving the notice-and-takedown process. The list contains basic but useful guidance for service providers, notice senders, and counter-notice senders.103

4.3  US House Judiciary Committee Copyright Review

In 2013, the US House Judiciary Committee initiated a comprehensive review of US copyright law104 and held twenty hearings over several years.105 Among other policy proposals that arose out of this process, in 2016 the Committee proposed that the Copyright Office host a small claims system to handle low-value infringement cases as well as bad faith section 512 takedown notices.106

4.4  US Copyright Office Study

In 2015, the US Register of Copyrights commenced a study ‘to evaluate the impact and effectiveness of the safe harbor provisions contained in section 512’.107 The study sought

101  ibid.
102  See Multistakeholder Forum on the DMCA Notice and Takedown System.
103  See Final Multistakeholder Document on DMCA Notice-and-Takedown Processes: List of Good, Bad and Situational Practices (April 2015).
104  See John Koetsier, ‘Copyright, DMCA, and public interest: House Judiciary Committee to conduct “comprehensive review” of U.S. copyright law’ (Venturebeat, 24 April 2013); Judiciary Committee, ‘Press Release: Chairman Goodlatte Announces Comprehensive Review of Copyright Law’ (Judiciary.house.gov, 24 April 2013).
105  See Judiciary Committee, US Copyright Review.
106  See Judiciary Committee, Reform of the U.S. Copyright Office.
107  See Section 512 Study; Section 512 Study: Announcement of Public Roundtable, 84 FR 1233-01.


written comments to thirty questions covering eight categories. The Copyright Office received over 92,000 written submissions in response, and held two public roundtable hearings. The Office sought further comments in 2016 and requested empirical research studies ‘assessing the operation of the safe harbor provisions on a quantitative or qualitative basis’, which yielded numerous additional comments and nine empirical studies. At the time of writing, the study is ongoing.

5.  Secondary Copyright Infringement Liability for Online Intermediaries Outside the United States

5.1  International Treaties and Trade Agreements

The two most comprehensive instruments governing international copyright law—the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) and the Berne Convention—do not contain safe harbour provisions. However, ‘safe harbour’ regimes similar to section 512 exist in numerous other nations, and appear in many trade agreements to which the United States is a party, as well as in many other trade agreements.108 These include the US–Chile Free Trade Agreement,109 the US–Australia Free Trade Agreement,110 the Korea–US Free Trade Agreement,111 and the US–Central America Free Trade Agreement.112 The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), a trade agreement between Pacific Rim countries, requires the parties to limit liability along lines similar to section 512.113

5.2  European Union

The EU’s e-Commerce Directive, promulgated in 2000, contains several provisions similar to those found in the section 512 safe harbour.114 Articles 12 and 13 direct

108  See, for further in-depth discussion, Chapter 9.
109  See US–Chile Free Trade Agreement, Art. 17.11(23).
110  See US–Australia Free Trade Agreement, Art. 17.11(29).
111  See Korea–US Free Trade Agreement, Art. 18.10(30).
112  See CAFTA, Art. 15.11(27).
113  See CPTPP, Art. 18.82. The United States withdrew from a predecessor agreement, the Trans-Pacific Partnership Agreement, in 2017. The CPTPP incorporates Art. 18.82 of the TPP, entitled ‘Legal Remedies and Safe Harbours’.
114  Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, Arts 12–15.


Member States to limit liability for service providers that conduct ‘routing’ and ‘caching’ under certain conditions, and Article 14 limits liability for services that provide ‘hosting’, where the service ‘does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent’, and if the provider acts expeditiously to remove infringing content.115 The EU Copyright Directive,116 promulgated in 2001, does not address service provider liability, but beginning in 2016 the EU began considering a comprehensive update to the Copyright Directive that could have important consequences for UGC providers. The Directive on Copyright in the Digital Single Market was approved in April 2019.117 The new Directive explicitly withdraws Article 14 of the e-Commerce Directive as it pertains to copyright and imposes new requirements on UGC sites. These include that the service provider must make ‘best efforts to prevent the availability of specific works or other subject matter by implementing effective and proportionate measures . . . to prevent the availability on its services of the specific works or other subject matter identified by rightholders’; the service provider must ‘act expeditiously to remove or disable access to these works or other subject matter and . . . demonstrate . . . that it has made its best efforts to prevent their future availability’.118 These provisions are widely seen as imposing, in direct opposition to the US regime and many other regimes around the world, an affirmative obligation for service providers to monitor their sites for infringing content, and to install technical measures similar to YouTube’s Content ID system. The provisions have been extremely controversial.119

6. Conclusions

For much of the digital age, secondary copyright infringement liability for online intermediaries was one of the most controversial, uncertain, and fast-changing areas of the law. Recent years have been relatively calm as reform activity and litigation seemed to slow (though they have certainly not stopped). Still, unauthorized streaming and downloading continue, even though new authorized streaming services have become increasingly important components of the entertainment ecosystem. Meanwhile, many

115  The e-Commerce Directive contains no similar provision for search engines.
116  Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10.
117  Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92. See, for further review, Chapters 15 and 28.
118  ibid. Art. 17(4).
119  See Sarah Jeong, ‘New EU copyright filtering law threatens the internet as we knew it’ (The Verge, 19 June 2018).


established UGC providers have chosen for practical reasons to take more active steps than the law requires to combat infringement, including working with content providers and using technological tools to find and remove infringing content. In the United States, new entrants to the UGC market face daunting challenges. They must comply with the numerous detailed requirements set forth in section 512, but they also must contend with uncertainty as to what conduct will expose them to claims of inducement and wilful blindness. Ultimately, they must be prepared to face litigation even if they respond to takedown notices expeditiously and actively seek to remove infringing content. Though reform activity has slowed in the United States, that could change at any time. In particular, the EU’s approval of Article 17 of the Directive on Copyright in the Digital Single Market represents a stunning withdrawal from a broad international consensus in favour of DMCA-like safe harbours—and in the United States it could embolden new attempts to modify section 512. In addition, as large platforms like Facebook and YouTube face increasing scrutiny from US lawmakers on issues unrelated to copyright, the political balance of power could shift in ways that would strengthen the position of critics of section 512. As recent events in Europe show, the compromise that section 512 represents will probably always be tenuous; for this reason, scholars and practitioners should keep abreast of current developments in secondary copyright infringement liability and understand how UGC providers and other intermediaries deal with infringement on their sites.


Chapter 19

Intermediary Liability and Online Trade Mark Infringement: Emerging International Common Approaches

Frederick Mostert*

In view of the exponential expansion of the volume and velocity of counterfeits which pop up on social media1 and web platforms,2 intellectual property counsel around the world are confronted with the growing necessity to focus intently, on a daily basis, on the various legal measures (and their cost) at their disposal. Intermediaries are equally occupied by the measures they may be required to take on board in dealing with the listing of fakes online.3 Against this reality, the pressing pivotal issue in the digital space remains how best to develop a balanced legal approach to the implementation of new

*  Substantially drawn (with updates to 2018) from my Study on Approaches to Online Trademark Infringement, WIPO/ACE/12/9 REV.2. I wish to express my gratitude to Stephanie Aronzon (LLM, King’s College London; J.D., New York University School of Law) for her invaluable research and contribution to this chapter.
1  See UK Intellectual Property Office, ‘Share and Share Alike: The Challenges from Social Media for Intellectual Property Rights’ (2017).
2  See World Intellectual Property Organization (WIPO), ‘IP Infringement Online: the Dark Side of Digital’ (WIPO Magazine, April 2011).
3  See Frederick Mostert, ‘Counterfeits on Ebay: who is responsible?’ (Financial Times, 17 July 2008).

© Frederick Mostert 2020.


370   Frederick Mostert

digital tools and the appropriately calibrated touchstone for intermediary liability to ensure the equanimity of underlying fundamental interests, including free speech, the freedom to compete, and intellectual property. De facto guidelines have already developed around the world, where rightholders, intermediaries, social media and web platforms, and government law enforcement authorities have voluntarily cooperated with each other and across borders to effectively combat digital counterfeits and pirated goods.4 Some intermediary liability legal principles have also started to emerge internationally in a surprisingly uniform manner. This is not to deny that substantive differences between jurisdictions remain. However, it is suggested that recognizing these common principles will provide an important first step towards developing consistent, cross-border technical and legal standards. Effective measures can only be implemented if voluntary technical and legal standards are developed and implemented uniformly on an international basis. As it stands now, the better option is the further development of a ius gentium around joined-up approaches to voluntary digital measures.

1.  The ‘Ratio’ Principles of Intermediary Responsibility—International Common Approaches

In general terms, although some jurisdictions may differ on certain aspects, liability for intermediaries falls into the following categories. First, the most basic is primary (or direct) infringement—doing the infringing act. Secondly, there is accessory (or secondary or contributory) liability—liability for assisting another person to do the infringing act. Both primary infringement and accessory liability expose the wrongdoer to an injunction and payment of damages. Thirdly, there is intermediary liability. This exposes an intermediary to an injunction only, not damages. It does not require the intermediary to be either a primary infringer or an accessory. Primary infringement is generally strict liability, whereas both accessory liability and intermediary liability involve a mental element, usually some form of knowledge.5 Although primary liability is in principle available, especially when intermediaries use trade marks in their online advertising,6 online intermediaries typically only provide services which allow the online infringer to sell counterfeit products. Therefore, the law’s

4  See Frederick Mostert, ‘Fakes Give Alibaba Chance to Turn Crisis into Opportunity’ (Financial Times, 8 June 2016). See also Mostert (n. 3).
5  Presented in WIPO, Advisory Committee on Enforcement: F. Mostert, ‘Study on Approaches to Online Trademark Infringement’ (1 September 2017) WIPO/ACE/12/9 REV.2, 14.
6  In the United States, liability based on s. 43(a) of the Lanham Act; Tiffany v eBay, 600 F.3d 93, 113–14 (2d Cir. 2010) (US).


Emerging International Common Approaches   371

response in most jurisdictions rests on the accessory liability of online intermediaries which have failed to take the necessary ‘reasonable measures’ when conducting their business of offering their services online.7 It is of interest to note that throughout different jurisdictions in the world there are three key similarities between approaches to intermediary and accessory liability internationally.8 In effect, a ‘ratio’ of common principles has emerged. First, across all jurisdictions, intermediaries are not subject to accessory liability9 if they had no knowledge of the specific infringement in question, subject to a finding of wilful blindness.10 Accessory liability is the liability of one person—the ‘accessory’—for their participation in an infringement committed by another—the ‘primary’ or ‘principal’ wrongdoer.11 In other words, accessory liability is liability that is dependent on the prior liability of another party.12 Secondly, an intermediary who fails to take measures expeditiously upon gaining specific knowledge of an infringement may lose its ability to enjoy safe harbour protection from liability.13 Generally speaking, safe harbour provisions refer to rules which grant intermediaries immunity from liability for damages under certain conditions. While the loss of safe harbour protection will not result in an automatic finding of liability, the underlying principles of intermediary liability suggest such a finding may be likely in certain jurisdictions.14 It is of interest to note that in virtually all jurisdictions safe harbours do not provide a defence to an injunction. Thirdly, there is increasing recognition of the availability of blocking injunctions against intermediaries or platforms—who, while innocent of trade mark infringement—must assist rightholders in stopping and preventing further infringement.
Website-blocking orders are confined to taking measures that are proportionate and necessary

7  See Annette Kur, ‘Secondary Liability for Trademark Infringement on the Internet: The Situation in Germany and Throughout the EU’ (2013‒14) 37 Colum. J. of L. & Arts 525, 532.
8  Similarly, in the European context, Christina Angelopoulos has noted that while ‘the theoretical clashes between the European national systems might be considerable, in practice the results do not greatly differ. Structure, rather than substance is what diverges’. Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016) 330.
9  While different terms have been used to refer to this concept across jurisdictions, the substance remains the same. Other terms that have been used internationally include ‘contributory liability’, ‘indirect liability’, ‘secondary liability’, ‘tertiary liability’, ‘third party liability’, and ‘interferer liability’.
10  Tiffany v eBay (n. 5) 109–10 (‘A service provider is not, we think, permitted willful blindness. When it has reason to suspect that users of its service are infringing a protected mark, it may not shield itself from learning of the particular infringing transactions by looking the other way . . .’). See also Louis Vuitton Malletier SA v Akanoc Solutions Inc., 658 F.3d 936, 1108 (9th Cir. 2011) (US).
11  Angelopoulos (n. 7) 16‒18.
12  However, it does not necessarily follow that an intermediary who does have knowledge will be held liable as an accessory, as an additional component of intent or gross negligence is generally required for an intermediary to be held liable. ibid. 282–3.
13  This may vary depending on the nature of the intermediary. E.g. caching and mere conduit intermediaries might retain safe harbour protection even if they do have knowledge of infringement.
14  e.g. the principles of joint tortfeasance in the UK or the application of Arts 1382 and 1383 of the Civil Code in France.


and effective, among other requirements.15 While it is unclear whether there are substantive differences in the extent of preventative action required, there appears to be a common red line, as no country currently appears to impose a general monitoring obligation on intermediaries. As a result, the scope of blocking injunctions is typically confined to taking preventative measures against the specific trade mark violation identified by the rightholder in the initial notice. The three common tenets outlined can be distilled into a transnational ‘ratio’ principle of intermediary responsibility: upon notice of a specific infringement, an internet service provider (ISP) is required to take all proportionate and reasonable measures which a reasonable ISP would take in the same circumstances to address the specific instance of infringement brought to its attention. The term ‘ratio’ principle derives from Roman law. First, from the term ratio decidendi, which refers to the core synthesis of a particular decision (usually by a court); in other words, the ‘reason’ or ‘rationale’ on which a decision is based. Secondly, the word for reasonable is rationabile in Latin. The ‘reasonable man’ standard under a duty of care in common law is well known—as is the bonus pater familias principle in civil law systems, which equally relies on ‘reasonable’ conduct under the same circumstances by the responsible party. Both these standards, and especially the term ‘reasonable’, have surfaced frequently in discussions of online liability in a number of jurisdictions as a common thread.16 This is an objective test analogous to the reasonable man test in common law jurisprudence and the bonus pater familias standard in civil law. The value of recognizing this principle is clear: it provides a strong basis for a common core framework for intermediary liability internationally.
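The three common tenets amount, in effect, to a simple decision procedure. The sketch below is purely illustrative: the class and function names are invented for this summary, and it deliberately flattens doctrine that is far more nuanced in practice (it models no particular jurisdiction's law).

```python
from dataclasses import dataclass

# Illustrative model of the three common tenets described in the text.
# All names here are invented for the sketch, not drawn from any statute.

@dataclass
class Notice:
    """A rightholder's notice to an intermediary."""
    identifies_specific_infringement: bool
    acted_on_expeditiously: bool

def accessory_liability_risk(has_specific_knowledge: bool,
                             wilfully_blind: bool) -> bool:
    # Tenet 1: no accessory liability without knowledge of the specific
    # infringement, subject to a finding of wilful blindness.
    return has_specific_knowledge or wilfully_blind

def retains_safe_harbour(notice: Notice) -> bool:
    # Tenet 2: failing to act expeditiously on specific knowledge
    # may forfeit safe harbour protection from damages.
    if not notice.identifies_specific_infringement:
        return True
    return notice.acted_on_expeditiously

def injunction_available() -> bool:
    # Tenet 3: blocking injunctions lie even against innocent
    # intermediaries; safe harbours are no defence to an injunction.
    return True
```

On this model, a notice identifying a specific listing that is ignored costs the intermediary its safe harbour, while an injunction remains available in every branch, which is precisely the asymmetry between damages and injunctive relief that the three tenets describe.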
Common standards are essential to facilitate cross-border enforcement, which is vital to effectively addressing the challenge of counterfeiting in the borderless online environment. While formal harmonization of legal standards would be procedurally challenging, it is suggested that this framework could take the form of a set of non-binding ‘voluntary country guidelines’ to facilitate quick and efficient implementation among Member States in today’s fast-moving digital environment. The word ‘reasonable’ has surfaced in judgments around the world, providing greater clarity on the measures internet service providers must take to satisfy the ‘ratio’ principle. A core guiding principle is striking an appropriate balance between the competing legitimate interests at play: the property interests of rightholders in their trade marks, the freedom of ISPs to conduct their business, the freedom of expression of internet users and intermediaries,17 and data protection and privacy interests.18 As Professor

15  See n. 36.
16  e.g. Christina Angelopoulos has adopted a similar approach in using the term ‘bonus medius interretialis’ to refer to the reasonable intermediary. Angelopoulos (n. 7) 258.
17  The freedom of expression of intermediaries was considered by the European Court of Human Rights in Delfi AS v Estonia [GC] App. no. 64569/09 (ECtHR, 16 June 2015); Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App. no. 22947/13 (ECtHR, 2 February 2016); and Pihl v Sweden (Decision) App. no. 74742/14 (ECtHR, 7 February 2017).
18  Data protection rights were considered by the CJEU in C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771; C-360/10 Belgische


Dinwoodie points out, the need to balance these principles was identified in the European context by the decision of the Court of Justice of the European Union (CJEU) in Telekabel.19 This reasoning also appears to underpin the judgment of Sullivan J in Tiffany v eBay, which held that eBay had implemented anti-counterfeiting measures as soon as it was ‘reasonably and technologically capable of doing so’, rejecting the claimant’s assertion that eBay could have done more to prevent the sale of counterfeit goods on its platform. This suggests the concept of ‘reasonable’ is not static, and the measures intermediaries should adopt can change over time. In China, for example, the standard of intermediary ‘indirect’ liability is one of ‘reasonable care’.20 Since the 2011 case of E-Land International v Taobao, the Chinese courts have gone further and established that the standard of reasonable care, which excuses the service provider from indirect liability, requires more than simply compliance with a ‘notice-and-takedown’ procedure.21 Service providers must take a more stringent and proactive approach to end unlawful activities, such as ‘publicly condemning the online seller’s unlawful activities; downgrading the seller’s “trustworthiness rating” (information available on the website); limiting the use of the website (by prohibiting the seller from selling certain products)22 and, as a last resort, banning the online seller from using the online platform’.23 The shift seen in Chinese courts was also accompanied by further policy measures.
The Circular of the Ministry of Commerce imposed several duties on online intermediaries and e-commerce operators, including the duties to ‘(i) establish a trade mark and patent inquiry system; (ii) adopt technical measures to monitor IPR infringements; (iii) improve accessibility to information on sellers, payments details, logistics, after-sale service, dispute resolution, compensation, process monitoring; (iv) institute a daily online inspection system; (v) investigate and remove infringing content in a timely manner; (vi) handle violations of regulations and laws; and (vii) report serious cases to the competent authorities in a timely manner’.24 These measures were reiterated in the Administrative Measures for Online Trading by the State Administration for Industry and Commerce in 2014.25 Furthermore, the 2016 Beijing Higher People’s Court Guidelines on Trial

18 (cont.)  Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85; and C-230/16 Coty Germany GmbH v Parfümerie Akzente GmbH [2017] ECLI:EU:C:2017:941. Privacy rights were considered in C-275/06 Productores de Música de España (Promusicae) v Telefónica de España SAU [2008] ECLI:EU:C:2008:54. This will certainly be relevant where the court is examining information disclosure orders.
19  See Graeme Dinwoodie, ‘A Comparative Analysis of the Secondary Liability of Online Service Providers’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017) 56‒99.
20  Tort Law of the People’s Republic of China of 26 December 2009, Art. 36 (Ch.).
21  See First Intermediate People’s Court of Shanghai, E-Land International Fashion (Shanghai) Co. Ltd v Du Guofa and Zhejiang Taobao Internet Co. Ltd [25 April 2011] case no. 40 (Ch.).
22  Michele Ferrante, ‘E-commerce Platforms: Liability for Trade Mark Infringement: Reflections on Chinese Courts’ Practice and Remedies Against the Sale of Counterfeits on the Internet’ (2015) 10(4) JIPPL 255, 258.
23  ibid.
24  ibid.
25  ibid.


Cases Involving Intellectual Property Rights lay out a requirement that intermediaries proactively ‘take appropriate reasonable measures’ to avoid repeat infringement.26 In effect, this points to how the ‘ratio’ principle is an inherently flexible standard that can be tailored to the specific parties in question. This is important as it allows the principle to account for platform differentiation. The diversity of platforms, and of the ways trade marks are used on different platforms, makes a one-size-fits-all approach inappropriate.27 For instance, a framework that is effective and appropriate in the context of commercial advertising networks or hosted e-commerce marketplaces might antagonise free-speech interests in relation to less commercially oriented products. Similarly, measures appropriate for a large, sophisticated ISP could differ from what might be appropriately imposed on a smaller provider with fewer resources. A series of judgments by Lord Justice Arnold imposing blocking orders requiring ISPs to block access to infringing websites in the UK demonstrates the virtues of adopting a flexible standard. Lord Justice Arnold has devised mechanisms to assuage the concerns of both intermediaries and internet users about over-blocking, as well as rightholders’ concerns about the ‘whack-a-mole’ problem.28 For instance, he has enabled rightholders to ‘amend and augment blocking requests without having to initiate new, separate proceedings’,29 while also recognizing that IP address blocking may be inappropriate where the relevant website’s IP address is shared with other non-infringing users.30 It remains to be seen how the relatively new Article 17 of the EU Directive on Copyright in the Digital Single Market might be implemented and change some of the legal principles outlined previously.
Article 17 effectively introduces strict, primary copyright infringement liability for online user-generated content platforms.31 As Professor Senftleben points out, this new strict liability of platforms for infringing content places a heavy burden on them.32 Also, in future, platforms may in these new circumstances no longer be able to rely on ‘safe harbour’ provisions, and the importance of specific ‘knowledge’ of infringing content may not be so determinative.

1.1  Injunctions for Blocking Websites by ISPs

Another emerging trend is the recognition of the possibility of issuing an injunction against an ISP (a blocking injunction) who, although innocent of trade mark infringement,

26  See Chapter 14, citing Beijing High People’s Court Guidelines on Trial of IP Cases involving Networks [13 April 2016].
27  Presented in WIPO, Advisory Committee on Enforcement: F. Mostert, ‘Study on Approaches to Online Trademark Infringement’ (1 September 2017) WIPO/ACE/12/9 REV.2, 16.
28  Dinwoodie (n. 18) 59‒62.
29  ibid. 62.
30  See Dramatico Entertainment Ltd v British Sky Broadcasting Ltd [2012] EWHC 1152 (Ch) [13] (Arnold J) (UK).
31  See Dirk Visser, ‘Trying to Understand Article 13’, SSRN Research Paper no. 3354494 (2019) 4.
32  See Martin Senftleben, ‘Bermuda Triangle—Licensing, Filtering and Privileging User-Generated Content Under the New Directive on Copyright in the Digital Single Market’, SSRN Research Paper no. 3367219 (2019) 17.


Emerging International Common Approaches   375 must assist the trade mark owner in stopping and preventing further infringement. The most notable developments have recently been seen in the UK.33 Outside Europe, both Australia34 and Singapore35 have passed legislation to facilitate website blocking; however, this has only been applied in the copyright context thus far.36 In the UK, there is no specific legislative provision stipulating for the issuing of blocking orders in the case of trade mark infringement. Although there is no statutory provision like section 97(A) of the Copyright, Designs and Patents Act 1988 for copyright, courts have found power to issue blocking orders in online trade mark counterfeit cases based on the broad powers of section 37 of the Senior Courts Act 1981,37 in line with the UK’s obligation under Article 11 of the Enforcement Directive to provide such injunctions. This was established in the case of Cartier, which has been confirmed by the Supreme Court.38 The courts’ discretion to issue a blocking injunction against an ISP on the basis that services were being used to infringe a trade mark is based on four conditions:39 (1) the ISP is an ‘intermediary’ within the meaning of Article 11 of the Enforcement Directive;40 (2) the users and/or the operators of the website are infringing the claimant’s trade mark; (3) the users and/or the operators of the website are using the ISP’s services to do so; and (4) the ISP had actual knowledge of this. 
In principle, IP address blocking can be proportionate if the right procedure is followed.41 The court took the following circumstances into consideration in its proportionality analysis:42 (1) the comparative importance of the rights engaged and the justifications for interfering with those rights; (2) the availability of alternative measures which were less onerous; (3) the efficacy of the measures that the order required the ISPs to adopt, and in particular whether they would seriously discourage the ISPs’ subscribers from accessing the target websites; (4) the costs associated with those measures, and in particular the costs of implementing them; (5) the dissuasiveness of those measures; (6) the impact of those measures on lawful users of the internet; and (7) the substitutability of other websites for the target websites.

33  See ‘Online Intellectual Property Infringements and Court-ordered Site Blocking in the United Kingdom’ in WIPO/ACE/12/10.
34  See Copyright Amendment (Online Infringement) Act 2015 (Aus.).
35  See Copyright Amendment Act 2014 (Sing.).
36  See Roadshow Films Pty Ltd v Telstra Corp. Ltd [2016] FCA 1503 (Aus.). See also Andy Leck and Faith Lim Yuan, ‘Updates on Site Blocking’ (Lexology, 7 February 2017).
37  See Cartier International AG & Others v British Sky Broadcasting Ltd & Others [2016] EWCA Civ 658 [72] (UK).
38  See Cartier International AG & Others v British Telecommunications Plc & Another [2018] UKSC 28 (UK).
39  See Cartier (n. 36) [139].
40  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L157/16.
41  Non-infringing sites must have the opportunity to be moved to other servers—Court of Appeal in Cartier (n. 36) [77]–[78].
42  ibid. [162] ff.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Frederick Mostert

2. The Ius Gentium of Voluntary Measures—International Common Approaches

In response to an increase in online sales of counterfeit products, online intermediaries and rightholders have, to some extent, engaged in voluntary cooperation,43 which has proved to be successful, albeit not fully effective, in stopping online counterfeit sales. These measures are either designed by online intermediaries themselves,44 drafted in cooperation with rightholders,45 or supported by states and their administrative authorities.46 In effect, a ius gentium of common principles or international common approaches has emerged. Such joined-up principles can be uniformly observed in the transnational approaches to voluntary measures that this chapter will proceed to highlight in more detail. This is aligned with calls, such as that by Knud Wallberg, for the development of a code of conduct for voluntary measures relating to counterfeit goods in the context of the cross-border, multi-jurisdictional environment of digital marketplaces.47 First, online intermediaries have developed initiatives to voluntarily remove counterfeit listings on notification through policies such as ‘notice and take down’. Secondly, intermediaries have voluntarily engaged in extensive proactive monitoring of uploaded listings using keywords and other indicia. Thus, when products are offered—particularly in certain categories—the software identifies the use of certain keywords, such as ‘faux’, ‘fake’, and ‘lookalike’, and flags these items for immediate review. Thirdly, intermediaries have adopted filtering systems that use algorithms to automatically remove and prevent counterfeit listings from being displayed (digital fingerprinting). Fourthly, intermediaries have adopted a ‘follow the money’ approach to tackle online infringement, and introduced measures to target payment processors for online traders who engage in the sale

43  Mostert (n 5). 44  Voluntary collaboration practices (VCPs). See EU Intellectual Property Office, ‘Study on Voluntary Collaboration Practices in Addressing Online Infringements of Trade Mark Rights, Design Rights, Copyright and Rights Related to Copyright’ (September 2016) (hereafter EUIPO Study 2016). 45  Presented in WIPO, Advisory Committee on Enforcement: C.  Aubert, ‘The Activities of the Federation of the Swiss Watch Industry in the Area of Preventative Actions to Address Online Counterfeiting’ (31 July 2015) WIPO/ACE/10/22. 46  In China, Administrative Measures for Online Trading (State Administration for Industry and Commerce, 2014) in Art. 6 encourage online commodity dealers and relevant service providers to form industry organizations and conventions, to promote industry self-discipline. On the EU level, Art. 16 of the e-Commerce Directive stipulates that Member States and the European Commission will promote the development of codes of practice among trade, industry, and consumer organizations; similarly, Art. 17 of the Enforcement Directive on codes of practice to prevent IP infringement. This led to the signing of a Memorandum of Understanding on the Sale of Counterfeit Goods (signed on 4 May 2011, now revised and open for signature since 21 June 2016). See EUIPO Study 2016 (n. 43) 16. 47  See Knud Wallberg, ‘Notice and Takedown of Counterfeit Goods in the Digital Single Market: A Balancing of Fundamental Rights’ [2017] JIPLP 922, 922–36.


of counterfeit goods.48 In some instances, intermediaries have, as a natural next step to tracing the money, also followed through with ‘notice and track-down’ measures to help rightholders to find the sellers of counterfeit products and stop the problem at source. Fifthly, voluntary registry systems have been implemented to enable rightholders to control how product listings featuring their trade marks can be displayed online—this includes being able to limit listings to a pre-approved list of sellers.49 In addition, these registry systems serve as prima facie evidence of rights ownership, facilitating the speedy takedown of counterfeit material when rightholders submit takedown notices. Sixthly, advertising codes of practice have been developed to discourage illegal content online. This is accomplished by preventing advertisements from being displayed on counterfeit websites (‘advertising misplacement’), cutting off advertising revenue.50 Lastly, intermediaries have also engaged in the education of users and businesses through educational campaigns, and at the time of uploading.51 However, it is important to note that technological innovations such as artificial intelligence and machine learning are not a silver bullet for solving the counterfeit problem. As Monahan points out, trade mark infringement is inherently different from copyright infringement and potentially more challenging to police.52 Policing copyright infringement relies on identifying unauthorized copies, and this is mainly a matter of finding an exact match or close match on a reference file, at which automated systems are adept. In contrast, trade mark infringement occurs in both the underlying product (which is often not available for inspection) and the advertisement or listing.
Sophisticated criminals list counterfeits using pictures of the authentic products, real descriptions, and prices close to the original, making them difficult to flag using automated technologies. For instance, optical-image or character-recognition technology could be circumvented by counterfeiters who use images of genuine products in their listings.

48  See e.g. the US International Anti-Counterfeiting Coalition payment processor initiative and portal programme, later named RogueBlock as discussed in EUIPO Study 2016 (n. 45) 14, 23‒4; Nicolas Gordon, ‘The Canadian Anti-Fraud Centre’s Project Chargeback: Leading the Charge(Back) Against Fakes!’ in WIPO Advisory Committee on Enforcement, ‘Coordinating Intellectual Property Enforcement at the National Level’ (5 July 2016) WIPO/ACE/11/8. 49  However, these efforts have been hampered by legal uncertainty regarding the compatibility of these measures with competition law. 50  See Austrian Ethics Code for the Advertising Industry (2014); UK principles of good practice of the trading of digital display advertising in EUIPO Study 2016 (n. 43) 14, 19‒23. See WIPO Advisory Committee on Enforcement, ‘Interactive Advertising Bureau (IAB) Poland’s Initiatives on Advertising Misplacement to Tackle Intellectual Property Rights (IPR) Infringement’ (20 July 2015) WIPO/ACE/10/21. In the EU, similar practices have been initiated in Austria, Denmark, France, Italy, the Netherlands, Poland, Slovakia, and the UK. 51  e.g. individuals who attempt to list an item that contains a particularly sensitive brand name and/ or a keyword, such as ‘faux’, will be presented with a warning message regarding the sale of counterfeits as a deterrent. 52  Presented in WIPO, Advisory Committee on Enforcement: F. Mostert, ‘Study on Approaches to Online Trademark Infringement’ (1 September 2017) WIPO/ACE/12/9 REV.2, 11, fn. 60.


Furthermore, as Chen notes, even rightholders struggle to distinguish between online listings which contain legitimate second-hand or grey goods and counterfeits.53 Therefore, it is important that overreach is avoided, especially in the context of using automated systems that are in the early stages of development and whose accuracy continues to improve over time. Moreover, these measures ultimately fail to address the actual source of the problem: the pirates and counterfeiters themselves. In summary, on the crucial point of constructive cooperation, as noted in the Financial Times: ‘Where do the recent epic legal battles leave web customers who are saddled with counterfeits daily? That there is a plethora of fakes online is glaringly obvious. Who then is responsible for removing the counterfeit products listed on (platforms)? As in so many walks of life, the answer lies in the constructive co-operation. The answer for assessing responsibility lies in the middle—both sides should in equal measure diligently confront the online counterfeit problem together. Brand owners and auction sites need to work together and share the responsibility to stop fakes, like wildfire, to avoid a restraint on the progress of society.’54

2.1  Freedom of Expression, Competition, and Data Protection

While one of the main objectives of voluntary measures, codes of practice, or other soft law mechanisms is to increase the effectiveness of intellectual property rights enforcement online, it is important to strike a ‘fair balance’ between the fundamental rights of the main actors involved in such procedures.55 These fundamental rights include freedom of expression and information and lawful competition. In considering and balancing the underlying societal interests at stake, Professor Melville Nimmer’s concept of ‘definitional balancing’56 of opposing fundamental interests to arrive at an appropriate demarcation of boundaries is apposite here. As he aptly pointed out, virtually no right can be absolute—an absolutist starting point is both unrealistic and unreasonable. Rights have boundaries and their reach is limited by balancing opposing policies. Trade mark rights must be balanced against freedom of expression and lawful competition. Trade mark law can be misused by some rightholders to shut down competition from non-preferred resellers of genuine products and to control distribution channels.57 For instance, if misapplied, ‘whitelists’ where product listings or websites are approved by rightholders could put at risk legitimate comparative advertising, sales of used, refurbished, or genuine goods, and commentary on rightholders.58 Analogous concerns have been

53  ibid. 11, fn. 61. 54  Mostert (n. 3). 55  Knud Wallberg, ‘Notice and Takedown of Counterfeit Goods in the Digital Single Market: A Balancing of Fundamental Rights’ [2017] 12(11) JIPLP 922, 922–36. 56  Melville Nimmer, ‘Does Copyright Abridge the First Amendment Guarantees of Free Speech and Press?’ (1970) 17 UCLA L. Rev. 1180, 1184–93. 57  See Mostert (n. 26). 58 ibid.


highlighted by Professor Hugenholtz in the copyright context, where intermediaries may adopt risk-averse policies, removing legitimate listings and limiting freedom of expression and information.59 Therefore, it is important that safeguards are in place to prevent misuse of trade mark law to restrict lawful competition from non-preferred resellers of genuine products, and to control distribution channels. Safeguards could include penalties for misuse, counter-notice procedures, and greater transparency. Any ‘whitelists’ introduced must clearly be limited to serving solely as a reference point and checklist of authentic versus counterfeit for platforms, domain name registrars, law enforcement, and administrative authorities to use as a source and provenance reference, and not to control the distribution of or interfere with the sale of genuine goods. But, like all such rights, freedom of expression and competition also have their own limits if abused.60 As Professor Thomas McCarthy eloquently puts it: ‘Trademark law serves to protect both consumers from deception and confusion over trade symbols and to protect the plaintiff’s infringed trademark as property . . . the interest of the public in not being deceived has been called the basic policy.’61 This tenet of trade mark law also applies when a counterfeiter uses a trade mark to deceive and lure web customers to its site through false pretences and the use of another’s registered trade mark. As a form of fraud, counterfeits usually raise no free speech issues. As the US Supreme Court and a number of other courts around the world have stipulated, deceptive speech is not protected speech.62 Although there is an indisputable need to protect the privacy of individuals, there must also be an acknowledgement that individuals and businesses that hide behind the rubric of free expression to conceal bad acts should not be allowed to do so.
As Justice Abella, along the same lines, cogently put it: ‘This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders. We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods.’63

59  See Bernt Hugenholtz, ‘Code of Conduct and Copyright: Pragmatism v Principle’ . 60 See Frederick Mostert and Martin Schwimmer, ‘Notice and Trackdown’ (Intellectual Property Magazine, June 2011) 18‒19. 61  Thomas McCarthy, Trademarks and Unfair Competition (Thomson Reuters 2014) section 2:1. 62  Counterfeits as a form of deception qualifies as ‘deceptive speech’ which the US Supreme Court has earmarked as unprotected speech under the First Amendment. See Friedman v Rogers, 440 US 1 (1979). See also Canadian Supreme Court in R v Oakes [1986] 1 SCR 103 (finding that the Canadian Charter of Rights and Freedoms allows reasonable limitations on rights and freedoms through legislation if it can be demonstrably justified in a free and democratic society) and the European Commission on Human Rights in X and Church of Scientology [1979] 16 DR 68 [79] (finding that misleading or deceptive commercial expression deserves the least protection). 63  Google Inc. v Equustek Solutions Inc. 2017 SCC 34 [42] (Abella J) (Can.).



3. Conclusions

As anyone in charge of enforcement efforts will attest, the lack of uniform international guidelines has made tackling counterfeits in a borderless digital environment even more challenging. Nonetheless, even though trade marks are territorial, a well-developed set of global, uniform guidelines has, for example, emerged around ‘famous and well-known marks’.64 Equally, the internet needs an approach to enforcement of trade mark rights that can extend internationally. Effective measures can only be implemented if consistent, cross-border technical and legal standards are developed and implemented. As it stands now, the better options are the further development of the ius gentium around international common approaches on voluntary measures and the appropriate standards and responsibilities for intermediaries in accordance with the international ‘ratio’ principles.

64  WIPO, ‘WIPO Joint Recommendation Concerning Provisions on the Protection of Well-Known Marks’ (1999) 833(E), Art. 2 .


Chapter 20

Intermediary Liability and Trade Mark Infringement: Proliferation of Filter Obligations in Civil Law Jurisdictions?

Martin Senftleben

The erosion of the safe harbour for hosting in the EU Directive on Copyright in the Digital Single Market (DSM Directive)1 leads to a remarkable climate change in the field of EU copyright law and the civil law jurisdictions of continental EU Member States.2 1  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17(3). 2  Almost all EU Member States follow the civil law tradition. Even in the UK, Scotland is a civil law jurisdiction. As to the climate change, see Martin Senftleben, ‘Bermuda Triangle—Licensing, Filtering and Privileging User-Generated Content Under the New Directive on Copyright in the Digital Single Market’, VUA Centre for Law and Internet Working Paper Series (2019) . As to the heated debate accompanying the legislative process, see Martin Senftleben and others, ‘The Recommendation on Measures to Safeguard Fundamental Rights and the Open Internet in the Framework of the EU Copyright Reform’ (2018) 40 EIPR 149; Christina Angelopoulos, ‘On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market’ ; Giancarlo Frosio, ‘From Horizontal to Vertical: An Intermediary Liability Earthquake in Europe’ (2017) 12 JIPLP 565–75; Giancarlo Frosio, ‘Reforming Intermediary Liability in the Platform Economy: A European Digital Single Market Strategy’ (2017) 112 Northwestern U. L. Rev. 19; Reto Hilty and Valentina Moscon (eds), ‘Modernisation of the EU Copyright Rules—Position Statement of the Max Planck Institute for Innovation and Competition’, Max Planck

© Martin Senftleben 2020.


Inevitably, it raises the question of potential repercussions on the safe harbour for hosting and filtering standards in trade mark cases. Even though online marketplaces are explicitly exempted from the new copyright rules3 and the DSM Directive is not intended to neutralize the safe harbour for hosting in trade mark cases,4 the adoption of a more restrictive approach in copyright law may quicken the appetite of trade mark proprietors for similar measures in trade mark law. If primary platform liability and strict filtering standards made their way into copyright law,5 why not adopt a similar approach in trade mark law? Moreover, the cumulation of copyright and trade mark protection—often following from character merchandising6—may lead to system competition already at this point. Does a biting Mickey Mouse parody on YouTube fall under the strict copyright regime (which no longer offers the traditional safe harbour for hosting) or the more flexible approach in trade mark cases7 (still allowing the invocation of the safe harbour rules)? In the light of potential repercussions, it is of particular importance to point out that the intellectual property rights at issue—copyright on the one hand; trade mark rights on the other—are different in many respects. In comparison with the infringement test Institute for Innovation and Competition Research Paper no.
17-12 (2017); Reto Hilty and Valentina Moscon, ‘Contributions by the Max Planck Institute for Innovation and Competition in Response to the Questions Raised by the Authorities of Belgium, the Czech Republic, Finland, Hungary, Ireland and the Netherlands to the Council Legal Service Regarding Article 13 and Recital 38 of the Proposal for a Directive on Copyright in the Digital Single Market’ (2018) ; CREATe and others, ‘Open letter to Members of the European Parliament and the Council of the European Union’ (26 April 2018) ; Eleonora Rosati, ‘Why a Reform of Hosting Providers’ Safe Harbour is Unnecessary Under EU Copyright Law’, CREATe Working Paper 2016/11 (August 2016) ; Sophie Stalla-Bourdillon and others, ‘Open Letter to the European Commission—On the Importance of Preserving the Consistency and Integrity of the EU Acquis Relating to Content Monitoring within the Information Society’ (19 October 2016) . 3  See Directive 2019/790/EU (n. 1) recital 62. 4  ibid. Art. 17(3). 5  ibid. Art. 17(1) and (4). 6 As to the dilemmas arising from cumulative copyright and trade mark protection, see Martin Senftleben, ‘A Clash of Culture and Commerce: Non-Traditional Marks and the Impediment of Cyclic Cultural Innovation’ in Irene Calboli and Martin Senftleben (eds), The Protection of Non-Traditional Marks: Critical Perspectives (OUP 2018) 309; Irene Calboli, ‘Overlapping Rights: The Negative Effects of Trademarking Creative Works’ in Susy Frankel and Daniel Gervais (eds), The Evolution and Equilibrium of Copyright in the Digital Age (CUP 2014) 52; Estelle Derclaye and Matthias Leistner, Intellectual Property Overlaps—A European Perspective, (Hart 2011) 328; Annette Kur, ‘Funktionswandel von Schutzrechten: Ursachen und Konsequenzen der inhaltlichen Annäherung und Überlagerung von Schutzrechtstypen’ in Gerhard Schricker, Thomas Dreier, and Annette Kur (eds), Geistiges Eigentum im Dienst der Innovation (Nomos 2001) 23; D. 
Feer Verkade, ‘The Cumulative Effect of Copyright Law and Trademark Law: Which Takes Precedence?’ in Jan Kabel and Gerard Mom (eds), Intellectual Property and Information Law— Essays in Honour of Herman Cohen Jehoram (Kluwer 1998) 69. 7  The Mickey Mouse drawing is registered internationally as a trade mark in respect of various goods and services. See international registration nos 135376, 135377, and 296478 under the Madrid System. The particulars of the registrations can be found at . As to the status quo reached at EU level with regard to the safe harbour for hosting in trade mark cases, see C-236–238/08 Google France and Google [2010] ECLI:EU:C:2010:159, para. 114; C‑324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474, para. 120.


following from the exploitation rights awarded in copyright law, an act of trade mark infringement can be less readily inferred from the mere appearance of a trade mark on an online platform. The infringement test in EU trade mark law is more context-specific than the infringement analysis in copyright law (Section 2). Moreover, limitations of trade mark rights provide room for unauthorized use that serves (commercial) freedom of expression and freedom of competition.8 The use of filtering mechanisms must not wash away these important nuances of the scope of trade mark protection. In particular, a distinction is to be made between legitimate comparative advertising and infringing consumer confusion, legitimate brand criticism and infringing defamation, and legitimate offers of second-hand goods and infringing sales of replicas. Otherwise, trade mark owners obtain overbroad protection (Section 3). Leading case law of the Court of Justice of the European Union (CJEU) on the question of intermediary liability and filter obligations points towards a cautious approach in trade mark cases—an approach that does not undermine the inherent limits and statutory limitations of trade mark rights.9 However, examples of court decisions in civil law jurisdictions, such as Germany, show a tendency to develop national doctrines that allow the imposition of more extensive filtering duties (Section 4). Against this background, it is to be recalled10 that a balanced approach based on the principle of proportionality should prevail in trade mark cases (concluding Section 5).

1.  Trade Mark Rights

Trade mark rights are a specific type of intellectual property. Instead of constituting exploitation rights, they are granted to safeguard market transparency.11 In principle, every trader is bound to use her own individual trade mark for the identification of her goods and services. In this way, trade mark law seeks to ensure fair competition between market participants and protect consumers against confusion about the origin of goods and services offered in the marketplace. From an economic perspective, it can be added that the clear indication of the commercial origin of goods and services reduces consumers’ search costs.12 To achieve these goals, trade mark law allows enterprises to establish an 8  cf. Martin Senftleben and others, ‘The Recommendation on Measures to Safeguard Freedom of Expression and Undistorted Competition: Guiding Principles for the Further Development of EU Trade Mark Law’ (2015) 37 EIPR 337. 9  See n. 7. 10  For an earlier analysis leading to this recommendation, see Martin Senftleben, ‘An Uneasy Case for Notice and Takedown: Context-Specific Trademark Rights’, Amsterdam VU Centre for Law and Governance Research Paper (2012) . 11  Annette Kur and Martin Senftleben, European Trade Mark Law—A Commentary (OUP 2017) para. 1.06. 12  With regard to the search costs argument, see Andrew Griffiths, ‘A Law-and-Economic Perspective on Trade Marks’ in Lionel Bently, Jennifer Davis, and Jane Ginsburg (eds), Trade Marks and Brands—An Interdisciplinary Critique (CUP 2008) 241; Stacy Dogan and Mark Lemley, ‘Trademarks and Consumer


exclusive link with a distinctive sign: the law grants enterprises exclusive rights to their trade marks. This protection mechanism has important ramifications. By exclusively assigning a trade mark to a specific enterprise, trade mark law offers sufficient legal security for substantial investment in the sign concerned. Through advertising, consumers may learn to associate a particular lifestyle or attitude with the trade mark. The maintenance of high product quality will add an additional layer of positive associations. In this way, the trade mark becomes a platform which the enterprise, provided that its marketing strategy is successful, can use to raise positive pictures, associations, and expectations in the minds of consumers.13 The process of attaching these ‘metadata’ to the trade mark can result in the creation of a powerful and valuable brand image.14 The initial step of sign reservation thus leads to a further process of brand image creation.15 Trade marks are capable of serving as carriers of complex information referring to a specific lifestyle, behaviour, or attitude. Viewed from this perspective, trade marks constitute focal points of communication with consumers in the digital environment.16 They summarize the messages subtly conveyed to consumers through advertising and marketing efforts. As a symbol evoking a whole bundle of associations, trade marks begin to ‘speak’ to consumers. They acquire a ‘communication function’.17 Hence, two major protection interests can be identified in trade mark law: first, the interest in a protection system that allows an enterprise the establishment of an exclusive link with the sign that it uses on goods or services, and that safeguards this link once it is established (sign reservation). Protection in this area can be regarded as a precondition Search Costs on the Internet’ (2004) 41 Houston L. Rev.
777; Mathias Strasser, ‘The Rational Basis of Trademark Protection Revisited: Putting the Dilution Doctrine into Context’ (2000) 10 Fordham Intellectual Property, Media & Entertainment L.J. 375, 379–82. 13  cf. Griffiths (n. 12) 255; Ralph Brown, ‘Advertising and the Public Interest: Legal Protection of  Trade Symbols’ (1999) 108 Yale  L.J.  1619, 1619–20; Karl-Heinz Fezer, ‘Entwicklungslinien und Prinzipien des Markenrechts in Europa—Auf dem Weg zur Marke als einem immaterialgüterrechtlichen Kommunikationszeichen’ [2003] Gewerblicher Rechtsschutz und Urheberrecht 457, 461–2; Sabine Casparie-Kerdel, ‘Dilution Disguised: Has the Concept of Trade Mark Dilution Made its Way into the Laws of Europe?’ (2001) 23(4) EIPR 185, 185–6; Michael Lehmann, ‘Die wettbewerbswidrige Ausnutzung und Beeinträchtigung des guten Rufs bekannter Marken, Namen und Herkunftsangaben—Die Rechtslage in der Bundesrepublik Deutschland’ [1986] GRUR Int’l 6, 14–17. 14  See Jonathan Schroeder, ‘Brand Culture: Trade marks, Marketing and Consumption’ in Bently, Davis, and Ginsburg (n. 12) 161. 15  See the description given by Schroeder (n. 14) 161; Lehmann (n. 13) 15. 16  cf. Strasser (n. 12) 382–6; Brown (n. 13) 1619–21; Mark Lemley, ‘The Modern Lanham Act and the Death of Common Sense’ (1999) 108 Yale L.J. 1687, 1690 and 1693. 17  Brown (n. 13) 1634–5 and 1640–4, refers to ‘commercial magnetism’ and a ‘persuasive advertising function’. As to the recognition of this trade mark function in EU trade mark law, see C-487/07 L’Oréal Sa and others v Bellure and others [2009] ECLI:EU:C:2009:378, para. 58. Cf. Kur and Senftleben (n. 11) paras 1.12–1.39. As to the underlying concept of average consumer, see Graeme Dinwoodie and Dev Gangjee, ‘The Image of the Consumer in European Trade Mark Law’ in Dorota Leczykiewicz and Stephen Weatherill (eds), The Images of the Consumer in EU Law: Legislation, Free Movement and Competition Law (Hart 2016) 339.


of investment in the protected sign because it provides the required legal security. In modern trade mark law, this first protection interest is predominantly satisfied by the concept of protection against confusion—with distinctive character as a central substantive requirement and trade mark rights serving the rather defensive purpose of preventing confusing use of identical or similar signs.18 Secondly, however, there is the interest in protecting the investment made in a particular brand image (brand image creation). Protection in this area concerns the value and reputation of a trade mark, resulting from advertising and promotion activities.19 Not surprisingly, exploitation and amortization interests often play an important role in trade mark cases addressing the specific reputation and communication value of trade-marked symbols.20

1.1  Inherent Limits

Whatever the stage of trade mark development, however, trade mark rights, by definition, do not offer a degree of control over the use of the protected sign that is comparable to the control that can be exerted by copyright owners.21 In contrast to copyright, trade mark law does not protect the trade-marked sign as such. Trade mark protection only concerns particular trade mark functions, such as the aforementioned identification function, the quality function, and the communication function.22 Copyright protection of the Mickey Mouse drawing, for instance, means that the drawing as such is protected. The copyright owner acquires a broad right of reproduction23 and a broad right of communication to the public.24 Accordingly, the mere act of copying the Mickey Mouse drawing or making it available on the internet may amount 18  See Directive 2015/2436/EU of the European Parliament and of the Council of 16 December 2015 to approximate the laws of the Member States relating to trade marks [2015] OJ L336/1, Art. 10(2)(b); Regulation 2017/1001/EU of 14 June 2017 on the European Union trade mark [2017] OJ L154/1, Art. 9(2)(b). 19  See Directive 2015/2436/EU (n. 18) Art. 10(2)(c); Regulation 2017/1001/EU (n. 18) Art. 9(2)(c). 20  cf. C-487/07 (n. 17) para. 58. 21  For a more detailed discussion of this point, see Martin Senftleben, ‘The Trademark Tower of Babel—Dilution Concepts in International, US and EC Law’ (2009) 40 Int’l Rev. of Intellectual Property and Competition L. 45. As to the tendency of bringing trade mark rights closer to exploitation rights (as awarded in copyright law), see Jonathan Moskin, ‘Victoria’s Big Secret: Whither Dilution Under the Federal Dilution Act?’ (2004) 93 Trademark Reporter 842, 856–7 (insisting on ‘a property right similar to copyright or patent’). Cf. Fezer (n. 13) 464 and 467; Casparie-Kerdel (n. 13) 188; Brown (n. 13) 1620.
In fact, the risk of creating property rights in gross seems to be inherent in the dilution doctrine. It can be traced back to early concepts, such as Schechter’s concept of trade mark uniqueness. See Frank Schechter, ‘The Rational Basis of Trademark Protection’ (1927) 40 Harv. L. Rev. 813, 824–33; Barton Beebe, ‘A Defense of the New Federal Trademark Antidilution Law’ (2006) 16 Fordham Intellectual Property, Media & Entertainment L.J. 1143, 1146–7 and 1174; Graeme Dinwoodie and Mark Janis, ‘Dilution’s (Still) Uncertain Future’ (2006) 105 Mich. L. Rev. First Impressions 98. 22  See C-487/07 (n. 17) para. 58. 23  See Berne Convention for the Protection of Literary and Artistic Works, Art. 9(1); Directive 2001/29/EC of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10, Art. 2. 24  See WIPO Copyright Treaty, Art. 8; Directive 2001/29/EC (n. 23) Art. 3.


to infringement, if the user cannot successfully invoke a copyright limitation as a defence.25 Trade mark protection of the Mickey Mouse drawing,26 by contrast, is confined to ensuring the proper functioning of that drawing as a trade mark. From the outset, protection is confined to use of the mark in the course of trade and as a trade mark.27 Because of these inherent limits of trade mark rights, non-commercial use for private, social, cultural, educational, or political purposes is less likely to fall under the control of the trade mark owner. The mere act of copying is not sufficient to establish infringement. The crucial point here is that the monopoly position of the trade mark owner—the degree of general control over the use of the protected subject matter—is less absolute than the position enjoyed by a copyright owner. Instead of being able to exert general control over certain types of use, such as acts of copying, the trade mark owner’s success in invoking trade mark rights depends on the particular context in which the use takes place. The mere reproduction or making available of a trade mark is not sufficient. Further conditions are to be fulfilled, in particular use in the course of trade, use in relation to goods or services, use with an adverse effect on a protected trade mark function, and use that is likely to cause confusion or dilution.28 Against this background, it becomes clear that a liability regime relating to trade marks must be context-specific enough to allow the consideration of the inherent limits of trade mark rights. Otherwise, the liability rules will lead to overbroad control over trade mark-related online communications that does not correspond to the scope of the underlying exclusive rights. The reason for less complete communication control lies in fundamental differences in the underlying rationales of protection.
In contrast to exclusive rights in the area of copyright, trade mark rights are not primarily intended to serve as exploitation instruments. They are legal instruments to ensure market transparency, undistorted

25  See Directive 2001/29/EC (n. 23) Art. 5 for limitations of copyright protection that are permissible in EU copyright law. Cf. C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECLI:EU:C:2009:465, paras 56–7; C-145/10 Eva-Maria Painer v Standard Verlags GmbH and Others [2013] ECLI:EU:C:2013:138, paras 132–3; C-201/13 Johan Deckmyn and Vrijheidsfonds VZW v Helena Vandersteen and Others [2014] ECLI:EU:C:2014:2132, paras 22–6. 26  See n. 7. 27  The requirement of use in trade is reflected in Art. 16(1) TRIPs Agreement. As to the debate on the requirement of use as a trade mark, see Stacey Dogan and Mark Lemley, ‘The Trademark Use Requirement in Dilution Cases’ (2008) 24 Santa Clara Computer & High Tech. L.J. 541, 542: ‘By maintaining the law’s focus on misleading branding, the trademark use doctrine keeps trademark law true to its ultimate goal of promoting competitive markets.’ However, see also Graeme Dinwoodie and Mark Janis, ‘Confusion Over Use: Contextualism in Trademark Law’ (2007) 92 Iowa L. Rev. 1597, 1657–8 (doubting that problems arising in the current ‘expansionist climate’ could be solved by recalibrating the notion of trade mark use: ‘Trademark use is simply too blunt a concept, no matter how defined, to capture the full range of values at play in these debates’). For a summary of the debate, see Mark Davison and Frank Di Giantomasso, ‘Use as a Trade Mark: Avoiding Confusion When Considering Dilution’ (2009) 31(9) EIPR 443. With regard to the discussion in the EU, see Annette Kur, ‘Confusion Over Use?
Die Benutzung “als Marke” im Lichte der EuGH-Rechtsprechung’ (2008) GRUR Int’l 1 (warning of limiting trade mark protection from the outset on the basis of a restrictive notion of trade mark use, in particular with regard to Community trade marks). 28  For a discussion of these infringement criteria, see Kur and Senftleben (n. 11) paras 5.14–5.272.


competition, and protection of consumers against confusion.29 Accordingly, a finding of trade mark infringement depends on the individual circumstances of use. The identification of trade mark infringement on online platforms—marketplaces, search engines, keyword advertising services, social media, or virtual worlds—requires a careful analysis of each individual case. The mere appearance of a trade mark can only serve as a starting point for this analysis. For a finding of infringement, the assessment of the context in which the trade mark appears is indispensable. In the area of social media, for instance, the inherent limits of trade mark rights become particularly relevant. If a trade mark is used in the context of a social networking site to share information about a product, the question arises whether this use constitutes use in the course of trade and relevant use as a trade mark in the sense of trade mark law.30 Many communications on social media platforms will fall outside the scope of trade mark protection because of their non-commercial nature. The importance of this breathing space for unauthorized, trade mark-based communication must not be underestimated. Online ‘word of mouth’ can help consumers make good choices. The suppression of unfavourable online word of mouth may, moreover, diminish the credibility of the internet as an independent information resource.31 The outlined inherent limits of trade mark rights, therefore, fulfil important functions in respect of consumer information and information infrastructure.

1.2 Context-Specific Infringement Analysis

In sum, the unauthorized appearance of a trade mark on an online platform does not allow the establishment of infringement with a degree of certainty that is comparable to copyright cases. In trade mark law, mere copying or making available is not sufficient. Instead, further infringement conditions must be satisfied, such as use in the course of trade, use in relation to goods or services, a likelihood of confusion, harm to the trade mark’s reputation, or the taking of unfair advantage of the trade mark’s distinctive character. The liability question is also complex because of the diversity of online platforms and use modalities involved. Hence, the alignment of trade mark liability and filtering standards with obligations to employ automated, algorithmic filtering mechanisms in copyright law32 will inevitably fail to reflect the inherent limits of trade mark protection—limits that follow from the context-specific nature of the exclusive rights awarded in trade mark law. Serious concerns about overblocking have been expressed with regard

29  ibid. para. 1.06. 30 For a closer analysis of these questions, see Lisa Ramsey, ‘Brandjacking on Social Networks: Trademark Infringement by Impersonation of Markholders’ (2010) 58 Buffalo L. Rev. 851, 872–94. 31  See Eric Goldman, ‘Online Word of Mouth and its Implications for Trademark Law’ in Graeme Dinwoodie and Mark Janis (eds), Trademark Law and Theory: a Handbook of Contemporary Research (Edward Elgar 2008) 404, 428–9. 32  cf. Senftleben (n. 2) 5–10.


to the use of algorithmic filtering systems in copyright law.33 When it comes to trade marks, these concerns have even more weight because of the described context-specific nature of exclusive rights.

2. Limitations of Trade Mark Rights

The rights enjoyed by trade mark owners—rights that are more context-specific than copyright from the outset—are additionally limited through several exemptions of use that are deemed important for economic, social, or cultural reasons.34 Competing fundamental freedoms, in particular freedom of expression and freedom of competition, play a crucial role.35 Like the inherent limits of trade mark rights, statutory limitations of

33  cf. Maayan Perel and Niva Elkin-Koren, ‘Accountability in Algorithmic Copyright Enforcement’ (2016) 19 Stan. Tech. L. Rev. 473, 490–1. For empirical studies pointing towards overblocking, see Sharon Bar-Ziv and Niva Elkin-Koren, ‘Behind the Scenes of Online Copyright Enforcement: Empirical Evidence on Notice & Takedown’ (2017) 50 Connecticut L. Rev. 3, 37 (noting that ‘[o]verall, the N&TD regime has become fertile ground for illegitimate censorship and removal of potentially legitimate materials’); Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown in Everyday Practice’, UC Berkeley Public Law and Legal Theory Research Paper Series (March 2017) 2 (‘[a]bout 30% of takedown requests were potentially problematic. In one in twenty-five cases, targeted content did not match the identified infringed work, suggesting that 4.5 million requests in the entire six-month data set were fundamentally flawed. Another 19% of the requests raised questions about whether they had sufficiently identified the allegedly infringed work or the allegedly infringing material’). 34  cf. Graeme B. Dinwoodie, ‘Lewis & Clark Law School Ninth Distinguished IP Lecture: Developing Defenses in Trademark Law’ (2009) 13(1) Lewis and Clark L. Rev. 99, 152 (‘[h]owever, as the scope of trademark protection expands and the metes and bounds of protection become more uncertain, we cannot rely exclusively on creative interpretation of the prima facie cause of action to establish limits.
Trademark law must more consciously develop defenses that reflect the competing values at stake in trademark disputes’). 35  cf. Wolfgang Sakulin, Trademark Protection and Freedom of Expression—An Inquiry into the Conflict between Trademark Rights and Freedom of Expression under European Law (Kluwer Law Int’l 2010); Lisa Ramsey and Jens Schovsbo, ‘Mechanisms for Limiting Trade Mark Rights to Further Competition and Free Speech’ (2013) 44 IIC 671; Martin Senftleben, ‘Free Signs and Free Use—How to Offer Room for Freedom of Expression within the Trademark System’ in Christophe Geiger (ed.), Research Handbook on Human Rights and Intellectual Property (Edward Elgar 2015) 354; Ilanah Simon Fhima, ‘Trade Marks and Free Speech’ (2013) 44 IIC 293; Robert Burrell and Dev Gangjee, ‘Trade Marks and Freedom of Expression: A Call for Caution’ (2010) 41 IIC 544; Christophe Geiger, ‘Marques et droits fondamentaux’ in Christophe Geiger and Joanna Schmidt-Szalewski (eds), Les défis du droit des marques au 21e siècle/Challenges for Trademark Law in the 21st Century (Litec 2010) 163; Mohammad Nasser, ‘Trade Marks and Freedom of Expression’ (2009) 40 IIC 188; Lionel Bently, ‘From Communication to Thing: Historical Aspects of the Conceptualisation of Trade Marks as Property’ in Dinwoodie and Janis (n. 31); William McGeveran, ‘Four Free Speech Goals for Trademark Law’ (2008) 18 Fordham Intellectual Property, Media and Entertainment L.J. 1205; Christophe Geiger, ‘Trade Marks and Freedom of Expression—the Proportionality of Criticism’ (2007) 38 IIC 317; Lisa Ramsey, ‘Free Speech and International Obligations to Protect Trademarks’ (2010) 35 Yale J. of Int’l L. 405; Katja Weckström, ‘The Lawfulness of Criticizing Big


protection36 must therefore be factored into the equation when devising an intermediary liability regime in the field of trade mark law.

2.1 Commercial Freedom of Expression

The protection of trade mark proprietors must be reconciled with the commercial freedom of expression37 and the freedom of competition of other market participants.38 For instance, the trade mark owner is not entitled to prevent other traders from using the protected trade mark if this is necessary to properly inform the public about product characteristics or the intended purpose of a product or service.39 For this reason, a provider of car repair and maintenance services specializing in BMW cars enjoys the freedom of using, in good faith, the indication ‘specialist in BMW’ in advertising. As this indication is necessary to inform consumers about the nature of the services, the use does not amount to an infringement of the trade mark rights of the car producer.40 Similar conclusions can be drawn on the basis of the economic rationale to reduce consumers’ search costs. In circumstances where freedom to use a trade mark better serves the purpose of informing consumers about the goods and services available on the market, it makes sense to permit use of the trade mark for this purpose. Against this background, use of a trade mark in comparative advertising satisfying all requirements of the EU Misleading and Comparative Advertising Directive 2006/114/EC is exempted from the exclusive rights of trade mark owners.41 Finally, room for legitimate unauthorized use of a trade mark follows from the fact that trade mark rights are exhausted after the first sale of genuine products by or with the consent of the trade

Business: Comparing Approaches to the Balancing of Societal Interests Behind Trademark Protection’ (2007) 11 Lewis & Clark L. Rev. 671; Pratheepan Gulasekaram, ‘Policing the Border Between Trademarks and Free Speech: Protecting Unauthorized Trademark Use in Expressive Works’ (2005) 80 Washington L. Rev. 887; Pierre Leval, ‘Trademark: Champion of Free Speech’ (2004) 27 Colum. J. of L.
& Arts 187; Kerry Timbers and Julia Huston, ‘The “Artistic Relevance Test” Just Became Relevant: the Increasing Strength of the First Amendment as a Defense to Trademark Infringement and Dilution’ (2003) 93 Trademark Reporter 1278; Rochelle Dreyfuss, ‘Reconciling Trademark Rights and Expressive Values: How to Stop Worrying and Learn to Love Ambiguity’ in Dinwoodie and Janis (n. 31) 261; Keith Aoki, ‘How the World Dreams Itself to be American: Reflections on the Relationship Between the Expanding Scope of Trademark Protection and Free Speech Norms’ (1997) Loyola of Los Angeles Entertainment L.J. 523. 36  See Directive 2015/2436/EU (n. 18) Art. 14(1); Regulation 2017/1001/EU (n. 18) Art. 12(1). 37  As to the recognition of the fundamental freedom of expression with regard to commercial speech, see e.g. Autronic v Switzerland App. no. 12726/87 (ECtHR, 22 May 1990) para. 47. 38  cf. Senftleben and others (n. 8) 337. 39  See Directive 2015/2436/EU (n. 18) Art. 14(1)(c); Regulation 2017/1001/EU (n. 18) Art. 12(1)(c). 40  In this sense C-63/97 Bayerische Motorenwerke AG (BMW) and BMW Nederland BV v Ronald Karel Deenik [1999] ECLI:EU:C:1999:82, para. 59. Cf. Kur (n. 27) 1; Po Jen Yap, ‘Essential Function of a Trade Mark: From BMW to O2’ (2009) 31 EIPR 81. 41  See Directive 2015/2436/EU (n. 18) Arts 10(3)(f) and 14(1)(c); Regulation 2017/1001/EU (n. 18) Arts 9(3)(f) and 12(1)(c). Cf. C-487/07 (n. 17) paras 54 and 65; C-533/06 O2 Holdings Ltd and O2 (UK) Ltd v Hutchison 3G UK Ltd [2008] ECLI:EU:C:2008:339, para. 45. Cf. Kur and Senftleben (n. 11) paras 6.39–6.58; Yap (n. 40) 81.


mark owner.42 As trade mark rights can no longer be invoked with regard to those goods, resellers not only enjoy the freedom of reselling; they are also free to use the trade mark in advertising that brings the further commercialization of the goods to the attention of consumers.43 In the context of online marketplaces, these user freedoms play an important role. Trade marks may be used legitimately in the context of online offers to indicate the characteristics of goods, clarify their intended purpose, or announce a resale of genuine goods after the exhaustion of trade mark rights. Drawing attention to the underlying human rights dimension, Advocate General Jääskinen urged the CJEU in his opinion in L’Oréal v eBay not to forget: that the listings uploaded by users to eBay’s marketplace are communications protected by the fundamental rights of freedom of expression and information provided by Article 11 of [the] Charter of Fundamental Rights of the EU and Article 10 of the European Convention on Human Rights.44

In the context of keyword advertising, issues of commercial freedom of speech can arise with regard to the advertising made by resellers who sell genuine goods, in respect of which trade mark rights have been exhausted after the first sale.45 Keyword advertising can also be used legitimately for the purpose of comparative advertising.46 In Interflora v Marks & Spencer, the CJEU created additional room for keyword advertising that, without comparing goods or services, seeks to inform consumers about an alternative in the marketplace. The Court explained that: where the advertisement displayed on the internet on the basis of a keyword corresponding to a trade mark with a reputation puts forward—without offering a mere imitation of the goods or services of the proprietor of that trade mark, without causing dilution or tarnishment and without, moreover, adversely affecting the functions of the trade mark concerned—an alternative to the goods or services of the proprietor of the trade mark with a reputation, it must be concluded that such use falls, as a rule, within the ambit of fair competition in the sector for the goods or services concerned and is thus not without ‘due cause’ for the purposes of Article 5(2).47

42  See Directive 2015/2436/EU (n. 18) Art. 15; Regulation 2017/1001/EU (n. 18) Art. 13. 43  In this sense, e.g. C-337/95 Parfums Christian Dior SA and Parfums Christian Dior BV v Evora BV [1997] ECLI:EU:C:1997:517, para. 38 (‘when trade-marked goods have been put on the Community market by the proprietor of the trade mark or with his consent, a reseller, besides being free to resell those goods, is also free to make use of the trade mark in order to bring to the public’s attention the further commercialization of those goods’). 44 C-324/09 L’Oréal SA and Others v eBay International AG and Others [2010] ECLI:EU:C:2010:757, Opinion of AG Jääskinen, para. 49. 45 C-558/08 Portakabin Ltd and Portakabin BV v Primakabin BV [2010] ECLI:EU:C:2010:416, para. 78.
46  See C‑533/06 (n. 41) para. 45. 47 C-323/09 Interflora Inc. and Interflora British Unit v Marks & Spencer plc and Flowers Direct Online Ltd [2011] ECLI:EU:C:2011:604, para. 91. For a more detailed discussion of this point, see Martin Senftleben, ‘Adapting EU Trademark Law to New Technologies—Back to Basics?’ in Christophe Geiger (ed.), Constructing European Intellectual Property: Achievements and New Perspectives (Edward Elgar 2012).


The need to reconcile the protection of trade marks with commercial freedom of expression and freedom of competition enjoyed by other market participants, therefore, makes inroads into the exclusive rights of trade mark proprietors. In particular, they are rendered incapable of objecting to the unauthorized use of their trade marks where this is necessary to properly inform consumers. Advertising by resellers, comparative advertising, and advertising putting forward an alternative product or service serve the objective of informing consumers about competing offers and enhance the efficient functioning of the market.

2.2 Artistic and Political Freedom of Expression

Besides the importance attached to competing economic interests, there is growing awareness in the field of trade mark law that protection must be balanced against social and cultural values. In particular, the gradual expansion of trade mark protection in the area of well-known marks led to the recognition of a need to adopt appropriate limitations safeguarding freedom of expression and information in social and cultural contexts.48 In 1990, Professor Rochelle Dreyfuss had already pointed out that trade marks had become focal points of communication: [a]pparently, the graduates of the American educational system are no longer acquainted with the classic literature that in the past formed the basis for rhetorical and literary allusion. Betty Crocker has replaced Hestia in the public consciousness. Accordingly, it is not surprising that speakers and writers are drawn to those devices that are, by dint of heavy advertising, doubtlessly universally familiar.49

Against this background, the importance of counterbalances for use that is socially or culturally valuable must not be underestimated with regard to use of trade marks on online platforms. Considering today’s digital environment, there can be little doubt that trade marks are more than mere identifiers of commercial source. As trade marks are often symbols that raise complex associations in respect of lifestyles, behaviour, and attitudes, they allow internet users to sum up complex social or cultural norms and developments by referring to a simple word, mark, or logo. It has also been pointed out in the debate on the social and cultural dimension of trade marks that the richness of associations and meanings attached to a trade mark is the result of a joint effort of trade mark owners and consumers. It is the consuming public who frequently imbue trade marks with connotations distinct from and sometimes unrelated to the advertising messages conveyed by the trade mark owner.50 As a result, trade marks become

48  See the references to relevant literature in n. 35. 49  Rochelle Dreyfuss, ‘Expressive Genericity: Trademarks as Language in the Pepsi Generation’ (1990) 65 Notre Dame L. Rev. 397, 424. 50  cf. Deborah Gerhardt, ‘Consumer Investment in Trademarks’ (2010) 88 North Carolina L. Rev. 101; Jessica Litman, ‘Breakfast with Batman: The Public Interest in the Advertising Age’ (1999) 108 Yale L.J. 1,


metaphors with complex meanings that are of particular importance for social and cultural discourse. Recognizing this public interest in the use of trade marks for social and cultural ends, lawmakers and policymakers have been alert to the need for corresponding limitations on trade mark rights, particularly in the area of the protection of well-known marks against dilution. Trade mark legislation in the EU creates breathing space for free expression by providing for a defence of referential use51 and the flexible defence of ‘due cause’ in the area of the protection of marks with a reputation against dilution.52 So far, the CJEU has not had the opportunity to decide cases involving the invocation of these defences with regard to political or artistic speech. National decisions in civil law jurisdictions, however, demonstrate the potential of the due cause defence to serve as a safeguard for political and artistic freedom of expression. The case Lila Postkarte of the German Federal Court of Justice, for example, concerned the marketing of postcards that alluded ironically to trade marks and advertising campaigns of the chocolate producer Milka.53 On a purple background corresponding to Milka’s abstract colour mark, the postcard sought to ridicule the nature idyll with cows and mountains that is evoked in Milka advertising.54 It showed the following poem attributed to ‘Rainer Maria Milka’: Über allen Wipfeln ist Ruh/ irgendwo blökt eine Kuh/ Muh!55

Assessing this ironic play, the German Federal Court of Justice held that for the use of Milka trade marks to constitute relevant trade mark use in the sense of Article 10(2)(c) of the Trade Mark Directive, it was sufficient that the postcard called to mind the well-known Milka signs.56 Even though decorative, the use in question therefore gave rise to the question of trade mark infringement.57 Accordingly, the

15–16; Steven Wilf, ‘Who Authors Trademarks?’ (1999) 17 Cardozo Arts and Entertainment L.J. 1; Alex Kozinski, ‘Trademarks Unplugged’ (1993) 68 New York U. L. Rev. 960. However, see also the critical comments on the limitation of trade mark rights made by Megan Richardson, ‘Trade Marks and Language’ (2004) 26 Sydney L. Rev. 193. 51  See Directive 2015/2436/EU (n. 18) Art. 14(1)(c); Regulation 2017/1001/EU (n. 18) Art. 12(1)(c). 52  See Directive 2015/2436/EU (n. 18) Art. 10(2)(c); Regulation 2017/1001/EU (n. 18) Art. 9(2)(c). For a more detailed discussion of the role of these defences in cases involving artistic and political expression, see Kur and Senftleben (n. 11) paras 5.265–5.268 and 6.59–6.68; Jens Schovsbo, ‘ “Mark My Words”—Trademarks and Fundamental Rights in the EU’ (2018) 8 UC Irvine L. Rev. 555, 575–80; Sakulin (n. 35) 85–7; Geiger, ‘Proportionality of Criticism’ (n. 35) 317. 53  See Bundesgerichtshof [Supreme Court] (BGH) Lila Postkarte [3 February 2005] I ZR 159/02 (Ger.) in [2005] GRUR 583. 54  ibid. 583. 55  ibid. (‘It is calm above the tree tops, somewhere a cow is bellowing. Moo!’). The attribution to ‘Rainer Maria Milka’ is an allusion to the famous German writer Rainer Maria Rilke. 56  ibid. 584. 57  ibid. 584–5.


German Federal Court of Justice embarked on a scrutiny of the trade mark parody in the light of the infringement criteria of detriment to distinctive character or repute, and the taking of unfair advantage.58 Weighing Milka’s concerns about a disparagement of the trade marks against the fundamental guarantee of the freedom of art, the court finally concluded that the freedom of art had to prevail in the light of the ironic statement made with the postcard.59 The use of Milka trade marks was thus found to have taken place with ‘due cause’ in the sense of Article 10(2)(c) of the Trade Mark Directive.60 Other national decisions show that the due cause defence may also play an important role in safeguarding political freedom of speech.61 In the light of the fundamental guarantee of freedom of expression, the German Federal Court of Justice permitted the use of the expression ‘gene-milk’ in connection with products bearing the trade marks ‘Müller’, ‘Weihenstephan’, ‘Sachsenmilch’, and ‘Loose’ to make consumers aware of the risks of genetically modified milk in milk products.62 The French Supreme Court allowed the use of the trade mark ESSO for the purposes of an environmental campaign which Greenpeace had organized to criticize environmental policies of the company on the basis of the trade mark caricature ‘E$$O’.63 National case law also shows the limits of the due cause defence in parody cases with a commercial background.
In the decision Styriagra, the Austrian Supreme Court prohibited the marketing of pumpkin seeds with blue frosting under the trade mark STYRIAGRA.64 As the blue frosting and the trade mark called to mind Pfizer’s well-known VIAGRA trade mark and the blue Viagra pills, the court was convinced that the defendant sought to profit unfairly from the strong reputation of Pfizer’s trade mark as a vehicle to draw the attention of consumers to his pumpkin product.65 In this context, the court rejected the argument that the use had taken place with due cause.66 As the defendant conducted his business in the Austrian state of Styria, the trade mark STYRIAGRA could be understood as an ironic blend of the name of the Austrian state where the pumpkin seeds came from, and the VIAGRA trade mark of the plaintiff.67 To further support his parody argument, the defendant had also pointed out that his ironic play with Pfizer’s trade mark and pill colour created a sharp contrast between chemical and natural means of treating erectile dysfunction.68 The Austrian Supreme Court, however, remained unimpressed.69 While conceding that the due cause defence could be invoked to justify an artistic trade mark parody, it held that in these specific circumstances, the intention to take unfair advantage of the magnetism of Pfizer’s highly

58 ibid. 59 ibid. 60  ibid. 585. 61  For an overview of national decisions, see Kur and Senftleben (n. 11) paras 5.267 and 6.59–6.70. 62 BGH Gen-Milch [11 March 2008] VI ZR 7/07 (Ger.) in [2008] Neue Juristische Wochenschrift 2110. 63  See Cour de Cassation [Supreme Court] Greenpeace v Esso [8 April 2008] case 06-10961 (Fra.). See also Cour de Cassation Greenpeace v Areva [8 April 2008] case 07-11251 (Fra.). 64  Oberster Gerichtshof [Supreme Court] (OGH) Styriagra [22 September 2009] case 17Ob15/09v, para. 3.4 (Aust.). 65  ibid. para. 2. 66 ibid. 67 ibid. 68 ibid. 69  ibid. para. 3.4.


distinctive trade mark was predominant.70 Pfizer’s interest in trade mark protection thus prevailed over the defendant’s free speech interests.71 These examples illustrate how national courts in the EU employ the due cause defence to strike a proper balance between trade mark protection and competing social and cultural interests. With regard to online platforms, in particular social media, this case law indicates that the due cause defence can play a crucial role when it comes to the use of a trade mark for the purposes of criticism and comment. The defence is capable of preserving room for artistic and political freedom of expression. However, cases of trade mark infringement arise where trade marks are used as a fake username or account name to mislead consumers as to the origin of the information communicated under this name, and where this information does not serve the legitimate purpose of criticism and comment.72

2.3 Context-Specific Limitations

In sum, the analysis of measures taken to safeguard competing fundamental freedoms in trade mark law yields the insight that the exclusive rights of trade mark owners are not only context-specific, as pointed out in the previous section, but also limited in several respects. Trade mark law seeks to strike a proper balance between trade mark protection and competing economic, social, and cultural values. As a result, forms of use that are considered particularly relevant in this context—use describing product characteristics or the intended purpose of goods or services, comparative advertising and advertising by resellers, use criticizing or commenting on trade-marked products, and use for the purpose of parody—are exempted from the control of the trade mark owner. The latter cannot invoke trade mark rights against unauthorized use serving these purposes as long as the use keeps within the limits of the respective limitations laid down in the law. Hence, online enforcement systems relating to trade marks would not only have to reflect the context-specific scope of trade mark rights but also the limits set to trade mark rights to satisfy competing economic, social, and cultural interests. Otherwise, trade mark owners would be granted broader control over the use of their trade marks in the digital environment than justified in the light of the underlying exclusive rights. The fundamental freedoms justifying the exemption of the outlined forms of use, in particular freedom of expression and freedom of competition, would be neglected.

70 ibid. 71 ibid. 72  cf. the examples given in WIPO doc. SCT/24/4, dated 31 August 2010, Annex I, 14–18 , and the discussion of these issues by Lisa Ramsey, ‘Brandjacking on Social Networks: Trademark Infringement by Impersonation of Markholders’ (2010) 58 Buffalo L. Rev. 851, 872–919 (concluding at 919 that ‘[i]mpersonating a markholder may be an effective way to attract attention to expression on a social networking site, but this use of the mark may not be protected speech if it causes ordinary consumers to be confused about the source of the expression’).


Against this background, it becomes apparent that the risk of overbroad content blocking that has been identified in the field of copyright law73 arises—arguably to an even greater extent74—with regard to algorithmic enforcement of trade mark rights. As current content identification and filtering systems are not sophisticated enough to lend sufficient weight to competing user freedoms, the extension of filtering obligations, as adopted in the DSM Directive, to the field of trade mark law would curtail breathing space for valuable forms of use and most probably encroach on underlying fundamental freedoms. The risk of excessive filtering of privileged trade mark-related communications must not be played down in the debate on intermediary liability in trade mark cases.

3.  Developments in Case Law

The discussion of the scope of trade mark rights in the preceding sections yielded the insight that in the debate on liability and filtering standards, the default position can hardly be the assumption that uploads including a protected trade mark justify the employment of algorithmic filtering tools because the inclusion of the protected sign clearly points in the direction of infringement.75 By contrast, further conditions must be fulfilled, in particular use in the course of trade, use in relation to goods and services, and use that causes confusion, dilution, or unfair free-riding. An intermediary liability regime relating to trade marks must be context-specific enough to offer room for the consideration of inherent limits and statutory limitations of trade mark rights.

73 See the study conducted by Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act’ (2006) 22 Santa Clara Computer and High Tech. L.J. 621 (showing, among other things, that 30 per cent of DMCA takedown notices were legally dubious, and that 57 per cent of DMCA notices were filed against competitors; while the DMCA offers the opportunity to file counter-notices and rebut unjustified takedown requests, Urban and Quilter find that instances in which this mechanism is used are relatively rare). However, cf. also the critical comments on the methodology used for the study and a potential self-selection bias arising from the way in which the analysed notices were collected by Frederick Mostert and Martin Schwimmer, ‘Notice and Takedown for Trademarks’ (2011) 101 Trademark Reporter 249, 259–60. 74  See Graeme Dinwoodie, ‘Secondary Liability for Online Trademark Infringement: The International Landscape’ (2014) 37 Columbia J. of L. & the Arts 463, 498–9, pointing out, with regard to the debate on the appropriateness of horizontal liability standards covering both copyright and trade mark rights, that a horizontal legal basis for granting injunctions need not lead to the same filtering standards. More concretely, he sees copyright as a potential field of application for ‘file-matching technology’, whereas trade mark law might require ‘human-intensive assessments’ (at 499). This distinction seems to confirm that, due to the context-specific nature of trade mark rights, algorithmic filter systems are even more problematic in the field of trade mark law than they are in the area of copyright. 75  For a discussion of the fundamental change of the default position that follows from the adoption of filtering obligations in copyright law, see Niva Elkin-Koren, ‘Fair Use by Design’ (2017) 64 UCLA L. Rev. 1082, 1093 (‘if copyrighted materials were once available unless proven to be infringing, today materials that are detected by algorithms are removed from public circulation unless explicitly authorized by the right holder’).


396   Martin Senftleben

3.1  Guidelines at EU Level

Addressing the traditional liability privilege for online platforms—the safe harbour for hosting following from Article 14 of the e-Commerce Directive76—the CJEU has taken a cautious approach that is capable of satisfying this general requirement. In Google France v Louis Vuitton, the Court qualified the advertising messages displayed by the Google keyword advertising service as third party content provided by the advertiser and hosted by Google. These advertising messages appear once the search terms selected by the advertiser are entered by the internet user. If an advertiser selects a competitor’s trade mark as a keyword, the question of trade mark infringement and intermediary liability arises. As to the applicability of the safe harbour for hosting in these circumstances, the CJEU pointed out that it was necessary to examine whether the role played by Google as an intermediary was neutral, in the sense that its conduct was ‘merely technical, automatic and passive, pointing to a lack of knowledge or control of the data which it stores’.77 The financial interest which Google had in its advertising service was not decisive in this context.
An active involvement in the process of selecting keywords, by contrast, would be relevant to the assessment of eligibility for the safe harbour.78 In the subsequent case L’Oréal v eBay, the CJEU arrived at a more refined test by establishing the standard of the ‘diligent economic operator’. The Court explained that it was sufficient:

in order for the provider of an information society service to be denied entitlement to the exemption [for hosting], for it to have been aware of facts or circumstances on the basis of which a diligent economic operator should have identified the illegality in question.79

76  Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1. Cf. Stefan Kulk, ‘Internet Intermediaries and Copyright Law: Towards a Future-Proof EU Legal Framework’, PhD thesis, Utrecht University Law Department (2018); Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016); Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But Not Liable? (CUP 2017); Thomas Hoeren and Silviya Yankova, ‘The Liability of Internet Intermediaries: The German Perspective’ (2012) 43 IIC 501; Rita Matulionyte and Sylvie Nerisson, ‘The French Route to an ISP Safe Harbour, Compared to German and US Ways’ (2011) 42 IIC 55; Miguel Peguera, ‘The DMCA Safe Harbour and Their European Counterparts: A Comparative Analysis of Some Common Problems’ (2009) 32 Colum. J. of L. & Arts 481; Christiaan Alberdingk Thijm, ‘Wat is de zorgplicht van Hyves, XS4All en Marktplaats?’ [2008] Ars Aequi 573; Matthias Leistner, ‘Von “Grundig-Reporter(n) zu Paperboy(s)” Entwicklungsperspektiven der Verantwortlichkeit im Urheberrecht’ [2006] GRUR 801. 77  C-236–238/08 (n. 7) para. 114.
The Court also held that the search engine offering a keyword advertising service did not use affected trade marks in the sense of trade mark law. Direct liability arising from keyword advertising services thus seems to be excluded in the EU. Ibid. para. 57. As to the debate on potential direct liability and primary infringement, see Graeme Dinwoodie and Mark Janis, ‘Lessons From the Trademark Use Debate’ (2007) 92 Iowa L. Rev. 1703, 1717 (pointing out in the light of developments in the United States that ‘the sale of keyword-triggered advertising and the manner of presentation of search results potentially create independent trademark-related harm, thus making it an appropriate subject of direct liability’). 78  C-236–238/08 (n. 7) paras 116–18. 79 C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474, para. 120.


While stressing that this new diligence test should not be misunderstood to impose a general monitoring obligation on platform providers, the Court indicated that, under this standard, the platform provider’s own investigations would have to be taken into account. Moreover, a diligent economic operator could be expected to consider even imprecise or inadequately substantiated notifications received in the framework of its notice-and-takedown system. Such notifications represented a factor of which the national court had to take account when determining whether the intermediary was actually aware of facts or circumstances on the basis of which a diligent economic operator should have identified the illegality.80 CJEU jurisprudence thus reflects a shift from a general exemption from investigations to an obligation to consider even imprecise notifications. Platform providers must set up a knowledge management system that reaches a certain level of sophistication.81 However, these guidelines of the CJEU must not be misunderstood as an indication that, in the case of trade marks, the mere appearance of a protected sign on an online platform could be sufficient to apply automated, algorithmic filter systems. This follows quite clearly from the rules which the Court established in the field of injunctions that can be imposed on an intermediary despite eligibility for the safe harbour for hosting.82 In the context of measures against trade mark infringement on online marketplaces, the Court clarified in L’Oréal v eBay that it was possible to order an online service provider, such as eBay, to take measures that contribute ‘not only to bringing to an end infringements committed through that marketplace, but also to preventing further infringements’.83 Hence, the Court opened the door to the introduction of filtering obligations.
However, it pointed out that this should not culminate in a general and permanent ban on the use of goods bearing a specific trade mark.84 Rather, measures only had to be taken against repeat infringers of the same trade mark. The CJEU explained that:

if the operator of the online marketplace does not decide, on its own initiative, to suspend the [infringer] to prevent further infringements of that kind by the same seller in respect of the same trade marks, it may be ordered, by means of an injunction, to do so.85

Hence, the CJEU did not have a system in mind that generally prevents uploads of offers or other content comprising a protected trade mark. First, the filtering system following from L’Oréal v eBay only targets repeat infringers. In the absence of a first illegitimate act that identifies a particular platform user as an infringer, no filtering is required. Secondly, the filter system only concerns further infringements in respect of the same trade mark. It is based on a ‘double identity’ requirement: the same infringer and

80  ibid. para. 122. 81 For discussion, see Martin Senftleben, ‘Breathing Space for Cloud-Based Business Models—Exploring the Matrix of Copyright Limitations, Safe Harbours and Injunctions’ (2013) 4 JIPITEC 87, 94–5. 82  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/16, Art. 11. Cf. C-314/12 UPC Telekabel v Constantin Film and Wega [2014] ECLI:EU:C:2014:192, paras 52–63; Dinwoodie (n. 74) 496–7. 83  C-324/09 (n. 79) para. 131. 84  ibid. para. 140. 85  ibid. para. 141.


the same trade mark. This approach differs markedly from the approach taken in Article 17(4)(b) of the DSM Directive, which requires ‘best efforts to ensure the unavailability of specific works’ and, therefore, leads to filtering measures that block content whenever the content identification algorithm detects traces of a protected work.86 In contrast to this new legal framework for algorithmic copyright enforcement, the EU framework for intermediary liability in trade mark cases is not based on cooperation between the online platform industry and the branding industry that encourages trade mark proprietors to send information about their entire trade mark portfolio and obliges online platforms to block any upload containing a trade mark in respect of which ‘rightholders have provided the service providers with the relevant and necessary information’.

3.2  Application in Civil Law Jurisdictions

At the national level, however, the line between targeted filtering under the double identity rule following from L’Oréal v eBay and more general filtering measures may become blurred. In Germany, for instance, the rules on secondary infringement in the strict sense of aiding and abetting are regularly inapplicable unless an online platform has actual knowledge of individual persons using its services to infringe intellectual property rights. This starting point, however, did not prevent the German Federal Court of Justice from developing a specific liability regime by analogy with a general provision in the German Civil Code which entitles proprietors to request that interferences having detrimental effects on their property be removed and enjoined in the future.87 In the context of this so-called ‘Störerhaftung’, it is not necessary to establish guilt or negligence. It is sufficient to establish an adequate causal link between an (ongoing) infringement of an intellectual property right and acts or omissions of the defendant: in the case of online platforms, acts or omissions of the platform provider and a platform design that offers room for interferences with intellectual property rights. Moreover, the platform provider must be able—factually and legally—to remove the cause of the interference. To avoid unlimited obligations, ‘Störerhaftung’ also requires a specific duty of care vis-à-vis the prevention of continued infringement.88 In practice, this final element allows the courts to draw the conceptual contours of the liability regime more precisely. Intermediaries can escape the verdict of liability if it cannot be demonstrated that they have neglected monitoring obligations which the courts tailor to the individual characteristics of the online platform concerned.89

86  Senftleben (n. 2) 5–10. 87  Civil Code (Bürgerliches Gesetzbuch) § 1004 (Ger.).
See BGH Internet-Versteigerung [11 March 2004] I ZR 304/01 36 (Ger.) in (2005) 36 IIC 573; BGH Internet-Versteigerung II [19 April 2007] case no. I ZR 35/04 (Ger.); BGH Internet-Versteigerung III [30 April 2008] I ZR 73/05 (Ger.). Cf. Joachim Bornkamm, ‘E-Commerce Directive vs IP Enforcement—Balance Achieved?’ [2007] GRUR Int’l Teil 642, 643. 88  This specific duty of care must not be confused with negligence of duties of care that would result in an independent tort leading to direct liability of the intermediary. 89  cf. Kur and Senftleben (n. 11) paras 13.133–13.134. As the monitoring duty following from the concept of Störerhaftung is regarded as a specific duty, it is believed to be compatible with the prohibition of general monitoring obligations in Art. 15 of the e-Commerce Directive.


The German Federal Court of Justice developed the core principles of liability for trade mark infringement under the standard of Störerhaftung in cases concerning sales of infringing articles on auction sites.90 In particular, the court confirmed that, in the absence of actual and concrete knowledge of infringing acts of third parties, it was not possible to claim damages. Once the platform provider had been made aware of an infringement, however, liability as ‘Störer’ allowed the grant of injunctive relief and the imposition of preventive measures.91 With regard to the scope of preventive measures—de facto filtering obligations—the open-ended concept of Störer offers German courts considerable room to develop specific duties which they deem adequate and reasonable in the light of the individual circumstances of the tort at issue. As Article 15 of the e-Commerce Directive prohibits general monitoring obligations, this constitutes the ceiling of filtering orders that may result from Störerhaftung.92 Nonetheless, the specific monitoring duties imposed on platform providers to prevent further infringements can be quite substantial—more substantial than the double-identity filtering (same infringer; same trade mark) which the CJEU considered appropriate in L’Oréal v eBay. Cases that culminated in substantially broader filter obligations concerned in particular copyright infringement. In Rapidshare, the German Federal Court of Justice dealt with a cloud storage service that allowed users not only to upload content to their individual locker space but also to share the content via download links that could be added to link collections.93 Given the characteristics of the service, the court found that Rapidshare enhanced the risk of infringement by offering premium accounts with very high download speed that made the accounts particularly attractive for illegal content sharing.
The court saw the premium offer as an indication that Rapidshare sought to profit from illegal downloads. As the service also offered anonymous user accounts, the court held the platform providers liable as Störer and imposed far-reaching control obligations. Upon receipt of a notice of infringing content, Rapidshare had to remove the individual content item at issue. Moreover, it had to ensure that the affected literary and artistic work was not infringed again in the future. Hence, the German Federal Court of Justice departed from the double-identity rule following from L’Oréal v eBay. Instead of limiting filter obligations to cases where the same user infringed the same work, the court generalized the monitoring and filtering duty by establishing an obligation to prevent any unauthorized reappearance of the same work—irrespective of the user involved.94 Furthermore, it held that Rapidshare had to actively search for names and descriptions of download links that contained the title of works in respect of which it had

90 See Internet-Versteigerung I-III (n. 87). Cf. Annette Kur, ‘Secondary Liability for Trademark Infringement on the Internet: The Situation in Germany and Throughout the EU’ (2014) 37 Colum. J. of L. & Arts 525, 532–9. 91  cf. Kur and Senftleben (n. 11) para. 13.136. 92  See BGH File-Hosting-Service [15 August 2013] I ZR 80/12. Cf. Bornkamm (n. 87) 643; Dinwoodie (n. 74) 492. 93 See File-Hosting-Service (n. 92). Cf. Kur (n. 90) 538–9. 94 See File-Hosting-Service (n. 92).


received notifications of infringement. This obligation included the use of web crawlers and text filters to detect links to illegal content on Google, Facebook, and Twitter.95 In Germany, the flexible concept of Störerhaftung thus served as a vehicle to go beyond the concept of double-identity filtering which the CJEU accepted in L’Oréal v eBay. This development occurred in decisions concerning the infringement of literary and artistic works. Given the universal applicability of the underlying provision in the German Civil Code, however, it cannot be ruled out that a similar broadening of filter obligations may take place with regard to trade marks. In Internet Auction I, for instance, the German Federal Court of Justice recognized in respect of trade mark infringement that it would be too heavy a burden for the provider of an auction site to check each individual offer prior to its publication on the website.96 The situation was different, however, once the platform provider had been notified of clear trade mark infringements, such as infringing offers of Rolex watches. In such a case, the provider had to block the concrete offer without delay and apply preventative measures to ensure—as far as possible—that no further trade mark infringements of the same kind occurred. More concretely, the provider had to see the notification of clear cases of trade mark infringement as a reason to subject offers for Rolex watches to specific (automated) checking which could be based, for instance, on typical features of suspect offers, such as a low price or a reference to Rolex imitations.97 Under the concept of Störerhaftung, targeted filtering in trade mark cases had thus already been deemed possible in the past.
It is an open question whether more extensive filter obligations, such as the measures which the German Federal Court of Justice considered necessary in Rapidshare, comply with harmonized EU law—in particular with the explicit prohibition of general monitoring duties in Article 15 of the e-Commerce Directive.98 The questions which the German Federal Court of Justice referred to the CJEU for a preliminary ruling in YouTube may clarify this compliance question.99 Whatever the outcome of the request for CJEU guidance, the previous analysis of context-specific inherent limits and statutory limitations of trade mark rights is a call for caution. Even if the concept of Störerhaftung survives the ruling of the CJEU in YouTube, it is not advisable to use this national concept as a backdoor to bring filter obligations in the field of trade mark law close to the copyright standard that follows from the adoption of the DSM Directive.100 As filter technology is not sophisticated enough to take account of the limited scope of trade mark rights, this step would encourage excessive algorithmic trade mark enforcement that disregards the context-specific conceptual contours of trade mark rights.

95 ibid. 96 See Internet-Versteigerung I-III (n. 87) para. 19. 97  ibid. para. 20. Cf. Kur (n. 90) 536; Matthias Leistner, ‘Structural Aspects of Secondary (Provider) Liability in Europe’ (2014) 9 JIPLP 75, 78–82. 98  See also the critical comments by Kur (n. 90) 539–40. 99 BGH YouTube [13 September 2018] I ZR 140/15 (Ger.) in [2018] GRUR 1132, 1132. As to the question of compliance of the concept of Störerhaftung with EU law, see the case comment by Ansgar Ohly, ibid. 1139–41. 100  cf. Senftleben (n. 2) 5–10.


An alternative, more cautious approach to filtering can be found in the Netherlands, where injunctions including a filtering obligation to ensure that infringing content—after removal—does not reappear on an online platform have traditionally101 only been granted in cases where the platform provider played an active role in the sense that he dealt with content on the platform in a way that implied knowledge of infringing material, collaborated with third parties offering illegal content, or added content himself. In these cases of ‘systematically and structurally facilitating’ the uploading of infringing content, Dutch courts are prepared to impose not only an obligation to remove the content at issue, but also an obligation to prevent the uploading of the same content by the same infringer or other platform users.102 Hence, an obligation to take measures against repeat infringers is conceivable with regard to specific content, if the platform provider plays an active role and has knowledge of the infringing content.103 In cases where the intermediary does not systematically and structurally facilitate infringing activities, Dutch courts are hesitant to impose far-reaching filter obligations. In particular, they embark on a thorough scrutiny of the scope of the injunction in the light of the principle of proportionality. In Stokke v Marktplaats, for instance, the Court of Appeals of Leeuwarden attached particular importance to the fact that the majority of offers on the online sales portal Marktplaats stemmed from private parties and had a less corrosive effect on Stokke’s business activities than offers posted by commercial sellers. The court also attached importance to the fact that Stokke—as copyright and trade mark owner—monitored the offers on the auction site anyway and had declared during the proceedings that it would continue this practice in the future. Against this background,
Against this background, 101  It remains to be seen in which way the decisions in C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300, para. 50 and C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456, para. 36, impact national Dutch practice. As these decisions broaden the concept of primary liability, they may lead to broader injunctions on the basis of a finding of primary liability. Cf. Ansgar Ohly, ‘The Broad Concept of “Communication to the Public” in Recent CJEU Judgments and the Liability of Intermediaries: Primary, Secondary or Unitary Liability?’ (2018) 67 GRUR Int’l Teil 517; Eleonora Rosati, ‘The CJEU Pirate Bay Judgment and its Impact on the Liability of Online Platforms’ (2017) 39 EIPR 737. 102 Court of Appeals of The Hague FTD v Eyeworks [15 November 2010] LJN BO3980, ECLI:NL:GHSGR:2010:BO3980, para. 6.9 (Neth.); Court of Appeals of Den Bos C More Entertainment v MyP2P [12 January 2010] LJN BM9205, ECLI:NL:GHSHE:2010:BM9205, para. 5 (Neth.); District Court of Utrecht BREIN v Mininova [26 August 2009] LJN BJ6008, ECLI:NL:RBUTR:2009:BJ6008, paras 4.66–4.69 (Neth.); Court of Appeals of Amsterdam Techno Design v BREIN [15 June 2006] LJN AX7579, ECLI:NL:GHAMS:2006:AX7579, para. 5 (Neth.). Cf. Jurre Reus, ‘De bescherming van IE-rechten op platforms voor user-generated content’ [2012] Intellectuele eigendom en reclamerecht 413, 414–15; B. Rietjens, ‘Over leechers, seeds and swarms: auteursrechtelijke aspecten van BitTorrent’ [2006] Tijdschrift voor auteurs-, media- en informatierecht 8; Madeleine de Cock Buning and Daan van Eek, ‘Aansprakelijkheid van derden bij auteursrechtinbreuk’ [2009] Intellectuele eigendom en reclamerecht 224; A.P. de Wit, ‘De civielrechtelijke aansprakelijkheid van internetproviders (Deel I)’ [2009] Tijdschrift voor Internetrecht 37; A.P. de Wit, ‘De civielrechtelijke aansprakelijkheid van internetproviders (Deel II)’ [2009] Tijdschrift voor Internetrecht 72. 
103  For an example of the grant of a preliminary filtering injunction because of active control of platform content and knowledge resulting from this control activity, see District Court of Amsterdam PVH v Facebook [21 December 2018] ECLI:NL:RBAMS:2018:9362, paras 4.10–4.11 (Neth.).


the court found it disproportionate to impose the additional burden of establishing a filtering system on the platform provider Marktplaats. In the court’s view, additional monitoring by Marktplaats itself would add little to the results reached on the basis of the measures already taken by the rightholder himself.104 The proportionality test also played a decisive role in the context of copyright infringement. In the national BREIN v Ziggo decisions preceding the ruling of the CJEU,105 blocking orders with regard to the platform The Pirate Bay had been granted at first instance. The Court of Appeals of The Hague, however, concluded that the website blocking had been disproportionate because internet users had little difficulty in circumventing the resulting access restriction. Moreover, the court held that the blocking order could hardly be deemed effective as long as BREIN focused on only one file-sharing platform while leaving other platforms intact that could serve as alternative avenues for infringing activities. In such a case, the file-sharing traffic would only be diverted instead of being reduced. Given this lack of effectiveness, the inroads made into the fundamental freedom to conduct a business could not be justified.106

3.3  Need for Balanced, Proportionality-Based Approach

In contrast to the case law that evolved in Germany, national decisions in the Netherlands thus reflect a somewhat different attitude: Dutch courts take a cautious approach to filtering orders. They rely heavily on the principle of proportionality as a compass to arrive at tailor-made solutions case by case. While Article 17 of the DSM Directive will lead to broader filtering obligations in the copyright arena, it is important that this cautious approach survives in the field of trade mark rights. Otherwise, as pointed out previously, overbroad control over trade mark-related online communications will be offered—control based on algorithmic enforcement that does not correspond to the scope of the underlying trade mark rights.

4. Conclusions

An inquiry into the context-specific nature, inherent limits, and statutory limitations of trade mark rights shows that the expansion of algorithmic content identification and filtering systems to trade mark cases is likely to yield undesirable results. Trade mark law

104 See Court of Appeals of Leeuwarden Stokke v Marktplaats [22 May 2012] LJN BW6296, ECLI:NL:GHLEE:2012:BW6296, paras 8.4–10.5 (Neth.). Cf. Reus (n. 102) 416; M.H.M. Schellekens, ‘Internet-filteren—komt van het een het ander?’ [2011] Mediaforum 286, 289–93; Christiaan Alberdingk Thijm, ‘Wat is de zorgplicht van Hyves, XS4All en Marktplaats?’ [2008] Ars Aequi 573. 105  See C-610/15 (n. 101) para. 36. 106 See Court of Appeals of The Hague BREIN v Ziggo [28 January 2014] case no. 374634, ECLI:NL:GHDHA:2014:88, paras 5.10–5.22 (Neth.).


seeks to strike a proper balance between trade mark protection and competing economic, social, and cultural needs. Due to the different conceptual contours of trade mark rights, a system mimicking the filtering obligations following from the DSM Directive in the field of copyright law would give trade mark proprietors excessive control over the use of their trade marks in the digital environment. Such an overbroad system of automated, algorithmic filtering would encroach upon the fundamental guarantees of freedom of expression and freedom of competition. It is likely to have a chilling effect on legitimate descriptive use of trade marks, comparative advertising, advertising by resellers, information about alternative offers in the marketplace, and use criticizing or commenting on trade-marked products. As a result, consumers would receive less diverse information on goods and services. The reliability of the internet as an independent source of trade mark-related information would be put at risk. Instead, a liability regime is necessary that is tailored to the particular scope and reach of trade mark protection. Besides the inherent limits of trade mark rights, statutory limitations of protection play an important role in satisfying competing economic, social, and cultural needs. Moreover, it is advisable to take into account the differences between online platforms and the different ways in which trade marks may be used in accordance with the platform infrastructure. In sum, the configuration of trade mark protection requires a nuanced approach to the question of intermediary liability and related filtering issues. The principle of proportionality constitutes an important signpost in this respect. Only a cautious, proportionality-based approach allows judges to align their decisions with the specific scope and reach of trade mark rights and the individual characteristics of online platforms.


Chapter 21

Intermediary Liability and Trade Mark Infringement: A Common Law Perspective

Richard Arnold*

In discussing intermediary liability and trade mark infringement, it is important to distinguish between two different questions. The first concerns the extent to which intermediaries may themselves be legally liable, whether as primary infringers or accessories, for trade mark infringement, and thus amenable to remedies such as damages, orders for delivery up of infringing goods, and injunctions. The second concerns the extent to which intermediaries may be amenable to injunctions aimed at preventing or reducing trade mark infringement by third parties even though the intermediaries are not themselves liable for such infringements as either primary infringers or accessories. The first question depends on the application of legal principles which are not particular to intermediaries, whereas the second question depends on the application of principles which are specific to intermediaries. Accordingly, I will refer to liability of the first kind as ‘primary’ and ‘accessory’ liability, whereas I will refer to liability of the second kind as ‘intermediary’ liability.1 It is important to appreciate that the primary and accessory liability of intermediaries involves liability for past acts giving rise to claims for damages as well as liability to injunctions with regard to future acts, whereas intermediary liability is purely for injunctions and not for damages or other remedies for past acts.2 A subsidiary difference is that injunctions based on primary or accessory liability are usually prohibitory in character (requiring the infringer not to commit further infringing acts), whereas injunctions based on intermediary liability are frequently mandatory in character (requiring the intermediary to take steps to prevent or reduce infringement by others).

In the EU, the law applicable to the first question derives from what is now European Parliament and Council Directive 2015/2436/EU of 16 December 2015 to approximate the laws of the Member States relating to trade marks (recast) (the Trade Marks Directive) and what is now European Parliament and Council Regulation 2017/1001/EU of 14 June 2017 on the European Union trade mark (codification) (the EU Trade Mark Regulation) so far as primary liability is concerned, and the national laws of the Member States so far as accessory liability is concerned. The law applicable to the second question derives at least primarily3 from the third sentence of Article 11 of European Parliament and Council Directive 2004/48/EC of 29 April 2004 on the enforcement of intellectual property rights (the Enforcement Directive). Articles 12 to 15 of European Parliament and Council Directive 2000/31/EC of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (the e-Commerce Directive) are relevant to both questions.

*  I am grateful to Paul Davies, Martin Husovec, and Ansgar Ohly for comments on an earlier draft.
1  The different kinds of liability that may be encompassed within the term ‘secondary liability’ are discussed by Graeme Dinwoodie, ‘A Comparative Analysis of Secondary Liability of Online Service Providers’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017).

© Richard Arnold 2020.
It follows that the main area in which English common law can play a distinctive role is that of accessory liability; but it is suggested that the common law perspective on the questions of European law may also be of value.4 This Handbook is concerned with online intermediaries, but it should not be forgotten that similar issues can arise with respect to offline intermediaries.5 To date, most of the case law and commentary6 concerning the liability of online intermediaries, and in particular intermediary liability as I have defined it, has focused on liability for copyright infringement. While some of the copyright case law and commentary is relevant to the present context, the liability of intermediaries for trade mark infringement raises distinct issues.7

2  For this reason, Martin Husovec has referred to intermediaries being accountable rather than liable. See Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But Not Liable? (CUP 2018).
3  It is an open question whether EU law pre-empts national law in these respects, but it seems likely that it does.
4  At the time of writing the UK remains a Member State of the EU.
5  See e.g. C-494/15 Tommy Hilfiger Licensing LLC v Delta Center as [2016] ECLI:EU:C:2016:528.
6  At the time of writing the leading texts are Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Wolters Kluwer 2016); Dinwoodie (n. 1); and Husovec (n. 2). Of these, the best coverage of trade mark issues is in Riordan ch. 7.
7  As Martin Senftleben has pointed out, the issues raised by allegations of trade mark infringement are generally more context-specific than those raised by allegations of copyright infringement (and hence, he argues, should be the subject of a differentiated safe harbour standard). See Martin Senftleben, ‘An Uneasy Case for Notice and Takedown: Context-Specific Trademark Rights’, SSRN Research Paper no. 2025075 (16 March 2012).



1.  Who are Online Intermediaries?

It is clear from the decisions of the Court of Justice of the European Union (CJEU) in the LSG8 and UPC9 cases that internet service providers (ISPs, also known as internet access providers) are intermediaries within Article 8(3) of European Parliament and Council Directive 2001/29/EC of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society (the Information Society Directive), which is equivalent to the third sentence of Article 11 of the Enforcement Directive in this respect. Equally, it appears from L’Oréal v eBay10 that an online marketplace like eBay is an intermediary. More generally, the CJEU held in Tommy Hilfiger11 that, for an economic operator to constitute an intermediary, ‘it must be established that it provides a service capable of being used by one or more other persons in order to infringe one or more intellectual property rights, but it is not necessary that it maintain a specific relationship with that or those persons’. This implies quite a broad definition, but the limits of that definition remain to be tested. For example, it is not clear whether the operator of a search engine would qualify as an intermediary within Article 11. The answer may vary according to the service in question: a keyword advertising service and a keyword suggestion tool are more likely to qualify than a plain vanilla search tool.12 Although a service provider which is not an intermediary within Article 11, and thus not exposed to intermediary liability, may still be accused of trade mark infringement either as a primary infringer or an accessory, such allegations are unlikely to succeed.

2.  Primary Liability of Intermediaries for Trade Mark Infringement

Whether under the Trade Marks Directive or under the EU Trade Mark Regulation, there are three ways in which a trade mark may be infringed: (1) through use of an identical sign in relation to identical goods or services (Art. 9(2)(a) of the Regulation/Art. 10(2)(a) of the Directive); (2) through use of an identical or similar sign in relation to similar or identical goods or services so as to give rise to a likelihood of confusion (Art. 9(2)(b) of the Regulation/Art. 10(2)(b) of the Directive); and (3) where the trade mark has a reputation, through use of an identical or similar sign in relation to any goods or services which without due cause takes unfair advantage of, or is detrimental to, the distinctive character or the repute of the trade mark (Art. 9(2)(c) of the Regulation/Art. 10(2)(c) of the Directive). This is not the place in which to survey the law with respect to each of these three types of infringement, which are the subject of an extensive body of case law. Rather, I will focus upon certain specific issues which tend to arise in the case of allegations of infringement by intermediaries. I will assume for these purposes that the courts of England and Wales have jurisdiction over the intermediary in relation to the alleged infringing acts (although in practice jurisdictional issues of some complexity may arise).

8  C-557/07 LSG-Gesellschaft zur Wahrnehmung von Leistungsschutzrechten GmbH v Tele2 Telecommunication GmbH [2009] ECLI:EU:C:2009:107.
9  C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192.
10  C-324/09 L’Oréal SA v eBay International AG [2011] ECLI:EU:C:2011:474.
11  Tommy Hilfiger (n. 5) para. 23.
12  The Cour de cassation [French Supreme Court] granted an injunction against Google in respect of its keyword suggestion tool, implicitly holding that Google was an intermediary in that respect. See Syndicat National de l’Edition Phonographique v Google France SARL [12 July 2012] case comment [2013] IIC 380.

2.1  Use of the Sign Complained of by the Intermediary

In order for a person to have infringed a trade mark in any of the three ways specified above, the first and most basic requirement is that the person must have ‘used’ the sign complained of. In many cases this requirement is uncontentious, but where primary infringement by an intermediary is alleged, a question is likely to arise as to whether the sign had been used by the intermediary, rather than by some other party.13 The CJEU has held in three cases that the intermediary did not use the sign in issue. In Google France,14 Google did not use the keywords which were the subject of its keyword advertising service even though Google displayed advertisements to its users by reference to those keywords; rather, the advertisers who selected and paid for the keywords did. In L’Oréal v eBay,15 eBay did not use the signs under which sellers offered goods for sale on its platform; rather, the sellers did. The test laid down by the CJEU was whether the party in question used the sign ‘in its own commercial communication’. Similarly, in Winters v Red Bull,16 Winters did not use the trade marks affixed to cans supplied by its customer Smart Drinks, which Winters filled with drinks in accordance with Smart Drinks’ instructions and Smart Drinks then marketed.

13  Conversely, the CJEU held in C-179/15 Daimler AG v Együd Garage Gépjárműjavító és Értékesítő Kft [2016] ECLI:EU:C:2016:134 that an advertiser did not use a sign in an advertisement where it requested removal or amendment of the advertisement but the publisher failed to do so, or where the advertisement was republished by third parties without the knowledge or consent of the advertiser.
14  See C-236–238/08 Google France SARL v Louis Vuitton Malletier SA [2010] ECLI:EU:C:2010:159, paras 50–8, 104–5. Note that the CJEU appears to have held at paras 106–20 that Google France was an intermediary service provider within the meaning of Section 4 of Chapter II of the e-Commerce Directive; but there was no issue as to whether it was an intermediary within the meaning of Art. 11 of the Enforcement Directive.
15  See C-324/09 (n. 10) paras 98–105.
16  See C-119/10 Frisdranken Industrie Winters BV v Red Bull GmbH [2011] ECLI:EU:C:2011:837. The CJEU described Winters as a service provider; but again there was no issue as to whether it was an intermediary within the meaning of Art. 11 of the Enforcement Directive.


By contrast, in the LUSH case,17 the High Court of England and Wales held that Amazon did use the sign when users of Amazon’s platform searched for ‘lush’ and Amazon displayed offers for sale of what were in fact competitive products in response. Some of the offers were by Amazon, some were by third parties with Amazon providing fulfilment services, and some were by third parties with the third parties fulfilling the sale, but the three types of offer were mixed up together. The court held that no distinction was to be made between the three types of offer: they were all Amazon’s own commercial communication. The Bundesgerichtshof (German Federal Court of Justice, BGH) reached a similar conclusion in the ORTLIEB case.18 Further light may be shed on this question by the pending reference from the BGH in the Davidoff Hot Water case,19 in which a third party vendor on the Amazon marketplace sold goods parallel imported from outside the European Economic Area (EEA) using Amazon’s fulfilment service. In that case the German courts took the view that Amazon did not use the sign in question in relation to the stocking of the goods.

2.2  Use of the Sign in the Relevant Territory

Even if the intermediary has used the sign complained of, it will not have infringed the trade mark unless it has used the sign in the relevant territory (the UK in the case of a UK trade mark and the EU in the case of an EU trade mark). In the online context, where a website is accessible worldwide, this gives rise to the question of the test to be applied. It is now well established that the use in question must be targeted at consumers in the relevant territory; mere accessibility is not enough.20 The law was reviewed in 2017 by the Court of Appeal of England and Wales in Merck,21 where the court held that the issue of targeting is to be considered objectively from the perspective of average consumers in the relevant territory. The question is whether those average consumers would consider that the use is targeted at them. Nevertheless, evidence that a trader does in fact intend to target consumers in the territory may be relevant in assessing whether its use has that effect. The court must carry out an evaluation of all the relevant circumstances. These may include any clear expressions of an intention to solicit custom in the territory by, for example, including the territory in a list or map of the geographic areas to which the trader is willing to dispatch its products. But a finding that an advertisement is directed at consumers in the territory does not depend upon there being any such clear evidence. The appearance and content of the website will be of particular significance, including whether it is possible to buy goods or services from it. However, the relevant circumstances may extend beyond the website itself and include, for example, the nature and size of the trader’s business, the characteristics of the goods or services in issue, and the number of visits made to the website by consumers in the territory. In Walton v Verweij,22 the High Court held, applying this guidance, that advertisements and offers for sale of clothing displayed by an AliExpress store (AliExpress being an international competitor to eBay operated by the Chinese company Alibaba) operated by the defendants’ group were not targeted at the UK, whereas actual sales of clothing supplied to UK consumers were targeted at the UK. In that case, however, it was not alleged that the intermediary had used the signs in issue.

17  See Cosmetic Warriors Ltd v Amazon.co.uk Ltd [2014] EWHC 181 (Ch), [2014] FSR 31 [51]–[61] (UK).
18  I ZR 138/16 [2018] IIC 1121, [2018] GRUR 924 (Ger.).
19  See C-567/18 Coty Germany.
20  See C-585/08 and C-144/09 Pammer v Reederei Karl Schluter GmbH & Co. KG and Hotel Alpenhof GesmbH v Heller [2010] ECLI:EU:C:2010:740; L’Oréal v eBay (n. 10); and C-173/11 Football Dataco Ltd v Sportradar GmbH [2012] ECLI:EU:C:2012:642.
21  See Merck KGaA v Merck Sharp & Dohme Corp. [2017] EWCA Civ 1834, [2018] ETMR 10 [153]–[170] (UK).

2.3  Counterfeit Goods and Grey Goods

Allegations of trade mark infringement directed at online intermediaries are usually concerned with one or both of two types of infringing conduct: dealings in counterfeit goods and dealings in grey goods, such as parallel imports of genuine trade-marked goods from outside the EEA.23 It is increasingly the case that the channels of trade through which both counterfeits and grey goods reach consumers in the EU involve online intermediaries. Furthermore, the very nature of online intermediaries as operators in borderless cyberspace makes them particularly attractive as channels for trade in grey goods. Trade mark owners obviously wish to prevent sales of both counterfeits and grey goods so far as possible. It is often the case that tracing and tackling the sources of such goods is a difficult, time-consuming, and resource-intensive task. Accordingly, trade mark owners look to online intermediaries to assist them in combating these kinds of infringements. Reputable intermediaries recognize that it is in their own interests to try to minimize sales of counterfeit goods, and to a lesser extent grey goods, through their platforms, but are of course resistant to accepting liability for infringement themselves. For the reasons explained above, it can be difficult for trade mark owners to establish primary infringement by the intermediary. Accordingly, the trade mark owner will often turn to trying to establish that the intermediary is liable as an accessory. Failing that, the trade mark owner may try to obtain an injunction on the basis of intermediary liability as defined earlier. These issues came to the fore in the litigation between L’Oréal and eBay in a number of EU countries, including the UK, in the period from 2007 to 2013. The reference from the High Court of England and Wales to the CJEU raised a number of questions concerning eBay’s liability for dealings in counterfeits and grey goods. It is of some interest that, following the judgment of the CJEU, the parties negotiated an EU-wide settlement of their disputes. The details of the settlement have never been made public, but it is understood that they involved eBay taking further steps to combat sales of counterfeits and grey goods by sellers on its platform.24

22  See Walton International Ltd v Verweij BV [2018] EWHC 1068 (Ch), [2018] ETMR 34 [111]–[113], [147]–[157] (UK).
23  Claims in respect of keyword advertising were ended by Google France (n. 14). Claims in respect of comparative advertising would be likely to encounter similar difficulties where directed at the intermediary rather than the advertiser. The LUSH case (n. 17) was unusual since the claim was effectively one of switch-selling.

2.4  Articles 12 to 14 of the e-Commerce Directive

Even if there is use of the sign by the intermediary which is targeted at the relevant territory, and the use would otherwise amount to an infringement of the trade mark, the intermediary may attempt to rely on the exemption from liability for financial remedies provided by Articles 12 to 14 of the e-Commerce Directive. Of these, the most likely to be relevant is Article 14 (hosting). If an online intermediary is sufficiently involved to have used the sign in question and targeted the relevant territory, however, it is unlikely that the intermediary will be able to rely on this exemption. Accordingly, it is more convenient to deal with this issue in the context of accessory liability.

3.  Accessory Liability of Intermediaries for Trade Mark Infringement

Although it is settled law that Articles 10 to 15 of the Trade Marks Directive and Articles 9 to 16 of the EU Trade Mark Regulation embody a complete harmonization of the rules relating to infringement of the rights conferred by registration of a trade mark within the EU,25 accessory liability for trade mark infringement is not harmonized by either the Trade Marks Directive or the EU Trade Mark Regulation, both of which are entirely silent on the point. Although it appears that the CJEU may have started in cases such as GS Media,26 Filmspeler,27 and The Pirate Bay28 to develop a European doctrine of accessory liability for copyright infringement, at least by communication to the public,29 it has so far eschewed this in trade mark cases.30 It follows that national law applies.31

24  In the referring judgment, the High Court listed ten additional steps that eBay could take, while observing that it did not follow that eBay was legally obliged to take any of those steps. See L’Oréal SA v eBay International AG [2009] EWHC 1094 (Ch), [2009] RPC 21 [277] (UK). On the question of voluntary cooperation between trade mark owners and platform operators, see further Chapter 19.
25  C-355/96 Silhouette International Schmied GmbH & Co. KG v Hartlauer Handelsgesellschaft mbH [1998] ECLI:EU:C:1998:374; C-414–416/99 Zino Davidoff SA v A & G Imports Ltd [2001] ECLI:EU:C:2001:617; C-244/00 Van Doren + Q. GmbH v Lifestyle sports + sportswear Handelsgesellschaft mbH [2003] ECLI:EU:C:2003:204; C-16/03 Peak Holding AB v Axolin-Elinor AB [2004] ECLI:EU:C:2004:759.
26  C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644.
27  C-527/15 Stichting Brein v Wullems (Filmspeler) [2017] ECLI:EU:C:2017:300.
28  C-610/15 Stichting Brein v Ziggo BV (The Pirate Bay) [2017] ECLI:EU:C:2017:456.

3.1  Accessory Liability of Intermediaries for Trade Mark Infringement under English Law

Accessory liability for infringements of trade marks (and other intellectual property rights) under English law is governed by the doctrine of joint tortfeasance.32 This is a common law (i.e. judge-created) doctrine which complements the statutory regimes. With the exception of the Copyright, Designs and Patents Act 1988,33 the UK’s intellectual property statutes do not contain provisions for accessory liability, but only primary liability. Hence the gap has been filled by the common law. In summary, the doctrine of joint tortfeasance states that a person may be jointly liable for a tort (e.g. infringement of an intellectual property right) committed by a third party where that person has procured the commission of the tort by inducement, incitement, or persuasion, or where that person has participated in a common design to secure the doing of acts which amount to trade mark infringement. In practice, procurement is generally harder to establish than participation in a common design. Accessory liability of online intermediaries for trade mark infringements by third parties was considered by the High Court of England and Wales in L’Oréal v eBay.34 At that time, the leading authority on the doctrine of joint tortfeasance was the decision of the House of Lords in CBS v Amstrad.35 The High Court held that the evidence did not establish procurement by eBay of the particular acts of infringement by the other defendants complained of, and that eBay had not participated in a common design to secure the commission of acts which amounted to trade mark infringement. L’Oréal’s strongest case was in relation to grey goods emanating from outside the EEA, where the evidence showed that eBay actively encouraged the listing and sale of goods from outside the EEA to buyers in the UK and provided specific facilities to assist sellers to do this. Moreover, no steps were taken by eBay to discourage such infringements, let alone to try to prevent them. Nevertheless, it could not be said that the facilities provided by eBay inherently led to infringement. They were capable of being used by sellers in a manner which did not infringe trade marks. Whether the use of the facilities led to infringement depended on the autonomous actions of foreign sellers. Given that (1) eBay was under no legal duty to prevent infringement and (2) facilitation of infringement with knowledge that it was likely to occur and an intention to profit were not enough, the High Court concluded that eBay was not liable as a joint tortfeasor. It may be noted that L’Oréal did not appeal to the Court of Appeal on this issue of domestic law while the reference to the CJEU was pending. Since then, the doctrine of joint tortfeasance through participation in a common design has been considered by the Supreme Court in Fish & Fish.36 The Supreme Court held that, in order to be liable with a principal tortfeasor, a defendant had to be proved to have combined with the principal tortfeasor to do, or to secure the doing of, acts which constituted the tort; that that required proof that the defendant had acted in a way which furthered the commission of the tort by the principal tortfeasor and that he had done so in pursuance of a common design to do, or to secure the doing of, the acts which constituted the tort; and that whether the defendant had done enough to further commission of the tort would depend on the circumstances in each case.

29  See Ansgar Ohly, ‘The broad concept of “communication to the public” in recent CJEU judgments and the liability of intermediaries: primary, secondary or unitary liability?’ [2018] JIPLP 664 and Jan Bernd Nordemann, ‘Recent CJEU case law on communication to the public and its application in Germany: A new EU concept of liability’ [2018] JIPLP 744.
30  See C-236–238/08 (n. 14) para. 57; C-324/09 (n. 10) para. 104; Winters v Red Bull (n. 16) para. 35. See also the discussion of this question by AG Jääskinen in his Opinion in L’Oréal v eBay. See C-324/09 L’Oréal SA v eBay International AG [2010] ECLI:EU:C:2010:757, Opinion of AG Jääskinen, paras 55–8.
31  For German law, see Annette Kur, ‘Secondary Liability for Trademark Infringement on the Internet: The Situation in Germany and Throughout the EU’ (2014) 37 Colum. J. of L. & Arts 525.
32  On accessory liability in English law generally, see Paul Davies, Accessory Liability (Hart 2015). On joint tortfeasance generally, see Michael Jones (ed.), Clerk & Lindsell on Torts (Sweet & Maxwell 2018) paras 4-02–4-06.
33  Section 16(2) provides that copyright is infringed by a person who ‘authorises another to do’ a restricted act. See Richard Arnold and Paul Davies, ‘Accessory Liability for Intellectual Property Infringement: The Case of Authorisation’ [2017] LQR 442.
34  See L’Oréal v eBay (n. 24) [346]–[382].
35  CBS Songs Ltd v Amstrad Consumer Electronics plc [1988] AC 1013 (UK).
Although the Supreme Court held that it was sufficient for the defendant to be liable that it had done acts which furthered the commission of the tort in a more than minimal or trivial way, it reiterated that neither facilitation of a tort nor knowledge that it would be committed were enough.37 On the other hand, Lord Sumption considered that it was different if the defendant intended the infringements to be an effective way of increasing the use of its own website.38 This suggests that it is now more likely that eBay would be held jointly liable for at least some of the other defendants’ infringements in L’Oréal v eBay. Certainly, it seems clear that the defendant’s intention will be an important factor to consider in any future claim against an intermediary that it is jointly liable for infringements by third parties.

36  Fish & Fish Ltd v Sea Shepherd UK [2015] UKSC 10, [2015] AC 1229 (UK).
37  ibid. [21] (Lord Toulson), [39]–[42] (Lord Sumption), [58] (Lord Neuberger of Abbotsbury).
38  ibid. [43]–[44]. Note also Lord Sumption’s citation with approval at [38] of Dramatico Entertainment Ltd v British Sky Broadcasting Ltd [2012] EWHC 268 (Ch), [2012] RPC 27 (UK), in which it was held that the Pirate Bay was jointly liable for infringements of copyright by its users. The latter case is among those discussed in Arnold and Davies (n. 33).



3.2  Article 14 of the e-Commerce Directive

As noted earlier, online intermediaries may attempt to rely on the exemptions from accessory liability for financial remedies provided by Articles 12 to 14 of the e-Commerce Directive, and in particular Article 14 (hosting). The CJEU has now considered the availability of Article 14 to intermediaries in three cases of alleged trade mark infringement:39 Google France,40 L’Oréal v eBay,41 and SNB-REACT.42 The CJEU has held that, in order to fall within Article 14(1), an online service provider must confine itself to technical, automatic, and passive processing of the data provided by its customers. If it plays an active role of such a kind as to give it knowledge of, or control over, those data, it is not protected by Article 14(1). More specifically, in L’Oréal v eBay it held that the fact that the operator of an online marketplace stores offers for sale on its server, sets the terms of its service, is remunerated for its service, and provides general information to its customers does not prevent it from relying on Article 14(1), but it is different if the operator provides assistance to its customers by optimizing the presentation of offers for sale or promoting those offers. Even if the operator does not play an active role, it will only benefit from the exemption from liability under Article 14(1) if it does not have actual knowledge of illegal activity or information and is not aware of facts and circumstances from which the illegal activity or information is apparent (or, if it becomes aware of such facts and circumstances, acts expeditiously to remove or disable access to the information in question). Operators may become aware of such facts and circumstances either through their own investigations or through notifications by others. The question for the national court is whether a diligent economic operator should have identified the illegality.
The Cour de cassation (French Supreme Court) subsequently held that eBay could not rely on Article 14(1) because it had played too active a role by assisting sellers in the promotion of sales;43 but the litigation settled before the High Court of England and Wales issued a final decision on this point. In ECB v Tixdaq,44 the High Court expressed the provisional view that the Article 14 defence would be available to the defendants, who operated an app which enabled users to share clips of broadcasts of sporting events, in respect of user-posted clips which were not editorially reviewed, but not in respect of clips which were editorially reviewed.

39  See also C-484/14 McFadden v Sony Music Entertainment Germany GmbH [2016] ECLI:EU:C:2016:689, concerning Art. 12 and copyright infringement.
40  See C-236–238/08 (n. 14) paras 106–20.
41  See C-324/09 (n. 10) paras 106–24.
42  See C-521/17 Coöperatieve Vereniging SNB-REACT UA v Mehta [2018] ECLI:EU:C:2018:639, paras 40–52.
43  See Cour de cassation, eBay Inc. v LVMH, Parfums Christian Dior [3 May 2012] Bull. civ. IV, 89 (Fr.).
44  See England and Wales Cricket Board Ltd v Tixdaq Ltd [2016] EWHC 575 (Ch), [2016] Bus LR 641 [170] (UK).



4.  Injunctions Against Intermediaries Whose Services are Used to Infringe Trade Marks

The third sentence of Article 11 of the Enforcement Directive requires Member States to ‘ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an intellectual property right’. As noted previously, this provision enables trade mark owners to obtain injunctions against intermediaries even though the intermediaries are not themselves primarily or accessorily liable for trade mark infringement. Article 11 of the Enforcement Directive generalizes Article 8(3) of the Information Society Directive, which applies to infringements of copyright and related rights. The rationale for these provisions is set out in recital 59 of the Information Society Directive:

    In the digital environment, in particular, the services of intermediaries may increasingly be used by third parties for infringing activities. In many cases such intermediaries are best placed to bring such infringing activities to an end. Therefore, without prejudice to any other sanctions and remedies available, rightholders should have the possibility of applying for an injunction against an intermediary who carries a third party’s infringement of a protected work or other subject-matter in a network. This possibility should be available even where the acts carried out by the intermediary are exempted under Article 5. . . .

Intermediaries may be ‘best placed’ to prevent infringement in two senses. First, because it is more practical for trade mark owners to bring proceedings against intermediaries than against the infringers. The intermediaries are likely to be readily identifiable, amenable to the jurisdiction of a court which is convenient for the trade mark owners, and amenable to the enforcement measures at that court’s disposal, whereas the infringers may well be none of those things. Secondly, because intermediaries are the ‘lowest cost avoiders’ of infringement. That is to say, it is economically more efficient to require intermediaries to take action to prevent infringement occurring via their services than it is for rightholders to take action directly against infringers. The second claim is more controversial than the first, and its correctness may depend upon factors such as the allocation of costs. Husovec’s analysis is that enforcement by intermediaries is economically efficient if, but only if, the costs of both obtaining and implementing injunctions are borne by rightholders.45

45  See Husovec (n. 2) ch. 2. Some of the limitations of this analysis are discussed in a review by the present author. See Richard Arnold, ‘Essential reading on intermediary accountability’ [2018] JIPLP 247.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

A Common Law Perspective   415

4.1  Jurisdiction of the Courts of England and Wales to Grant Injunctions Against Intermediaries in Trade Mark Cases

The UK implemented Article 8(3) of the Information Society Directive by amending the Copyright, Designs and Patents Act 1988 to insert section 97A, which confers a specific power on the High Court to grant injunctions against service providers. Since 2011, a series of injunctions have been granted in the exercise of the section 97A power against ISPs requiring them to block (or at least, attempt to block) access by their customers to websites which infringe or facilitate the infringement of the claimants’ copyrights.46 When Article 11 of the Enforcement Directive required Member States to make the same provision in respect of other intellectual property rights, however, no statutory provision was introduced to implement this. The government took the view that this was unnecessary because the courts already had the power to grant such injunctions under domestic law. That view has been vindicated by the decisions of the High Court, Court of Appeal, and Supreme Court in Cartier,47 in which website-blocking injunctions were granted against ISPs in order to combat trade mark infringement rather than copyright infringement. Lord Sumption, who delivered the judgment of the Supreme Court, held that the courts had power to grant such an injunction on ordinary principles of equity quite apart from the power derived from EU law. He traced the historical origin of this power to the bill of discovery in equity, in which the sole relief sought was an order for disclosure for use in common law proceedings. In Orr v Diaper,48 the power to order disclosure was extended to disclosure of the identity of a wrongdoer whom the claimant wished to sue.
In Upmann v Elkan,49 it was held that a freight forwarding agent who was innocently in possession of consignments of counterfeit cigars had a duty, once informed of the infringement, not only to identify the wrongdoers, but also to ensure that the goods were not removed or dealt with until the spurious brand had been removed. In Norwich Pharmacal Co. v Customs and Excise Commissioners,50 the House of Lords confirmed that, if through no fault of his own a person gets mixed up in the tortious acts of others so as to facilitate their wrongdoing, he may incur no personal liability, but he comes under a duty to assist the person who has been wronged by giving him full information and disclosing the identity of the wrongdoer. Lord Sumption concluded that it was clear from the authorities and correct in principle that orders for the disclosure of information were only one category of order which a court could make against a legally innocent party to prevent the use of its facilities to commit or facilitate a wrong.

46  Beginning with Twentieth Century Fox Film Corp. v British Telecommunications plc [2011] EWHC 1981 (Ch), [2012] Bus LR 1471 (UK) and Twentieth Century Fox Film Corp. v British Telecommunications plc (No. 2) [2011] EWHC 2714 (Ch), [2012] Bus LR 1525 (UK).
47  Cartier International AG v British Telecommunications plc [2014] EWHC 3354 (Ch), [2015] Bus LR 298 (ChD); [2016] EWCA Civ 658, [2017] Bus LR 1 (CA); [2018] UKSC 28, [2018] Bus LR 1417 (SC) (UK).
48  (1876) 4 Ch D 92 (UK).
49  (1871) LR 12 Eq 140, (1871) LR 7 Ch App 130 (UK).
50  [1974] AC 133 (UK).

4.2  Website-Blocking Injunctions: Threshold Conditions

Where an injunction is sought against an intermediary on the basis that its services have been used to infringe copyright or related rights, Parliament has laid down a number of threshold conditions for the exercise of the High Court’s jurisdiction to grant an injunction.51 In the context of website-blocking orders, there are four conditions which must be satisfied. First, that the defendant is a service provider. Secondly, that users and/or the operator of the website in question infringe the claimant’s copyrights. Thirdly, that users and/or the operator of the website use the defendant’s services to do that. Fourthly, that the defendant has actual knowledge of this. For the reasons discussed earlier, Parliament has not laid down any threshold conditions for the exercise of the High Court’s jurisdiction to grant an injunction against an intermediary on the basis that its services have been used to infringe other intellectual property rights. It is clear, in particular from the judgment of the CJEU in L’Oréal v eBay,52 that the court must exercise its power to grant an injunction consistently with the provisions of the Enforcement Directive, in particular Article 3 and the third sentence of Article 11, and with other applicable provisions of EU law, in particular Articles 12 to 15 of the e-Commerce Directive. Accordingly, it was held by the High Court and Court of Appeal in Cartier that similar threshold conditions must be satisfied in order for a website-blocking injunction to be granted in a trade mark case.53 First, the ISPs must be intermediaries within the meaning of the third sentence of Article 11. Secondly, the users and/or the operators of the website must be infringing the claimant’s trade marks. Thirdly, the users and/or the operators of the website must use the ISPs’ services to do that. Fourthly, the ISPs must have actual knowledge of this.
Each of the first three conditions follows from the wording of Article 11 itself. The fourth condition is not contained in Article 11, but it was held that it follows from Article 15 of the e-Commerce Directive (no general monitoring) and by analogy with Articles 13(1)(e) and 14(1)(a) of the e-Commerce Directive (exemptions for caching and hosting in the absence of actual knowledge of unlawful activity). If ISPs could be required to block websites without having actual knowledge of infringing activity, that would be tantamount to a general obligation to monitor. It is also difficult to see that such a requirement would be consistent with the requirements of Article 3(1) of the Enforcement Directive. Intermediaries can be given actual knowledge by notification, however.

51  See Copyright, Designs and Patents Act 1988, s. 97A (UK). 52  See C-324/09 (n. 10) paras 137–8. 53 See Cartier (n. 47) [2014] EWHC 3354 (Ch), [2015] Bus LR 298 [139]–[141] (ChD); [2016] EWCA Civ 658, [2017] Bus LR 1 [80]–[81] (CA) (UK).


4.3  Website-Blocking Injunctions: Applicable Principles

Even if the threshold conditions are satisfied, it does not necessarily follow that an injunction should be granted. It was held by the High Court and Court of Appeal in Cartier that the relief must: (1) be necessary; (2) be effective; (3) be dissuasive; (4) not be unnecessarily complicated or costly; (5) avoid barriers to legitimate trade; (6) be fair and equitable and strike a ‘fair balance’ between the applicable fundamental rights; and (7) be proportionate.54 Of these factors, the key one is proportionality, since consideration of the other factors feeds into the proportionality analysis. The fundamental rights in play are usually those protected by Article 17(2) of the EU Charter of Fundamental Rights (intellectual property) on the one hand and those protected by Articles 11 (freedom of expression and information) and 16 (freedom to conduct a business) of the Charter on the other hand. The court must carefully evaluate the comparative importance of the rights that are engaged, the justifications for interfering with those rights, and the availability of less intrusive measures. The proportionality of a website-blocking injunction as between the rightholders and the operators and users of the website depends critically upon how targeted the injunction is: the more carefully targeted at unlawful content the injunction is, and hence the lower the likelihood of ‘overblocking’ of lawful content, the more likely the injunction is to be proportionate when viewed from that perspective.
The proportionality of a website-blocking injunction as between the rightholders and the ISPs depends critically upon the efficacy of the injunction (which in turn depends on the technical measures employed to implement the block and upon the block being kept up to date) and upon the burdens and costs imposed by the injunction on the ISPs: the key question is whether the effectiveness of the blocking in preventing or reducing infringement (given that the block can readily be circumvented) justifies the burden and costs imposed on the ISPs.

Finally, it is important to ensure that the injunctions are safeguarded against abuse. The English courts have taken a number of steps to this end: providing that affected third parties can apply to the court to set aside or vary the orders, requiring notice of and information about the block to be displayed at the appropriate address, and imposing a sunset clause.

It has subsequently been held by the Supreme Court in Cartier that, where a website-blocking injunction is granted against ISPs under Article 11 of the Enforcement Directive, the trade mark owners should be required to pay not only the costs of an unopposed application to the court, and hence of the evidence-gathering necessary to support such an application, but also the marginal cost of the initial implementation of the order (which involves processing the application and configuring the ISPs’ blocking systems), the cost of updating the block over the lifetime of the order in response to notifications from the rightholders (which involves reconfiguring the blocking system to accommodate the migration of websites from blocked internet locations), and any costs and liabilities that may be incurred if blocking malfunctions through no fault of the ISPs, for example as a result of overblocking because of errors in notifications or malicious attacks provoked by the blocking.55 The Supreme Court’s reasoning was that this allocation of the implementation costs followed from the basis upon which the injunction was made under domestic law, and that this was unaffected by EU law since recital 23 of the Enforcement Directive (corresponding in this respect to recital 59 of the Information Society Directive) states that ‘the conditions and procedures relating to such injunctions should be left to the national law of the Member States’.56 The Supreme Court did not think that either L’Oréal v eBay or UPC indicated otherwise. As Briggs LJ noted in the Court of Appeal,57 the consequence of this allocation of the implementation costs will be to make it easier for rightholders to establish that, as between themselves and the ISPs, a website-blocking injunction is proportionate; but it will also make the remedy a somewhat more expensive one for rightholders.58

54 See Cartier (n. 47) [2014] EWHC 3354 (Ch), [2015] Bus LR 298 [158]–[191] (ChD); [2016] EWCA Civ 658, [2017] Bus LR 1 [100]–[128] (CA) (UK).

4.4  Website-Blocking Injunctions: Application to Trade Mark Cases

In Cartier, the claimant trade mark owners applied for injunctions requiring the five defendant ISPs to block, or at least impede, access by the ISPs’ customers to six target websites (a further two were added by a subsequent application). It was held by the High Court and Court of Appeal that the threshold conditions were satisfied and that, applying the principles set out previously, it was appropriate to grant the injunctions sought. Much of the reasoning was similar to the reasoning applied in previous cases involving website-blocking injunctions to combat copyright infringement. The following points which are specific to trade marks may be noted.

So far as the third of the threshold conditions was concerned, the High Court held that the operators of the target websites were infringing the claimants’ trade marks first by advertising and offering for sale counterfeit goods and secondly by selling and supplying such goods, with both activities being targeted at UK consumers. The ISPs had an essential role in these infringements, since it was via the ISPs’ services that the advertisements and offers for sale were communicated to 95 per cent of broadband users in the UK. It was immaterial that there was no contractual link between the ISPs and the operators of the target websites. It was also immaterial that UK consumers who viewed the target websites might not purchase any goods, since the first type of infringement was already complete by then. It was also immaterial that, if a UK consumer did purchase an item, the item would be transported by courier or post, since the contract of sale would be concluded via the website.59 The Court of Appeal agreed.60

In considering the proportionality of the injunctions, the High Court considered a series of alternative enforcement measures which the ISPs contended that the claimants should pursue: taking action against the website operators, sending notice to the website hosts demanding that the target websites be taken down, asking payment processors such as Visa to suspend the operators’ merchant accounts, seizing the domain names of the target websites, sending notice to search engine providers such as Google requesting them to de-index the target websites, and customs seizure.

55  This decision may give rise to disputes as to the quantification of the marginal costs of implementing the order and updating the block which the previous costs regime (under which ISPs had to bear their own implementation costs) avoided. It also lessens the incentive for ISPs to be efficient in implementing orders.
56  Husovec (n. 2) 94–7 identifies a possible alternative interpretation of these recitals, however, as meaning merely that Member States should do what is necessary to implement such injunctions in their national systems. If that is the correct interpretation of the recitals, it is debatable whether it is consistent with EU law for English law to impose implementation costs on rightholders, while German law imposes not merely implementation costs, but also legal costs, on intermediaries. See McFadden (n. 39).
57 See Cartier (n. 47) [2016] EWCA Civ 658, [2017] Bus LR 1 [212] (UK).
58  It should, however, be noted that, in a series of subsequent applications for streaming-server blocking injunctions under s. 97A it was agreed between the parties that there would be no order for costs. See Football Association Premier League Ltd v British Telecommunications plc [2018] EWHC 1828 (Ch) [8] (UK); Union Des Associations Européennes De Football v British Telecommunications plc [2018] EWHC 1900 (Ch) [6] (UK); Matchroom Boxing Ltd v British Telecommunications plc [2018] EWHC 2443 (Ch) [9] (UK); Queensberry Promotions Ltd v British Telecommunications plc [2018] EWHC 3273 (Ch) (UK).
The High Court was not persuaded that there were alternative measures open to the claimants which would be equally effective, but less burdensome, with the consequence that the claimants’ application should be refused on that ground alone, but accepted that the availability of some of these measures was a factor to be taken into account in assessing the proportionality of the orders sought by the claimants.61 Two other particular factors which the High Court took into account in assessing the proportionality of the injunctions were that, first, it was the claimants’ own evidence that they had identified no less than 46,000 websites as infringing,62 and, secondly, other websites were highly substitutable for the target websites.63 These factors pointed to the possibilities that the injunctions, even if effective in stopping infringement by the target websites, might simply lead to infringing activity being displaced to other websites and that the injunctions would be followed by further applications to block large numbers of other websites. Nevertheless, the High Court concluded that the injunctions were proportionate, and the Court of Appeal agreed.64 Since Cartier, there has only been one application for a website-blocking injunction to combat trade mark infringement in the UK. In Nintendo v Sky65 a website-blocking injunction was granted, applying the principles laid down in Cartier, against four websites which advertised, distributed, offered for sale, and/or sold devices that allowed technological protection measures on Nintendo’s popular Switch games console to be circumvented.

59 See Cartier (n. 47) [2014] EWHC 3354 (Ch), [2015] Bus LR 298 [147]–[156] (ChD) (UK). 60 See Cartier (n. 47) [2016] EWCA Civ 658, [2017] Bus LR 1 [86]–[97] (CA) (UK). 61 See Cartier (n. 47) [2014] EWHC 3354 (Ch), [2015] Bus LR 298 [197]–[217] (ChD) (UK). 62  ibid. [246]–[248]. 63  ibid. [258]–[259]. 64 See Cartier (n. 47) [2016] EWCA Civ 658, [2017] Bus LR 1 [132]–[194] (CA) (UK). 65  Nintendo Co. Ltd v Sky UK Ltd [2019] EWHC 2376 (Ch), [2019] Bus LR 2773 (UK).


4.5  Other Kinds of Injunctions Against Intermediaries in Trade Mark Cases

It is clear from L’Oréal v eBay that website-blocking injunctions are not the only kinds of injunction that can be ordered against online intermediaries under Article 11 of the Enforcement Directive in appropriate cases. One particular possibility which the CJEU specifically mentioned is an injunction requiring the operator of an online marketplace to suspend the accounts of sellers who perpetrate trade mark infringements.66 To date, it does not appear that any application has been made for such an injunction in the UK. Another possibility which is arguably within the contemplation of the CJEU is an injunction to enforce notice and stay-down; that is, an injunction that requires the intermediary, when notified of an infringement by a rightholder, not merely to take down the relevant content from its platform but also to prevent the same infringing content from being reposted by its users subsequently. Again, however, it does not appear that any application has been made for such an injunction in the UK.

66 See L’Oréal v eBay (n. 10) [141].


Chapter 22

Digital Markets, Rules of Conduct, and Liability of Online Intermediaries—Analysis of Two Case Studies: Unfair Commercial Practices and Trade Secrets Infringement

Reto M. Hilty* and Valentina Moscon

*  The authors jointly conceived the chapter and share the views expressed. Nonetheless, Sections 2, 3, and 4 are specifically attributable to Moscon. We wish to thank Prof. Dr. Frauke Henning-Bodewig, Dr. Marco Bellia and Mr. Luc Desaunettes-Barbero for useful and constructive discussions on this project.

© Reto M. Hilty and Valentina Moscon 2020.


1.  Problem Definition

In Europe the debate on the liability of online intermediaries (OIs)1 has mainly focused on the infringement of intellectual property rights (IPRs).2 However, the circulation of information through OIs might affect a wide range of legal interests.3 The ubiquitous nature of OIs and the fast-changing business models on which they rely have a considerable effect on the variety of information they convey. In particular, through OIs that allow users to upload content, infringements of different third party rights can proliferate.4

This chapter will explore the liability of OIs under European law, with a focus on infringements of legal interests other than exclusive rights such as IPRs. The main research question is whether European law adopts a consistent regulatory approach with regard to content hosted by OIs. In particular, European law will be investigated taking into consideration rules on market functioning5 that do not assign exclusive rights but simply impose rules of conduct (e.g. rules on unfair commercial practices). We will explore whether and under which conditions OIs are liable and—beyond the liability of OIs—whether the interests protected by the European laws governing the functioning of the market receive the same level of protection in different contexts.

1  European law does not offer a comprehensive and consistent definition of online intermediary, nor does the Court of Justice of the European Union (CJEU) offer unambiguous interpretations of existing European law; on the contrary, it provides different definitions depending on the context of reference. See Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But Not Liable? (CUP 2017) 60, 70; Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016) 389. Even European Commission documents reveal the difficulty of bridling online intermediaries into a static definition. With specific regard to ‘online marketplaces’, it is worth recalling the definition provided by the proposed Directive amending Council Directive 93/13/EEC of 5 April 1993, Directive 98/6/EC, Directive 2005/29/EC and Directive 2011/83/EU as regards better enforcement and modernization of EU consumer protection rules. See, in particular, the text adopted by the European Parliament on 19 April 2019, P8_TA-PROV(2019)0399, Art. 1. On this matter, see Frauke Henning-Bodewig, ‘Zur Sektoruntersuchung des Bundeskartellamts zu Vergleichsportalen’ (2019) WRP 537–45. For an overview of the initiatives taken within the Digital Single Market strategy, see Giancarlo Frosio, ‘Reforming Intermediary Liability in the Platform Economy: A European Digital Single Market Strategy’ (2017) 112 Northwestern U. L. Rev. 19. The above leads to the overlapping usage in EU law of various terms such as online intermediaries, service providers, or, sometimes, platforms. On account of such complexity we will use in this chapter the term online intermediary with a broad meaning. Under the umbrella of OIs, we will include a wide range of business models and subjects which offer various services in different layers of the internet. The characteristics of the OI at play will be considered in relation to the legal circumstances under analysis.
2  Among the recent attempts to regulate the matter, see Directive 2019/790/EU of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, which was adopted by the European Parliament on 26 March 2019.
3  See Thomas Cotter, ‘Some Observations on the Law and Economics of Intermediaries’ (2005) 1 Mich. State L. Rev. 2.
4  For a general overview of OIs, see Riordan (n. 1).
5  Both IPRs and unfair competition law in their substantive and broad meaning concern market regulation. See Reto M. Hilty, ‘The Law Against Unfair Competition and Its Interfaces’ in Reto M. Hilty and Frauke Henning-Bodewig (eds), Law Against Unfair Competition. Towards a New Paradigm in Europe?—MPI Studies on Intellectual Property, Competition and Tax Law 1 (Springer 2007) 1‒52.


To carry out such analysis we will take into consideration as case studies two bodies of law. The first deals with online unfair commercial practices (UCPs)6 and the second with the unlawful acquisition, use, and disclosure of trade secrets.7 The selected laws have in common that they aim to regulate the market by imposing rules of conduct without attributing exclusive rights to the market players.8 Similarly to rules attributing exclusive rights, these rules can be violated in the online environment by means of digital platforms and other tools offered by OIs. Also, although restricted to business-to-consumer (B2C) relationships, the UCPs Directive has a significant impact on online markets and therefore it is material in relation to OI liability. Likewise, the digital environment has exacerbated the vulnerability of trade secrets by providing new tools, including digital platforms, which facilitate their speedy and anonymous dissemination. In that context, the role of OIs is crucial in both facilitating and blocking the circulation of content infringing the rules laid down by European lawmakers. Besides fundamental rights,9 in this analysis vertical and horizontal regulatory levels will be taken into account in relation to one another. The first reference is to the liability rules set forth in the two bodies of law mentioned earlier, which apply vertically to UCPs and trade secrets.
The second regulatory level is based on the e-Commerce Directive, which applies horizontally, thus encompassing trade secret infringements and UCPs.10 The safe harbours shaped by Articles 12 to 14 of the e-Commerce Directive will under certain conditions protect OIs11 from liability for transmitting, caching, and storing third parties’ illicit information.12 At the same time, however, the e-Commerce Directive does not prevent injunctions13 from being ordered against OIs.14 With regard to IPRs, the Enforcement and the Information Society Directives15 impose on Member States the duty to ensure that rightholders are in a position to apply for injunctions against intermediaries whose services are used by a third party to infringe IPRs, and the question thus arises as to whether similar remedies are applicable to similar situations.

The chapter is structured as follows. After having summarized in Section 2 the distinguishing features of the safe harbour regime and its interplay with injunctive relief, Sections 3 and 4 will focus on the UCPs and Trade Secrets Directives in relation to the position of OIs. Reference will be made to the criteria of interpretation of European law in order to assess the relationship between specific provisions and the concurrent regime set out in the e-Commerce Directive. In Section 5 we will assess the emerging results, highlighting the issues which hinder the creation of a coherent and satisfying legal framework.

6  See Directive 2005/29/EC of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC and Regulation (EC) No. 2006/2004 (UCPs Directive) [2005] OJ L149/22. This chapter was conceived and written before Directive 2019/2161/EU (Omnibus Directive), which amends Directive 2005/29/EC, was approved; therefore the changes introduced by the Omnibus Directive could not be considered here.
7  See Directive (EU) 2016/943 of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure [2016] OJ L157/1.
8  The scope of trade secrets protection varies across Member States. However, traditionally most European countries do not classify them as IPRs. Instead, they apply an unfair competition-type model. For an overview, see the survey commissioned from KPMG by the European Union Intellectual Property Office (EUIPO). See KPMG, The Baseline of Trade Secrets Litigation in the EU Member States (EUIPO 2018).
9  See Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Wolters Kluwer 2016) 6.
10  A safe harbour may apply irrespective of whether the primary wrong lies in defamation, infringement of IP rights, in terrorism-related materials, etc. Art. 1(5) of the e-Commerce Directive specifically excludes from its scope a limited number of areas of national and EU law.
11  See Arts 2 and 14 of the e-Commerce Directive.
12  See Riordan (n. 1) 379.
13  An injunction is a court order prohibiting a person from doing something or requiring a person to do something. See UK Ministry of Justice, Civil Procedure Rules, Glossary.

2.  Online Intermediaries: Walking a Tightrope Between Immunity from Liability and Remedies

2.1  Safe Harbours

European Union law does not generally (explicitly) address the liability of someone other than the direct infringer of the right, if the right at issue is subject to European law.16 The principles of tort law, including those characterizing secondary liability, are not harmonized within Europe. Only a few specific cases of indirect liability are regulated within the European legal framework.17

14  See Directive 2000/31/EC of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, recitals 40, 41, 47, 48, 52.
15  See Directive 2004/48/EC of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/16; Directive 2001/29/EC of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10.
16  See Annette Kur, ‘Secondary Liability for Trademark Infringement on the Internet: The Situation in Germany and the EU’ (2014) 37(4) Colum. J. of L. & Arts 525–40. On this issue, see also Matthias Leistner, ‘Common Principles of Secondary Liability?’ in Ansgar Ohly (ed.), Common Principles of European Intellectual Property Law (Mohr Siebeck 2012) 117–46 (arguing that ‘while the rules on secondary liability (including their potential legal consequences) differ remarkably throughout the various Member States, a minimum common principle of secondary liability in the sense of a common general direction in assessing secondary liability cases can be identified in the law of most Member States’).
17  This refers to the provisions on technological protection measures (TPMs) and Digital Rights Management (DRM) in the Information Society Directive. On the European regulation of the matter, see Michèle Finck and Valentina Moscon, ‘Copyright Law on Blockchains: Between New Forms of Rights Administration and Digital Rights Management 2.0’ (2019) 50 IIC 77.

Regarding OIs which may be misused by third parties for unlawful acts,18 the e-Commerce Directive has created the well-known exemptions from liability (safe harbours).19 Safe harbours encompass all forms of liability including monetary liability (for the harm suffered) and both civil and criminal sanctions.20 Articles 12 to 14 protect passive and neutral service providers from liability for transmitting, caching, and storing third parties’ illicit information.21 Article 14 requires that no liability will ensue if the person offering such services: (1) confines his role to enabling use by others; and (2) reacts expeditiously on receiving information about illegitimate content.22 Thus, on obtaining knowledge or awareness, protection is lost unless the service provider acts expeditiously to ‘remove or disable access’ to the unlawful information.23 Further, Article 15 provides that internet service providers acting within the limits of Articles 12 to 14 have no obligation to monitor content proactively.24 Thereby the legal regime set out by the e-Commerce Directive guarantees protection to OIs from general duties to monitor third parties’ information and activity.25 Finally, as the Directive does not spell out the consequences for OIs that enjoy safe harbours, the legal grounds on which they are held liable, and whether that liability is for secondary or primary infringement, are to be determined by national law. Therefore the e-Commerce Directive provides only a narrow safeguard zone against intra-EU intermediary liability fragmentation.26

2.2  Injunctions Against Online Intermediaries The safe harbours provided for in the e-Commerce Directive do not prevent injunctions from being ordered against OIs.27 Quite the opposite. Recital 45 of the e-Commerce 18 This approach follows international trends in favour of internet self-regulation. See Damian Tambini and others, Codifying Cyberspace (Routledge 2008) 1–28. 19  The s-Commerce Directive applies to services as defined by Art. 1(2) of Directive 98/34. This provision provides that ‘service’ is ‘any Information Society service’. This definition is linked to primary law, namely Art. 57 TFEU concerning free movement of services. However, the CJEU in C-484/14 Tobias Mc Fadden v Sony Music Entertainment Germany GmbH [2016] ECLI:EU:C:2016:689 clearly favoured a broader reading arguing that the service does not necessarily need to be paid for directly by those who benefit from it. For a general overview on the e-Commerce Directive, see Miquel Peguera, ‘The DMCA Safe Harbours and their European Counterparts: A Comparative Analysis of some Common Problems’ (2009) 32 Colum. J. of L. & Arts 481. 20  See Riordan (n. 1) 384; Husovec (n. 1) 57 ff. 21  See Riordan (n. 1) 379. 22  See among others Peguera (n. 19) 481. 23  See Directive 2000/31/EC (n. 14) Art. 14 24  On the rationale and the discipline provided in the e-Commerce Directive the literature is vast. Among others, see Angelopoulos (n. 9) and Riordan (n. 1). 25  See Directive 2000/31/EC (n. 14) Art. 15. 26  For more detail, see Graeme Dinwoodie, ‘Secondary Liability for online trademark infringements: The International Landscape’ (2014) 37 Colum. J. of L. & Arts 463. See also Angelopoulos (n. 9); Kur (n. 16). 27  See Directive 2000/31/EC (n. 14) recitals 40, 41, 47, 48, 52.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

426   Reto M. Hilty and Valentina Moscon

Directive expressly recognizes that intermediaries are not relieved of their obligations to comply with ‘injunctions of different kinds’.28 Also, Article 18 of the Directive explicitly requires Member States to make available remedies that allow measures to be adopted for the purpose of terminating wrongdoing and preventing further impairment of the same interest.29 Thus, even if an intermediary is not liable under Article 12, 13, or 14 of the e-Commerce Directive, it may nevertheless be susceptible to injunctive relief.30 Within the European legal framework, injunctive relief against OIs is explicitly prescribed by Article 11 of the Enforcement Directive and Article 8(3) of the Information Society Directive,31 which require the Member States to provide for injunctions against intermediaries32 whose services are used by a third party to infringe IPRs.33 At the same time, Article 2(3) of the Enforcement Directive states that Articles 12 to 15 of the e-Commerce Directive remain unaffected. Despite this safeguard clause, there is a notorious tension between Articles 12 to 15, in particular Article 15(1), of the e-Commerce Directive, on the one hand, and the third sentence of Article 11 of the Enforcement Directive as well as Article 8(3) of the Information Society Directive, on the other. This then raises the question of the 28  ‘The limitations of the liability of intermediary service providers established in this Directive do not affect the possibility of injunctions of different kinds; such injunctions can in particular consist of orders by courts or administrative authorities requiring the termination or prevention of any infringement, including the removal of illegal information or the disabling of access to it.’ 29  See Riordan (n. 1) 387. 30  The CJEU in L’Oréal v eBay clarified that an intermediary may, ‘regardless of any liability of its own in relation to the facts at issue’, be ordered to take enforcement measures by means of an injunction. 
See C‑324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474. See also European Commission, ‘Report to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on the Application of Directive 2004/48/EC of the European Parliament and the Council of 29 April 2004 on the enforcement of Intellectual Property Rights’ COM(2010) 779 final and European Commission, ‘Commission Staff Working Paper: Analysis of the Application of Directive 2004/48/EC of the European Parliament and the Council of 29 April 2004 on the enforcement of Intellectual Property Rights in the Member States’ SEC(2010) 1589 final. For a general overview on the matter, see Martin Husovec, ‘Injunctions against Innocent Third Parties. The case of Website Blocking’ (2013) 4(2) JIPITEC 116–29; Husovec (n. 1) 57–64 (arguing that injunctions provided for by the Enforcement Directive and the Information Society Directive should serve as a cooperation tool, never as a tool of sanction. Several arguments are offered in support of this including the legislative history of both the Directives). 31  European law includes other provisions obliging Member States to apply injunctions in certain cases. See e.g. Directive 2009/22/EC of 23 April 2009 on injunctions for the protection of consumers’ interests [2009] OJ L110/30. 32  The Directive makes a broad interpretation of the concept of ‘intermediaries’ to include all intermediaries ‘whose services are used by a third party to infringe an intellectual property right’. See European Commission, ‘Report’ (n. 30). 33  The Directive applies to all infringements of IPRs protected under European or national law and contains no definitions of the IPRs it covers. Different interpretations of the concept of ‘intellectual property right’ led Member States to ask the Commission to publish a minimum list of the IPRs which it considers covered by the Directive. 
See European Commission, ‘Statement 2005/295/EC concerning Article 2 of Directive 2004/48/EC of the European Parliament and of the Council on the enforcement of intellectual property rights’ [2005] OJ L94/37. Even after the Commission published some clarification, uncertainties still remain as to whether some rights protected under national law are covered. This mainly concerns domain names, national rights on matters such as trade secrets (including know-how), and other acts frequently covered by national unfair competition law such as parasitic copies. Ibid.


scope of injunctions as well as whether and to what extent Member States are required to provide injunctive relief covering the prevention of future infringing activities.34 Even though the CJEU has provided some guidance,35 this remains a major critical point, and the law of the EU Member States on the issue is contested.36

3.  Unfair Commercial Practices via Online Intermediaries

A comprehensive harmonization of unfair competition law is notoriously absent in European law.37 The initial project of organic harmonization of the subject in Europe was cut back to what was just about capable of achieving consensus.38 The only Directive that adopts a broad approach is the UCPs Directive.39 However, although a substantial part of marketing activity clearly takes place on the web, where OIs play a central role,40 that Directive contains no specific regulation of, or reference to, intermediary liability. In order to define the liability regime applicable to OIs with respect to UCPs to the detriment of consumers, reference must therefore be made to a joint reading of the UCPs Directive and the e-Commerce Directive.

3.1  Unfair Commercial Practices Definition

The UCPs Directive approximates the laws of the Member States on UCPs, applying ‘to unfair business-to-consumer commercial practices, as laid down in Article 5, before, during and after a commercial transaction in relation to a product’.41 42 34  See C-324/09 (n. 30) paras 125 ff. 35  See ibid. paras 138 ff. 36  On this controversy among others, see Martin Senftleben and others, ‘The Recommendation on Measures to Safeguard Fundamental Rights and the Open Internet in the Framework of the EU Copyright Reform’ (2018) 40(3) EIPR 149–63. In the Italian literature, see Marco Ricolfi, ‘Contraffazione di Marchio e Responsabilità degli Internet Service Providers’ (2013) 3 Diritto Industriale 237–50. In the German literature, see Leistner (n. 16) 117–46. 37  For an overview, see Frauke Henning-Bodewig (ed.), International Handbook on Unfair Competition Law (Hart 2013). 38 ibid. 39  See Frauke Henning-Bodewig, ‘Secondary Unfair Competition Law’ in Reto M. Hilty and Frauke Henning-Bodewig (eds), Law Against Unfair Competition (Springer 2007) 111–25. 40  Most of the business models emerging along with the digital economy, including those based on the so-called collaborative economy, rely on OIs. For a general overview of the legal problems raised by the collaborative economy, see Vassilis Hatzopoulos, The Collaborative Economy and EU Law (Hart 2018). 41  According to Art. 2, ‘ “product” means any goods or service including immovable property, rights and obligations’. 42  See Art. 3.


According to Articles 6 and 7, commercial practices are always to be considered unfair if they are misleading or aggressive. Both provisions are accompanied by a comprehensive list of misleading and aggressive actions and omissions.43 Furthermore, the Directive includes a general clause which ensures that any unfair practice that is not caught by other provisions of the Directive (i.e. that is neither misleading nor aggressive) can still be penalized; viz. Article 5(1) establishes a general prohibition of commercial practices which are unfair because they are: (1) contrary to the requirements of professional diligence; and (2) likely to materially distort the economic behaviour of the average consumer.44 43  Also, Annex 1 to the Directive provides a list of the ‘commercial practices which are in all circumstances considered unfair’. 44  In the Italian literature it has been argued that unfairness has to be interpreted in the same way as it is with regard to business-to-business relationships. See Adriano Vanzetti and Vincenzo di Cataldo, Manuale di Diritto Industriale (Giuffrè Editore 2018) 138–9.

3.2  Is the Online Intermediary a ‘Trader’ that Performs ‘Commercial Practices’?

Very often UCPs take place via infrastructures made available by OIs, including online marketplaces, social networks, rating portals, and other digital platforms. In the absence of specific rules within the UCPs Directive, the first question to ask is whether OIs can be considered direct addressees of that Directive and therefore liable for UCPs.45 The general prohibition established by the Directive applies to UCPs which occur outside any contractual relationship between traders and consumers. This means that the provisions of the UCPs Directive can potentially apply to OIs. However, two preliminary interpretative questions have to be answered concerning the scope of application of the UCPs Directive. The first is whether an OI can be considered a ‘trader’. If so, the second is whether, by hosting third parties’ activities, the OI performs ‘commercial practices’.46 At present there are no CJEU decisions that clearly qualify OIs as traders;47 nevertheless, it is important to highlight that, with 45  The rise of digital platforms such as Uber and Airbnb has sparked a controversy about how to fit new business models into existing legal categories. This debate, which with regard to Airbnb ended up before the CJEU in 2018, focuses on the applicability of the provisions laid down in the e-Commerce Directive to those platforms. See Christoph Busch, ‘The Sharing Economy at the CJEU: Does Airbnb Pass the “Uber test”? 
Some Observations on the Pending Case C-390/18—Airbnb Ireland’ [2018] 4 EuCML 172–4. 46  Art. 2(d) of Directive 2005/29 defines commercial practices using a particularly broad formulation. On the interpretation of ‘commercial practices’, see C‑435/11 CHS Tour Services GmbH v Team4 Travel GmbH [2013] ECLI:EU:C:2013:574, para. 27; C‑391/12 RLvS Verlagsgesellschaft mbH v Stuttgarter Wochenblatt GmbH [2013] ECLI:EU:C:2013:669, para. 37. See also Rossana Ducato, ‘One Hashtag to Rule them All? Mandated Disclosure and Design Duties in Influencer Marketing Practices’ (Palgrave forthcoming). 47  See C-105/17 Komisia za zashtita na potrebitelite v Evelina Kamenova, Okrazhna prokuratura— Varna [2018] ECLI:EU:C:2018:808, para. 38. In the same direction, see European Commission, ‘Guidance on the implementation/application of Directive 2005/29/EC’ COM(2016) 320 final.


the aim of extending consumer protection, the Court interprets the notion of a trader broadly, thereby potentially including OIs.48 Yet, since the question of whether an OI should be considered a trader is assessed on a case-by-case basis, this leaves open the possibility of a restrictive interpretation of the notion of ‘trader’. Similarly, it is not a foregone conclusion that OIs hosting third parties’ activities perform commercial practices. Some commentators highlight that in the light of the CJEU case law a restrictive interpretation seems plausible.49 In RLvS, the CJEU denied the possibility of classifying the publication of sponsored articles in a newspaper as a ‘commercial practice’ performed by the publisher.50 If there is a commercial practice, it is, if anything, put in place by the sponsors of the article and not by the publisher itself. Following this view, it could be argued that the OI performs commercial practices only if a direct connection can be established between a commercial communication and the commercial interest of the OI.51 If the OI is not considered a trader that performs commercial practices, two hypotheses can be distinguished: (1) the OI supports the exchange of goods and services between users/consumers (e.g. rating portals); in that case, the prohibition in the UCPs Directive does not apply tout court; (2) the OI supports the activity of traders subject to the prohibition in the UCPs Directive (e.g. marketplaces); here the prohibitions established in the UCPs Directive are relevant, with an indirect effect on an OI whose platform allows UCPs. In this case, however, the intermediary can plainly enjoy immunity under the e-Commerce Directive and can therefore be considered liable only under the conditions established in Article 14 of the e-Commerce Directive.52 48  See C-105/17 (n. 47) para. 30; C‑59/12 BKK Mobil Oil Körperschaft des öffentlichen Rechts v Zentrale zur Bekämpfung unlauteren Wettbewerbs eV [2013] ECLI:EU:C:2013:634, para. 32. 49  See C-391/12 (n. 46) paras 36 ff; Ansgar Ohly, ‘Die Haftung von Internet-Dienstleistern für die Verletzung lauterkeitsrechtlicher Verkehrspflichten’ [2017] GRUR 441–51. For a general overview of the economic arguments influencing decision-making, see George Akerlof, ‘The Market for “Lemons”: Quality Uncertainty and the Market Mechanism’ (1970) 84(3) The Quarterly J. of Economics 488–500. 50  See C-391/12 (n. 46) para. 40. 51  It could be argued that misleading information by sellers regarding product quality or delivery is not in principle in the commercial interest of the OI, which, on the contrary, has an interest in improving the commercial credibility of its platform. At the same time, however, such a circumstance may not in itself be sufficient to encourage control over the platform, thereby establishing a direct connection with the hosted commercial practices. There are, indeed, costs, including those related to information asymmetry. For a general overview of the economic arguments influencing decision-making, see Akerlof (n. 49). 52  On 25 June 2019, the Italian Supreme Administrative Court ruled on the decision issued by the National Competition and Consumer Authority (Autorità Garante della Concorrenza e del Mercato, AGCM) against the secondary ticketing provider Viagogo (Consiglio di Stato, judgment No. 04359/2019). The ruling represents the outcome of a proceeding initiated by AGCM in 2016 on alleged breaches of the Italian Consumer Code, relating in particular to the insufficient or deceptive information provided by Viagogo on its website. On 5 April 2017, by Decision 26535, AGCM found Viagogo acting as a trader in violation of Arts 20, 21, and 22 of the Italian Consumer Code for unfair commercial practices to the detriment of Italian consumers. 
In the Decision against Viagogo, AGCM pointed out the lack of clear and precise information on the platform’s website about the activity carried out by Viagogo. AGCM rejected Viagogo’s argument that, as a passive hosting provider within the meaning of the e-Commerce Directive 2000/31/EC, it was not liable for the activities of the vendors or for not having checked the


However, at the national level there is a trend of qualifying ‘transaction intermediaries’53 and ‘rating platforms’ as traders that perform commercial practices.54 For example, in 2014 the question was put to the Italian Competition Authority and, on appeal, to the Administrative Court of Lazio.55 Both were called upon to decide a case brought by the Italian Association of Hotels against TripAdvisor. At issue was whether the publication of user-authored reviews of hotels and restaurants, without any prior check on their truthfulness, might amount to UCPs. The Italian Competition Authority56 held that TripAdvisor was a ‘trader’ for the purposes of the UCPs Directive as implemented in the Italian Consumer Code.57 It was argued that although TripAdvisor does not directly charge for its service, it draws revenue from targeted advertising.58 The Administrative Court of Lazio confirmed the qualification of TripAdvisor as a trader that performs commercial practices.59 If the intermediary is qualified as a trader and engages in commercial practices with users who qualify as ‘consumers’ within the meaning of the UCPs Directive,60 its conduct would amount to a B2C commercial practice and it might be held liable for UCPs.

sellers’ information in advance. Viagogo brought a claim against the 2017 AGCM decision, which was finally annulled by the Supreme Administrative Court stating, in essence, that a passive hosting provider, such as the secondary marketplace Viagogo, cannot be requested to provide consumers with information held by sellers operating through the platform. 53 See e.g. Autorità Garante della Concorrenza e del Mercato (AGCM) [Italian Competition Authority] Amazon-Marketplace-Garanzia legale [9 March 2016] no. 25911, Bollettino 11/2016, 38 (It.). 54  See European Commission, ‘Guidance’ (n. 47). 55  See AGCM TripAdvisor [19 December 2014] Decision PS9345, paras 87–9 (It.). This part of the AGCM’s decision was confirmed by the Tribunale Amministrativo Regionale (TAR) Lazio [Regional Administrative Tribunal of Lazio], Section I [13 July 2015] case no. 09355 (It.). See also Kammergericht [Court of Appeal] Berlin [8 April 2016] (Ger.) in [2016] MultiMedia und Recht 601. 56  In Italian law, UCPs are policed by the AGCM, which enforces consumer protection against UCPs. See Vanzetti and di Cataldo (n. 44). This approach differs from that taken by other European countries, such as Germany. In the German legal system, a civil law approach applies but public enforcement has been discussed. See Rupprecht Podszun, Christoph Busch, and Frauke Henning-Bodewig, ‘Die Durchsetzung des Verbraucherrechts: Das BKartA als UWG-Behörde? Ergebnisse des Professorengutachtens für das Bundesministerium für Wirtschaft und Energie’ (2018) 120(10) GRUR 1004–11; Rupprecht Podszun, Christoph Busch, and Frauke Henning-Bodewig, ‘Consumer Law in Germany: A Shift to Public Enforcement’ [2019] EuCML 75–84; Bodewig (n. 1) 537–45. 57  See Legislative Decree no. 206 of 6 September 2005 which implements the UCPs Directive (It.). 58  The authority highlighted that the main source of revenue from its commercial activity was the income generated by pay-per-click advertising contracts. 
59  See TAR Lazio, Section I [13 July 2015] no. 9355 in [2015] Diritto dell’Informazione e dell’Informatica 494 (It.). The court arrived, however, at a different conclusion with regard to negligence in organizing and supervising the review system. According to the court, consumers dealing with such reviews could easily recognize that they were subjective evaluations rather than assertions of fact. Also, TripAdvisor never advertised the existence of a system of fact-checking. As a result, TripAdvisor could not be considered to be misleading consumers. See Giorgio Resta, ‘Digital Platforms under Italian Law’ [2018] Medialaws 108–12. In this regard, it is interesting to note that even the German Federal Court of Justice (BGH), which applies a civil law approach to UCPs, follows the same path as the Italian AGCM. See BGH [27 April 2017] I ZR 55/16 in [2017] GRUR 1265. 60  See Directive 2005/29/EC (n. 6) Art. 2(a).



3.3  The Interplay Between the UCPs Directive and the e-Commerce Directive

Some argue that if the intermediary is qualified as a trader, it might not be exempted from liability under the e-Commerce Directive and would therefore be subject to the stringent due diligence obligation imposed by Article 5 of the UCPs Directive.61 The latter does indeed give rise to a ‘duty of activation’, which could translate into an obligation to monitor or to carry out fact-finding.62 However, classification of an operator as a trader does not rule out its nature as an intermediary. The fact that OIs operate in the market as traders does not exclude a priori the applicability of the safe harbours. A commercial operator can offer different services and, consequently, be qualified in various ways according to the different activities it performs. Also, and more importantly, the concepts of a trader and of an intermediary exempted from liability are not mutually exclusive. An area of overlap is therefore conceivable, and it becomes more likely the broader the notion of trader is drawn. Returning to the example mentioned earlier, TripAdvisor might be considered a trader and a hosting provider at the same time. The same applies to marketplaces such as eBay and Amazon and to all other OIs. Although operating for commercial purposes, an OI can offer a neutral space without monitoring content.63 And, when the requirements for applicability of the e-Commerce Directive are met, the OI might be exempt from liability, even with regard to the general prohibition of UCPs contained in Article 5 of the UCPs Directive.64 This situation poses a problem of the concurrent application of two provisions of the same rank within the European legal system. On the one hand, there is a general rule governing the conduct of traders with respect to UCPs. On the other hand, the e-Commerce Directive provides a wide-ranging exception to the rules of liability for intermediaries providing ‘information society services’. 
As this is an exception applicable only to intermediaries under certain conditions, it might be considered lex specialis. According to the principle lex specialis derogat legi generali, if a trader is also an intermediary, then in the case of overlap the immunity rules might prevail over the rules of liability.65 61  See Alberto De Franceschi, ‘Uber Spain and the “Identity Crisis” of Online Platforms, Editorial’ [2018] 7 EuCML 1–4. 62 ibid. 63  See C-324/09 (n. 30) para. 115 (‘the mere fact that the operator of an online market place stores offers for sale on its server, sets the terms of its service, is remunerated for that service and provides general information to its customers’ does not prevent that provider from relying on the exemption). 64  This circumstance is evaluated on a case-by-case basis. See European Commission Communication, ‘A European agenda for the collaborative economy’ COM(2016) 356 final, 8. Further indications in this regard are contained in European Commission, ‘Guidance’ (n. 47). 65  In this sense, see Hatzopoulos (n. 40) 49. The lex specialis principle may be applied and is often used to resolve redundancy in law, rather than legal antinomies, and so it is a tool to prevent the simultaneous application of special and general compatible rules. In every field of law, the lex specialis principle is a device to coordinate and integrate special and general rules to obtain a more complete regulation of a certain matter. In general, see Gerard Martin Conway, ‘Conflict of norms in European Union law and the legal reasoning of the European Court of Justice’ (2010).


If the safe harbour applies to OIs with regard to UCPs, the need arises to clarify the conditions under which it applies. Defining the knowledge standard66 is challenging in cases of UCPs, the assessment of which requires a concrete evaluation of the users’ conduct67 in a given context.68 In this regard, it is worth noting that if actual knowledge is required, it is doubtful whether and under which conditions a notification sent by the claimant to the OI can preclude immunity under the hosting safe harbour with respect to UCPs.69 In cases of UCPs, indeed, a discretionary assessment and balancing of the interests at stake is needed, and is therefore demanded of the OI.70 This issue remains, however, underestimated at the European level, and it is therefore left to the Member States to define the knowledge standard precisely.71 Similarly uncertain is the equally important issue of whether and under which conditions the OI, after acquiring knowledge of illicit content by means of a notification, or even a court order, should prevent future infringements.72 Finally, if immunity is excluded for the intermediary in the concrete case, the question arises of the liability of the intermediary in positive terms.73 Given that liability is not harmonized at the European level,74 one might wonder whether and how it is possible to attribute liability to the OI for not intervening in the case of a third party’s infringement of a mere conduct obligation (which has limited effects between the parties).75 Once again,

66  See Angelopoulos (n. 9) 82 ff (arguing that the EU documents provide little interpretative guidance in this regard; hence, there is broad disagreement on the correct interpretation between the main stakeholders). From a comparative perspective, US courts have tended to adopt the high standard of ‘actual knowledge’. In Europe, as mentioned earlier, some guidance is provided by the CJEU in L’Oréal v eBay. 67  The liability depends in this case on the interpretation of the safe harbour’s applicability conditions in relation to content that is illegal because it violates obligations of conduct and not exclusive rights. 68  A similar reasoning applies to trade mark infringements via OIs, see Martin Senftleben, ‘An Uneasy Case for Notice and Takedown: Context-Specific Trademark Rights’, SSRN Research Paper no. 2025075 (16 March 2012); see also Kur (n. 16). 69  See Ricolfi (n. 36) 241 (arguing that, with regard to trade mark infringements, a notice might not be sufficient to prove the actual knowledge of the OI). 70  We will return to this point later in the chapter when dealing with trade secret infringements. 71  See Angelopoulos (n. 9) 84. 72  For an overview of CJEU jurisprudence on this point, see Martin Senftleben and others (n. 36) 149–63. 73  As mentioned earlier, the enforcement of the prohibition established by the UCPs Directive is implemented differently in various Member States. In some Member States there are forms of private enforcement and in others forms of public enforcement. See Dörte Poelzig, ‘Private or Public Enforcement of the UCP Directive? Sanctions and Remedies to Prevent Unfair Commercial Practices’ in W. Van Boom, Amandine Garde, and Orkun Akseli (eds), The European Unfair Commercial Practices Directive (Ashgate 2014) 235–65. 74  General principles of tort law are not harmonized within Europe. See Kur (n. 16) 525–40. 
75  Unlike IPRs, it is not a matter of violation of exclusive rights with erga omnes effects, but rather of prohibited behaviour with inter partes effects. Exclusive rights ensure protection to the rightholder in any case irrespective of any further assessment concerning, e.g., the competitive advantage which infringers obtain through the illicit conduct. In contrast, UCPs and trade secret infringements serve as examples of inter partes legally protected positions whose assessment depends on the existence of particular circumstances. For an introduction, see Leistner (n. 16) 117–46.


the matter needs to be settled at the national level, causing uncertainty and likely adverse effects on the behaviour of OIs.76

3.4  Protection of Consumers’ Interests vs IPRs’ Protection

Behind the uncertainty described earlier lies a problem of the effectiveness of consumer protection and of the consistency of the European legal system, especially in comparison with the protection of IPRs. Article 11 of the UCPs Directive states that Member States must ensure that ‘adequate and effective means exist to combat unfair commercial practices’.77 In the UCPs Directive, however, the European legislator did not tackle the issue of OI liability and did not even consider possible injunctive relief against intermediaries, as it did in Article 11 of the Enforcement Directive. According to the latter, rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an IPR. It has been observed in the literature that this omission does not seem to be the result of a deliberate and well-thought-out decision.78 As a matter of fact, the need to guarantee enforcement tools against OIs concerns not only IPRs but also opportunistic behaviour towards consumers regulated by the UCPs Directive. In this sense, there is a regulatory misalignment: while IPRs enjoy remedies against intermediaries who are in a position to block the infringement (including in cases in which the infringer is anonymous), the UCPs Directive does not provide similar remedies for consumers’ protection. It may be wondered whether this gap could be filled, for example, through the analogical application79 of Article 11 of the Enforcement Directive.80 This would be supported by the fact that recital 13 of the Enforcement Directive suggests to 76  Against this background, it becomes apparent that there is a risk of over-broad takedown-and-stay-down policies on the part of OIs, which act with the aim of avoiding being held liable. This has been identified in the field of copyright law and arises equally with regard to content allegedly violating the prohibition of UCPs. 
With regard to copyright, see Jennifer M. Urban and Laura Quilter, ‘Efficient Process or Chilling Effects—Takedown Notices under Section 512 of the Digital Millennium Copyright Act’ (2006) 22 Santa Clara High Tech. L.J. 621. 77  Directive 2005/29/EC (n. 6) Art. 11. 78  See Ohly (n. 49) 441–51. 79  On the analogous application of principles set out in Art. 11 of the Enforcement Directive and Art. 8 of the Information Society Directive, see Ohly (n. 49) 441–51 (arguing for the analogous application of the rules laid down in § 7 Abs. 4 Telemediengesetz (Telemedia Act, TMG) in cases of personal rights infringement). See also, on this analogous application, Jörg Neuner, ‘Judicial Development of Law’ in Karl Riesenhuber (ed.), European Legal Methodology (Intersentia 2014) 293–315. On interpretation theory, see in general Claus-Wilhelm Canaris, Die Feststellung von Lücken im Gesetz (Duncker & Humblot 1983) 39, 57, 71 with further references. The CJEU interprets the principle of equal treatment as ‘a general principle of European Law’. See C-550/07 P Akzo Nobel Chemicals Ltd and Akcros Chemicals Ltd v European Commission [2010] ECLI:EU:C:2010:512. 80  According to Art. 11 of the Enforcement Directive, ‘Member States shall also ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an intellectual property right’.


Member States ‘the possibility to extend, for internal purposes, the provisions of this Directive to include acts involving unfair competition, including parasitic copies, or similar activities’. However, the extension of the effects of the Enforcement Directive into the context of the UCPs Directive raises, among others, the issue of the scope of injunctions. Injunctive relief that is broad in scope may conflict with the prohibition on imposing monitoring obligations on intermediaries.81 This problem, which has already been debated with regard to IPRs, arises most notably in the case of UCPs Directive violations, for the same reasons as explained in the previous section. If injunctions are not sufficiently precise, a discretionary assessment of the ‘unfairness’ of a given commercial practice is delegated to the OI.

4.  Trade Secrets Infringement via OIs

In the case of disclosure of trade secrets through OIs, three scenarios can be identified. First, there are cases where the claimant discloses confidential information directly to an online intermediary for safekeeping (e.g. information stored in a cloud service). In those cases, there can be no doubt that a trade secret infringement may arise on the part of the OI, because such situations involve a direct relationship between the OI and the confider. Greater difficulties arise when a service provider allows or facilitates a disclosure of trade secrets caused by the actions of a third party. On the one hand, there are cases in which an intermediary induces or encourages disclosure.82 On the other hand, there are situations in which the intermediary merely assists users by receiving, storing, and making the information available to the public.83 For example, at a time when research collaboration between different actors (including private companies, universities, and research institutions) located in different parts of the world is facilitated by the ability to exchange information, there is a risk that the results of that research end up being disclosed without the consent of all the research partners. This might happen, for example, in repositories devoted to the dissemination of research results, blogs, or even social media platforms such as LinkedIn or Twitter. These latter cases are the main focus of this section.

81  See Directive 2000/31/EC (n. 14) Art. 15. 82  One of the most notorious examples is ‘WikiLeaks’, which is a whistle-blowing platform established to obtain and disseminate documents and data sets from anonymous sources and leakers. See Wikipedia, ‘Wikileaks’ . 83  For an overview of cases, see Riordan (n. 1) 254–89.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Unfair Commercial Practices and Trade Secrets Infringement    435

4.1  Protection of Undisclosed Know-How and Business Information (Trade Secrets) Within the EU: Overview

Undisclosed know-how and business information are regulated by Article 39 of the TRIPs Agreement.84 However, the provision does not detail how the legal protection of undisclosed information is to be implemented in national law.85 TRIPs is also silent regarding the enforceability of such legal protection.86 Accordingly, there are a variety of possible implementations.87 In 2016, the matter was harmonized in Europe with the Trade Secrets Directive,88 the main aim of which is to overcome the fragmentation of laws protecting trade secrets so as to strengthen cross-border innovation.89 The Directive also aims to foster cooperation in research, providing certain rules that ensure the exchange of information.90 Article 2 of the Directive adopts a definition of trade secrets which has three elements:91 (1) secrecy; (2) commercial value;92 and (3) reasonable steps to preserve 84  See Marrakesh Agreement Establishing the World Trade Organization, Annex 1C: Agreement on Trade-Related Aspects of Intellectual Property Rights (15 April 1994) 1869 UNTS 299, 33 ILM 1197, Art. 39. This provision is based on the Uniform Trade Secrets Act. See Daniel Gervais, The TRIPS Agreement: Drafting History and Analysis (Sweet & Maxwell 2012) 542; Sharon Sandeen, ‘The Limits of Trade Secrets Law: Article 39 of the TRIPS Agreement and the Uniform Trade Secrets Act on Which it is Based’ in Rochelle Dreyfuss and Katherine Strandburg (eds), The Law and Theory of Trade Secrecy: A Handbook of Contemporary Research (Edward Elgar 2011), ch. 20, 554–7. 85  See Reto M. Hilty, ‘Are New Modes of Criminal and Civil Enforcement a New Form of Intellectual Property?’ in Susy Frankel and Daniel Gervais (eds), The Internet and the Emerging Importance of New Forms of Intellectual Property (Kluwer Law Int’l 2016) 314. 86 ibid. 
87  See ‘Study on Trade Secrets and Confidential Business Information in the Internal Market’, prepared for the European Commission, contract no. MARKT/2011/128/D (April 2013). 88  See Directive 2016/943/EU (n. 7). 89  See European Commission, ‘Impact Assessment Accompanying the document Proposal for a Directive of the European Parliament and of the Council on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure’ COM(2013) 813 final, 41 ff. In the literature, see Tanya Aplin, ‘A Critical Evaluation of the Proposed EU Trade Secrets Directive’, King’s College London Law School Research Paper no. 2014/25 (18 July 2014) . 90  See Directive 2016/943/EU (n. 7) recital 2. The Directive has come under criticism from different angles. Some have even argued that the Directive was already outdated at its inception because it did not take into account the influence of the knowledge economy on trade secrets. In particular, the Directive does not consider the process of datafication of information. See Valeria Falce, ‘Tecniche di protezione delle informazioni riservate. Dagli accordi TRIPS alla Direttiva sul segreto industriale’ [2016] 3 Rivista di Diritto Industriale 129; Valeria Falce, ‘Dati e segreti. Dalle incertezze del regolamento trade secret ai chiarimenti delle linee guida della commissione UE’ [2018] 2 Dir. Industriale 155 ff. 91  We will not expand on the analysis of the issues emerging from this requirement. For further analysis, see Aplin (n. 89). These requirements reflect those laid down in the TRIPs Agreement; for comment see Gervais (n. 84) 532–46. 92  The definition also requires the information to have commercial value because it is secret. See Aplin (n. 89).


secrecy.93 The notion of secrecy, which is relevant to our analysis, is one of ‘relative secrecy’, because Article 2(1)(a) refers to ‘secret in the sense that it is not, as a body or in the precise configuration and assembly of its components, generally known among or readily accessible to persons within the circles that normally deal with the kind of information in question’.94

4.2  Unlawful Acquisition, Use, and Disclosure of Trade Secrets by Third Parties

Article 4 of the Directive states the conditions under which the acquisition, use, and disclosure of trade secrets are unlawful. It lists a number of cases in which acquisition is unlawful, and sets out a general clause according to which the acquisition of a trade secret without the consent of the holder is unlawful whenever carried out by any conduct ‘which under the circumstances is considered contrary to honest commercial practices’. Similarly, the use and disclosure of a trade secret are considered unlawful under the conditions specified in Article 4(3).95 Therefore, unlike IPRs, the violation of trade secrets is always linked to certain conduct of the infringer, which must be assessed on a case-by-case basis. This aspect brings the protection of trade secrets closer to the area of unfair competition than to intellectual property.96 Particularly significant for the purposes of this chapter is Article 4(4), which regulates the liability of ‘third parties’, stating that ‘The acquisition, use or disclosure of a trade secret shall also be considered unlawful whenever a person, at the time of the acquisition, use or disclosure, knew or ought, under the circumstances, to have known that the trade secret had been obtained directly or indirectly from another person who was using or disclosing the trade secret unlawfully within the meaning of paragraph 3.’97 This provision has a general scope, which leads to the conclusion that even an OI might be considered a third party and therefore liable for trade secret infringement. 
Also, according to the Directive, a third party may originally acquire a trade secret in good faith and only become aware at a later stage, on notice served by the trade secret holder, that the trade secret in question derived from sources using or disclosing it in an unlawful manner.98 This circumstance is particularly likely to arise in the online environment. However, in such cases the question to be dealt 93  The third element of the definition of trade secrets requires the person lawfully in control of the information to have taken reasonable steps under the circumstances to keep it secret. See further Aplin (n. 89) 10. 94  See further Gervais (n. 84) 542. 95  See Directive 2016/943/EU (n. 7) Art. 4(3)(a), (b), (c). 96  The Directive obliges the Member States to respect its limitations, ibid. Art. 1(1), (2), so e.g. reverse engineering or independent discovery need to remain free. Thus, while the Member States remain free to qualify trade secrets, the regime set out in the Trade Secrets Directive is closer to unfair competition than to IPRs. See Aplin (n. 89). 97  See Directive 2016/943/EU (n. 7) recitals 26 and 27. 98  ibid. recital 29.


with, first, concerns the actual existence of a trade secret at the time the OI is notified of the infringement. If the information has circulated on the internet before the notice, it has potentially lost its character as a trade secret, and the intermediary may therefore not be held liable. The answer to this question is not obvious. On the contrary, it depends on the interpretation of the notion of ‘secrecy’. Public circulation of information online for a certain period of time is not in itself a cause of exclusion of secrecy. Certain information might remain secret until it circulates within the circles that normally deal with the kind of information in question.99 The assessment of secrecy in the concrete circumstances depends on the features of the platform (e.g. on the target audience of the intermediary). In addition, the Directive includes some exceptions to the applicability of the measures, procedures, and remedies for trade secret protection. One of these exceptions is particularly significant with regard to intermediaries’ activity: the exception under Article 5(1)(a), according to which the measures, procedures, and remedies provided for in the Directive must be dismissed where the acquisition, use, or disclosure of the trade secret was carried out ‘for exercising the right to freedom of expression and information’, including respect for the ‘freedom and pluralism of the media’.100 The scope of this exception is broad, and the assessment of its applicability is left to the interpreter, based on a balancing of the interests involved. The interest of the trade secret holder must be weighed against: (1) that of the users in accessing that specific information; and (2) in a broader sense, that of the public in the free circulation of information, which could be affected if high monitoring standards were required of the intermediary.

4.3  Remedies Against Third Parties

Unlike the UCPs Directive, the Trade Secrets Directive contains detailed harmonization measures for remedies. According to Article 12, the judiciary has the power to order injunctions and corrective measures. In the case of unlawful acquisition, use, or disclosure of a trade secret, the following orders are available against the infringer: (1) cessation of, or prohibition against, the use or disclosure of the trade secret; and (2) prohibitions on the production, distribution, or use of infringing goods, together with corrective measures with regard to such goods. Also, Article 14 states that an infringer can be liable to pay damages appropriate to the prejudice suffered by the trade secret holder. Furthermore, in assessing the proportionality of remedies, ‘the legitimate interests of third parties’, inter alia, must be taken into account.101 This statement shows that: (1) the ‘culpability’102 of a third party may be somewhat less than that of a person who is a ‘primary’ or 99  So, e.g., if the competitors who are potentially interested in the secret did not have access to that information, circulation online for a brief period of time might not be sufficient on its own to destroy secrecy. See ibid. 100  The rationale behind this exception is further clarified in recital 19. 101  See Directive 2016/943/EU (n. 7) Art. 13. 102  See Aplin (n. 89).


‘direct’ infringer, and this can be reflected in the type of remedy that is awarded;103 and (2) the judicial authority should take into account the different positions of third parties who have knowledge at the outset and ‘innocent’ third parties who only later acquire knowledge that their use or disclosure of trade secrets was unlawful because the trade secret was obtained from another who had used or disclosed it unlawfully.104

4.4  The Interplay Between the Trade Secrets Directive and the e-Commerce Directive

While there is general regulation of the infringement of trade secrets by third parties, the Directive does not provide specific rules concerning OIs (e.g. similar to those contained in the Enforcement Directive and the Information Society Directive). The remedies, including precautionary and provisional measures,105 can only be directed at the person liable or allegedly liable for the trade secret infringement. Hence, such measures are available against third parties only if they satisfy the conditions of Article 4(4); that is, when they have, or ought to have, knowledge that the trade secret was obtained from another who was unlawfully using or disclosing it.106 Again, as in the case of UCPs, when an OI is involved there is a problem of concurrent application of the e-Commerce Directive, concerning immunity from liability, and the rules provided for in the Trade Secrets Directive, which have a general application. There is indeed more than one element of discrepancy between the two bodies of law under analysis. The first is the subjective element. According to the Trade Secrets Directive, the OI, as a third party, might be held liable if it ought to have known that the trade secret had been obtained from another person who was using or disclosing it unlawfully,107 but according to the e-Commerce Directive the intermediary is exempt

103  See Directive 2016/943/EU (n. 7) Art. 13(3). 104  The possibility of directing injunctive measures against non-infringing intermediaries has the aim of obtaining the cooperation of ‘innocent’ intermediaries, since they are in the best position to interrupt the illegal behaviour. See further Aplin (n. 89). 105  Provided for in Directive 2016/943/EU (n. 7) Arts 12 and 10 respectively. 106  Aplin highlights that following the letter of Art. 10 there seems to be a potential loophole between Arts 4(4) and 10 in that it will be impossible to obtain an ex parte urgent injunction against a person who initially acquired a trade secret innocently, in particular until such person is notified of the relevant facts. The IP Federation suggested an amendment to Art. 10 in order to allow courts to have discretion in appropriate urgent cases to order interim relief ‘even before the relevant facts giving rise to the claim have been provided to the respondent’. See Policy Paper PP04/12 IP Federation . 107  According to Directive 2016/943/EU (n. 7) Art. 4(4), the third party might be liable if it ‘knew or ought . . . to have known that the trade secret had been obtained . . . from another person who was using or disclosing the trade secret unlawfully’.


from liability if it has not obtained ‘actual knowledge’. Only from that moment must it act expeditiously to ‘remove or disable access’ to the unlawful information.108 Moreover, the Trade Secrets Directive does not specify what characteristics a notification must have in order to make the intermediary aware that it is hosting material infringing trade secrets. And, if the notice is not sufficiently detailed and context-specific, the same problems arise as in the case of UCPs.109 All this shows that the European legislator, when drafting the Trade Secrets Directive, did not have in mind OIs in the digital environment and the possible overlaps with the safe harbour Directive.110 In fact, it is doubtful whether the liability rules established therein and the remedies outlined in the Trade Secrets Directive can be applied to OIs, as in many cases the e-Commerce Directive might apply. Indeed, as lex specialis derogat legi generali, if the third party is an intermediary and the immunity rules are considered lex specialis, whenever the conditions of Article 14 of the e-Commerce Directive are met, the intermediary may be exempted from liability.

4.5  Trade Secrets and IPRs’ Enforcement Against Third Parties

It is doubtful whether Article 11 of the Enforcement Directive can be applied extensively to cases of trade secrets infringement. The applicability of the Enforcement Directive was debated in the preparatory works for the Trade Secrets Directive,111 and the debate culminated in the adoption of recital 39 of the Directive. According to recital 39, ‘This Directive should not affect the application of any other relevant law in other areas, including intellectual property rights and the law of contract. However, where the scope of application of Directive 2004/48/EC and the scope of this Directive overlap, this Directive takes precedence as lex specialis.’ This provision—which has the aim of clarifying the ‘specialism’ of the Trade Secrets Directive by stating its prevalence over the Enforcement Directive—seems, however, to acknowledge the potential overlaps between the two Directives. If the two bodies of norms were attributable to completely different legal areas, the problem of regulating their possible overlap would not be urgent.112 Therefore, while it is clear that in the case of overlap, the 108  Although disputed, the better view is that Art. 14, read in conjunction with Art. 15(1), refers to actual rather than constructive knowledge and to past or ongoing rather than future wrongful activity. See Riordan (n. 1) 1405; C‑324/09 L’Oréal SA and Others v eBay International AG and Others [2010] ECLI:EU:C:2010:757, Opinion of AG Jääskinen, paras 162–3. 109  See Section 3.3 in this chapter. 110  See Falce (n. 90). 111  In the Impact Assessment, the Commission adopts the view that the Enforcement Directive does not extend to trade secrets. European Commission, ‘Impact Assessment’ (n. 89) 267–8. 
The reason for this includes the fact that the Enforcement Directive ‘deals with the enforcement of Intellectual Property rights and trade secrets are not intellectual property rights and to mix trade secrets and IP rights would risk “adding confusion”’. 112  There are actually de facto situations in which the protection of a patent could overlap with that of know-how, leading to the possible overlapping of the provisions of the two Directives.


provisions of the Trade Secrets Directive override those of the Enforcement Directive, there remains the open issue of whether the latter can be applied to fill loopholes in the Trade Secrets Directive. Given that the Trade Secrets Directive neither tackles the issue of OIs’ liability nor ensures trade secret holders the possibility of applying for injunctions against intermediaries whose services are used by a third party to infringe trade secrets, the same reasoning used earlier with regard to cases of UCPs via OIs also applies to trade secrets infringements.113 Hence, even if an extensive interpretation of the Enforcement Directive were not acknowledged, it is conceivable that an analogical application of the rules set forth in Article 11 of the Enforcement Directive would be accepted.

5. Assessment

OIs play a central role in the circulation of information on the internet. Online platforms allow users to exchange and spread personal and business information. The unlawful transfer and distribution of content on the internet through digital platforms or other means made available by intermediaries raises the question of the intermediary’s liability with respect to such behaviour. The intermediary’s position is assessed from the point of view of general tort law, whose principles are not harmonized within Europe.114 Indeed, European law has regulated various aspects of e-commerce, but with respect to OIs it has limited itself to establishing an exemption rule. This brings complexity to the European legal framework. First, the ping-pong between national law and European exemptions causes uncertainty, because it requires a case-by-case assessment in applying national rules of liability and the OI safe harbours.115 Secondly, European rules on liability in specific areas, for example the UCPs Directive and the Trade Secrets Directive, in principle also apply to OIs. But (1) these rules may be implemented and interpreted differently at the national level, and (2) the interplay between these rules and the e-Commerce Directive causes problems of concurrent application. This adds further legal uncertainty to the European legal framework, also bringing about a lack of effective protection of the interests covered by the rules of liability.116 113  See Section 3.4 of this chapter. 114  See Kur (n. 16). 115  See Miquel Peguera, ‘Approaches to Secondary Trade Mark Liability: A Limited Harmonization Under EU Law’ in Jane Ginsburg and Irene Calboli (eds), Cambridge Handbook on Comparative and International Trademark Law (CUP 2020). 116  This is true even in cases of Directives applying the principle of maximum harmonization such as the UCPs Directive. E.g. Italian law chose the route of public enforcement, assigning to a public authority (i.e. 
the AGCM) the power to assess—even ex officio—cases of UCPs under the Italian Codice del Consumo [Consumer Code]. See Legislative Decree no. 206 of 6 September 2005, Official Gazette 8 October 2005, Art. 27 (It.). Conversely, German lawmakers opted for private enforcement. The German government assumed that Art. 11 of Directive 2005/29/EC did not require substantial amendments to the Gesetz gegen den unlauteren Wettbewerb (Act against Unfair Competition, UWG). However, since 2017 an


In the field of IPRs, European law fills this gap, ensuring protection for rightholders through the provisions laid down in Article 11 of the Enforcement Directive and, for copyright, Article 8(3) of the Information Society Directive. Recently, moreover, the level of safeguards for rightholders’ interests has risen further with regard to copyright law. The new Directive on Copyright in the Digital Single Market will, under certain conditions, provide for the direct liability of OIs for copyright infringement, even for content uploaded by users. In that case, OIs do not enjoy, in relation to copyright-relevant acts, the liability exemption regulated by Article 14 of the e-Commerce Directive.117 Conversely, other areas of law (the UCPs Directive and the Trade Secrets Directive), despite similarly regulating market behaviour, lack equal protection mechanisms. Given that these bodies of rules, like IPRs, govern market behaviour, including in digital markets, it is worth asking whether such misalignment is justified. If the misalignment has no justification, one may wonder whether it would be reasonable to apply the mechanism set forth in Article 11 of the Enforcement Directive by analogy or extensively. From a functional point of view, all the previously mentioned cases call for rapid intervention to stop illicit behaviour and its effects on the market. Furthermore, the effects caused by that illicit behaviour are difficult to eliminate ex post with monetary measures. This is for several reasons. First, it is difficult to quantify damages.118 Secondly, infringing acts often have indirect effects on third parties and produce externalities that alter the functioning of the market.119 Therefore, in these situations immediate protection that blocks the illicit conduct might be preferable. 
At the same time, it could be argued that the case studies considered in this chapter should be treated differently from IPRs. First, it could be said that the judicial assessment of infringement of IPRs is easy, being based on a title of property. However, that is not always true: even in the case of IPRs the judge is called upon to evaluate the alleged violation on a case-by-case basis. For instance, in the case of trademark infringements, the court is required to assess the risk of confusion, not just to ascertain the existence of the registration.120 Secondly, legal actions concerning IPRs, which are exclusive rights enforceable erga omnes, need to be distinguished from legal actions under the UCPs Directive and Trade

amendment to the German Antitrust Act has entrusted the German Federal Cartel Office with some new competences for the enforcement of consumer law. On this matter there is, however, a growing debate. See Podszun, Busch, and Henning-Bodewig (n. 56) 75–84; Henning-Bodewig (n. 1) 537–45; Poelzig (n. 73) 235–65. 117  See Directive 2019/790/EU (n. 2) Art. 17(3). 118  For a general overview, see Julia Stenkamp, ‘Enforcement of Intellectual Property and Competition Law after Implementation of the Directive 2004/48/EC’, Munich Intellectual Property Law Center (2007). 119  See generally William Landes and Richard Posner, The Economic Structure of Intellectual Property Law (Belknap Press 2003). 120  See Senftleben (n. 67).


Secrets Directive, which do not assign exclusive rights but merely impose rules of conduct on market players. In the former case, when protected information is illegally transferred via digital platforms, the violation of IPRs is objective in nature, and so is the OI’s involvement. In the latter case, no right is violated, and the infringement simply consists of conduct not compliant with the law; hence, it is always questionable whether the OI has contributed to that conduct in a way that is sufficient to justify its liability. However, even this argument is not decisive. The proprietary nature of IPRs does not imply that the interests protected are higher than those protected by the UCPs Directive and the Trade Secrets Directive. Quite the opposite: all these rules are crucial in allowing the market to work properly. Therefore, in all these cases effective protection should be granted, and that would hardly be conceivable without the OI’s collaboration, since it is always the cheapest cost avoider and often the only subject quickly within reach with regard to illicit conduct in digital markets. That said, without entering into the ongoing discussion in Germany as to whether public or private enforcement is the better system in the field of UCPs, it is worth noting here that relying on injunctive relief against OIs might prove not to be the best option for the good functioning of digital markets.121 First, it is well known that the wide adoption of injunctive relief may sometimes bring about unintended consequences. 
For instance, experience with standard essential patents in the mobile phone markets shows that injunctions may generate anticompetitive effects.122 In digital markets, injunctions excessively wide in scope and applicable to future behaviour may entail monitoring obligations on OIs, thus creating tension with fundamental rights, namely freedom of expression, and a potential violation of Article 15 of the e-Commerce Directive, which expressly states that OIs shall not be burdened with a general obligation actively to seek facts or circumstances indicating illegal activity. Secondly, courts decide on injunctive relief based on national substantive and procedural law. This approach does not fit well with digital markets, which are global by definition. Thirdly, digital markets are growing steadily, and the number of potential violations involving OIs is growing as well. It is therefore reasonable to expect a great increase in litigation, especially in the case of UCPs. This may also lead to unexpected costs for the judicial system. Against this backdrop, it appears evident that traditional legal tools are not fully adequate for regulating digital markets, and it appears desirable to think of new alternative 121  See Podszun, Busch, and Henning-Bodewig (n. 56) 537–45. 122  On this issue, see Jorge Contreras (ed.), The Cambridge Handbook of Technical Standardization Law (CUP 2018); Beatriz Conde Gallego and Josef Drexl, ‘IoT Connectivity Standards: How Adaptive is the Current SEP Regulatory Framework?’ (2019) 50 IIC 135–56; Renato Nazzini, ‘FRAND-Encumbered Patents, Injunctions and Level Discrimination: What Next in the Interface between IP Rights and Competition Law?’ (2017) 40 World Comp. 
213; Kristian Henningsson, ‘Injunctions for Standard Essential Patents Under FRAND Commitment: A Balanced, Royalty-Oriented Approach’ (2016) 47 IIC 438–69; Pedro Henrique Batista and Gustavo Cesar Mazutti, ‘Comment on “Huawei Technologies” (C-170/13): Standard Essential Patents and Competition Law—How Far Does the CJEU Decision Go?’ (2016) 47 IIC 244–53.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

mechanisms tailored to this specific environment. To some extent, the recent debate on the Directive on Copyright in the Digital Single Market shows precisely the need to regulate digital markets in a more modern way. Unfortunately, the solution found by the European legislator has been widely criticized and plainly cannot be a model for future regulation.123 Yet this recent copyright reform, as well as the case studies explored in this chapter, brings to light an urgent need to approach the regulation of information seriously, and therefore to define the position of OIs, rethinking traditional categories that hardly prove fitting for digital markets. 123  Among others, see Senftleben and others (n. 36) 149–63; Christina Angelopoulos, ‘On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market’, SSRN Research Paper no. 2947800 (January 2017) ; Reto M. Hilty and Valentina Moscon (eds), ‘Modernisation of the EU Copyright Rules. Position Statement of the Max Planck Institute for Innovation and Competition’, Max Planck Institute for Innovation and Competition Research Paper no. 17-12 (2017) .


Chapter 23

Notice-and-Notice-Plus: A Canadian Perspective Beyond the Liability and Immunity Divide

Emily Laidlaw*

The spectrum of intermediary liability models is wide. They range from strict liability models at one end, enlisting intermediaries to proactively seek and remove unlawful content,1 to broad immunity models at the other, notably section 230 of the US Communications Decency Act (CDA),2 which is near-sweeping in the protections it affords intermediaries. In between are a range of approaches that attempt to carve a middle path, such as Europe’s safe harbours in the e-Commerce Directive (ECD).3 Many of these models have dominated regulatory discussions for the last twenty years. However, more recently, alternative regulatory models have entered the picture, changing the nature of the conversation. *  This research was supported by the Social Sciences and Humanities Research Council of Canada and based on a research project with Hilary Young, Associate Professor, Faculty of Law, University of New Brunswick, for the Law Commission of Ontario. 1  This is evident in Thailand and China where the consequence might be criminal penalties, withdrawal of business licences, etc. See, generally, intermediary liability models in Article 19, ‘Internet Intermediaries: Dilemma of Liability’ (Article 19, 2013), (hereafter Article 19). Also, arguably, Art. 17 of Directive 2019/790/EU of the European Parliament and the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92 is creeping towards such an approach. 2  See 47 USC § 230 (US). Exceptions in s. 230 are provided for federal criminal, communications, privacy, and intellectual property matters. 3  See Directive 2000/31/EC of the European Parliament and the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.

© Emily Laidlaw 2020.


Debates no longer centre as rigidly on immunity versus liability. Rather, the focus has shifted to responsibility,4 a broader notion based on principles of accountability.5 This shift partly reflects the changing role of intermediaries, from conduits to platforms that design and mediate how we interact with the world.6 As Julie Cohen explains, the platform ‘is the core organizational form of the emerging informational economy’.7 The shift is also partly explained by the fact that intermediaries are increasingly scrutinized through the lens of international human rights principles.8 Brazil’s Marco Civil9 reflects a human rights-centred approach, largely reserving intermediary liability for circumstances where the company fails to comply with a court order. Canada’s notice-and-notice model for copyright law is an innovative model that shifts away from a liability framework, focusing instead on action requirements for intermediaries that serve an educational function for users.10 In this spirit, Hilary Young and I proposed a regulatory model to the Law Commission of Ontario for its project on reforming defamation law in the digital age, which we call notice-and-notice-plus (NN+).11 This chapter seeks to do two things. First, I outline the NN+ model for readers, identifying the legal context for our proposal and sketching its main features. This will inevitably miss some key detail, and I encourage reading the full paper.12 To summarize, 4  See e.g. exploration of the moral responsibilities of intermediaries in Mariarosaria Taddeo and Luciano Floridi, ‘The Moral Responsibilities of Online Service Providers’ in Mariarosaria Taddeo and Luciano Floridi (eds), The Responsibilities of Online Service Providers (Springer 2017). 
5  Arguably, this focus on accountability is a maturing of the regulatory environment, shifting from the softer, voluntary regulatory structures observable in the early days of the internet’s commercialization to harder regulatory structures built on accountability systems such as reporting requirements, audits, certification schemes, and so on. On this concept, see Peter Utting, ‘Rethinking business regulation: from self-regulation to social control’, United Nations Research Institute for Social Development, Technology, Business and Society Programme Paper no. 15 (2005) 5–8. 6  A compelling perspective on the role of many of these technology companies is Tim Wu’s The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Vintage Books 2016), examining their economic drive and its impact on our lives. 7  Julie Cohen, ‘Law for the Platform Economy’ (2017) 51 UC Davis L. Rev. 131, 135. 8  See the United Nations Special Rapporteur on Freedom of Opinion and Expression, and others, ‘Joint Declaration on Freedom of Expression and the Internet’, para. 2; Frank La Rue, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (2011) A/HRC/17/27, para. 43; David Kaye, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (2016) A/HRC/32/38, para. 2; David Kaye, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (2018) A/HRC/38/35. 9  See Marco Civil da Internet (2014) (English translation) (Bra.). 10  See Copyright Act, RSC 1985, c. C-42, as amended by the Copyright Modernization Act, SC 2012, c. 20 (Can.). The notice-and-notice provisions are ss. 41.25–41.27. This approach as a generalist model finds support among civil society organizations, e.g. Article 19 (n. 1). 
11  See Emily Laidlaw and Hilary Young, ‘Internet Intermediary Liability in Defamation: Proposal for Statutory Reform’, Law Commission of Ontario (2017). See also Emily Laidlaw and Hilary Young, ‘Internet Intermediary Liability in Defamation’ (2019) 54(1) Osgoode Hall L.J. 112. 12  ibid. I also encourage reading my companion paper examining alternative ways to resolve disputes, in particular its proposal for an online tribunal: Emily Laidlaw, ‘Are we asking too much from


intermediaries would be responsible for passing on notice of allegedly illegal content to the content creator, much as under the notice-and-notice model. However, if the creator does not respond within a reasonable time, the intermediary is required to remove the content. The risk to the intermediary is not liability in defamation but a statutory fine. In this way, the intermediary does not play an adjudicative role and does not risk liability for the underlying wrong. Secondly, I explore the viability of deploying this model for other kinds of harmful speech. Our proposal is specific to defamation law, and yet a question we are regularly asked is its suitability for other forms of unlawful speech. Many of the regulatory models that influenced our thinking are horizontal in nature, applying to multiple causes of action, while the one we proposed is subject-matter-specific. Horizontal models certainly provide greater certainty to intermediaries, but the dynamics of different kinds of unlawful speech often lead to subject-specific solutions. Subject-specific solutions allow for more nuance, which more readily enables balancing of the right to freedom of expression against other rights. This section is necessarily exploratory, considering the strengths and weaknesses of deploying NN+ against other types of unlawful speech.

1.  Legal Context

Canada has neither a general statutory model for intermediary liability of the kind found in Europe’s ECD, nor defamation legislation that speaks to the issue of intermediary liability, although the new trade deal with the United States and Mexico might lead to the adoption of broad intermediary immunity similar to CDA section 230.13 In the context of defamation, intermediary liability is a common law question turning on the definition of publication. Our proposal to the Law Commission of Ontario therefore necessarily began with the common law, but we were nevertheless influenced by lessons learned from the statutory approaches of other countries, both general and subject-specific. This context is important as it provides the basis for the NN+ regime.

1.1  Common Law

At common law, defamation consists of publishing a communication that would tend to make an ordinary person think less of the plaintiff. The definition of publication is broad

defamation law? Reputation systems, ADR, Industry Regulation and other Extra-Judicial Possibilities for Protecting Reputation in the Internet Age: Proposal for Reform’, Law Commission of Ontario (2017). 13  See Art. 19.17 of the Agreement between the United States of America, the United Mexican States, and Canada (USMCA).


and includes acts of conveying libellous speech or participating in conveying it.14 Publication does not require authorship, endorsement, or even necessarily awareness of specific content.15 Because of this broad definition, a wide range of participants may be considered publishers, including internet intermediaries. Based on the developing body of case law,16 it seems the less implicated an intermediary is in creating, editing, or approving content, the less likely it is to be treated as a publisher. Thus, for example, internet service providers (ISPs) are never treated as publishers of defamatory content, even after notice.17 For other kinds of intermediaries, the issue tends to be whether notice is sufficient to make an intermediary responsible for content should it fail to remove it. Some courts have found intermediaries, other than ISPs, to be non-publishers regardless of notice, on the theory that their involvement in publication is too passive.18 More often, however, courts have found that notice makes intermediaries publishers, either because they can no longer avail themselves of a defence that applies to those without knowledge19 or because a failure to remove after notice makes one a ‘publisher by omission’.20 In some contexts, the effect of notice remains unresolved. Search engines, for example, raise unique issues because they categorize and rank information sources, provide snippets, and algorithmically generate autocompletes. While courts have tended to find that search engines are not publishers before notice,21 their status after

14  The defendant must have, ‘by any act, conveyed defamatory meaning to a single third party who has received it’. Crookes v Newton [2011] 3 SCR 269, 2011 SCC 47 (Can.) para. 16, citing McNichol v Grandy [1931] CanLII 99 (SCC), [1931] SCR 696, 699 (Can.). See also Patrick Milmo and others (eds), Gatley on Libel and Slander (Thomson Reuters 2013) 6.1. Further, ‘liability extends to any person who participated in, secured, or authorised the publication’, ibid. 6.10. 15  The orthodox view is that awareness of content is not required for publication, although this is changing, Joachim Dietrich, ‘Clarifying the Meaning of “Publication” of Defamatory Matter in the Age of the Internet’ (2013) 18 Media and Arts L. Rev. 90. 16  We refer here primarily to the courts of Canada, the UK, Australia, New Zealand, and Hong Kong. 17  See e.g. Bunt v Tilley [2006] EWHC 407 (QB) (UK). In addition, the Supreme Court of Canada has treated ISPs as non-publishers in the copyright context. See Society of Composers, Authors and Music Publishers of Canada v Canadian Assn of Internet Providers [2004] 2 SCR 427, 2004 SCC 45, para. 101 (Can.). 18  See e.g. Metropolitan International Schools Ltd v Designtechnica Corp. [2009] EWHC 1765 (QB), [2011] 1 WLR 1743 (UK). 19  See e.g. Oriental Press v Fevaworks Solutions Ltd (2013) 16 HKCFAR 366 (HK); Carter v BC Federation of Foster Parents Association, 2005 BCCA 398 (CanLII), 257 DLR (4th) 133 (Can.). 20  See e.g. Tamiz v Google [2013] EWCA Civ 68 (CA) (UK); Murray v Wishart [2014] NZCA 461 (NZ); Weaver v Corcoran 2015 BCSC 165 (Can.); Byrne v Deane [1937] 1 KB 818 (UK). Although Byrne was overturned on appeal, the case still stands for the proposition that one can be a publisher by omission. Publishers by omission are those who did not participate in the initial act of publication but who control a venue in which a libel was published. 
If the defendant refuses to remove the communication and the refusal can be interpreted as endorsing it, the defendant will become a publisher of that communication from the time he should have removed it. 21  See e.g. Niemela v Malamas, 2015 BCSC 1024 (Can.).


notice is less clear.22 The question is unresolved in Canada,23 New Zealand,24 Hong Kong,25 and Australia.26 The common law approach is problematic for various reasons. The law is unduly complex and has sometimes been interpreted inconsistently. It arguably imposes liability on intermediaries whose blameworthiness is not of the same kind or degree as that of the person who originally posted the defamatory content. It also incentivizes content removal whenever there is an allegation of defamation, which is problematic from a free speech perspective.

1.2  Statutory Approaches

Statutory approaches are vulnerable to similar criticisms: that they result in over- or under-removal, and that they are unduly complex or inconsistently interpreted. In the case of safe harbour models, such as the ECD and the Digital Millennium Copyright Act (DMCA),27 intermediaries are provided with conditional immunity from liability. This is better known as a notice-and-takedown (NTD) regime, under which an intermediary enjoys a safe harbour from liability provided it takes down content in certain circumstances. The ECD takes a horizontal approach, applying to multiple causes of action. The immunity depends on the type of intermediary, with broad immunity for conduits, such as ISPs, lesser immunity for intermediaries that cache content, and conditional immunity for hosts, which must remove content upon knowledge or awareness that it is unlawful.28 The DMCA, by contrast, is vertical in its approach, applying only to cases of copyright infringement. Unlike the ECD, the DMCA provides detailed rules for NTD notices29 and provisions to balance the risk of over-compliance. For example, the DMCA requires a good faith clause in notices to dissuade illegitimate claims,30 and provides for a put-back procedure where the copyright holder does not sue within a period of time.31 The challenge with NTD models is that, despite their seeming simplicity, they are in practice complicated regulatory frameworks. Criticism of both the ECD and DMCA is

22  For the UK, see Metropolitan Schools (n. 18). 23  Niemela (n. 21) addressed whether a search engine was a publisher but considered only the pre-notice period. 24 See A v Google New Zealand Ltd [2012] NZHC 2352, paras 68, 71 (NZ). 25 See Dr Yeung, Sau Shing Albert v Google Inc. [2014] HKCFI 1404, para. 103 (HK). 26 See Trkulja v Google Inc. LLC & Anor (no. 5) [2012] VSC 533 (Aus.); Duffy v Google [2015] SASC 170, paras 29–31 (Aus.). 
In 2018, Trkulja was appealed to the High Court of Australia, which admonished the Court of Appeal for having decided the issue of publication. The High Court said the question was fact-dependent: ‘there can be no certainty as to the nature and extent of Google's involvement in the compilation and publication of its search engine results until after discovery’: Trkulja v Google LLC [2018] HCA 25, para. 39 (Aus.). 27  Pub. L. No. 105-304, 112 Stat. 2860 (1998), s 512 (US). 28  See Directive 2000/31/EC (n. 3) Arts 12–14. 29  DMCA (n. 27) s. 512(c)(3). 30  ibid. s. 512(f). 31  ibid. s. 512(g)(2)–(3).


that they are easily abused. Since the intermediary risks liability if it refuses a takedown request, it is disincentivized from disputing a claim or otherwise self-regulating. Annemarie Bridy and Daphne Keller compiled various empirical studies of NTD under the DMCA, which together show over-removal of content.32 Empirical studies in Europe show similar results.33 Part of the uncertainty lies in the meaning of knowledge and notice.34 What must an intermediary know to trigger an obligation to remove content? What level of detail in a notice fixes an intermediary with knowledge that it is hosting unlawful content?35 What types of intermediaries are captured in an NTD model, and what types are unaccounted for? The above speaks to the general concern that NTD models privatize censorship and overly burden intermediaries, especially small and medium-sized companies. In addition, efforts to mitigate the hardship of NTD regimes are criticized as slow, burdensome, and ineffective. For example, the counter-notice procedure in the DMCA, something occasionally proposed as a reform to the ECD,36 is rarely used.37 In contrast, the immunity model evident in CDA section 23038 provides broad immunity to intermediaries for the content that is available through their services. There are significant strengths to a broad immunity model. As David Ardia describes, section 230 provides ‘breathing space’39 to intermediaries, and is often identified as a key policy that enabled the open internet we know today.40 Interpretation of section 230 is also relatively stable,41 providing certainty to intermediaries. The chief flaw of section

32  Annemarie Bridy and Daphne Keller, ‘U.S. Copyright Office Section 512 Study: Comments in Response to Notice of Inquiry’, SSRN Research Paper no. 2757197 (March 2016) 23, 44, and Appendix B. 
For a link to other studies, see Daphne Keller, ‘Empirical Evidence of “Over-Removal” by Internet Companies under Intermediary Liability’ (Stanford CIS Blog, 12 October 2015). 33  See e.g. Christian Ahlert, Chris Marsden, and Chester Yung, ‘How “Liberty” Disappeared from Cyberspace: The Mystery Shopper Tests Internet Content Regulation’ (2014). 34  See the three interpretations of knowledge set out in the European Commission Communication, ‘Online services, including e-commerce, in the Single Market: Accompanying the document’ SEC (2011) 1641, 33; see, for discussion of Art. 14 and the knowledge requirement, Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016) 12.131. 35  See discussion in European Commission (n. 34) 43–4; Davison v Habeeb & Others [2011] EWHC 3031 (QB) (UK). 36  See European Commission (n. 34) 43–5. 37  See Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown: Online Service Provider and Rightsholder Accounts of Everyday Practice’ (2017) 64 J. Copyright Soc’y 371. 38  CDA (n. 2). 39  David Ardia, ‘Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity under Section 230 of the Communications Decency Act’ (2010) 43 Loyola of Los Angeles L. Rev. 373, 494. 40  Danielle Citron, Hate Crimes in Cyberspace (HUP 2014) 170–2. See also, for a history of the CDA, Ardia (n. 39) 409–11. 41  The key case is still Zeran v America Online Inc., 129 F.3d 327 (4th Cir. 1997), cert. denied, 524 US 937 (1998) (US). Note recent amendments to s. 230: Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. no. 115-164 (11 April 2018) (US).


230, however, is that it is questionable whether it prompted the kind of self-regulation that legislators intended.42 For example, in Jones v Dirty World Entertainment Recordings LLC,43 the US Court of Appeals for the Sixth Circuit confirmed that section 230 applies even to sites that encourage the development of unlawful content. In that case, Sarah Jones, a cheerleader and teacher, unsuccessfully sued the website the Dirty over photos posted on the site and allegations that she slept around. The court was rightly concerned with the vulnerability of consumer review sites, and similar services, to the ‘heckler’s veto’,44 which refers to complainants who threaten to sue an intermediary in order to remove unwanted posts. However, section 230 also has the effect of shielding sites even where the website owner knows it is hosting defamatory content, indeed actively solicits ‘dirt’ on people, and refuses to remove it. We reject this approach for Canada, although we note the looming uncertainty created by the similarities between Article 19.17 of the USMCA and section 230.45 In our view, section 230 does not reflect Canada’s balancing of the right to freedom of expression and reputation. The question we struggled with is how to target defamatory speech without bringing within its ambit offensive, but lawful, speech, and without incentivizing over-removal of content. A unique middle ground is Canada’s notice-and-notice model for copyright law. 
Pursuant to the Copyright Act,46 when an ISP receives a notice of alleged copyright infringement from a rightholder, it is required to forward the notice to the alleged wrongdoer, namely the user linked with the IP address.47 If the intermediary fails to comply with this obligation, the risk is not liability for the underlying wrong but statutory damages of between $5,000 and $10,000 (in essence a fine).48 The purpose of this regime is to educate users and discourage copyright infringement.49 The end result is that the rightholder’s only option, after sending the notice, is a cause of action against the user for copyright infringement. Lessons can be learned from the regime’s flawed execution: specious claims were sent to users because there were no regulations mandating the content of notices.50 In contrast, the detailed notice requirements in the

42 A chief reason s. 230 was passed is the disincentive to self-regulation created by Stratton Oakmont, Inc. v Prodigy Services Co., 1995 WL 323710 (NY Sup. Ct 1995) (US), which created a greater risk of liability the more the intermediary moderated content. This is similar to criticism of the operation of the ECD. See Ryan Gerdes, ‘Scaling Back s. 230 Immunity: Why the Communications Decency Act should Take a Page from the Digital Millennium Copyright Act’s Service Provider Immunity Playbook’ (2011–12) 60 Drake L. Rev. 653, 667; Brian Holland, ‘In Defense of Online Intermediary Immunity: Facilitating Communities of Modified Exceptionalism’ (2008) 56 Kansas L. Rev. 101, 124–5. 43 See Jones v Dirty World Entertainment Recordings LLC, 755 F.3d 398 (6th Cir. 2014) (US). 44  ibid. 10. 45  See USMCA (n. 13). 46  Canadian Copyright Act (n. 10). 47  ibid. s. 41.26(1)(a). 48  ibid. s. 41.26(3). 49  See Government of Canada, Office of Consumer Affairs, Notice and Notice Regime. 
50  See Michael Geist, ‘Canada’s Copyright Notice Fiasco: Why Industry Minister James Moore Bears Some Responsibility’ (12 January 2015) ; Michael Geist, ‘Rightscorp and BMG Exploiting Copyright Notice-and-Notice System: Citing False Legal Information in Payment Demands’


DMCA provide important checks against the risks of abusive complaints and intermediary over-compliance.51

2.  Proposal for Reform

Based on the strengths and weaknesses of the common law and statutory approaches to intermediary liability, we crystallized a series of principles on the role of intermediaries that influenced our recommendations. First, in our view, intermediaries should not act in a quasi-judicial capacity assessing the merits of defamation complaints. What makes defamation actionable involves a complex assessment of fact and law, and intermediaries do not have the legal expertise to make such determinations. Further, if the burden of arbitrating complaints is tied to a risk of liability, the simplest path for a company will be to remove content rather than defend expression.52 That said, intermediaries have a role in handling removal requests. Content posted online can significantly damage reputation, and intermediaries not only benefit from third party content but also have the capacity to manoeuvre and respond to complaints in a way that courts cannot. Indeed, defamation disputes tend to be high volume and low value, meaning that the expensive, slow process of traditional litigation diverts most complaints to intermediaries to resolve, and poses a barrier to access to justice if the intermediary fails to provide a sufficient remedial mechanism through its terms and conditions. Secondly, laws should be adaptable to changing technologies.53 Thirdly, whatever rules are created should be human rights-based. This means they should be prescribed by law with a legitimate aim, and comply with principles of necessity, proportionality, transparency, accountability, and due process.54 Fourthly, laws should not stifle innovation. We recognize that any regulatory framework is tied to innovation policy; the ‘allocation of responsibilities’55 creates incentives and disincentives that directly impact innovation. (8 January 2015); Office of Consumer Affairs (n. 49). 51  See DMCA (n. 27) s. 512(c)(3) (notification requirements); s. 
512(f) (good faith clause); s. 512(g)(2)–(3) (put-back procedure). 52  See discussions about incentives for over-removal by Keller (n. 32) and Joint Committee on the Draft Defamation Bill, Draft Defamation Bill, House of Lords Paper no. 203, House of Commons Paper no. 930-1, Session 2010–12 (2011), 54. 53  This chapter does not explore in depth the literature on technological neutrality versus adaptability to technology as it evolves; it simply recognizes that a technology-specific solution is ill-advised. See e.g. Carys Craig, ‘Technological Neutrality: Recalibrating Copyright in the Information Age’ (2016) 17 Theoretical Inquiries L. 601. 54  This is consistent with recent HRC general comments and special rapporteur reports (n. 8). 55  Martin Husovec, ‘Accountable, Not Liable: Injunctions Against Intermediaries’, TILEC Discussion Paper, 9 (May 2016).



2.1  Common Law

Based on the points raised in the previous section, we proposed to narrow the definition of publication. Publication should require an intentional act of conveying specific words, with awareness that those specific words are being conveyed.56 This would mean that an intermediary who is unaware of a libel at the time of publication would not be a publisher, nor would a search engine be a publisher of its search results or autocompletes. Only content creators who intentionally disseminate their own words, and those who intentionally repeat others’ words, would be publishers. Practically speaking, this change to the common law would mean that intermediaries, by and large, would not be publishers of libel posted by third parties. This creates a practical problem of how to resolve the high volume of online defamation disputes. Thus NN+ is the product of our recommendation that procedures be codified for handling defamation complaints. NN+ acknowledges that intermediaries incentivize and often profit from content and have considerable power to mediate between those who post content and those who object to it.

2.2 Notice-and-Notice-Plus

The notice-and-notice framework in copyright law is a compelling regulatory model. It lifts intermediary responsibility out of the divide between broad immunity and safe harbour models, because intermediaries do not assess the merits of a complaint and the notices serve to educate users about infringing behaviour. However, notice and notice does not provide a complete answer to the challenge of defamatory content. In the light of the significant harm to reputation that can be caused by continued circulation of the offending content, complainants should have a mechanism to disable access short of obtaining a court order. Human rights-based intermediary frameworks, such as Brazil’s Marco Civil and the Manila Principles,57 recommend that most content should only be restricted via court order. In other respects, the human rights centring of these frameworks is persuasive. However, restricting content removal to a court process creates a barrier to access to justice in the light of the high-volume, low-value, and legally complex matrix of defamation disputes. A better approach is to codify rules for content removal that balance the risk of over-removal58 and provide due process.59 While beyond the scope of this chapter, reform to intermediary liability would ideally involve reform to dispute resolution to improve access to justice. For example, in other work, I recommend the creation of online tribunals to hear defamation disputes.60 56  This change would not be limited to the internet intermediary context but would apply to the tort of defamation as a whole. 57  See Manila Principles on Intermediary Liability. 58  See Bridy and Keller (n. 32) 23. 59  See Manila Principles (n. 57) 57. 60  See Laidlaw (n. 12) and Emily Laidlaw, ‘Re-Imagining Resolution of Online Defamation Disputes’ (2019) 56 Osgoode Hall L.J. 162.


Taking into account the above, the NN+ framework we recommend draws from the notice-and-notice framework under the Copyright Act,61 and the ‘plus’ is the need for something more in the defamation context. The basic principle is that an intermediary should not be required to evaluate the legality of third party content. Upon receipt of a notice of complaint, an intermediary would be required to forward the notice to the third party content creator. If the content creator responds to the complaint, the intermediary will take no further steps and the content will remain in circulation. In such a situation, the complainant’s only option would be to obtain a court order for content removal. If the content creator fails to respond to the notice within a specified period of time, the intermediary would disable access to the content. The risk to the intermediary if it fails to comply with the rules is a fine rather than liability for the underlying wrong. Further, the intermediary should not provide the complainant with personally identifying information about the content creator, particularly if the post was made anonymously or pseudonymously.62 A few aspects of this proposal require elaboration. First, the framework is geared towards platforms for user-generated content. Search engines would not have an obligation to forward a complaint but, as under the notice-and-notice system of the Copyright Act,63 would only have obligations to remove material from search results where the content has been removed from the source. Mixed-use sites, where the intermediary both hosts content and creates content, would be treated as a question of publication (under the narrower definition). This means that a platform that publishes defamatory content risks liability in defamation only if it intentionally conveyed the words with awareness that those specific words were being conveyed. 
Secondly, the content requirements of such notices should be codified, together with safeguards including a required declaration that a complaint is made in good faith (to deter abusive complaints) and the absence of any monitoring requirement. Indeed, Bridy and Keller advocate such features as the strongest counterweights in an NTD regime like the DMCA.64 Thirdly, a point of contention is the amount of information needed in a notice. We recommend codifying the content requirements of a notice of complaint. Thus, an intermediary will not have to forward a notice or remove content on a bare allegation of defamation. This is not intended to weigh down a notice with legalese. Rather, the intermediary can provide a complaints form modelled on the various elements of defamation and its defences. Through model language, complainants could be provided with a path to understand and diagnose their problem and frame their complaint, and intermediaries would have an avenue to dismiss claims that fail to meet the notice requirements. This approach of diagnosing, containing, and resolving disputes has been successfully used by British Columbia’s Civil Resolution Tribunal, an online court for small claims and similar matters.65 61  Canadian Copyright Act (n. 10). 62  See also the Manila Principles (n. 57) principle 5. 63  Canadian Copyright Act (n. 10). 64  Bridy and Keller (n. 32) 23. 65  See Civil Resolution Tribunal and discussion by one of the project team members, Darin Thompson, ‘The Growth of Online Dispute Resolution and Its Use in British


Fourthly, a critical issue is whether intermediaries should have discretion to refuse to forward a notice or remove content. The lack of discretion in the Copyright Act meant ISPs had to forward notices they knew were illegitimate. The problem with such discretion is that it once again places intermediaries in the role of arbiter, assessing the merits of a claim. We recognize that intermediaries are in a good position to identify repeat complainants, bulk requests, specious claims, and so on, and propose that intermediaries should have discretion in the limited circumstances where the complainant is vexatious. Fifthly, we recommend, with some hesitation, that intermediaries should be permitted to charge a small administrative fee to process complaints. We acknowledge that large platforms currently do not charge to process complaints. Further, caution must be exercised to avoid an exploitative business model.66 That said, a fee would serve the regulatory function of dissuading casual complainers, and address the administrative burden, especially for small and medium-sized companies. In 2018, the Supreme Court of Canada upheld recovery by an ISP of its reasonable costs of complying with a Norwich order.67 Further, the notice-and-notice framework from which our proposal draws inspiration enables the creation of regulations concerning fees.68 At the moment, however, ISPs are precluded from charging a fee because such regulations have not been enacted.69 Potential uses of such a fee invite further scrutiny. Suggestions made to us include directing the fee only to small and medium-sized companies and/or to an organization that helps online abuse complainants, or holding it in escrow and returning it to the complainant if content is removed.70 Finally, since some content will be removed without a court order, due process is a crucial aspect of intermediary content-restriction practices. 
The Manila Principles71 emphasize that content-restriction practices must provide due process. While some platforms provide elements of due process in their complaints processes, others do not. The voluntary nature of this ordering is unsatisfying and risks disproportionately interfering with the right to freedom of expression.72 Rather, private remedial frameworks should be mandated to have certain characteristics modelled on due process principles. In practice, this means users should have the right to be heard, the right to hear the case against them, the right to appeal or review of a decision, reinstatement of content when

Columbia’, Civil Litigation Conference, British Columbia (2014); Ethan Katsh and Orna Rabinovich-Einy, Digital Justice: Technology and the Internet of Disputes (OUP 2017) 151–60. 66 See AT v Globe24h.com 2017 FC 114 (Can.). 67 See Rogers Communications Inc. v Voltage Pictures LLC 2018 SCC 38 (Can.). See also Cartier International AG and others v British Telecommunications Plc and another [2018] UKSC 28 (UK). 68  See Canadian Copyright Act (n. 10) s. 41.26(2). 69  Indeed, the fact that these regulations have not been enacted created the foundation for the case in Rogers (n. 67). 70  Many thanks to the participants of RightsCon 2018 for their suggestions. 71  The Manila Principles (n. 57) emphasize the need for clear and unambiguous content-restriction practices that provide due process; see principles 3, 5, and 6 in particular. 72  I examine the impact of voluntary agreements by intermediaries on freedom of expression in more depth in Emily Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (CUP 2015), in particular chs 3 and 6.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

A Canadian Perspective    455 appropriate, and provision of reasons. In the context of NN+, due process would require that the third party subject to the complaint should have an opportunity to dispute the allegations in the notice, that reasons should be provided to the complainant for an intermediary’s decision to remove or not remove content, and provision of a put-back procedure, and communication to the public, through a flag or similar, that content is challenged and/or removed.73 This should be coupled with encouragement of corporate responsibility through reporting requirements for content-restriction procedures.74

3. Notice-and-Notice-Plus Beyond Defamation Law

The NN+ model was designed with the law of defamation in mind. A common question is the suitability of NN+ for other types of unlawful speech. I recognize the benefits of general intermediary liability models in providing certainty to intermediaries, and that unlawful speech is often not neatly siloed into single causes of action. Speech can be, for example, defamatory, privacy-invasive, and hateful all at once. Imagine the impracticality of a codified regime that handles defamation disputes but not invasions of privacy, where a post on Instagram, for example, is a sensitive private photo of a person that also presents that person in a light that is false and damaging. The complainant would have a codified regime for the defamation complaint, but not for the invasion of privacy. This would replicate the illogical nature of the CDA and DMCA regimes, where a complainant must hope that a copyright complaint can be made to take down a post even though the core complaint is that the content is unlawful in some other way, such as hate speech or non-consensual distribution of intimate images (NCDII). However, strategic decisions based on a confluence of conflicting or overlapping laws are the reality of giving legal advice in general and, in particular, in the area of speech regulation. Further, something is lost in generalist regimes, namely responsiveness to the specific harm suffered. For example, defamation and privacy overlap when the harm suffered is reputational, but the underlying logic of the two causes of action is different. Defamation inevitably involves a public element, as the defamation must be communicated to a third party, while the harm of privacy can be entirely private in nature, where the invasion of privacy itself constitutes the cause of action.
As much as a generalist model would provide certainty, it does not logically follow that this value outweighs the weaknesses of stripping these causes of action of their defining features. That said, the high-volume and low-value nature of these types of online harms creates a problem of access to justice, where most complainants do not have access to a forum to resolve their dispute beyond the terms and conditions of the platform they use.75 Perhaps disappointingly, my conclusion as to the deployability of NN+ to other forms of unlawful speech is 'it depends'. Ultimately, NN+ best reflects the particular issues that defamation law raises. Aspects of NN+ are suitable for other types of speech, in particular the shift away from liability towards responsibility, and regulatory strategies that seek to inject due process into content-restriction practices. This exercise, albeit at this stage only sketching the issues for the reader, illuminates the importance and challenges of codifying how private bodies handle complaints, and more specifically illustrates what makes NN+ unique, its limitations, and its possibilities.

73 These discursive solutions have been suggested by several scholars, e.g. Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (HUP 2015) ch. 3; David Ardia, 'Reputation in a Networked World: Revisiting the Social Foundations of Defamation Law' (2010) 45 Harv. Civil Rights-Civil Liberties L. Rev. 261; Alastair Mullis and Andrew Scott, 'Reframing Libel: Taking (All) Rights Seriously and Where It Leads' (2012) 65 N. Ir. Legal Q. 5, 19–21.
74 This is observable in other industries, e.g. California Transparency in Supply Chains Act (2010), Cal. Civ. Code, s. 1714.43 (US) and the Modern Slavery Act 2015 (UK). See also the Corporate Accountability Index on Ranking Digital Rights.

3.1  NN+ Requires the Speech to Be Unlawful but Other Forms of Speech Are Harmful Too

A critical hurdle is what triggers an NN+ framework. Our proposal is targeted at a particular form of unlawful speech, defamation. However, these platforms host a wealth of harmful speech, some of which is unlawful and some of which is legal but offensive. The roots of NN+ in defamation law invite reflection on other similar types of online abuse that may or may not be legal. For example, many forms of bullying are harmful, but lawful. It is equally legal to hold and communicate racist thoughts as long as the communication does not amount to what the law has deemed hate speech. There is no doubt that these forms of speech are harmful, but the rule of law requires a legal basis for content removal. While this statement might appear obvious, the reality is that speech harms are so prevalent online that the pressure to remove greater swathes of content strains the rule of law and has inspired a body of literature extolling the virtues of offensive speech.76 Indeed, this point was brought home by the former United Nations Special Rapporteur on freedom of expression, Frank La Rue. He reminded us that speech can only be restricted based on accepted international human rights principles, namely that the limitation must be provided by law (predictable and transparent), serve one of the protected purposes, namely reputation, national security, public order, health, or morals (legitimacy), and be necessary and the least restrictive means to achieve the aim (proportionality).77 He then identified certain types of speech that are unlawful and may be restricted, listing child pornography, hate speech, defamation, incitement to genocide, and incitement to terrorism.78

This means that NN+ cannot be the framework, at least mandated through legislation, to regulate many forms of harmful speech that defamation resembles. Instead, management of such content by the platforms can be encouraged through legislation as a form of meta-regulation, such as through reporting requirements or safe harbours. Indeed, the latter approach is evident in CDA section 230, however flawed, immunizing intermediaries from liability for moderating content on their platforms. Through community pressure and similar forces, many of the large platforms have developed their own governance strategies,79 creating the laws of Facebook or Twitter.80 For example, Twitter was heavily criticized for modelling its Twitter Rules on the First Amendment, such that threats of violence were removed only in narrow circumstances where the threat was direct and specific. In 2015, it changed its rules to prohibit promotion of violence more broadly.81 However, this kind of shadow regulation privatizes censorship, which is particularly concerning as participation in our public sphere is increasingly experienced in these online spaces.82 To go forward we need to acknowledge the difficulty of expecting companies to moderate harmful speech on their platforms while criticizing them for removing lawful speech. We also need to acknowledge that many kinds of harmful speech are currently legal, and perhaps the answer is to revisit speech laws to better address online abuse. The ways that private companies regulate speech have been the focus of this author's past scholarly work, but are beyond the scope of this chapter. They are acknowledged here to tease out the relationship between NN+ and the other ways that speech is regulated in online spaces.

75 See Laidlaw (n. 12).
76 See e.g. Amal Clooney and Philippa Webb, 'The Right to Insult in International Law' (2017) 48 Colum. Hum. Rts L. Rev. 1. Useful introductions to thinking about harmful speech are Robert Faris and others, 'Understanding Harmful Speech Online', Berkman Klein Centre Research Publication no. 2016–21 (December 2016); and regarding the right to offend, Michael Bot, 'The Right to Offend: Contested Speech Acts and Critical Democratic Practice' (2012) 24 L. & Literature 232 and Francesca Klug, 'Freedom of Expression Must Include the License to Offend' (2006) 1 Religion & Hum. Rts 225.
Further, as will be evident in the case studies of terrorist content and hate speech, limiting NN+ to unlawful speech sounds attractive but, in practice, is challenging. For example, the line between offensive speech and hate or terrorist speech can be difficult to discern. Since the basis of NN+ is that intermediaries do not assess the merits of a complaint, there are two options. First, the intermediary must begin assessing content and be provided with wider discretion to refuse to pass on notices it considers illegitimate (something less than the vexatious-litigant threshold we pinpointed in our proposal). This is no longer NN+, but a bespoke framework with some similarities to NN+. The second option is that NN+ is implemented without modification. In this case, the intermediary does not assess the content and continues to pass on notices, with the risk that if the third party does not defend his or her post, any complaint of extremist speech risks content removal. We recommend NN+ for defamation because we are comfortable with how it balances the right to free expression against the right to reputation, even though, in limited circumstances, content might be removed without a court order. This balance might not be struck the same way when assessing other types of speech.

77 La Rue (n. 8) para. 24.
78 ibid. para. 25.
79 See Kate Klonick, 'The New Governors: The People, Rules, and Processes Governing Online Speech' (2018) 131(6) Harv. L. Rev. 1599.
80 See Emily Laidlaw, 'What is a Joke? Mapping the Path of a Speech Complaint on Social Networks' in Lorna Gillies and David Mangan (eds), The Legal Challenges of Social Media (Edward Elgar 2017).
81 See Twitter Rules.
82 See discussion in Rikke Frank Jørgensen, 'What Platforms Mean When they Talk about Human Rights' (2017) 9(3) Policy and Internet 280, 283–4; Jillian York, 'Policing Content in the Quasi-Public Sphere' (OpenNet Initiative, September 2010).

3.2  Speech Regulation for Which NN+ is Clearly Unsuitable

We can dispense with two types of harmful speech quickly. In my view, NN+ is clearly unsuitable to address NCDII. Even the Marco Civil treats NCDII differently, providing that an intermediary risks liability for invasion of privacy if it fails to remove such content upon notice.83 NN+ is meant to strike a balance between free speech and reputation when it is hard to know without a trial whether speech is lawful or valuable. The same balance should not be struck in relation to NCDII. NCDII is low-value speech that can readily be removed without significant risk to freedom of expression. Further, the factual and legal issues will generally be less difficult in cases of NCDII than in defamation. Defamation requires determinations as to truth, whether communication was responsible, and so on. The only issue likely to arise with most NCDII is whether distribution was consensual. Finally, the risk of harm that NCDII poses is extreme. Given this, a takedown should be possible based on little more than an allegation by a plaintiff that the distribution was non-consensual,84 a threshold much lower than that in NN+.

Equally, NN+ is unsuitable to address the problem of fake news,85 largely because it does not offer the right type of solution to the problem at hand. The distribution of fake news is not in itself unlawful in Canada. Instead, fake news may constitute the torts of defamation or invasion of privacy, or the crime of hate speech. Canada once had a crime of spreading false news, but it was held to be unconstitutional in 1992.86 Therefore, there is no law to which to attach NN+ unless we route it through other traditional legal categories. Further, unlike defamation law, there is no individual whose rights are implicated and whose responsibility it would be to request that a notice be sent. The problem of fake news is a more fundamental challenge of false political information impacting democracy. The dangers of fake news are unwieldy for a regulatory model such as NN+. Solutions to fake news instead focus on harnessing the strength of technology as a regulator in cyberspace,87 and offer technical and discursive solutions that will prompt normative changes in online communities. In practice, this is deployed through fact-checking and media literacy.88 For example, one option is to place a banner or other warning on the site that the story is alleged to contain false statements of fact. Another possibility is one that Facebook has adopted: people can flag content as false, and some of those flagged stories are then vetted by journalist fact-checkers. If a story is false, those who shared it are notified and its distribution is reduced. However, as Alice E. Marwick has identified, such approaches wrongly presume that people are passive vessels who will change their minds once provided with correct facts, and that media literacy will somehow counteract distrust of the media.89 The complexity of resolving the fake news problem is beyond the scope of this chapter, and certainly beyond the scope of NN+.

Other types of harmful speech, namely terrorist and hate speech, strike closer to the core function of an NN+ regime, but the criminal nature of the complaints makes them challenging candidates. They make useful case studies: terrorist content illustrates the important role of courts in making determinations about content restriction, and hate speech illustrates how definitional struggles over what constitutes hate speech complicate regulatory strategies to curb its dissemination.

83 Marco Civil (n. 9) Art. 21.
84 This may be controversial and it is beyond the scope of this chapter to fully develop a model approach to NCDII. Hilary Young and I are co-leading a project for The Uniform Law Conference of Canada developing a model law for addressing NCDII: Hilary A.N. Young and Emily Laidlaw, 'Nonconsensual Disclosure of Intimate Images Discussion Paper', Uniform Law Conference of Canada (August 2018). The point here is simply that NN+ does not strike the right balance for dealing with NCDII.
85 I note the definitional challenge of fake news, recognizing, as Alice Marwick argues, that a term that better captures the false and misleading information being spread online is 'problematic information'. See Alice Marwick, 'Why Do People Share Fake News? A Sociotechnical Model of Media Effects' (2018) 2 Geo. L. Tech. Rev. 474, 476–81. Further, fake news is being examined from the angle of the legal basis of a complaints process modelled on NN+. This puts aside some of the more complicated aspects of fake news, such as implications for democracy, cybersecurity, data protection, or elections law, to name a few.
86 See R v Zundel [1992] 2 SCR 731 (Can.).

3.3  Case Study: Terrorist Content

Unlike the EU, with its multitude of hard and soft law obligations90 on intermediaries regarding removal of extremist content, Canada has not specifically addressed the liability of intermediaries in legal instruments, guidance, or discussions, although it has created a content-takedown mechanism.91 In Europe, extremist content is captured by the ECD,92 and thus the obligations of intermediaries operate as they do for any type of unlawful content, including defamation. However, in recent years Europe has intensified its focus on terrorist-related content, among other things, with specific content-removal obligations on Member States in the Terrorism Directive,93 direction for intermediaries to remove terrorist content within one hour,94 emphasis on automated solutions,95 and a proposal, at the time of writing, for new regulations that broaden the scope and nature of intermediaries' roles, including potential monitoring of content.96 The Terrorism Directive broadly takes aim at 'public provocation to commit a terrorist offence',97 including, directly or indirectly, the glorification of terrorism or advocating its commission.98 The latter definition creates space for tackling the types of communications that intermediaries host. In Canada, it has been a crime since 2015 to advocate or promote the commission of a terrorist offence,99 which will likely soon be amended to the narrower offence of 'counselling the commission of a terrorism offence'.100 The Criminal Code also explicitly provides an avenue to obtain a court order for removal of terrorist propaganda from the 'computer system's custodian'.101 As Scott Newark describes, this is Canada's 'terrorism-propaganda "takedown" tool'.102 Indeed, it has been the basis for obtaining court orders for content removal from intermediaries.103 However, obtaining a court order for content removal envisions intermediaries in a vastly different role than in a typical NTD regime, where the intermediary is tasked with assessing the merits of a claim and risks liability for the underlying wrong.

87 See Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books 1999) and Lawrence Lessig, Code: Version 2.0 (Basic Books 2006).
88 See Marwick (n. 85) 507.
89 ibid. 507–9. See e.g. CBC News, 'Meet Montreal-based journalist fact-checking false news on Facebook' (CBC News, 10 October 2018); Facebook, 'Third-Party Fact-Checking on Facebook'.
90 See discussion of soft law instruments in Joan Barata, 'New EU Proposal on the Prevention of Terrorist Content Online: an Important Mutation of the E-Commerce Intermediaries' Regime', Stanford CIS White Paper (12 October 2018) 2. E.g. see European Commission, 'Code of Conduct on Countering Illegal Hate Speech Online' (2016).
91 See Criminal Code, RSC 1985, c. 46, s. 83.223 (Can.).
92 See Directive 2000/31/EC (n. 3).
93 See Directive 2017/541/EU of the European Parliament and of the Council of 15 March 2017 on combating terrorism and replacing Council Framework Decision 2002/475/JHA and amending Council Decision 2005/671/JHA [2017] OJ L88/6.
94 See 'Google, Facebook, Twitter face EU fines over extremist posts' (BBC, 12 September 2018).
95 See European Commission Communication, 'Tackling Illegal Content Online—Towards an enhanced responsibility of online platforms' COM(2017) 555 final; European Commission Press Release, 'Fighting Terrorism Online: Internet Forum pushes for automatic detection of terrorist propaganda' (6 December 2017).
96 See European Commission, 'Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online' COM(2018) 640.
97 Directive 2017/541/EU (n. 93) Art. 5.
98 ibid.
99 Criminal Code (n. 91) s. 83.221.
100 Bill C-59, An Act respecting national security matters, s. 143 amending s. 83.221.
101 See Criminal Code (n. 91) s. 83.223.
102 Scott Newark, 'C-59: Building on C-51 Towards a Modern Canadian National Security Regime', Macdonald-Laurier Institute Research Paper (October 2017). See Colin Freeze, 'Federal prosecutors using expanded powers to target terrorist propaganda web content' (The Globe and Mail, 16 September 2018).
103 See Freeze (n. 102). It is possible that in extreme cases an intermediary could be charged with advocating or promoting a terrorist offence. Indeed, the same holds true for other terrorist offences. It is a crime to participate or contribute to, directly or indirectly, a terrorist group (s. 83.18) or facilitate a terrorist activity (s. 83.19(1)).

The dynamics of terrorist content are fundamentally different from those of defamation, such that the NN+ framework is not appropriate to attach to such criminal offences. Indeed, in my view Canada's mechanism of a court order for content removal is the optimal approach to balance the right to freedom of expression against other rights. This is because defamation concerns individual rights, while disseminating terrorist content is a crime. Defamation claimants' goals are essentially to correct the reputational harm through apologies, retraction, and sometimes content removal.104 That is not the goal of criminalizing terrorist-related speech, which is to deter crime and prevent the creation of spaces for radicalization and incitement.

In addition, the underlying nature of terrorist-related content invites greater scrutiny from courts. One of the appeals of NN+ is that it seeks to remove intermediaries from the role of assessing the merits of content complaints. It does so by providing clarity as to the steps an intermediary should take and provides an additional mechanism for content removal in narrow circumstances. In my view, terrorist propaganda should always be assessed by a court. For one, as Craig Forcese and Kent Roach identify, there is a difference between radicalization and radicalization that leads to violence.105 While the latter is more likely enabled through personal relationships,106 the internet offers spaces for the echo chamber, fuelling 'moral outrage'.107 They summarize:

[w]hile the Internet alone may not be a cause of radicalization to violence, it may serve as a 'driver and enabler for the process of radicalization'; a forum for radicalizing propaganda; a venue for social networking with the like-minded; and then, a means of data mining during the turn towards violence.108

More often, the kinds of comments posted fall into the murkier category of 'radical boasting',109 praising past acts of violence and endorsing future ones ('a form of chest-thumping, far removed from operational intent or ability').110 The task of assessing whether content removal is appropriate and achieves the goals sought (reduced supply, reduced radicalization, or otherwise educating users) is a complicated endeavour best left to the courts.111

Finally, the mechanics of NN+ are unsuitable for terrorist propaganda. Under NN+, one of the action requirements for intermediaries when they receive a complaint would be to notify the third party of the complaint and invite a response. This would be inappropriate for terrorist propaganda. In part, some of the complaints are made by authorities monitoring such feeds and, therefore, notification alerts the third party unnecessarily (although content takedown would also do this directly or indirectly). In general, the mechanism would not reduce or prevent radicalization. In the case of terrorist content, the greater risk of private regulation is driving the content to another platform or to the dark net.112

104 See Laidlaw (n. 12).
105 See Craig Forcese and Kent Roach, 'Criminalizing Terrorist Babble: Canada's Dubious New Terrorist Speech Crime' (2015) 53(1) Alberta L. Rev. 35, 40.
106 ibid. 42, 43.
107 ibid. 43.
108 ibid. 44.
109 ibid. 49.
110 ibid. Note: such speech would likely be unlawful in Europe as glorification of terrorism, but not in Canada; ibid. 57.
111 ibid. 44–8, for exploration of factors to consider.

3.4  Case Study: Hate Speech

Hate speech is also a form of extremist content; however, it is sufficiently different from terrorist-related content to analyse separately. There are some similarities. As before, while in Europe intermediary obligations for hate speech are captured in the ECD safe harbour, Canadian law is silent as to intermediary liability for hate speech. The key struggle for deployment of NN+ is that hate speech laws are relatively disconnected from how hate is expressed online. In some ways, this is similar to the definitional issues with the promoting-terrorism offence. The line between disseminating terrorist propaganda, hate speech, radical opinions, jokes, hyperbole, or mere offence is sometimes unclear.113 This matters in relation to NN+, because it invites the more fundamental questions of how one would narrowly target hate speech, given the definitional issue, and whether NN+ would help achieve the goal of reducing the spread of hate speech online. The first issue is identifying the legal basis of NN+.
In Canada, hate speech is an offence under the Criminal Code for public incitement of hatred or wilful promotion of hatred against an identifiable group,114 and a judge may order that such content be removed by an internet provider.115 At one time, a complaint could be made to the Canadian Human Rights Commission, including specifically for hate speech communicated online, but the provision was repealed in 2012.116 The provision is notable because it created a level of illegality below the criminal provision, whose (over)breadth was the reason for the decision to repeal.117 While some provinces have human rights legislation with hate speech provisions, it is uncertain whether a complaint about hate speech on the internet, which is federally regulated, can be made to a provincial tribunal.118 Provincial legislation uses the same general language to define hate speech, focusing on publications that are 'likely to expose a person or class of persons to hatred or contempt'.119

112 Forcese and Roach discuss the risk of speech going underground in the context of radicalizing radical boasting rather than private ordering as we discuss it here, ibid. 57–8.
113 ibid. 49 (discussing the 'expression spectrum').
114 Criminal Code (n. 91) s. 319(1) and (2).
115 ibid. s. 320.1.
116 See Canadian Human Rights Act, RSC 1985, c. H6, s. 13 (repealed 2013, c. 37, s. 2).
117 See Richard Moon, Report to Canadian Human Rights Commission Concerning Section 13 of the Canadian Human Rights Act and the Regulation of Hate Speech on the Internet (October 2008); Canadian Human Rights Commission, Special Report to Parliament: Freedom of Expression and Freedom from Hate in the Internet Age (June 2009).
118 See Elmasry and Habib v Roger's Publishing and MacQueen (No. 4) 2008 BCHRT 378 (Can.).
119 Quoting s. 3(1)(a), Alberta Human Rights Act, RSC 2000 c. A-25.5 (Can.).


The Supreme Court of Canada has elucidated the meaning of hate speech in such laws as exposing a group to detestation and vilification, which 'goes far beyond merely discrediting, humiliating or offending the victims'.120 Because we have a two-tier system in Canada, with internet comments likely only captured under criminal law, NN+ could only be triggered for a specific type of hate speech, namely speech that publicly incites hatred or wilfully promotes hatred. This suggests an intermediary would have no choice but to assess the merits of a complaint, at least notionally, to identify whether it is capable of meeting the criteria to be passed on. Otherwise, any complaint that something is hate speech would risk content takedown. To put it another way, without assessing the merits of a complaint, it is unclear whether the speech at issue is of the type that forms the legal basis for the NN+ regime. Some of this can be addressed through the design of the complaints system (prompting a user, for example, to identify in the complaint that the content complained about promotes hatred of a group). Generally, however, the intermediary would be tasked with some assessment of content, defeating the purpose of NN+.

Even if the above can be surmounted, there are key differences between hate speech and defamation. Both are concerned with an underlying dignitary harm. However, the focus of hate speech on dignitary harm to a group speaks to a different function. Translated to the role of intermediaries, it might call for different handling of complaints concerning hate speech.
As the Supreme Court of Canada noted in R v Keegstra,121 the consequences of hate speech are that the group might retreat from public places or interactions with other groups, or change their behaviour to 'blend[] in with the majority'.122 The court continues: 'such consequences bear heavily in a nation that prides itself on tolerance and the fostering of human dignity through, among other things, respect for the many racial, religious, and cultural groups in our society'.123 NN+ would thus function to repair the community harm of hate speech in addition to the individual harm, which is a wider mandate than envisioned in its design.

Further, most hate speech sits at the margins of legality, and some of it is communicated in a way designed to bypass hate speech rules (whether the terms and conditions of a site or the criminal law). Consider the Daily Stormer style guide,124 which instructs the use of code words that are understood as hate but would not be easily pinned down as hate speech: 'Generally, when using racial slurs, it should come across as half-joking' and 'women can be called slut, whore, bitch . . . '.125 Indeed, platforms occasionally remove content as hate speech that is intended to illuminate the problem itself, such as the removal of Black Lives Matter posts exposing the hateful communications they receive.126 Further, so much is dependent on local knowledge and context, which challenges intermediary assessment of hate speech without local involvement (leading platforms such as Facebook to increasingly employ local content moderators).127 Germany's NetzDG highlights the struggle of regulating this kind of speech, where the line between unlawful speech and mere offence is not easy to identify.128

Consider also platforms that become spaces predominantly for the spread of extremist views, whether through solicitation of such content or hands-off content-moderation practices. Gab, for example, espouses a free speech ethos.129 It was frequently used by the Pittsburgh synagogue shooter. Shortly before his rampage, he posted: '[Hebrew Immigrant Aid Society] likes to bring invaders in that kill our people. I can't sit by and watch my people get slaughtered. Screw your optics, I'm going in.'130 This statement would likely not be criminal hate speech in Canada, nor would it necessarily fall foul of the community guidelines of mainstream platforms such as Facebook. Rather, it is the collective force of a group congregating around a target that is the act of hate, and it is this that the law, community guidelines, and other soft laws struggle to pin down. The responsibilities of intermediaries in this space are not answered through NN+.

In the previous examples, intermediaries were dominant figures in assessing the merits of complaints. A key principle of NN+ is the opposite: that private parties should not be the arbiters of what is defamatory. In some ways, that holds true for assessment of hate speech; a private party does not have the legitimacy to restrict freedom of expression without a court order. However, the individual nature of defamation complaints, compared to the group harms of hate speech, means intermediaries perhaps have a greater regulatory role to play than with defamation.

120 Saskatchewan (Human Rights Commission) v Whatcott [2013] 1 SCR 467, para. 41 (Can.).
121 R v Keegstra [1990] 3 SCR 697 (Can.).
122 ibid.
123 ibid. See also Jeremy Waldron, The Harm in Hate Speech (HUP 2012) (arguing that the public call to action with hate speech laws is the assurance one will be treated decently).
124 See e.g. Andrew Marantz, 'Inside the Daily Stormer's Style Guide' (The New Yorker, 15 January 2018); Ashley Feinberg, 'This is the Daily Stormer's Playbook' (Huffington Post, 13 December 2017).
125 Feinberg (n. 124).
Despite the concerns above, there are three benefits to an NN+ regime. First, it would concretize the role of the intermediary. Secondly, it would create minimum standards codifying what a site must do when it receives a content complaint and providing a third party the opportunity to defend continued circulation of the content. Thirdly, a platform could still craft stricter rules for users through its terms and conditions (however flawed or controversial).131

126  See The Santa Clara Principles, ‘An Open Letter to Mark Zuckerberg’; Caroline Haskins, ‘86 Organizations Demand Zuckerberg to Improve Takedown Appeals’ (Vice, 15 November 2018).
127  See Alexis Madrigal, ‘Inside Facebook’s Fast-Growing Content-Moderation Effort’ (The Atlantic, 7 February 2018).
128  See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.).
129  See Gab, describing itself as ‘[a] social network that champions free speech, individual liberty and the free flow of information online. All are welcome.’
130  Brian Stelter, ‘What’s Gab, the social platform used by the Pittsburgh shooting suspect?’ (CNN, 27 October 2018).
131  This puts aside the concerns related to privatization of human rights. See Laidlaw (n. 72).

A Canadian Perspective    465

The greatest potential, I suggest, is not NN+ itself, but some of its underlying logic, namely, that intermediaries should not be liable for the wrongdoing of third parties, but that rules should be codified for how intermediaries deal with such content based on principles of due process. It may be that liability is carved out for the ‘worst actors’132 whose platforms are primarily devoted to spreading hate or radicalizing users. And it may be that the mechanics of what an intermediary should do with a complaint about hate speech are different than for defamation. For example, maybe intermediaries should assess the merits of a hate speech complaint. However, what is compelling is the way that a regulatory process can prompt responsibility.

Indeed, a key function of NN+ is to prompt social responsibility. Platforms should manage their spaces in compliance with the duty to respect human rights.133 States can create the playing field for companies to do this by codifying expectations for content-restriction procedures. In practice, this would mean that if a complainant concludes that a site failed to handle a complaint as required, he or she could make an application to a court for imposition of a fine (if the NN+ process is strictly followed). The court, in determining the amount or suitability of this fine, could assess the quality of a company’s content-management practices. In this way, regulation can prompt social responsibility, leaving undefined how a platform achieves the goal.134 Tarleton Gillespie makes a similar argument in relation to CDA section 230—that the safe harbour should be ‘paired with public obligations’,135 namely rules about content moderation.
His list is extensive compared to my focus on due process here (including welcome recommendations for regulatory oversight, a public ombudsman to communicate with the public, auditing, etc.).136 Indeed, NN+ would not meet those principles, as Gillespie rather argues that ‘[m]oderation is the essence of platforms’.137 Nevertheless, there is common understanding that a broad safe harbour should be matched with rules of accountability, the shape of which is left to be debated.

4. Conclusions

With NN+, intermediary responsibility shifts from liability to codification of rules for handling defamation complaints. The risk to the intermediary is not liability for the

132  Danielle Citron used this term in her exploration of potential amendments to s. 230. See Citron (n. 40) 177.
133  See United Nations, Human Rights, Office of the High Commissioner, ‘Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect, and Remedy” Framework’ (2011) A/HRC/RES/17/4.
134  See Colin Scott, ‘Reflexive Governance, Meta-Regulation, and Corporate Social Responsibility: The Heineken Effect’ in Nina Boeger, Rachael Murray, and Charlotte Villiers (eds), Perspectives on Corporate Social Responsibility (Edward Elgar 2008) 174–5.
135  Tarleton Gillespie, ‘Platforms are not Intermediaries’ (2018) 2 Geo. L. Tech. Rev. 198, 213–16.
136  ibid. See Internet Rights Governance Model in Laidlaw (n. 72) ch. 6.
137  Gillespie (n. 135) 201.

underlying wrong, but rather a statutory fine. Further, under NN+ the intermediary does not assess the merits of complaints, but rather passes on notices of complaint to third parties and only removes content if the third party fails to respond. The question is whether NN+ can viably be deployed against other kinds of unlawful speech. The exploration of this potential through select case studies shows that NN+ is largely a subject-specific proposal, but that its underlying logic may be more widely deployed. This logic, I suggest, is twofold: (1) intermediaries should not be liable for the wrongdoing of third parties except in unusual circumstances; and (2) rules should be codified for how intermediaries manage content based on principles of due process. These rules might differ depending on the type of speech involved.

Certain themes emerged from the analysis. First, the legitimacy of NN+ derives from its roots in unlawful speech, namely defamation. The problems pervasive online encompass a much wider category of harmful speech, which may or may not be lawful. Indeed, even where an unlawful basis is identified (hate speech, terrorist speech, privacy invasion, etc.), it is easier said than done to piggyback NN+ onto these types of speech without a risk of over-removal or enlisting intermediaries in the role of assessing the merits of complaints. While defamation is no easier to assess, the individual nature of defamation complaints makes them easier to handle through an NN+ regime.

Secondly, some types of speech are clearly unsuitable for NN+. Non-consensual disclosure of intimate images can be assessed with less difficulty than defamation complaints, and the low value of the speech balanced against the risk of harm invites a lower threshold for content removal than is available through NN+. Fake news is similarly not suited to NN+, because the legal basis of such a regime is murky, and the social harm of fake news is not resolved through NN+.
Terrorist content, ultimately, is not suitable for NN+ either, because the criminal nature of the charge combined with the purpose of removing such content (e.g. reduce supply, reduce radicalization, and educate users) is best left to courts. Thirdly, some forms of speech might benefit from a bespoke framework modelled on the underlying logic identified previously. One of the aspects missing from the content-restriction practices of many platforms is due process, and legislation can be used to shore up these practices while leaving the precise rules undefined. Oversight would occur through reporting requirements, or through review of platform procedures by courts or a similar oversight body. Ultimately, the analysis shows that generalizing frameworks is problematic given the complexity of the online harms that platforms manage. However, we must acknowledge the risks to innovation created by fragmented regulatory systems and the barriers to access to justice they create for users. Any effort to redesign intermediary liability regulatory frameworks might benefit from the underlying logic of NN+, and should test this logic against principles of innovation and access to justice.

Chapter 24

Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation

Tarlach McGonagle*

In Europe, as elsewhere in the world, the perennial political and scholarly debates about the regulation of expression continue unabated. Fuelled by an incessant stream of high-profile controversies, those debates focus increasingly on online expression. Do international human rights standards also apply (fully) in the online environment? Do they need to be rethought and repurposed? When does regulation for free expression pass the tipping point and tumble into regulation of free expression? Is it necessary, desirable, or even appropriate to have specific regulatory regimes for different types of media platforms in the online environment?

Questions such as these have been asked—and answered—repeatedly since the internet first emerged and progressively became a ubiquitous and indispensable medium for communication. Yet, in an environment that is so dynamic, it is important to keep asking—and answering—these questions, because it is not only the technologies themselves that change rapidly, but also the public’s understanding of, trust in, and use of those technologies. These questions hover around the present chapter, which has been written at a moment of growing political and public pushback against the initial enthusiasm for, and uptake of, social networking, search, and other services offered by the ‘big tech’ companies.

The chapter explores ongoing shifts in the geometrical patterns of speech regulation in Europe. It first sets out the regulatory framework, which comprises an array of

*  This chapter repurposes, in places, some earlier work by the author on similar themes.

© Tarlach McGonagle 2020.

intertwined legally binding and political standards adopted by the Council of Europe and the European Union. It then explains how this framework has given rise to, and indeed encouraged, particular geometrical patterns in European lawmaking and policymaking. Those patterns have been shaped by an awareness that the mass media have been powerful actors in public debate, and that their freedom must be safeguarded—within certain agreed limits. They also demonstrate a concern that regulation should not curb the development of new information and communications technologies and services and new markets for such technologies and services.

The chapter’s next focus is the recent and ongoing shift in existing regulatory patterns, which entails a significant move towards foisting greater liability and responsibility on internet intermediaries for illegal third party content hosted by them or distributed via their services or networks. There is an emergent preference for self-regulatory codes of conduct as a regulatory technique. However, as this chapter will argue, the relevant European codes of conduct are less voluntary than they may ostensibly seem. Whereas in the past, the encouragement by public authorities of self-regulation appeared to indicate a certain level of trust in a given sector to ‘get its act together’ and to ‘get its own actors to step up to the plate’, recent codes of conduct seem to have a coercive undertone. The subtext appears to read: if the codes of conduct are not adequately adhered to, sanctions will follow.

1.  The European Regulatory Framework

1.1  The Council of Europe

Article 10 of the European Convention on Human Rights (ECHR) is the centrepiece of protection for the right to freedom of expression in Europe. The European Court of Human Rights is the adjudicatory body formally tasked with the interpretation of the Convention, which binds all forty-seven Member States of the Council of Europe. The structure and scope of Article 10 ECHR are similar to those of Article 19 of the Universal Declaration of Human Rights and Article 19 of the International Covenant on Civil and Political Rights—the two leading provisions guaranteeing the right to freedom of expression in the United Nations’ legal framework.

Article 10 ECHR opens with a broad statement of the right to freedom of expression in its first paragraph. It guarantees a composite right to freedom of expression. The component parts of the right are the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. But no sooner have these freedoms been set out than they are reined in, as the text provides that states shall not be prevented from licensing ‘broadcasting, television or cinema enterprises’. The second paragraph of Article 10 is a traditional claw-back clause.

It provides that the right may be subject to certain limitations based on different enumerated grounds, such as ‘the interests of national security, territorial integrity or public safety’, ‘the prevention of disorder or crime’, and ‘the protection of the reputation or rights of others’. Article 10(2) justifies the permissibility of such limitations by linking them to the ‘duties and responsibilities’ that govern the exercise of the right. The scope of those duties and responsibilities varies, depending on the ‘situation’ of the person exercising the right and on the ‘technical means’ used.1 The Court usually explores the nature and scope of relevant duties and responsibilities not through broad principles, but on a case-by-case basis. It tends to distinguish between different professional occupations, such as journalism, politics, education, and military service. It also tends to distinguish between the perceived reach and influence of different media, such as the printed press, audiovisual media, and internet and social media.2

The Court has by and large interpreted Article 10 expansively and in a way that is faithful to the broad principles of freedom of expression. Its approach is, simply stated: the right to freedom of expression is the rule; any limitations on the right are the exception. When assessing whether an interference with the right to freedom of expression amounts to a violation of the right, the Court applies a standard test. It first establishes whether the impugned measure that has led to the interference with the right to freedom of expression is prescribed by law. It then determines whether the impugned measure pursues a legitimate aim (in the sense of Art. 10(2), see earlier). Thirdly, it assesses whether the impugned measure is necessary in a democratic society, corresponding to a pressing social need.
The measure must furthermore be proportionate to the legitimate aim(s) pursued and the reasons given by state authorities for the measure must be ‘relevant and sufficient’. The Court interprets the adjective ‘necessary’ in a strict fashion. In practice, the Court has sought to interpret Article 10 ECHR in a way that ensures strong protection for freedom of expression and robust public debate. As the Court famously affirmed in its Handyside judgment, information and ideas which ‘offend, shock or disturb the State or any sector of the population’ must be allowed to circulate in order to safeguard the ‘pluralism, tolerance and broadmindedness without which there is no “democratic society” ’.3 Recent case law from the Court suggests that this so-called Handyside principle actually extends to much of the offensive, unsavoury, and vulgar content that is widely available on the internet.

However, a red line marking the outer limits of protected expression can be traced around the contours of hate speech. The term ‘hate speech’ does not appear in the text of the Convention. The Court has been using the term since 1999, but it has never defined it. ‘Hate speech’ typically falls under Article 17—the Convention’s ‘Prohibition of abuse of rights’ provision. Article 17 aims to prevent any state, group, or person from engaging in any activity (including expression) directed at the destruction of any of the rights enshrined in the

1 See Fressoz and Roire v France [GC] App. no. 29183/95 (ECtHR, 21 January 1999) para. 52.
2 See Jersild v Denmark App. no. 15890/89 (ECtHR, 23 September 1994) para. 31.
3 See Handyside v United Kingdom App. no. 5493/72 (ECtHR, 7 December 1976) para. 49.

Convention or the limitation of those rights to an extent greater than is provided for in the Convention. Article 17 can therefore be seen as a safety valve that denies protection to acts that seek to undermine the Convention and go against its letter and spirit. In the past, the Court has applied Article 17 to ensure that Article 10 protection is not extended to racist, xenophobic, or anti-Semitic speech; statements denying, disputing, minimizing, or condoning the Holocaust; or (neo-)Nazi ideas.4 This means that, in practice, sanctions for racist speech do not violate the right to freedom of expression of those uttering the racist speech.

In prima facie cases of hate speech, the Court will apply Article 17 in a straightforward fashion. This usually leads to a finding that a claim is manifestly ill-founded, and the claim is accordingly declared inadmissible. Such a finding means that the Court will not examine the substance of the claim because it blatantly goes against the values of the Convention. That is why Article 17 is sometimes referred to as a ‘guillotine’ provision.5 However, the criteria used by the Court for resorting to Article 17 (as opposed to Art. 10(2)) are unclear, leading to divergent jurisprudence.6 How the term ‘hate speech’ is understood and delineated is very important when it comes to determining what measures the media and internet intermediaries should take to counter types of expression that (may) amount to hate speech.

The Court has developed a corpus of case law from which it has distilled a set of key free expression principles relating specifically to the media and journalists. The Court considers public debate to be of paramount importance for well-functioning democratic societies. It has repeatedly recalled the important contributions that the media, journalists, and—increasingly—other actors can make to public debate.

4  See Tarlach McGonagle, ‘The Council of Europe against online hate speech: Conundrums and challenges’, Expert paper, doc. no. MCM 2013(005) (Council of Europe Conference of Ministers responsible for Media and Information Society, ‘Freedom of Expression and Democracy in the Digital Age: Opportunities, Rights, Responsibilities’, Belgrade, 7–8 November 2013).
5  See Françoise Tulkens, ‘When to say is to do: Freedom of expression and hate speech in the case-law of the European Court of Human Rights’ in Josep Casadevall, Egbert Myjer, Michael O’Boyle, and Anna Austin (eds), Freedom of Expression: Essays in honour of Nicolas Bratza (Wolf Legal Publishers 2012) 284.
6  See Hannes Cannie and Dirk Voorhoof, ‘The Abuse Clause and Freedom of Expression in the European Human Rights Convention: An Added Value for Democracy and Human Rights Protection?’ (2011) 29 Netherlands Quarterly of Human Rights 54–83.
It has recognized that the media: disseminate information and ideas widely and thereby contribute to public opinion-forming; perform a public watchdog role by keeping governmental and other powerful forces in society under scrutiny; and create shared fora in which public debate can take place. It has held time and again that the public not only have the right to receive information about matters of general interest to society, but the media have the duty to impart such information. In order to enable the media to carry out the key roles ascribed to them in democratic societies, the Court has carved out specific freedoms for them, such as the protection of confidential sources, presentational and editorial freedom, freedom to report and comment—including with recourse to exaggeration and provocation. With the advent and growing influence of the internet, the Court has had to figure out how far and how fast the principles it had developed for the media would travel in the 4  See Tarlach McGonagle, ‘The Council of Europe against online hate speech: Conundrums and c­ hallenges’, Expert paper, doc. no. MCM 2013(005) (the Council of Europe Conference of Ministers responsible for Media and Information Society, ‘Freedom of Expression and Democracy in the Digital Age: Opportunities, Rights, Responsibilities’, Belgrade, 7–8 November 2013). 5  See Françoise Tulkens, ‘When to say is to do: Freedom of expression and hate speech in the case-law of the European Court of Human Rights’ in Josep Casadevall, Egbert Myjer, Michael O’Boyle, and Anna Austin (eds), Freedom of Expression: Essays in honour of Nicolas Bratza (Wolf Legal Publishers, 2012) 284. 6  See Hannes Cannie and Dirk Voorhoof, ‘The Abuse Clause and Freedom of Expression in the European Human Rights Convention: An Added Value for Democracy and Human Rights Protection?’ (2011) 29 Netherlands Quarterly of Human Rights 54–83.

online world. It has progressively recognized that the roles that were traditionally the preserve of the media and journalists can also be carried out—to varying degrees—by a growing range of (non-media) actors. Examples include NGOs, academics, whistleblowers, citizen journalists, bloggers, and ordinary individuals.7 The Court has identified a positive obligation for states under the ECHR to create a favourable environment for participation in public debate by everyone and to enable the expression of opinions and ideas without fear.8

To meet the challenge of applying its principles in the digital age, the Court has sought to stand firm on familiar shores, but it has also sought to set sail for and explore new horizons. This has led to an approach that could be described as ‘adaptive replication’. The Court has sought to replicate its media freedom standards in respect of the internet, but in a way that is adaptive to distinctive features of the online environment. After a somewhat slow start, the Court is now steadily developing a corpus of ‘internet’ case law. A cornerstone of that case law is the acknowledgement that the internet ‘has become one of the principal means for individuals to exercise their right to freedom of expression today: it offers essential tools for participation in activities and debates relating to questions of politics or public interest’.9 Thus, a measure resulting in the wholesale blocking of Google sites in Turkey, ‘by rendering large quantities of information inaccessible, substantially restricted the rights of Internet users and had a significant collateral effect’.10

In a communications environment where the internet is of central importance, intermediaries have gained influence and power over the shaping of public debate.
The Court has described them as ‘protagonists of the free electronic media’11 and it has referred to the ‘important role’ played by information society service providers ‘in facilitating access to information and debate on a wide range of political, social and cultural topics’.12 All of this is in line with earlier observations by the Court that the internet is qualitatively different from other media technologies, ‘in particular as regards the capacity to store and transmit information’.13 Nevertheless, the Court is clearly still navigating its way from the shoreline of familiar principles towards the new digital horizons. It still tends to measure new media against the yardstick of print and audiovisual media. As recently as 2013, it found that information on the internet and social media ‘does not

7  See, by way of indicative example, Magyar Helsinki Bizottság v Hungary [GC] App. no. 18030/11 (ECtHR, 8 November 2016).
8  See Dink v Turkey App. nos 2668/07 and four others (ECtHR, 14 September 2010) para. 137. For analysis, see Tarlach McGonagle, ‘Positive obligations concerning freedom of expression: mere potential or real power?’ in Onur Andreotti (ed.), Journalism at risk: Threats, challenges and perspectives (Council of Europe Publishing 2015) 9–35.
9  See Ahmet Yıldırım v Turkey App. no. 3111/10 (ECtHR, 18 December 2012) para. 54.
10  ibid. para. 66 and Cengiz and Others v Turkey App. nos 48226/10 and 14027/11 (ECtHR, 1 December 2015) para. 64.
11  See Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App. no. 22947/13 (ECtHR, 2 February 2016) para. 88 (and para. 69).
12  Tamiz v United Kingdom App. no. 3877/14 (ECtHR, 12 October 2017) para. 90.
13  Editorial Board of Pravoye Delo and Shtekel v Ukraine App. no. 33014/05 (ECtHR, 5 May 2011) para. 63.

have the same synchronicity or impact as broadcasted information’.14 It noted that notwithstanding ‘the significant development of the internet and social media in recent years, there is no evidence of a sufficiently serious shift in the respective influences of the new and of the broadcast media in the [UK] to undermine the need for special measures for the latter’.15 On the other hand, the Court has been willing to explore and accept the importance for free expression online of novel technological features of the new communications environment, such as hyperlinking.16

In the light of the complexity of the current-day communications environment, the Court has underscored the increased importance of the duties and responsibilities that govern the exercise of the right to freedom of expression and the pursuit of journalistic activities.17 It also sees it as a task for states’ authorities to develop a legal (and policy) framework clarifying issues such as liability and responsibility.18

Whereas the Court’s so-called internet case law generally extols the informational abundance and communicative potential of the medium, its judgment in Delfi AS v Estonia, which dealt with harmful aspects of online expression, threw a proverbial spanner in the works.19 In that case, the Estonian courts had held a large online news portal liable for the unlawful third party comments posted on its site in response to one of its own articles, even though it had an automated filtering system and a notice-and-takedown procedure in place. Delfi removed the comments on the same day that it was requested to do so by the lawyer of the person most directly implicated by the comments. However, that was some six weeks after the publication of the article to which the comments reacted.
The Grand Chamber of the European Court of Human Rights held that the national courts’ finding of liability did not violate Delfi’s right to freedom of expression under Article 10 ECHR. The Grand Chamber’s findings were not unanimous, however: Judges Sajó and Tsotsoria penned a lengthy and very strongly worded joint dissenting opinion. The judgment has proved very controversial, particularly among free speech advocates, who fear that such liability would create proactive monitoring obligations for internet intermediaries, leading to private censorship and a chilling effect on freedom of expression. Several criticisms have been levelled at the Delfi judgment. First, the Court took the view that ‘the majority of the impugned comments amounted to hate speech or incitements to violence and as such did not enjoy the protection of Article 10’.20 By classifying the comments as such extreme forms of speech, the Court purports to legitimize the stringent measures that it sets out for online news portals to take against such manifestly unlawful content. The dissenting judges objected to this approach, pointing out that

14  Animal Defenders International v United Kingdom [GC] App. no. 48876/08 (ECtHR 2013) para. 119.
15  ibid.
16  See Magyar Jeti Zrt v Hungary App. no. 11257/16 (ECtHR, 4 December 2018).
17  See Stoll v Switzerland [GC] App. no. 69698/01 (ECtHR, 10 December 2007).
18  Inferred from Editorial Board (n. 13) para. 63.
19  See Delfi AS v Estonia [GC] App. no. 64569/09 (ECtHR, 16 June 2015).
20  ibid. para. 140.

‘[t]hroughout the whole judgment the description or characterisation of the comments varies and remains non-specific’ and ‘murky’.21 Secondly, the Court endorsed the view of the Estonian Supreme Court that Delfi could have avoided liability if it had removed the impugned comments ‘without delay’.22 This requirement is problematic because, as pointed out by the dissenting judges, it is not linked to notice or actual knowledge23 and it paves the way for systematic, proactive monitoring of third party content. Thirdly, the Court underscored that Delfi was ‘a professionally managed Internet news portal run on a commercial basis which sought to attract a large number of comments on news articles published by it’.24 The dissenting judges aptly argued that the economic activity of the news portal does not cancel out the potential of comment sections for facilitating individual contributions to public debate in a way that ‘does not depend on centralised media decisions’.25 Fourthly, the Court was at pains to stress that ‘the case does not concern other fora on the Internet where third-party comments can be disseminated . . . ’,26 but again, this did not wash for the dissenting judges.27

It is noteworthy that the Court has distinguished the Delfi case and a string of subsequent cases on the basis of the nature of the comments.
Whereas it had found that some of the comments in Delfi amounted to hate speech, it described the comments at issue in the MTE & Index.hu case as ‘offensive and vulgar’, but found that they ‘did not constitute clearly unlawful speech’ and ‘certainly did not amount to hate speech or incitement to violence’.28 The Court followed this line in its inadmissibility decision in Pihl v Sweden, a case involving a defamatory blogpost and an anonymous online comment.29 Similarly, in Savva Terentyev v Russia, the Court did not hide its repulsion at the language at the centre of the case, describing it as ‘framed in very strong words’ and as ‘largely [using] vulgar, derogatory and vituperative terms’.30 However, after deep contextual examination, it did not classify the impugned expression as ‘hate speech’. The impugned expression was a diatribe against the police, posted as a comment on an online blog. This distinction between offensive and vulgar expression and hate speech is of major significance, even if it is sometimes difficult to determine in practice. Hate speech—and other types of extreme speech such as incitement to violence and/or terrorist activities—may justify far-reaching restrictions on freedom of expression and imply heightened responsibilities for internet intermediaries to prevent the dissemination of such types of expression via their sites and services. This, at least, seems to be the Court’s approach in the Delfi case and its progeny.

21  ibid. Joint Dissenting Opinion of Judges Sajó and Tsotsoria, paras 12 and 13, respectively.
22  ibid. para. 153.
23  ibid. Joint Dissenting Opinion para. 8.
24  ibid. para. 144.
25  ibid. Joint Dissenting Opinion paras 39 and 28.
26  ibid. para. 116.
27  ibid. Joint Dissenting Opinion para. 9.
28  Magyar (n. 11) para. 64.
29  See Pihl v Sweden (dec.) App. no. 74742/14 (ECtHR, 7 February 2017).
30  Savva Terentyev v Russia App. no. 10692/09 (ECtHR, 28 August 2018) para. 67. See also Pihl v Sweden (n. 29) para. 73.

Besides the ECHR, the Council of Europe uses various other instruments—other treaties and political standard-setting texts—to address media and internet freedom and regulation. The Committee of Ministers, for instance, has adopted numerous Declarations and Recommendations dealing with topics such as a new notion of media; freedom of expression, association, and assembly with regard to privately operated internet platforms and online service providers; human rights and search engines; and human rights and social networking services. Among these political standard-setting texts, there is a discernible—and growing—emphasis on the responsibilities of internet intermediaries.

For instance, in its Recommendation CM/Rec(2018)2 to Member States on the roles and responsibilities of internet intermediaries, the Committee of Ministers observes that internet intermediaries ‘facilitate interactions on the internet between natural and legal persons by offering and performing a variety of functions and services’.31 It further states that ‘[o]wing to the multiple roles intermediaries play, their corresponding duties and responsibilities and their protection under law should be determined with respect to the specific services and functions that are performed.’32 In its Appendix, the Recommendation sets out detailed and extensive Guidelines for states on actions to be taken vis-à-vis internet intermediaries with due regard to their roles and responsibilities.

The Guidelines have a dual focus: obligations of states and responsibilities of internet intermediaries. The identified obligations of states include ensuring: the legality of measures adopted, legal certainty and transparency, safeguards for freedom of expression, privacy and data protection, and access to an effective remedy.
The responsibilities of internet intermediaries include: respect for human rights and fundamental freedoms, transparency and accountability, responsibilities in respect of content moderation, the use of personal data, and ensuring access to an effective remedy.

1.2  The European Union

The EU, too, has a multilayered regulatory framework containing provisions on freedom of expression, the media, and internet intermediaries. It comprises primary and secondary EU law, as well as non-legislative acts, and is supplemented by self- and co-regulatory mechanisms. A selection of the framework’s most salient focuses will now be presented by way of general overview. The Charter of Fundamental Rights of the European Union is the EU’s flagship instrument for the protection of human rights. Since the entry into force of the Lisbon Treaty at the end of 2009, the Charter has acquired the same legal status as the EU Treaties, thereby enhancing its relevance. The Charter’s provisions ‘are addressed to the

31  Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries (7 March 2018) Preamble, para. 4.
32  ibid. para. 11.


The Changing Geometry of European Regulation   475

institutions, bodies, offices and agencies of the Union with due regard for the principle of subsidiarity and to the Member States only when they are implementing Union law’ (Art. 51(1)). The Charter’s provisions which contain principles may be implemented by legislative and executive acts taken by institutions, bodies, offices and agencies of the Union, and by acts of Member States when they are implementing Union law, in the exercise of their respective powers (Art. 52(5)). However, they shall be ‘judicially cognisable only in the interpretation of such acts and in the ruling on their legality’ (ibid.). Insofar as the Charter recognizes fundamental rights resulting from the constitutional traditions common to EU Member States, those rights shall be interpreted in harmony with those traditions (Art. 52(4)). It is important, for the sake of legal consistency within Europe, that the human rights standards elaborated by the Council of Europe and the EU are broadly consistent or equivalent. In keeping with this line of thinking, the Charter provides that insofar as it contains rights that correspond to those safeguarded by the ECHR, ‘the meaning and scope of those rights shall be the same as those laid down by’ the Convention (Art. 52(3)). This reference to the Convention includes the case law of the European Court of Human Rights.33 Article 11 of the Charter—which focuses on freedom of expression, as well as media freedom and pluralism—should therefore be interpreted consistently with Article 10 of the Convention and relevant case law of the European Court of Human Rights. The text of Article 11 of the Charter is in any case modelled on Article 10 of the Convention, but is more succinctly formulated.
All of this means that the principles from relevant case law of the European Court of Human Rights (set out earlier) ought to govern the interpretation of Article 11 of the Charter by the Court of Justice of the European Union (CJEU). At the level of secondary EU law, a number of Directives are relevant, in particular the e-Commerce Directive and the Audiovisual Media Services Directive. The e-Commerce Directive seeks ‘to contribute to the proper functioning of the internal market by ensuring the free movement of information society services between the Member States’.34 The Directive is premised on the understanding of how internet intermediaries worked that prevailed in 2000, when the Directive was adopted, namely that intermediaries have either a passive or an active relationship with third party content disseminated through their networks or services. In the logic of this binary distinction, the drafters of the Directive sought to ensure that passive intermediaries would not be held liable for content over which they had no knowledge or control. The Directive thus establishes a ‘safe harbour’ regime for passive intermediaries. The safe harbour regime entails exemptions from liability in ‘cases where the activity of the information society service provider is limited to the technical process of operating and giving access to a communication network over which information made

33  EU Network of Independent Experts on Fundamental Rights, Commentary of the Charter of Fundamental Rights of the European Union (2006) 400.
34  Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, Art. 1.


available by third parties is transmitted or temporarily stored, for the sole purpose of making the transmission more efficient; this activity is of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored’.35 These exemptions are set out in Articles 12 to 14 of the Directive and they can be availed of by service providers acting as a ‘mere conduit’ for information, or those which provide ‘caching’ or ‘hosting’ services. This means that intermediaries which serve as hosting providers would ordinarily benefit from an exemption from liability for illegal content, as long as they maintain a neutral or passive stance towards that content. A service provider that hosts third party content may avail of this exemption on condition that it does not have ‘actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent’ and that ‘upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information’.36 However, ‘the removal or disabling of access has to be undertaken in the observance of the principle of freedom of expression and of procedures established for this purpose at national level’.37 Pursuant to Article 15 of the Directive, EU Member States are not allowed to impose a general obligation on providers to ‘monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity’. The type of surveillance that such a general monitoring obligation would entail would have a chilling effect on the freedom of expression of users of the service.
The binary distinction between passive and active intermediaries that informed the drafting of the e-Commerce Directive has long been under strain. It no longer adequately reflects the complexity of the relationship between intermediaries and third party content today. Ongoing technological developments have enabled intermediaries to engage in a range of activities that move beyond passive hosting towards presentational, recommendation, ranking, and editorial functions. Such activities strain the binary distinction because exemption from liability is premised on an objective distance from content created by third party users. The Audiovisual Media Services Directive seeks to ensure a minimum level of harmonization across the EU of national legislation governing audiovisual media services, with a view to removing obstacles to the free movement of such services within the EU’s single market. In pursuance of these aims, the Directive coordinates a number of areas: general principles; jurisdiction; incitement to hatred; accessibility for persons with disabilities; major events; the promotion and distribution of European works; commercial communications; and protection of minors. The Directive has evolved from the former Television without Frontiers Directive, and covers traditional television broadcasting as well as on-demand audiovisual media services. Following the revision of the Directive in 2018, the providers of video-sharing

35  ibid. recital 42.

36  ibid. Art. 14.

37  ibid. recital 46.


platform services will henceforth also fall under the scope of the Directive, insofar as they are covered by the definition of such services. The definition is rather convoluted:

‘video-sharing platform service’ means a service as defined by Articles 56 and 57 of the Treaty on the Functioning of the European Union, where the principal purpose of the service or of a dissociable section thereof or an essential functionality of the service is devoted to providing programmes, user-generated videos, or both, to the general public, for which the video-sharing platform provider does not have editorial responsibility, in order to inform, entertain or educate, by means of electronic communications networks within the meaning of point (a) of Article 2 of Directive 2002/21/EC and the organisation of which is determined by the video-sharing platform provider, including by automatic means or algorithms in particular by displaying, tagging and sequencing.

The thinking behind this shift is that privately owned internet intermediaries exert organizational control over third party content; they determine the modalities of how that content is made available, its level of prominence, and so on. If they de facto control what their users see and how they see it, they should also be held responsible or liable for the content—even though they do not have editorial control over it. Recital 47 of the Directive spells out this thinking in relation to video-sharing platforms:

A significant share of the content provided on video-sharing platform services is not under the editorial responsibility of the video-sharing platform provider. However, those providers typically determine the organisation of the content, namely programmes, user-generated videos and audiovisual commercial communications, including by automatic means or algorithms. Therefore, those providers should be required to take appropriate measures to protect minors from content that may impair their physical, mental or moral development. They should also be required to take appropriate measures to protect the general public from content that contains incitement to violence or hatred directed against a group or a member of a group on any of the grounds referred to in Article 21 of the Charter of Fundamental Rights of the European Union (the ‘Charter’), or the dissemination of which constitutes a criminal offence under Union law.

This line of thinking has been criticized for resulting in ‘considerable political and social pressure [being] exerted on these platforms to resolve the problems “themselves” ’.38 This, in turn, ‘leads to a “spiral of privatised regulation” ’.39

38 Ben Wagner, ‘Free Expression? Dominant Information Intermediaries as Arbiters of Internet Speech’ in Martin Moore and Damian Tambini (eds), Digital Dominance: the Power of Google, Amazon, Facebook, and Apple (OUP 2018) 223. 39 ibid.


The applicability of the Directive to the providers of video-sharing platform services does not concern all provisions of the Directive. The focus is very much on content that is damaging for minors, incitement to violence or hatred, and public provocation to commit a terrorist offence (though attention is also paid to requirements for audiovisual commercial communications). Article 28b is the operative provision in this regard. It reads:

1. Without prejudice to Articles 12 to 15 of Directive 2000/31/EC, Member States shall ensure that video-sharing platform providers under their jurisdiction take appropriate measures to protect:
(a) minors from programmes, user-generated videos and audiovisual commercial communications which may impair their physical, mental or moral development in accordance with Article 6a(1);
(b) the general public from programmes, user-generated videos and audiovisual commercial communications containing incitement to violence or hatred directed against a group of persons or a member of a group based on any of the grounds referred to in Article 21 of the Charter;
(c) the general public from programmes, user-generated videos and audiovisual commercial communications containing content the dissemination of which constitutes an activity which is a criminal offence under Union law, namely public provocation to commit a terrorist offence as set out in Article 5 of Directive (EU) 2017/541, offences concerning child pornography as set out in Article 5(4) of Directive 2011/93/EU of the European Parliament and of the Council (*) and offences concerning racism and xenophobia as set out in Article 1 of Framework Decision 2008/913/JHA.

Bringing video-sharing platform providers under the Directive stretches both the material scope and the underlying logic of the Directive. Be that as it may, the move reflects a clear anxiety about the prevalence of particular types of harmful content on video-sharing platforms and their influence on the public. The move seeks to ensure that the selected types of harmful content cannot slip through any regulatory meshes between the nets of the e-Commerce Directive and the Audiovisual Media Services Directive. The three types of harmful content have been singled out for far-reaching restrictions over and above other types of (less) harmful content. A similar distinction has also been observed in the case law of the European Court of Human Rights (see further earlier in the chapter). Besides the aforementioned Directives—both of which are traditional forms of regulation—it is also apposite to pay attention to self- and co-regulatory systems, and non-legislative measures in this area. The European Union has a history of advocating the use of self-regulatory mechanisms as the most appropriate form of regulating the internet and mobile technologies, due to constant technological developments in those areas. According to the revised Audiovisual Media Services Directive: ‘Self-regulation constitutes a type of voluntary initiative which enables economic operators, social partners, non-governmental organisations and associations to adopt common guidelines amongst


themselves and for themselves. They are responsible for developing, monitoring and enforcing compliance with those guidelines’ (recital 14). Self-regulation, with the flexibility it offers, is seen as a suitable means of regulating certain aspects of media, internet, and mobile technologies. For instance, the Audiovisual Media Services Directive (Art. 4a) encourages EU Member States to explore the suitability of self- and/or co-regulatory techniques.40 Similarly, both the e-Commerce Directive (Art. 16)41 and the former Data Protection Directive (Art. 27)42 (have) stress(ed) the importance of codes of conduct; approaches which represent a tentative move away from traditional regulatory techniques in the direction of self-regulation.

2.  Geometrical Shifts

For several decades, the mass media—print and broadcast—held sway as ‘the central institution of a democratic public sphere’.43 In today’s increasingly digitized society, the mass media have ceded that position to, or are at least sharing it with, a growing number of other new media actors. These actors do not usually fall within the ambit of traditional media regulatory regimes. This has made it much more challenging to regulate expression in the online environment.44 Internet intermediaries are important actors in the online environment. Due to their gate-keeping functions, they can facilitate or obstruct access to the online fora in which public debate is increasingly conducted.45 Intermediaries with search and/or recommendation functions, typically driven by algorithms, have far-reaching influence on the

40  See Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) (codified version) [2010] OJ L95/1, as revised by Directive 2018/1808/EU of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU . . . in view of changing market realities [2018] OJ L303/69.
41  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (e-Commerce Directive) [2000] OJ L178/1.
42  See Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31.
43  Edwin Baker, ‘Viewpoint Diversity and Media Ownership’ (2009) 60 Federal Communications L.J. 651, 654.
44  See Egbert Dommering, ‘The Ever Growing Complexity of Regulating the Information Society’ in Pieter Kleve and Kees van Noortwijk (eds), Something Bigger than Yourself—Essays in Honour of Richard de Mulder (Erasmus Universiteit Rotterdam 2011) 1–15. 45  See e.g. Aleksandra Kuczerawy, Intermediary Liability and Freedom of Expression in the EU: from Concepts to Standards (Intersentia 2018) chs 1 and 2.


availability, accessibility, visibility, findability, and prominence of particular content. The operators of social network services, for instance, ‘possess the technical means to remove information and suspend accounts’, which makes them ‘uniquely positioned to delimit the topics and set the tone of public debate’.46 Search engines, for their part, have the aim and the ability to make information more accessible and prominent, which gives them influence over how people find information and ideas and over what kinds of information and ideas they find.47 Both of these types of internet intermediary therefore have clear ‘discursive significance’ in society.48 The term ‘online platform’ is being used increasingly in scholarship, sometimes thoughtfully and sometimes loosely, to denote a particular type of online actor. The term has also come to the fore in policymaking discussions and processes. It has been described as ‘a programmable digital architecture designed to organize interactions between users—not just end users but also corporate entities and public bodies’.49 It is furthermore ‘geared toward the systematic collection, algorithmic processing, circulation, and monetization of user data’.50 The combination of actions and interactions enabled by platforms, and their complexity, demonstrate that they are qualitatively different to traditional media, and that the regulatory framework for traditional media cannot straightforwardly be extended to online platforms. Some authors speak of the datafication and platformization of society and the Internet of Things. Ongoing developments and trends have prompted the observation that ‘[p]latform mechanisms shape every sphere of life, whether markets or commons, private or public spheres’.51 All of this points towards the dislodging of the mass media as the central institution in democratic societies.
A more abundant, but fragmented, supply of information has emerged instead, with new gate-keepers controlling its flow. Internet intermediary liability has been the subject of extensive academic examination from a variety of perspectives, such as accountability,52 tort law,53 freedom of expression,54 and copyright.55 Some authors have detected a recent shift of focus in

46  See Patrick Leerssen, ‘Cut Out by The Middle Man: The Free Speech Implications of Social Network Blocking and Banning in the EU’ (2015) 6 JIPITEC 99, 99–100. 47  See generally Joris van Hoboken, Search Engine Freedom. On the Implications of the Right to Freedom of Expression for the Legal Governance of Web Search Engines (Kluwer Law Int’l 2012). 48 Emily Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (CUP 2015) 204. 49  ibid. 4. 50 ibid. 51  (Emphasis in original) José van Dijck, Thomas Poell, and Martijn de Waal, The Platform Society: Public Values in a Connective World (OUP 2018) 46. 52  See Martin Husovec, Injunctions Against Intermediaries in the European Union—Accountable But Not Liable? (CUP 2017). 53 See e.g. Christina Angelopoulos, European Intermediary Liability in Copyright. A Tort-Based Analysis (Kluwer Law Int’l 2017). 54  See e.g. Aleksandra Kuczerawy, Intermediary Liability and Freedom of Expression in the EU: from Concepts to Standards (Intersentia 2018). 55  See e.g. Stefan Kulk, ‘Internet Intermediaries and Copyright Law: Towards a Future-proof EU Legal Framework’, PhD thesis, Utrecht University (2018).


(the discourse around) relevant lawmaking and policymaking. They have pointed to an ongoing movement from intermediary and platform liability to responsibility.56 This is an insightful reading of current regulatory and policy discussions and developments. At the Council of Europe, the Delfi judgment underscored the requirement for certain types of internet intermediaries to take strong and effective measures against hate speech (although the Grand Chamber of the Court was at pains to stress that the wider ramifications of the judgment were limited). Standard-setting instruments by the Committee of Ministers stress that the human rights responsibilities of internet intermediaries should guide all of their activities. Under EU law, the revised Audiovisual Media Services Directive has created new obligations for video-sharing platforms to prevent the dissemination of certain types of harmful illegal content via their services. The new Directive on copyright and related rights in the Digital Single Market also contains provisions that create liability for internet intermediaries for unauthorized communication to the public of copyrighted works.57 Under Article 17 of the Directive, ‘online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter’, save in certain limited circumstances. This provision has sparked fears that, in practice, it will lead providers to install upload filters that pre-empt the sharing of copyright-protected works, as a strategy for avoiding liability for the unauthorized communication to the public of such works.
These legislative developments continue a broader policy line set out by the European Commission in its Communication, ‘Tackling Illegal Content Online: Towards an enhanced responsibility of online platforms’.58 The title of the Communication accurately encapsulates its intent. The Communication provides guidance to online service providers in respect of their responsibilities vis-à-vis illegal online content. The Commission took the opportunity to announce that it would assess whether additional measures were needed, including by monitoring progress on the basis of existing voluntary arrangements among service providers. The Communication was followed by a Recommendation on measures to effectively tackle illegal content online, which ‘builds on and consolidates the progress made in the framework of

56 See Giancarlo Frosio, ‘Why Keep a Dog and Bark Yourself? From Intermediary Liability to Responsibility’ (2018) 26(1) IJLIT 1, 8; Aleksandra Kuczerawy, Intermediary Liability and Freedom of Expression in the EU: from Concepts to Safeguards (Intersentia 2018). See also John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds), Digital Dominance: the Power of Google, Amazon, Facebook, and Apple (OUP 2018) 371–95. 57  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92. 58  See European Commission, ‘Tackling Illegal Content Online—Towards an enhanced responsibility of online platforms’ (28 September 2017) COM (2017) 555 final.


voluntary arrangements agreed between hosting service providers and other affected service providers regarding different types of illegal content’.59 Other current legislative proposals similarly follow this policy line, for example the proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online.60 The proposed Regulation seeks to ‘provide clarity as to the responsibility of hosting service providers in taking all appropriate, reasonable and proportionate actions necessary to ensure the safety of their services and to swiftly and effectively detect and remove terrorist content online, taking into account the fundamental importance of the freedom of expression and information in an open and democratic society’.61 This new wave of EU law and policy consistently mentions the need to take into account or have regard for fundamental rights safeguards and existing provisions for exemptions from liability under the e-Commerce Directive. It is very important that this does not become mere lip-service and that the new wave of EU law and policy does not wash over or wash away fundamental rights safeguards.62 A selection of policy and ostensibly self-regulatory initiatives deserves mention at this juncture, as these initiatives are indicative of one of the main ongoing shifts in the geometry of European regulation, namely codes of conduct against online hate speech and online disinformation. In May 2016, at the behest of the European Commission, the Code of Conduct on Countering Illegal Hate Speech Online was adopted by a number of leading multinational tech companies. The initial signatories were Facebook, Microsoft, Twitter, and YouTube (owned by Google), with other companies joining later: Instagram (owned by Facebook), Google+, Snapchat, and Dailymotion in 2018 and Jeuxvideo.com in 2019.
Under the Code, the signatory IT companies commit inter alia ‘to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary’. Compliance with the Code is monitored by way of annual evaluations; the fourth evaluation took place in December 2018. As in the previous annual evaluations, the Commission and the IT companies were self-congratulatory about the statistics on the speed of review and the high removal rate of illegal hate speech from their services. The IT companies review an

59 European Commission, ‘Recommendation on Measures to Effectively Tackle Illegal Content Online’ (1 March 2018) C (2018) 1177 final. 60  See European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Preventing the Dissemination of Terrorist Content Online’ (12 September 2018) COM (2018) 640 final, 2018/0331 (COD) para. 2. 61  ibid. 2. 62  See generally Christina Angelopoulos and others, ‘Study of fundamental rights limitations for online enforcement through self-regulation’, commissioned by Open Society Foundations, Institute for Information Law (2015). See also, in particular as regards the proposal for a Regulation on preventing the dissemination of terrorist content online, European Union Agency for Fundamental Rights, Opinion 2/2019—Online Terrorist Content (February 2019).


average of 89 per cent of notifications of illegal hate speech within twenty-four hours and remove 72 per cent of the illegal hate speech notified to them.63 The aim and achievement of cleaning up the internet’s cesspools64 are laudable, and the expeditious removal or disabling of hate speech from online networks is to be welcomed. However, concerns persist about the risk of private censorship by the actors responsible for the decisions to remove or disable particular types of content. There is particular concern about the risk of over-censorship and the removal of content ‘to be on the safe side’, thereby avoiding any liability for such content. Reporting under the Code reveals little about the processes and criteria used to make such decisions—despite the Code’s professed commitment to enhancing the transparency of such processes. The focus of the IT companies’ reporting so far has been predominantly on statistics about the removal of content and less on other commitments under the Code that could contribute to creating an online environment that is more resilient in the face of hate speech. Examples include education and awareness-raising and the promotion of independent counter-narratives. It is important to appreciate and pay attention to the range of responsibilities of the IT companies and not to fixate on removal statistics. Existing European and international standards on business and human rights underscore the need for a broad understanding of the range of responsibilities at issue.65 It is important for the IT companies to demonstrate positional awareness within relevant European and international standards when honouring their commitments under the Code. At the end of September 2018, representatives of several online platforms, social networking service operators, and advertising companies agreed on a Code of Practice on Disinformation.
This initiative should be seen in the context of a wider range of efforts by the EU to combat online disinformation, including the Commission’s Communication, ‘Tackling online disinformation: a European approach’ (April 2018), and an Action Plan against Disinformation (December 2018). Google, Facebook, Twitter, Mozilla, and the trade associations representing the advertising sector submitted their first reports on the measures they are taking to comply with the Code of Practice on Disinformation at the end of January 2019.66 The European Commission gave the reports a guarded welcome, while urging the signatories to improve and/or

63  See European Commission, ‘Code of Conduct on countering illegal hate speech online: Fourth evaluation confirms self-regulation works, Fact sheet’ (February 2019).
64  See Brian Leiter, ‘Cleaning Cyber-Cesspools: Google and Free Speech’ in Saul Levmore and Martha Nussbaum (eds), The Offensive Internet: Speech, Privacy, and Reputation (HUP 2010) 155–73.
65  See United Nations Guiding Principles on Business and Human Rights (2011); Council of Europe, ‘Recommendation CM/Rec(2016)3 of the Committee of Ministers to member States on human rights and business’ (2 March 2016) CM/Rec(2016)3; Council of Europe, ‘Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries’ (7 March 2018) CM/Rec(2018)2.
66  See European Commission, ‘First results of the EU Code of Practice against disinformation’ (29 January 2019).


increase the measures they have taken. Each signatory chooses the most relevant commitments for its own company—in the light of the services it offers and actions it performs—from a list of possible commitments. The Commission has reminded and cautioned the signatories about the possibility of a legislative backstop in this area. The Commission has stated that should the results of the envisaged comprehensive, twelve-month assessment of the operation of the Code of Practice in December 2019 prove unsatisfactory, it may take further action, including of a regulatory nature.

3.  Conclusions

It is little wonder that the Council of Europe and the European Court of Human Rights are moving tentatively from one shore to another in their approach to media freedom and regulation in the digital age. As the Delfi judgment appears to put greater onus on internet intermediaries—in particular circumstances—to tackle hate speech over which they can exert control, it is essential not to lose sight of the free expression principles that have guided the Court’s approach in the analogue world. That the Court has emphasized those principles repeatedly in the post-Delfi case law may suggest that the critical backlash against that judgment has not gone unheard within the Court. The Committee of Ministers is pushing for internet intermediaries to show a greater sense of ambition and initiative when it comes to identifying and fulfilling their human rights responsibilities. A continuing challenge and source of tension involves the delineation of the term ‘hate speech’; that is, the demarcation line between types of harmful expression that ordinarily are entitled to protection and the most harmful types of expression that attack the values of the ECHR and therefore do not enjoy protection. It is very important that the Court provides as much clarity as possible, in the light of the growing expectations on internet intermediaries to take effective measures to counter and prevent hate speech, terrorist content, and other such content. There is a correlation between the seriousness of the perceived harms of certain categories of expression and the expectation of heightened responsibility on the part of internet intermediaries to provide effective protection against them. This is precisely the dilemma dogging the current EU approach, which is increasingly shifting the burden of policing content to private actors because they have the technical capacity to take preventive and removal or blocking actions.
However, the lack of legal legitimacy of private actors to carry out such public tasks, and the absence of transparency and accountability for how they actually exercise their censorial power in this regard, raise a range of pressing fundamental rights concerns. Internet intermediaries are now coming under increasing pressure to live up to their corporate social and human rights responsibilities in markets where their corporate interests dominate, but where there is a pending possibility of sanctions and/or regulation to put steel into voluntary commitments entered into by intermediaries.

The Changing Geometry of European Regulation   485

The regulatory geometry of internet intermediaries is complex and multi-dimensional. It is shifting from relative confidence in a binary understanding of passive and active actors to an anxious awe of complicated, multi-functional platforms which have reshaped the whole ecosystem in which they operate. More shifts in these geometrical patterns are to be expected in order to deal with the ongoing platformization of public debate.


Chapter 25

The Right to Be Forgotten in the European Union

Miquel Peguera

Among the huge volume of content made available at a rapidly growing pace on the internet, it is only natural that we find abundant information relating to identifiable individuals. Such information may be inaccurate, may be shown out of context, or may be decades old and have become embarrassing or damaging today. Even where the information is perfectly accurate, its current availability on the net—particularly when search engines cause it to emerge among the first results of a search made on the basis of an individual’s name—may harm in different ways the person to whom it refers, distorting or ruining his or her reputation. Even if it is not harmful, the individual may not want such information to be permanently remembered and linked to him or her. He or she wants to escape from it, to be left alone—to ‘be forgotten’. The problem of the widespread availability of privacy-damaging information is not new. It has been dealt with since long before the internet era, sometimes under the label of droit à l’oubli, particularly in connection with mass media publications.1 Nonetheless, the issue has grown to an unprecedented scale after the eruption of the web, the digitization of press archives, and the ease of finding information thanks to search engines.2 The right to protection of personal data, enshrined as an autonomous right by the Charter of Fundamental Rights of the European Union,3 which is in some respects

1  See Alessandro Mantelero, ‘The EU Proposal for a General Data Protection Regulation and the Roots of the “Right to Be Forgotten” ’ (2013) 29 Computer L. & Security Rev. 229, and references therein.
2  See e.g. Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton U. Press 2009).
3  See Charter of Fundamental Rights of the European Union [2012] OJ C326/391 (the Charter), Art. 8. On the recognition of data protection as a fundamental right, see Gloria González Fuster, The Emergence of Personal Data Protection as a Fundamental Right of the EU (Springer 2014).

© Miquel Peguera 2020.


wider than that of privacy notwithstanding their overlaps,4 has been seen as a particularly fitting tool to regain control over the dissemination of personal information on the internet. Indeed, data protection, as conceived in EU law, is essentially a right of informational self-determination.5 It includes the right to have personal data erased wherever they are unlawfully processed—irrespective of whether such processing amounts to a violation of privacy—except where there is a competing right or interest that should prevail. The 1995 Data Protection Directive—now replaced by the General Data Protection Regulation (GDPR)6—already granted a right of erasure, so data subjects could obtain, as appropriate, ‘the rectification, erasure or blocking of data the processing of which does not comply with the provisions of this Directive, in particular because of the incomplete or inaccurate nature of the data’.7 When work on revamping the EU data protection legal framework began around 2010, the EU Commission emphasized the need to strengthen data subjects’ control over their data,8 noting that the effective exercise of the rights provided for by the Directive was still challenging, especially in the online environment. Specifically, the Commission sought to clarify ‘the so-called “right to be forgotten”, i.e. the right of individuals to have their data no longer processed and deleted when they are no longer needed for legitimate purposes’.9 The label ‘right to be forgotten’ made its way to the final text of the GDPR, if only in brackets and in double quotes, in the title of Article 17, devoted to the right to erasure—somehow conveying that such a right encompasses a ‘right to be forgotten’.10 Nonetheless, the GDPR did not expressly codify the outcome of the landmark 2014 ruling issued by the Court of Justice of the European Union (CJEU) in the Google Spain case.11 The ruling held that internet search engines are obliged to remove the search results pointing to personal information which is deemed to be ‘inadequate, irrelevant

4  See e.g. Orla Lynskey, ‘Deconstructing Data Protection: The “Added-Value” of a Right to Data Protection in the EU Legal Order’ (2014) 63 Int’l & Comparative L. Quarterly 569.
5  See Giancarlo Frosio, ‘Right to Be Forgotten: Much Ado About Nothing’ (2017) 15(2) Colorado Tech. L.J. 307, 313.
6  See Regulation 2016/679/EU of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1.
7  See the no-longer-in-force Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31, Art. 12(b).
8  See European Commission Communication, ‘A comprehensive approach on personal data protection in the European Union’ COM (2010) 609 final.
9  ibid. 8. See also the speech by Viviane Reding, Vice-President of the European Commission, responsible for Justice, Fundamental Rights and Citizenship, European Data Protection and Privacy Conference, ‘Privacy matters—Why the EU needs new personal data protection rules’, Brussels (30 November 2010).
10  However, there is little in Art. 17 which was not already recognized for data subjects in the Directive, and thus it is not clear that the right to erasure has been meaningfully enhanced by the GDPR, the addition of the right-to-be-forgotten words being arguably cosmetic.
11  See C-131/12 Google Spain SL v Agencia Española de Protección de Datos [2014] ECLI:EU:C:2014:317.


or no longer relevant, or excessive’,12 where the search is made on the basis of the data subject’s name. Such a tailored right is commonly referred to as the ‘right to be forgotten’—or the ‘right to be delisted’ from the search results. While not specifically reflected in the GDPR, the grounds the CJEU found in the Directive for recognizing such a right—that is, the right to erasure and the right to object to the processing—are also present in the GDPR. Therefore, the CJEU’s holdings in Google Spain must be deemed essentially applicable under the GDPR as well.13 This chapter briefly considers two manifestations of the right to be forgotten as they are currently applied in the EU: first, the right to be forgotten vis-à-vis internet search engines, that is, the right to be delisted from search results; and secondly, right-to-be-forgotten claims directed against primary publishers to have the information deleted or anonymized at the source.

1.  The Right to be Forgotten Vis-à-Vis Search Engines

Under EU data protection law, the so-called right to be forgotten may be exercised by a data subject through the right of erasure—now also labelled ‘the right to be forgotten’ in Article 17 of the GDPR—or via the right to object to the processing, where the conditions for either of those rights are met. Such claims may be directed at any data controller. In practice, however, the most significant development of the right to be forgotten has been its use vis-à-vis internet search engines, following the recognition of this right by the CJEU in Google Spain.

1.1  Google Spain

As a threshold question, the Court in Google Spain dealt with the territorial scope of the then-in-force Data Protection Directive. It found that the Directive applied to the processing of data carried out outside the EU by a non-EU company—in this case, Google Inc. The Court found that the processing was carried out ‘in the context of the activities of an establishment’ of the search engine in Spain, even though that establishment—Google’s Spanish subsidiary—confined its activities to promoting and selling advertising space offered by the search engine.14 As a result, one of the connecting factors provided for in the Directive to trigger its applicability was met.15 The same conclusion can certainly be reached under the GDPR, as it provides—even more

12  ibid. para. 94.
13  However, Arts 17(3)(a) and 85 GDPR offer more room for taking freedom of expression into account.
14  See C-131/12 (n. 11) para. 60.
15  See Directive 95/46/EC (n. 7) Art. 4(1)(a).


broadly—for the same connecting factor, and generally widens the territorial scope compared to the Directive.16 The Court also held that a search engine’s operator: (1) carries out a processing of the personal data included in the web pages it indexes and gives access to through the search results, a processing which is different from that carried out by the websites where the information is located;17 and (2) is a data controller, as it determines the purposes and means of such processing.18 Again, the basis for these conclusions may also be found in the GDPR’s definitions of processing and controller. After redrafting some of the questions posed by the national court so as to be able to give a somewhat narrow answer, the Court held that the right to erasure and the right to object—provided that their conditions are met—allow data subjects to require the removal of search results pointing to personal information in searches carried out on the basis of the data subject’s name. Under the Directive, the right to erasure was granted where the processing of data did not comply with the provisions of the Directive—thus, for instance, where the data were inadequate, irrelevant, or excessive in relation to the purposes of the processing, as that would entail a failure to comply with the Directive’s requirement of data quality.19 According to the CJEU, the data subject does not need to have suffered any prejudice;20 the data need not be unlawful, inaccurate, or untruthful, or even constitute private information. In addition, there is no need to have the content erased beforehand or simultaneously from the publisher’s web page, or even to ask the publisher for such a removal.
The data subject may request the delisting of the search results directly from the search engine, as it carries out a processing separate from and additional to that carried out by the primary publisher.21 The CJEU made it clear that a fair balance should be sought between the legitimate interest of internet users in accessing the information and the data subject’s fundamental rights under Articles 7 and 8 of the Charter. It held that, as a rule, the data subject’s rights will override those of internet users, though in some cases the outcome may depend ‘on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life’.22

16  See Regulation 2016/679/EU (n. 6) Art. 3.
17  C-131/12 (n. 11) para. 35.
18  ibid. para. 33.
19  See Directive 95/46/EC (n. 7) Art. 6(c). Similarly, the principle of data minimisation under the GDPR requires the data to be ‘adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed’ (Art. 5(1)(c) GDPR).
20  C-131/12 (n. 11) para. 96.
21  ibid. para. 88.
22  ibid. para. 81. Some criticism about the way the CJEU framed the balancing of rights can be seen, e.g., in Stefan Kulk and Frederik Zuiderveen Borgesius, ‘Google Spain v. González: Did the Court Forget About Freedom of Expression?’ (2015) 3 Eur. J. of Risk Reg. 389; Christopher Rees and Debbie Heywood, ‘The “Right to be Forgotten” or the “Principle That Has Been Remembered” ’ (2014) 30 Computer L. & Security Rev. 574; Eleni Frantziou, ‘Further Developments in the Right to be Forgotten: The European Court of Justice’s Judgment in Case C-131/12, Google Spain, SL, Google Inc. v. Agencia Espanola de Proteccion de Datos’ (2014) 14 Human Rights L. Rev.


As a result, search engine operators must assess on a case-by-case basis the merits of right-to-be-forgotten requests and make a decision taking into account the rights and interests involved.23 When a request is rejected by the search engine, the data subject may resort to the Data Protection Authority (DPA), which may order the search engine to delist the link. Courts may also become involved, either where a data subject files a judicial complaint asking for the removal or where the DPA’s decision is appealed.

1.2  Delisting in Numbers

When considering the practical application of the right to be forgotten in relation to search engine results, Google is of course the crucial source of information to look at, not least because of its absolute dominance in Europe, where it holds more than 90 per cent of the market share for search engines.24 According to Google’s transparency report,25 from 29 May 2014—when it launched its official request process dealing with the right to be forgotten—up to the end of October 2019, it received 864,791 requests to delist. Each request may comprise one or more URLs to be delisted; the requests received in that period comprised 3,426,336 URLs. Out of all requests fully processed by Google as of 31 October 2019, the search engine delisted 1,327,420 URLs (45.0 per cent) and declined to delist 1,620,598 URLs (55.0 per cent).26 Most of the requests (88.4 per cent) came from private individuals. The remaining 11.6 per cent were requests made by government officials, other public figures, minors, and corporate entities, as well as requests made on behalf of deceased persons. Regarding the type of websites targeted, starting from 21 January 2016, when Google began recording this information, 15.6 per cent of the URLs evaluated were hosted on directories or aggregators of information about individuals such as names and addresses, 11.5 per cent on social networking sites, 18.8 per cent on the website of a media outlet or tabloid, 2.5 per cent on official government websites, and the rest on different sites falling under a broad ‘miscellaneous’ category.

Tech. 349; Anna Bunn, ‘The Curious Case of the Right to be Forgotten’ (2015) 31 Computer L. & Security Rev. 336; Miquel Peguera, ‘The Shaky Ground of the Right to Be Delisted’ (2016) 18 Vand. J. of Entertainment & Tech. L. 507.
23  On the role of search engines in deciding delisting requests, see Maria Tzanou, ‘The Unexpected Consequences of the EU Right to Be Forgotten: Internet Search Engines as Fundamental Rights Adjudicators’ in Maria Tzanou (ed.), Personal Data Protection and Legal Developments in the European Union (IGI Global 2020).
24  See Statcounter.
25  See Google, ‘Transparency Report’ (2019). For a detailed research paper covering up to 31 December 2017, see Theo Bertram and others, ‘Three years of the Right to be Forgotten’ (2018).
26  Google, ‘Transparency Report’ (n. 25). The number of URLs does not include those not fully processed as of 31 October 2019.


Concerning the kind of information—also from January 2016—it should be highlighted that 17.5 per cent of URLs related to professional information. In addition to those, content related to professional wrongdoing made up 6.1 per cent of URLs, and crime represented 6.2 per cent. Pages containing self-authored content made up 6.7 per cent of URLs. Non-sensitive personal information, such as addresses, contact details, or other such content, represented 5.8 per cent. In addition, many requests—comprising 25.8 per cent of the URLs—did not provide sufficient information for the page to be located or evaluated. In other cases, the name of the requester did not appear on the web page (15.5 per cent). The URL delisting rate is 100 per cent where the name is not found on the web page—it must be recalled that delisting only implies that the result will not be shown in searches on the person’s name, and the information may still be retrieved using other search queries. Conversely, requests that lack sufficient information are not delisted. A very high delisting rate applies to both non-sensitive and sensitive personal information. Crime-related content is delisted in less than half of cases, as is self-authored content. Professional information and content related to professional wrongdoing have a very low delisting rate; and an even lower rate applies to political information.

1.3  Balancing Rights

The balancing exercise required by the CJEU lies at the core of any delisting decision. Each search engine uses its own tools and criteria to make a decision about the requested removal. Some helpful criteria were provided by the Guidelines issued by the Article 29 Working Party (Article 29 WP).27 A small fraction of data subjects who do not agree with the search engine’s response bring the case before the DPA or before the courts. As a result, a body of case law is being created, which deals with a variety of situations and reflects different cultural approaches to freedom of expression and information. By way of illustration, some cases will now be briefly considered—most of them from Spain, the EU country where by far the most court rulings on delisting requests have been handed down.28 A relevant case in the UK is NT1, NT2 v Google LLC,29 which decides two separate claims by two data subjects, referred to in the ruling as NT1 and NT2. Both were businessmen who had been convicted of criminal offences many years beforehand. They requested Google to delist search results pointing to information about their convictions. After Google rejected delisting most of the links, they brought judicial proceedings seeking

27  See Art. 29 Working Party (WP29), ‘Guidelines on the implementation of the Court of Justice of the European Union judgment on “Google Spain and Inc v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González” C-131/12’ (2014) 14/EN WP 225.
28  An overview as of July 2015 is provided in Miquel Peguera, ‘In the Aftermath of Google Spain: How the “Right to Be Forgotten” is Being Shaped in Spain by Courts and the Data Protection Authority’ (2015) 23 IJLIT 325.
29  See NT1, NT2 v Google LLC [2018] EWHC 799 (QB) (UK).


orders to remove the links, as well as compensation from Google. The claimants alleged that the information was inaccurate and, in any event, out of date, not relevant, of no public interest, and an illegitimate interference with their rights. Interestingly, Mr Justice Warby considered specifically the criteria contained in the Article 29 WP Guidelines, finding particularly relevant the criterion concerning cases where the data relate to a criminal offence. The Guidelines state that ‘[a]s a rule, DPAs are more likely to consider the de-listing of search results relating to relatively minor offences that happened a long time ago, whilst being less likely to consider the de-listing of results relating to more serious ones that happened more recently. However, these issues call for careful consideration and will be handled on a case-by-case basis.’30

NT1 had been convicted of conspiracy of false accounting for the purpose of evading tax. He was sentenced to four years’ imprisonment and was disqualified from acting as a company director. The Court of Appeal noted that he had been the principal actor in the false accounting conspiracy.31 Mr Justice Warby rejected the requested delisting. He underlined that the claimant played a limited role in public life, and that the information was not inaccurate, related to his business life rather than his personal life, and originally appeared in the national media as a foreseeable consequence of his criminal conduct. Mr Justice Warby concluded that the information retained sufficient current relevance even though, thanks to a change in the law, the conviction was spent. He highlighted that NT1 remained in business and that ‘the information serves the purpose of minimising the risk that he will continue to mislead, as he has in the past’.32

A different outcome was reached regarding the second claimant. NT2 had been convicted of phone tapping and computer hacking and had pleaded guilty. He was sentenced to six months’ imprisonment. In this case, Mr Justice Warby found that the information was out of date and irrelevant and that there was no sufficient legitimate interest of internet users in its continued availability. The judge noted that the claimant’s past offending was of little, if any, relevance to an assessment of his suitability to engage in business; there was no risk of repetition, and no need for anybody to be warned about it.33

In the field of medical activity, a Spanish court ruling of 11 May 2017 tackled harsh negative comments about the professional conduct of a renowned medical doctor.34 The comments were published on a website in 2008. Google rejected the delisting request, alleging a public interest in finding the information. The DPA then ordered Google to remove the result. Google appealed to the Audiencia Nacional (AN)—the competent court for reviewing the DPA’s decisions. The AN took into account the criteria provided by the Article 29 WP and concluded that the link should not be delisted. The comments were covered by freedom of expression, the claimant was still working as a doctor, and the interest of his future patients should prevail: they were entitled to know about his former patients’ experiences and opinions. However, in another case also concerning the professional activity of a medical doctor, decided just two months later by the same court—though with a different judge-rapporteur—the DPA’s delisting order was

30  See WP29, ‘Guidelines’ (n. 27) 20.
31  See NT1, NT2 v Google (n. 29) [68]–[76].
32  ibid. [170].
33  ibid. [223].
34  Audiencia Nacional (AN) judgment of 11 May 2017 ECLI:ES:AN:2017:2433 (Sp.).


upheld.35 In the latter case the court argued that the publication was more than twenty years old; that the doctor, who specialized in gynaecology and obstetrics, lacked public relevance or public notoriety in his professional field; and that he had not committed a crime but had merely been found at fault for negligence. Somewhat similarly, in a decision by a Dutch court of first instance, a surgeon who had been disciplined for medical negligence was granted a delisting request.36 The surgeon was listed on a website containing an unofficial blacklist of doctors. The court found that the website might suggest that the surgeon was unfit to treat people, which was not supported by the disciplinary procedure. It held that the surgeon’s interest in not being matched with such a result in any search on the basis of her name should prevail over the public’s interest.

In another ruling,37 the Spanish AN reversed the DPA’s delisting order, arguing that the information—a blogpost casting doubts on the professional conduct of a businessman, with comments from different people—was protected by freedom of expression, and noting as well that there was a public interest in accessing the information. Interestingly, the court referred to the GDPR—in dictum, as it was not yet applicable at the relevant time—to underline that Article 17 GDPR expressly considers freedom of expression as an exception to the right to erasure.

Other cases concern information about political activities. In a 2017 ruling which denied the delisting, the Spanish AN held that access to information about the names of the candidates in a political election is of public interest and is required by the principle of democratic transparency.38 The ruling cited a 2007 judgment of the Spanish Constitutional Court holding that a person who participates as a candidate in a public election cannot invoke a data protection right to limit access to that information, and that such information must be public in a democratic society.39

A relevant factor in any analysis is whether the information relates to the professional or to the personal life of the data subject—in the former case, the protection is much lower.40 In many cases, the time elapsed is also a relevant factor in the overall assessment, even though a long period of time is not necessarily required.41

35  See AN judgment of 13 July 2017 ECLI:ES:AN:2017:3257 (Sp.).
36  See Rechtbank [District Court] Amsterdam decision of 19 July 2018 ECLI:NL:RBAMS:2018:8606 (Neth.). See also Daniel Boffey, ‘Dutch surgeon wins landmark “right to be forgotten” case’ (The Guardian, 21 January 2019).
37  AN judgment of 12 December 2018 (Sp.).
38  See AN judgment of 19 June 2017 ECLI:ES:AN:2017:2562 (Sp.). More recently, in a similar case, the court held likewise, denying the requested delisting. See AN judgment of 27 November 2018 ECLI:ES:AN:2018:4712 (Sp.).
39  Tribunal Constitucional [Constitutional Court] judgment 110/2007 [10 May 2007] ECLI:ES:TC:2007:110 (Sp.).
40  See e.g. AN judgment of 6 June 2017 ECLI:ES:AN:2017:3111 (Sp.). In a case concerning a legal adviser at the Parliament, where the information was not just a critique of her professional performance but included data about her relatives, spouse, ideology, and religious beliefs, the AN ordered the delisting. See AN judgment of 5 January 2018 ECLI:ES:AN:2018:136 (Sp.).
41  e.g. the lapse of three years was highly relevant for granting the delisting in a case concerning news information about the data subject’s participation in a public demonstration. See AN judgment of 25 July 2017 ECLI:ES:AN:2017:3260 (Sp.).


In yet another Spanish case, on this occasion concerning Yahoo,42 the data subject requested the delisting of results pointing to information about the Cali Cartel and about his links with the smuggling of goods in Colombia. After Yahoo rejected the request, the individual obtained a delisting order from the DPA. Yahoo brought an appeal arguing that the data subject played a relevant role in public life and that the information related to his commercial activities and was of social significance. The AN held that one of the links should indeed be delisted: in view of the documents proving the closure of the criminal investigation and of the time elapsed—more than twenty years from the publication of the news report, and more than fifteen years after the closure of the case—the information was obsolete and no longer relevant. However, it denied the removal of the other two URLs, noting that they referred to much more recent publications, that the data subject was a prestigious businessman and thus a public figure, and that the public interest should prevail.

All in all, these examples reveal that where there may be a significant public interest, such as in cases affecting professional activities or concerning public figures, a truly case-by-case approach is followed, sometimes reaching diverging outcomes which are not always easy to rationalize in view of the limited context some rulings provide.

1.4  Geographical Scope

Google Spain did not specify the territorial scope of the delisting. It was not clear from the judgment whether delisting the link on the search engine’s website under the domain name corresponding to the country where the search is carried out (e.g. google.fr, in the case of France) would be enough; or whether the link should instead be delisted on all EU domains; or even whether, irrespective of the place where the search was initiated, the link should be removed on any domain name used by the search engine, including .com and the country codes of non-EU countries—that is, globally.43 The latter option appears to be the one favoured by the Article 29 WP in its guidelines on the application of Google Spain.44 The Article 29 WP emphasized that the judgment establishes an obligation of results, which must be implemented in such a way as to guarantee the effective and complete protection of data subjects’ rights, and that EU law cannot easily be circumvented. In this regard, it put forward that ‘limiting de-listing to EU domains . . . cannot be considered a sufficient means to satisfactorily guarantee the rights of data subjects according to the judgment. In practice, this means that in any case de-listing should also be effective on all relevant domains, including .com.’45

42  See AN (Administrative Chamber) judgment of 8 November 2017 ECLI:ES:AN:2017:5118 (Sp.).
43  See e.g. Frosio (n. 5) 329; Brendan van Alsenoy and Marieke Koekkoek, ‘Internet and Jurisdiction After Google Spain: The Extraterritorial Reach of the “Right to be Delisted” ’ (2015) 5 Int’l Data Privacy L. 105.
44  See WP29, ‘Guidelines’ (n. 27) 20.
45  ibid. para. 20. While it is true that the WP29 did not state that the delisting should be made on all domain names, but only on the relevant ones, it certainly included the .com domain, which in any event is rejected by those opposing global delisting.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

The Right to be Forgotten in the European Union   495

The practical application of this criterion has not been homogeneous. In a 2015 case,46 the Swedish DPA ordered Google to delist some results globally. On appeal, however, the Stockholm Administrative Court upheld the delisting only for searches made from Sweden, thus reversing the global reach of the DPA’s decision. In this regard, the court argued that global delisting would go beyond the scope of application of the Directive and would prejudice legal certainty.47 The geographical scope of the delisting was brought before the CJEU in Google v CNIL. In this case, the French Data Protection Authority (CNIL) ordered Google to delist certain links globally—on all domain name extensions of the search engine. Google refused and offered instead to use geoblocking techniques to remove the links whenever a search was carried out from an IP address deemed to be located in the EU, regardless of the domain name utilized by the internet user. This was not accepted by the CNIL, which fined Google. On appeal, the Conseil d’État referred the question to the CJEU. It asked, in essence: (1) whether a search engine is obliged to delist globally; (2) if not, whether it is enough to delist on the domain name of the state in which the request was made, or more generally on all domains of EU countries; and (3) whether, in addition to the latter obligation, the search engine must employ geoblocking tools to remove the results in any domain for searches deemed to be initiated in the EU country of the data subject, or more generally in any EU Member State. Advocate General Szpunar delivered his Opinion on 10 January 2019, recommending a ‘European delisting’—that is, that search engines delist the links on any of their domain names, but only for searches carried out in the EU.48
Advocate General Szpunar noted that the data subject’s rights must be balanced against other fundamental rights, particularly the right to receive information, and that if worldwide delisting were to be imposed, EU authorities would not be able to define and determine a right to receive information, let alone balance that right against other fundamental rights, particularly taking into account that the public interest in accessing information would vary from one third country to another.49 He warned against the risk of preventing people in a third country from accessing the information. If the EU did impose worldwide delisting, other countries might want to do the same according to their own laws, thus giving rise to a race to the bottom, affecting freedom of expression both at the European level and globally.50 As to the other questions, Advocate General Szpunar underlined that Google Spain ordered the search engine to ‘ensure, within the framework of its responsibilities, powers and capabilities’ that the processing met the requirements of the Directive, so that an ‘effective and complete’ protection of data subjects’ rights would be 46  See Nedim Malovic, ‘Swedish court holds that Google can be only ordered to undertake limited delisting in right to be forgotten cases’ (The IPKat, March 2018) . 47 ibid. 48  See C‑507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:15, Opinion of AG Szpunar. 49  ibid. para. 60. 50  ibid. para. 61. AG Szpunar did not rule out that there may be situations where worldwide delisting could be required, but noted that there was no reason for it in the case at issue.


achieved.51 He concluded that the delisting should therefore cover any search carried out from a location in the EU. The search engine operator should employ all means at its disposal to ensure that such delisting was effective and complete, which included resorting to geoblocking techniques.52 The Court, which considered the questions in the light of both the Directive and the GDPR, offered a somewhat open answer in its ruling, handed down on 24 September 2019. It noted that a delisting carried out on all the versions of a search engine would certainly meet in full the objective of guaranteeing a high level of protection of personal data throughout the EU.53 In this regard, the ruling pointed out that search engines render search results ubiquitous, and that when the results concern a person whose centre of interests is located in the EU, access to those links, also from outside the EU, is likely to have substantial effects on that person within the EU. According to the Court, this justifies the existence of a competence on the part of the EU legislature to require global delisting.54 However, the Court concluded that neither the Directive nor the GDPR has chosen to impose such an obligation,55 and held therefore that current EU law does not require delisting on all the versions of a search engine.56 It also held that, under the GDPR, the delisting is assumed to be carried out not only on the version corresponding to the Member State of residence of the data subject, but in respect of all the Member States. In addition, the ruling established that, if necessary, the search engine must take measures to prevent or, at the very least, seriously discourage internet users in the Member States from accessing the links in searches conducted on the basis of the data subject’s name.57 It is for the national courts to assess the adequacy of such measures. These holdings are essentially in line with the Advocate General’s Opinion.
In a final twist, however, the ruling added that ‘while . . . EU law does not currently require that the de-referencing granted concern all versions of the search engine in question, it also does not prohibit such a practice’.58 In this vein, the Court held that a supervisory or judicial authority of a Member State, after weighing the data subject’s right to privacy and data protection against the right to freedom of expression, remains competent ‘to order, where appropriate, the operator of [a] search engine to carry out a de-referencing concerning all versions of that search engine’.59

1.5  Sensitive Data

Characterizing a search engine as a data controller raises the question of how a search engine could possibly comply with all of a controller’s legal duties regarding the enormous amount of data it indexes. As noted earlier, Google Spain held that search engines 51  C-131/12 (n. 11) para. 38. 52  See AG Opinion in C‑507/17 (n. 48) para. 78. 53  See C‑507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:772, para. 55. 54  ibid. para. 58. 55  ibid. para. 62. 56  ibid. paras 64–5. 57  ibid. paras 70, 73. 58  ibid. para. 72. 59 ibid.


must ensure compliance with the data protection legal requirements ‘within the framework of its responsibilities, powers and capabilities’.60 This may be understood as an acknowledgement of the impossibility for a search engine to perform all such obligations, and as an effective limitation of its obligations in that respect. Nonetheless, the matter requires clarification, particularly in relation to the prohibition on processing ‘special categories’ of data—such as data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, and data on health and sexual life.61 This issue was tackled in another case referred to the CJEU by the Conseil d’État. The questions included how search engines should act when required to remove links to information containing sensitive data, and how the exception based on freedom of expression should play out in those cases. In his Opinion,62 Advocate General Szpunar suggested that the Court should answer that the prohibition on processing sensitive data does apply to a search engine operator, but—following the language in Google Spain—only within the framework of its responsibilities, powers, and capabilities, noting that ex ante control by search engines would be neither possible nor desirable.63 Crucially, Advocate General Szpunar put forward that the Directive’s prohibitions and restrictions regarding sensitive data cannot be applied to a search engine as if the search engine itself had put the data on the indexed web pages.
Rather, they can only apply to a search engine by reason of the indexation and location of the information, and therefore, by means of ex post verification following a delisting request.64 In the judgment delivered on 24 September 2019, the Court followed this approach, holding that those prohibitions and restrictions apply to the operator of a search engine ‘on the occasion of a verification performed by that operator, under the supervision of the competent national authorities, following a request by the data subject’.65 The Court also held that the operator of a search engine is in principle required by the provisions regarding the processing of special categories of data to accede to delisting requests concerning those data. However, if it establishes that the processing is covered by the exception in Article 8(2)(e) of the Directive or Article 9(2)(e) of the GDPR—particularly where the data are manifestly made public by the data subject—it may refuse to grant the request, provided that all the other conditions of lawfulness are satisfied, and unless the data subject has the right to object to the processing on grounds relating to his or her particular situation.66 In addition, the Court acknowledged the need to balance the rights of privacy and data protection with the right of freedom of information guaranteed in Article 11 of the 60  C-131/12 (n. 11) para. 38. 61  See Directive 95/46/EC (n. 7) Art. 8 and GDPR (n. 6) Arts 9 and 10. See also Joris van Hoboken, ‘Case note, CJEU 13 May 2014, C-131/12 (Google Spain)’, SSRN Research paper no. 2495580 (2014) . 62  See C‑136/17 G C, A F, B H, E D v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:14, Opinion of AG Szpunar. 63  ibid. paras 49–54. 64  ibid. paras 56–7. 65  See C‑136/17 G C, A F, B H, E D v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:773, para. 48. 66  ibid. para. 69.


Charter—a balance expressly required by Article 17(3) of the GDPR. In this regard, the ruling held that the operator of a search engine must ascertain, taking into account all the relevant factors of the particular case and the seriousness of the interference with the data subject’s rights, whether the inclusion of the link is strictly necessary for protecting the freedom of information of internet users potentially interested in accessing the information by means of a search on the basis of the data subject’s name.67 An earlier Dutch case reported by Kulk and Borgesius seems to be in line with the Court’s holdings.68 There, the lower court ordered Google to delist a link to information about a criminal conviction, which under Dutch law is considered sensitive data, precisely because of the sensitive character of the data.69 On appeal, however, the ruling was reversed, and the court held that the search engine could benefit from the exception for journalistic purposes.70

2.  The Right to be Forgotten Vis-À-Vis Primary Publishers

While the right to be forgotten has most dramatically affected search engines, requests and legal actions have also been brought against primary publishers of information. As Google Spain noted, the assessment of the competing rights and interests in those situations may be different from when the processing is carried out by a search engine. Most cases refer to information appearing in newspapers’ digital archives. National courts have reached different results, and the European Court of Human Rights (ECtHR) has had occasion to balance the rights at stake. Nationally, there have been divergent rulings regarding claims seeking to anonymize newspapers’ digital archives. The Belgian Supreme Court (Cour de Cassation) upheld in 2016 a lower court decision ordering Le Soir to anonymize the name of the claimant in the online version of an article published in 1994.71 The article reported the conviction of the claimant, who had caused a car accident in which two people died while driving under the influence of alcohol. The action was based on the claimant’s right to be forgotten under the general right to privacy, rather than being a claim based on data protection. The court held that, in the circumstances of the case, so many years after the initial publication, the claimant’s right should prevail, and hence the interference with the right to

67 ibid. 68  Stefan Kulk and Frederik Borgesius, ‘Privacy, Freedom of Expression, and the Right to Be Forgotten in Europe’ in Evan Selinger, Jules Polonetsky, and Omer Tene (eds), The Cambridge Handbook of Consumer Privacy (CUP 2018). 69  See Rechtbank Rotterdam decision of 29 March 2016 ECLI:NL:RBROT:2016:2395 (Neth.). 70  See Hof Den Haag [Court of The Hague] decision of 23 May 2017 ECLI:NL:GHDHA:2017:1360 (Neth.). 71  See Cour de Cassation Olivier G v Le Soir [29 April 2016] no. C.15.0052.F (Belg.).


freedom of expression and information consisting in anonymizing the name of the claimant in the digital archive was justified. Conversely, in a case against El País,72 the Spanish Supreme Court, citing the ECtHR jurisprudence, denied in 2015 a claim to anonymize the newspaper’s digital archive, as such archives are protected by the right to freedom of expression and information. The court also rejected the claim that the newspaper should delist the result in its website’s internal search tool.73 The latter holding was nonetheless reversed by the Constitutional Court, which held that the claimant had the right to have the results delisted in the internal search tool, while upholding the conclusion that the archive should remain unaltered.74 On a different note, the Supreme Court held in the same ruling that the publisher should have implemented exclusion protocols so that the content was not indexed by search engines. Failing to do so constituted an illegal processing by the publisher, and the court thus granted moral damages to the claimant. A similar measure mandating the use of exclusion protocols had already been ordered by the Hamburg Court of Appeal a few months earlier.75 On the other hand, in ECtHR cases dealing with news publishers, the Court has engaged in balancing the competing rights and interests under the ECHR, namely the right to respect for private and family life (Art. 8) and the right to freedom of expression (Art. 10), which encompasses the freedom to receive and impart information. The ECtHR jurisprudence confirms that Article 8 ECHR may serve as a basis for a right to be forgotten regarding primary publishers such as media organizations if, in an assessment of the relevant circumstances, the balancing of rights favours the individual’s fundamental rights over the right to freedom of expression of the publisher and the public’s right to receive information.
Nonetheless, the ECtHR has stressed the important role of online archives in preserving and facilitating the public’s access to news and information and has held that such archives fall within the scope of Article 10 ECHR.76 Indeed, the ECtHR has strongly protected freedom of information against attempts to suppress information from press archives, even for information which has been declared defamatory. A case in point is Węgrzynowski and Smolczewski v Poland, concerning a newspaper article alleging that two lawyers had engaged in unlawful practices involving some Polish politicians. A domestic court found the allegations to be unfounded and damaging to the lawyers’ good names and reputation. Some years later, the plaintiffs discovered that 72 See Tribunal Supremo [Supreme Court] (Civil Chamber) judgment of 15 October 2015 ECLI:ES:TS:2015:4132 (Sp.). 73  This was in line with the WP29 Guidelines, which noted that ‘as a rule the right to delisting should not apply to search engines with a restricted field of action, particularly in the case of newspaper website search tools’. See WP29, ‘Guidelines’ (n. 27) 8 (para. 18). 74  See Tribunal Constitucional judgment of 4 June 2018 ECLI:ES:TC:2018:58 (Sp.). 75  See Oberlandesgericht [Higher Regional Court] Hamburg decision of 7 July 2015 7 U 29/12 (Ger.). See Irini Katsirea, ‘Search Engines and Press Archives Between Memory and Oblivion’ (2018) 24 European Public L. 125. 76 See Times Newspapers Ltd (Nos 1 and 2) v UK App. nos 3002/03 and 23676/03 (ECtHR, 10 March 2009) para. 27.


the original article was still available in the newspaper’s online archive and that it was highly positioned in Google’s search results. They sought an order to remove the article from the online archive and compensation for non-pecuniary damages. The claim was rejected. The ECtHR underscored that the legitimate interest of the public in accessing the archive was protected under Article 10,77 and found no violation of Article 8. Agreeing with one domestic court, the ECtHR held that ‘it is not the role of judicial authorities to engage in rewriting history by ordering the removal from the public domain of all traces of publications which have in the past been found, by final judicial decisions, to amount to unjustified attacks on individual reputations’.78 When assessing the degree of diffusion of a news report, the ECtHR considers the original reach of the publication rather than the amplifying effects facilitated by search engines. In two cases where the Court tackled the applicants’ contention that the content was readily available through search engines, the Court disregarded the claim, noting that the applicants had not contacted the search engine to have the links removed.79 In M.L. and W.W. v Germany, the ECtHR examined whether the German courts’ refusal to oblige media publishers to remove from their news reports available online the names of two persons convicted of murder in a famous case some years earlier constituted a violation of those persons’ right to private life.80 The case concerned the murder of a popular actor, of which the applicants had been convicted and for which they had served time in prison. According to the German Federal Court of Justice, the news reports concerned were objective and truthful. Interestingly, in this case the ECtHR cited the Google Spain judgment extensively.
The ECtHR adopted the CJEU’s reasoning, noting that a search engine’s obligations towards the affected person may be different from those of the publisher of the information. The Court stressed that while the initial publisher’s activity lies at the core of what freedom of expression seeks to protect, the main interest of a search engine is not to publish the information about the affected individual, but to make it possible for the public to find the available information about that person and to establish a profile of him or her.81 The ECtHR jurisprudence provides some criteria for weighing the interests at stake; namely, the ‘contribution to a debate of public interest, the degree of notoriety of the person affected, the subject of the news report, the prior conduct of the person concerned, the content, form and consequences of the publication, and, where it arises, the circumstances in which photographs were taken’.82 Applying these criteria in M.L. and W.W. v Germany, the ECtHR found that freedom of expression and information should prevail over the rights of the claimants, and thus that Germany had not incurred a violation of the latter. In particular, the ECtHR found that the news reports at issue did 77 See Węgrzynowski and Smolczewski v Poland App. no. 33846/07 (ECtHR, 16 July 2013) para. 65. 78 ibid. 79 See Fuchsmann v Germany App. no. 71233/13 (ECtHR, 19 October 2017) para. 53; M.L. and W.W. v Germany App. nos 60798/10 and 65599/10 (ECtHR, 28 June 2018) para. 114. 80 ibid. 81  ibid. para. 97 (referring to Google Spain, paras 59–62). 82  ibid. para. 95.


contribute to a debate of public interest. The issue at stake was not the initial publication of the news reports, but their availability years after the criminal procedure had ended—when the data subjects were about to leave prison, and were thus all the more interested in no longer being confronted with their criminal past in view of their social reintegration. In that regard, the ECtHR fully agreed with the German Federal Court that the public has a legitimate interest not only in being informed about current events, but also in being able to search for information about past events, and recalled that the public interest in accessing online press archives is protected under Article 10.83 The ECtHR also warned that finding otherwise could give rise to a chilling effect, with news publishers avoiding making their archives available online or omitting parts of news reports. In this case, the applicants did not intend to have the news reports deleted altogether, but only to have their names erased from them, which implies a lower degree of interference with freedom of expression. Nonetheless, the ECtHR held that it is for journalists to decide—within the deontological rules of their profession—whether to include in a news report individualized elements such as the full name of the person concerned, and that, in fact, including such details is an important aspect of the work of the press, all the more so when it comes to reporting criminal proceedings that have aroused considerable public interest.84

3. Conclusions

The right to be forgotten is a broad category which refers to individuals’ right to control the dissemination and persistent availability of information about them. The basis for such a right may be found in the fields of privacy, data protection, and other personality rights. The most visible manifestation of the right to be forgotten focuses on obtaining delisting from search results generated in searches for the name of the person concerned, and emerged in 2014, in the framework of data protection law, with the CJEU’s landmark Google Spain judgment. Since then, the right to be forgotten regarding search engines has become well-established and settled law. The key findings of the judgment, no matter how controversial they were at the time—and may still be—seem to be here to stay. In addition, as noted in the literature, the basic tenets of this right may also be found in other jurisdictions.85 In any event, delisting requests are routinely dealt with by search engines, DPAs, and the courts in the Member States. While not incorporating the ruling in its precise details, the GDPR only enhances what the Court had already devised based 83 See Węgrzynowski and Smolczewski (n. 77) para. 65. 84 See M.L. and W.W. v Germany (n. 79) para. 105. The ECtHR refers to Fuchsmann (n. 79). 85  See Krzysztof Garstka and David Erdos, ‘Hiding in Plain Sight? The “Right to Be Forgotten” and Search Engines in the Context of International Data Protection Frameworks’, University of Cambridge Faculty of Law Research Paper no. 46/2017 (2017) .


on the Directive—although arguably allowing for better consideration of the freedom of expression and information. The key element when it comes to exercising the right is the appropriate balancing of rights and interests that the search engine is initially called on to perform. Search engines and DPAs seem to converge more and more in the criteria and results of the analysis, which in turn are shaped by court decisions. The areas where the outcomes are most unpredictable are those of information concerning professional performance and of content relating to crimes or to involvement in actual or alleged unlawful activities in the past. Nonetheless, significant open questions still surround the right to be forgotten. Two of them, extraterritoriality and sensitive data, have been highlighted in this chapter. Though they have already been tackled by the CJEU, the practical application of the Court’s criteria is still uncertain. The right to be forgotten is also being exercised against primary publishers, particularly in relation to press archives. Here the case law, led by the ECtHR jurisprudence, emphasizes the importance of such archives for accessing information and tends to favour their inalterability, while accepting less intrusive measures that provide some obscurity in deference to individuals’ expectations of oblivion.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Chapter 26

Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules

Eduardo Bertoni*

Many years ago, I was discussing with colleagues from Latin America the implications of the decision of the Court of Justice of the European Union (CJEU)—the ‘Costeja case’—that established the so-called ‘right to be forgotten’. The common opinion was that the name itself was an affront to Latin America; rather than promoting this type of erasure, we have spent the past few decades in search of the truth regarding what occurred during the dark years of the military dictatorships.1 In this sense, if those who were involved in massive human rights violations could request a search engine (Google, Yahoo, or any other) to make that information inaccessible, claiming, for example, that *  I thank Victoria Racciatti, who assisted me in writing this chapter. Victoria conducted part of the research and drafted some parts of the chapter under my supervision. Victoria obtained her Law degree summa cum laude from the Universidad de San Andrés (2017) School of Law, Argentina, and two postgraduate diplomas: one in International Arbitration from the Universidad Austral (2017) and the other in migrants and protection of refugees from the Universidad de Buenos Aires (2018). This chapter is up to date as of December 2018. 1  I published these ideas some time ago at The Huffington Post. See Eduardo Bertoni, ‘The Right to Be Forgotten: An Insult to Latin American History’ (Huffington Post, 24 November 2014) .

© Eduardo Bertoni 2020.


the information is extemporaneous, it would be an enormous insult to our history. Because this ‘right’ has begun to permeate countries of our region in the form of legislative reforms and judicial decisions—either to implement or to reject it—this chapter intends to review the content and scope given to the right to be forgotten in some Latin American countries during the past years. It is important to start by outlining the CJEU’s Costeja ruling, in which the Court decided that ‘the operator of a search engine is obliged to remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third parties and containing information relating to that person’.2 With this ruling, the Court affirmed what many now refer to as the ‘right to be forgotten’, or, in fact, ‘the right not to be indexed by a search engine’. In other words, the information intended to be forgotten is not erased, but rather remains on the site where it was published. The only obligation search engines have is the prohibition to direct users to that site. However, it is fair to say that, according to the ruling, the search engine would not be forced to remove those results from the list where, for particular reasons such as ‘the role played by the aforementioned interested party in public life’, the interference in their fundamental rights ‘is justified by the preponderant interest of said public in having, as a result of this inclusion, access to the information in question’. The European ruling has been subject to many critiques, the main one being that the CJEU leaves it to the private companies that manage the search engines to decide what we are able to encounter in the digital world. The problem might be even more important for the Latin American countries because of the case law of the Inter-American System of Human Rights, as will be summarized later.
Peter Fleischer, a lawyer and advisor to Google, posted many years ago on his blog: ‘The “Right to be Forgotten” is a very successful political slogan. Like all successful political slogans, it is like a Rorschach test. People can see in it what they want.’3 On the one hand, judges and legislators, perhaps without exhaustively considering the consequences, ‘see’ in this right the need to protect privacy; on the other hand, defenders of freedom of expression, access to information, and the search for the truth ‘see’ its disadvantages. Perhaps the answer is that of Jonathan Zittrain, author of The Future of the Internet and How to Stop It.4 Zittrain suggested that the path forward is probably not a legal right, but rather a structure that permits those who disseminate information to build connections with the subjects of their discussions. In practical terms, that would imply constructing mechanisms for facilitating dialogue between the people involved in information management. When people feel wronged by information available about them online, they should be able to contest that information directly, and the search engine itself should have an instrument to enable that process. More information, not

2  See C-131/12 Google Spain SL v Agencia Española de Protección de Datos [2014] ECLI:EU:C:2014:317. 3  Peter Fleischer, ‘The right to be forgotten, or how to edit your history’ (Peter Fleischer Blogspot, 29 January 2012) . 4  See Jonathan Zittrain, The Future of the Internet and How to Stop It (Yale U. Press 2008).


less. That way, we can stop discussing the right to be ‘forgotten’, which is misguided in many regards, including in its name. This chapter briefly reviews judicial and other authorities’ decisions and some legislative initiatives to determine whether there is a trend towards implementing a right to be forgotten in Latin America. The chapter first summarizes a landmark case decided in Argentina. Then it reviews some standards coming from the Inter-American System of Human Rights that should be considered not only by legislators but also, and principally, by judges. The chapter finishes with some conclusions.

1. The Belén Rodriguez Case: Something New Under the Sun?

In 2014, Argentina’s Supreme Court provided guidance on the matter in the Belén Rodríguez case.5 This case energized the debate concerning the conflict between the interests of privacy and honour and freedom of expression. In 2006, Argentinian model María Belén Rodriguez sued Google claiming that searches of her name returned links to, and thumbnail photographs from, pornographic websites. She alleged that the search results falsely portrayed her as a prostitute and that the thumbnails used her image without permission. A lower court ordered Google to pay damages of approximately US$6,000 on the basis that Google was responsible for the harm caused by the third-party sites, which were not parties to the case. The lower court decided in favour of Rodriguez’s requests.6 Both Google and Yahoo were ordered to pay damages and to delete all links connecting the plaintiff’s name and image to websites with sexual, erotic, and/or pornographic content. The lower court held that the defendants had acted negligently by not proceeding to block or prevent the existence of harmful or illegal content that could affect the plaintiff’s personal rights. The María Belén Rodriguez case later reached the Court of Appeal, where the previous decision was partially revoked.7 The Court of Appeal decided to analyse Google’s and Yahoo’s conduct by applying a subjective regime of liability. In doing so, it concluded that the search engines had not acted negligently, as they had not been directly notified of a request to remove the harmful content, but instead gained actual knowledge only once the plaintiff had initiated legal action. The Court of Appeal added, however, that Google did have to be held liable for the use of thumbnails, because such use represented an unauthorized use of the plaintiff’s image. The Supreme Court of Justice had the final say in this matter.
It decided that the search engines could not be held liable for 'contents that they have not created themselves', except for cases in which (1) they had previous and actual knowledge (2) of the explicit illegality of the content and (3) did not act diligently to remove such content.

5  See Corte Suprema de Justicia de la Nación [National Supreme Court] Rodríguez, María Belén v Google Inc./daños y perjuicios [2014] CSJN Case no. 337:1174 (Arg.).
6  ibid. para. 2.
7  ibid. paras 3–5.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi
506   EDUARDO BERTONI

1.1  Previous and Actual Knowledge

The Supreme Court considered that it was not appropriate to judge the search engines' liability under rules of strict liability, and instead applied subjective liability rules, according to which the search engines were not bound to monitor the contents uploaded to the web.8 More specifically, the Supreme Court stated that the search engines did not have a general obligation to monitor the content uploaded by each website's author. Instead, each author should be held liable for the content provided. As such, the Supreme Court stressed that the absence of a general obligation to monitor implies the absence of liability.9 Pursuant to this decision, search engines should answer for content published by third parties only when they have actual knowledge of the illegality of the content.10 Actual knowledge is gained either through the victim's private notice to the search engine or through a court order deciding to block the website. In this sense, only at the moment of actual knowledge of the existence of the illegal content uploaded to the internet does the search engine become actively involved with the wrongdoing, so that its conduct may trigger liability.

1.2  Explicit Illegality of the Content

As an obiter dictum, the Supreme Court's ruling differentiated the events in which there would be explicit and obvious damage from those in which the circumstances are debatable, doubtful, or require further clarification. For explicit and obvious situations, the court decided that a private notice from the victim addressed to the search engine would be sufficient to bind the search engine to act on the matter. However, with respect to situations which are not obvious, the Supreme Court demanded judicial or administrative intervention.11 In order to clarify which events may cause explicit and obvious damage, the Supreme Court made a list. The following were enumerated: (1) explicitly illegal harmful content; (2) child pornography; (3) data that facilitates or instructs the commission of crimes; (4) content that endangers the life or physical integrity of one or more people; (5) content that advocates genocide, racism, or other discrimination with incitement to violence; (6) content that involves racial or religious hatred; (7) content that disrupts or gives notice of ongoing judicial investigations that should remain secret; (8) content that harms someone's honour or image, or is notoriously false; and (9) images or acts that are naturally private, although not necessarily of a sexual nature.12 According to the Supreme Court, the unlawful nature of the above events is obvious and can be unambiguously established by consulting the website that the victim claims to be unlawful. There is no need for any further assessment or clarification, and an explicit notice by the victim will suffice.

8  ibid. para. 15.
9  ibid. para. 16.
10  ibid. para. 17.
11  ibid. para. 18.

1.3  Negligent Response

The Supreme Court expressly stated that in the previously mentioned cases of explicit illegality—in which private notice by the victim is sufficient—search engines are expected to block the harmful content.13 However, in those cases in which further clarification is needed, judicial proceedings are required to determine whether the content in question violates the victim's rights and, therefore, must be removed, delisted, or blocked. Only once notified of a court order demanding the content's removal will the search engine be liable in damages if it negligently ignores its obligation to block the content.14 On these standards, the Supreme Court rejected the victim's claim. This decision shaped the future of the right to be forgotten in Argentina and, as will be explained below, perhaps influenced other Latin American jurisdictions.

2.  Trends in Latin America: What do they Follow?

The following sections explore legislation, regulations, and judicial and administrative decisions relating to the right to be forgotten in some Latin American countries. References are to Chile, Colombia, Mexico, Peru, and Uruguay.

12 ibid.
13 ibid.
14 ibid.

2.1  Chile

The Republic of Chile does not have specific regulations on the liability of intermediaries. This represents a problem for its jurisprudence. However, some general guidelines are available in the Constitution and in ordinary civil and criminal legislation that allow the issue to be addressed. On the one hand, some argue that the circulation of content, data, and personal images on the internet affects certain fundamental rights of individuals, including the private life and honour of the person and their families. On the other hand, the activity of internet authors and intermediaries is protected by the same rules, specifically with regard to freedom of association for the development of economic activities and the freedom to express opinions and inform without prior censorship. Depending on the criteria adopted, judges have found grounds to lean in favour of one or the other of these rights. In any event, within the framework created by the right to be forgotten in Europe, a group of Chilean senators presented a bill15 which provides for the possibility of requesting that internet search engines remove links to personal data content on certain occasions. At its core, the bill aims to grant users the right to demand the elimination of their personal data from search engines or websites. Thus far, in claims brought to remove content circulating on the internet, Chilean jurisprudence has mostly rejected the right to be forgotten. However, there have been some exceptions. The first exceptional case was Abbott Charme, Jorge v Google.cl.16 Jorge Abbott filed a protection action in the Court of Appeals of Valparaiso, claiming that his name should not be linked in search engines to insults. According to Abbott, Google searches on his name returned websites that insulted him. To prevent that information from appearing on the search engine, Abbott notified Google, which defended itself by replying that the plaintiff should go to the servers or direct managers of the websites where the publications appeared and request them to remove the content. Finally, the Court of Appeals ruled in favour of the plaintiff. It considered that both Jorge Abbott and his family were damaged by the content of the websites and the accusations made against his honour and private life. In that sense, the Court of Appeals ordered Google Chile to establish automated filters to avoid publications of an injurious nature or of any type that entails a violation of constitutional rights.
In addition, and exceeding its jurisdiction, the court ordered Google to adopt a system of preventive control of attacks against the right to honour. This measure meant that search engine employees became judges of the content circulating on internet platforms. However, this decision was never enforced: Google did not adopt filters and the court did not supervise the implementation of its decision. The second exception was ISB and others v VF, Google Chile and Google Inc.17 In this case, three sisters and their aunt requested protective action against VF, a man who had been convicted for creating profiles on internet sites between 2009 and 2013 on which he uploaded offensive content against the plaintiffs. The Court of Appeals admitted that the expressions and photographs were offensive and ordered Google to remove the sites. The court held that maintaining the URLs was a grievance against the psychological integrity and honour of the plaintiffs. Both parties appealed the decision,

15  See Proyecto de Ley no. 10608-07, 7 de abril de 2016, modifica la ley no. 19.628, sobre Protección de la Vida Privada [Draft Law no. 10608-07 of 7 April 2016 amending Law no. 19.628 on the Protection of Private Life] (Chile).
16  See Corte de Apelaciones de Valparaíso [Court of Appeal of Valparaiso] Abbott Charme, Jorge v Google Chile [2012] case no. 228-2012 (Chile).
17  See Corte de Apelaciones ISB y otros v VF, Google Chile y Google Inc. (Chile).


although the Supreme Court never resolved the case because both parties withdrew as a result of a private settlement. In contrast to the two cases just mentioned, jurisprudence has in general been homogeneous in rejecting actions brought against search engines. The courts have maintained that liability cannot be cast on internet intermediaries which, as such, only provide an internet search service and do not own the domains and websites containing the injurious content. For example, this conclusion was reached in Kruljac, Daira v Google Chile and Google Inc.18 Mrs Kruljac started litigation because the search engine indexed her name to a blog that contained her personal information and asserted that she had AIDS. The court rejected the appeal because the content had already been eliminated (or had disappeared) from the original source, but considered that the action should have been directed at the original creator of the content, not the intermediary. The court ruled that, in general, the possibility of action against search engines should be rejected for content created by third parties. Also, as an obiter dictum, the court added that, beyond the plaintiff's assertions, she had not provided data, background, or elements that would categorically establish that she had urged the respondent to eliminate the comments and offences that could be found on the internet through use of the Google search engine. In addition, it was argued that, if 'personal data' is understood to refer to any information concerning natural persons, identified or identifiable (according to the definition provided in Law 19.628), insults, falsehoods, and imputations could not possibly be included in that definition, which added another reason for the request to be rejected.

In Gómez Arata, Maximiliano v Google Chile Ltda and Google Inc., the plaintiff claimed to have been subjected to insulting expressions because, when he typed his name into the Google search engine, it appeared associated with words like 'chanta' and 'thief'. Given these facts, the plaintiff tried to contact the respondents, who rejected the requests to block the web pages on which the harmful content appeared. The Court of Appeals ruled that search engines 'are in principle not responsible for content they have not created'. According to this principle, the court considered that internet intermediaries do not have a general obligation to monitor the content that is uploaded to the network and provided by those responsible for each of the web pages. Referring to Law no. 27.336, as amended by Law no. 20,345 of May 2010, the court held that search engines do not have an obligation to monitor the data they transmit, store, or reference, except in certain circumstances. In this regard, the Court of Appeals ruled that the search engines could answer for third party content when they had actual knowledge of the manifest illegality of that content and such knowledge was not followed by diligent action. Next, the court distinguished certain situations of manifest illegality in which, in principle, it would be enough to notify the search engine. Furthermore, in cases involving actions against electronic media, Chilean jurisprudence has also frequently rejected the right to be forgotten. According to a study

18  See Quinta Sala de la Corte de Apelaciones de Santiago 'Kruljac, Daira contra Google Chile y Google Inc' [2014] (Chile).


conducted by Pedro Anguita Ramírez, the case of Granziani Le-Fort, Aldo v Empresa El Mercurio was the first and so far the only one to recognize the right to be forgotten, but this decision has not been ratified either by the Courts of Appeals or by the Supreme Court.19

2.2  Colombia

Colombian legislation does not explicitly mention the right to be forgotten. However, there are some standards whose consideration is essential for a correct analysis of this matter. The Constitution of Colombia protects the freedom to express and disseminate thoughts and opinions, to inform and receive information, and to establish mass media outlets. It also guarantees journalistic endeavours and prohibits censorship of the media. In turn, the Constitution obliges the state to respect and enforce the rights to privacy and good name, and guarantees the right to know, update, and rectify information that has been collected about people. In addition to the Constitution, the so-called Law of Protection of Personal Data (Statutory Law no. 1,581) was introduced in 2012. Colombia additionally has Law no. 1,341, which establishes the network neutrality principle. Taking into consideration the current and applicable legislation on the matter, we refer here to some judgments that are relevant to the formation of Colombian jurisprudence on this point. First, Decision T-453/13 is noteworthy.20 In this decision, the court accepted the appeal of a decision that forced the newspaper El Nuevo Día to remove from the web any information, news, report, or data that would allow the identification, or the deduction of the identity, of a minor. In this case, 'L', acting in her own name and on behalf of her son 'P' (a minor), had filed a tutela action, alleging that the newspaper had violated her and her son's rights by allowing them to be identified in the mass media, as had occurred on two pages of El Nuevo Día. The court held that the freedom to inform is not absolute, finding its limit, inter alia, in respect for the rights of others.
In that regard, the court affirmed that the media have a duty to publish truthful, objective, and timely information, but must be diligent and careful in the disclosure of information that involves elements of the intimate lives of individuals or their families, because even if the information is true, its public presentation can harm the fundamental rights of those concerned. It also stated that an even greater degree of responsibility is required when the news involves children and adolescents. In addition, in Decision T-453/13, the court analysed the responsibility of search engines.21 Citing Decision T-040 of 28 January 2013,22 the court held that search engines cannot be held liable for the veracity or impartiality of an article, news item, or column that appears in their results. The search engines simply provide a search service for all the information found on the network; they are not the ones who write or publish such information. Thus, the court determined that those responsible for the information were those who collected, analysed, processed, and disseminated the news which identified a minor.

19  See Pedro Anguita Ramírez, 'The Right to be Forgotten in Chile. Doctrine and Jurisprudence' (2017) E-Conference Right to be Forgotten in Europe and Beyond 1–9.
20  See Sala Sexta de Revisión de la Corte Constitucional [2013] case no. T-3819973 (Col.).
21  See Sala Sexta de Revisión de la Corte Constitucional [2013] case no. T-3819973 (Col.).
22  See Sala Primera de Revisión de la Corte Constitucional [2013] case no. T-3.623.589 (Col.).

Likewise, the court refused to hold search engines liable in Decision T-277/15.23 According to the facts of the case, the plaintiff sued the newspaper El Tiempo for violating her rights to name, privacy, due process, petition, and work. According to the plaintiff, the violation resulted from the publication—and indexation—of her name in a journalistic note that linked her to the commission of a crime for which she had never been convicted in a formal judicial process. In view of this, the court ruled that the real violation of the plaintiff's rights did not occur at the time when the search engine indexed her name, but when the newspaper published the story without properly updating it. Therefore, the court ruled against the liability of the search engines and, instead, ordered the newspaper to update the story to include the final judicial decision, although it did not order the newspaper to remove the content. The court referred to the Google Spain case but decided to depart from it, considering that its solution was not suitable in the case at hand for two reasons: (1) because de-indexing the harmful article would not prevent access to it through a direct link; and (2) in order to guarantee, first of all, the right to freedom of expression set out in the Constitution. Finally, the court ordered El Tiempo to use technological means (e.g. robots.txt and meta tags) to prevent search engines from indexing the news item anew on its web pages.
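On the technological measures the court mentioned: robots.txt files and robots meta tags are the standard mechanisms of the Robots Exclusion Protocol. The following is a purely illustrative sketch (the file path shown is hypothetical, not taken from the court record) of how a publisher can discourage crawling of a page or ask compliant crawlers not to index it:

```text
# robots.txt placed at the site root: asks compliant crawlers not to crawl
# the listed path (strictly, this blocks crawling rather than indexing).
User-agent: *
Disallow: /archivo/nota-ejemplo.html

<!-- Alternatively, a robots meta tag inside the page's <head> asks crawlers
     that do fetch the page not to index it or follow its links. -->
<meta name="robots" content="noindex, nofollow">
```

Both mechanisms are advisory: they depend on crawler compliance and do not alter the content on the source page itself, which is consistent with the court's decision to also order the newspaper to update the story.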
In Decision T-098/17 the court rejected the claim of Luis Alfonso Cano Bolaño against Caracol Televisión.24 The plaintiff alleged that the images Caracol had recorded in the courtyard of the school where he worked were illegal and thus requested their deletion. Caracol Televisión had tried to record an interview as part of an investigation into the conviction of Mr Cano Bolaño in 2000 for several crimes. The Constitutional Court indicated that within the Colombian legal order there is a presumption in favour of freedom of expression. As an obiter dictum, the Constitutional Court underscored that in matters of criminal law one cannot assert a 'right to oblivion', in contrast to cases of credit information. According to the court, this is a fundamental difference between 'habeas data' and 'habeas data penal'. Thus, the unified jurisprudence of the Constitutional Court explained that there is no right to totally and definitively suppress negative data referring to a criminal record. According to the court, the faculty of suppression should be understood in dynamic play with the rest of the principles of personal information management and, above all, in relation to the principle of purpose. The preservation of criminal records fulfils legitimate constitutional and legal purposes to which the court has made constant reference (morality of the public function, application of criminal law, intelligence activities, enforcement of the law). The court emphatically affirmed that, in this case, there was no right to oblivion as such. Therefore, the court considered that the right of habeas data did not include the possibility of demanding from the administrator of the criminal records database the total and definitive exclusion of such data.

23  See Sala Primera de Revisión de la Corte Constitucional [2015] case no. T-4296509 (Col.).
24  See Sala Novena de Revisión de la Corte Constitucional [2017] case no. T-5759011 (Col.).

2.3  Mexico

The Constitution of the United Mexican States, in its articles 6 and 7, expressly guarantees the freedom to disseminate opinions, information, and ideas through any means, without prior censorship. The Mexican state guarantees the right to information, except where it attacks morals, private life, or the rights of third parties, provokes a crime, or disturbs public order. Likewise, the Constitution recognizes, in its article 16, the fundamental right to the protection, access, rectification, and cancellation of personal data (except for reasons of national security, public order provisions, security and public health, or to protect the rights of third parties). In turn, the Federal Law on the Protection of Personal Data in Possession of Individuals guarantees individuals the right to access, rectify, cancel, and oppose the processing of personal data. Compliance with that law is supervised by the National Institute of Transparency, Access to Information and Protection of Personal Data (INAI), an autonomous administrative agency. The so-called rights of access, rectification, cancellation, and opposition, which establish that personal data are not owned by public bodies, together with the Federal Law on the Protection of Personal Data, are the fundamental tools available to the Mexican state to promote digital data protection rights. In addition, in 2017 the Official Gazette published the General Law for the Protection of Personal Data in Possession of Obliged Subjects, through which Mexico seeks greater control over the treatment of the personal data of every individual. This legislation adds to the Telecommunications Law and the Personal Data Protection Law. The decisions of Mexican courts will be analysed in the light of that legislation.
The first case that dealt with the matter, in 2011, was Anonymous Applicant v Federal Board of Conciliation and Arbitration.25 The applicant had participated in a labour dispute before the Federal Board of Conciliation and Arbitration. As part of that procedure, the Board had posted notices in its online bulletins, and anyone could find and access information about the applicant's participation in the dispute through a simple search of the applicant's name. The applicant requested that the Board cease the online publication of her personal data. When the Board refused, the applicant asked the Federal Institute of Access to Public Information (IFAI) to order the cessation of the publication. In 2009, the IFAI denied the request on the grounds that it did not fall within the grounds for review, as the relevant laws only allowed review of denials of access to information, not of requests to stop the dissemination of information.

25  See Anonymous Applicant v Federal Board of Conciliation and Arbitration [2011] in Columbia University Global Freedom of Expression.


The applicant filed an amparo (a remedy for the protection of constitutional rights) and the decision was reviewed again. The IFAI then began to distinguish between the right to the correction of personal data and the right to the elimination of personal data. It concluded that the first applies in cases where the information held by a public authority is incorrect or inaccurate, while the second applies in cases where the possession of personal information by a public authority is unnecessary or excessive. In addition, the information at issue was considered a historical document and, as such, could not be subject to deletion, because it was of permanent secondary value of an informative nature. Likewise, the IFAI clarified that the applicant was not requesting the complete elimination of her personal data, but was only complaining about its continuous dissemination, which, in the opinion of the IFAI, involved the right to opposition rather than the right to elimination. The IFAI examined the doctrine regarding the right to opposition and concluded that the right would only apply in cases where: (1) consent to process the data is not mandatory; (2) the processing of the data is legitimate in accordance with the applicable standards; and (3) there is a justified reason on the part of the applicant, based on their specific personal situation, to request an end to the processing of their data to avoid injury. The Carlos Sánchez de la Peña case against Google Mexico was resolved in the diametrically opposite direction.26 This was the first Mexican case not only to refer to the right to be forgotten but also to endorse it. Sánchez de la Peña had asked Google Mexico to remove various results returned by a search of his name, which linked him to acts of corruption.
Given the refusal, the INAI initiated proceedings to sanction Google Mexico because de la Peña had not been able to exercise his constitutional right of 'cancellation and opposition to the processing of his personal data'. The INAI decided to order Google Mexico to remove the links. The INAI maintained that Google was a data controller under the Mexican data protection law and, as such, was responsible for the treatment it applied to personal data appearing on web pages published by third parties. Therefore, under certain conditions, data subjects should be able to go directly to the search engine to seek removal of links to web pages containing information about them, when those links were displayed as the result of a search based on the data subject's name. The INAI concluded that Google had breached that obligation when it denied the request for cancellation. It is worth noting that Fortuna magazine later filed an amparo suit against the decision,27 arguing that both the procedure and the INAI's decision violated the magazine's due process rights. The Seventh Collegiate Circuit Court of the Auxiliary Center of the First Region granted Fortuna's request. The court considered that the order granted by the INAI affected the magazine's right to freedom of expression and that the magazine should therefore have participated in the administrative procedure as an interested third party. With that decision, the INAI resolution was overturned and de la Peña's claim came back before the INAI, this time with the magazine granted the right to join the proceedings.28 Regardless of the current and applicable regulations and jurisprudential decisions, it is worth noting the actions of Mexican civil society organizations, which have urged rejection of the right to be forgotten. Led by the Digital Rights Network, civil society organizations have said that accepting the right to be forgotten and de-indexing does not guarantee the protection of personal data or solve the real problems of misuse of data on the internet, but rather implies regression.29

26  See Instituto Federal de Acceso a la Información y Protección de Datos [2014] case no. PPD.0094/14 (Mex.).
27  See Séptimo Tribunal Colegiado de Circuito del Centro Auxiliar de la Primera Región [2017] (Mex.).

2.4  Peru

Peru welcomes in its Constitution the right to freedom of information, opinion, expression, and dissemination by any means of communication, without prior authorization, censorship, or any impediment. In fact, the Constitution establishes as an offence any action that suspends or closes an organ of expression or prevents it from circulating freely.30 In turn, the Constitution (in the same article, in a different paragraph) provides for the right of people to 'the assurance that information services, whether computerized or not, whether public or private, will not provide information affecting personal and family privacy'.31 Since 2011, Peruvian law has regulated the right to data protection in the Law of Protection of Personal Data.32 This law guarantees the rights of access, rectification, cancellation, and opposition. Peru also has a Draft Regulation of the Law on Protection of Personal Data that ensures the state's commitment to the surveillance and protection of information and data in circulation. Additionally, an administrative office in the Ministry of Justice and Human Rights has served as a data protection authority with the power, since 2013, to resolve data protection-related matters; however, the institutional design of the data protection authority is currently undergoing change. Peru has shown a trend in recent years in favour of the application of the right to be forgotten, reflected in several rulings from Peruvian authorities concerned with the liability of search engines with respect to the processing of personal data. In Resolutions 074-2014-JUS/DGPDP33 and 075-2014-JUS/DGPDP,34

28  See Red de Defensa de los Derechos Digitales, '¡Ganamos! Tribunal anula resolución del INAI sobre el falso derecho al olvido' [2016].
29  See Carla Martínez, 'Piden eliminar derecho al olvido de Constitución en CDMX' (El Universal, 2016).
30  Political Constitution, enacted 29 December 1993, Art. 2(4) (Per.).
31  ibid. Art. 2(6).
32  See Ley no. 29.733, 3 de julio de 2011 [Law no. 29.733 of 3 July 2011] (Per.).
33  See Resolución Directoral [2014] case no. 074-2014-JUS/DGPDP (Per.).
34  See Resolución Directoral [2014] case no. 075-2014-JUS/DGPDP (Per.).


the General Directorate of Personal Data of Peru issued decisions against the website datosperu.org for containing judicial and administrative resolutions without anonymization and without giving the data holders the rights of access, rectification, cancellation, and opposition. In Resolution 045-2015-JUS/DGPDP,35 the General Directorate of Personal Data of Peru went even further, ordering Google to de-index search results and imposing a fine based on the liability of the search engine. The plaintiff had requested that Google Peru cancel his personal data in relation to any information or news regarding a criminal accusation that had been dismissed. The plaintiff used the tool called 'remove or update outdated information' that Google makes available to internet users for sending removal requests. In response, Google replied: 'we recommend that you contact the owner of the website in question directly'. Faced with that rejection, the plaintiff filed a complaint with the General Directorate of Personal Data of Peru. The General Directorate ordered Google to de-index certain results, so that they would not appear in the search engine, and imposed a fine. According to the General Directorate, de-indexing the results does not in any way imply the elimination from the internet of all information or news related to the matter, which (1) continues to exist unchanged on the source web page (so the claim can be exercised against the search engine, without the need to contact publishers in advance) and (2) remains accessible through search engines via any other search term (the measure only reduces the accessibility of the information through a search for the data subject's first and last names).
Additionally, the General Directorate of Personal Data of Peru identified three types of search engine: (1) search indexes, which in their database relate topics to internet addresses; (2) search engines proper, which in their database relate topics to keywords; and (3) the metasearch engines, which do not have their own database, but use those of third parties. According to the Directorate, Google Search is a search engine and there is no doubt that, as such, it tracks information and catalogues it according to a certain order of preference. This single classification of information, which includes people’s names and surnames, constitutes a processing of personal data and the Directorate considered, for the first time in Peru,36 that Google was responsible for that data processing. In the same year, the Directorate issued Resolution 026-2016 (2016) again against Google.37 The plaintiff filed a claim against Google stating that his right to obtain cancellation of his personal data contained in a notice related to the dismissal of a lawsuit to which he was a party, which appears in Google search results, was denied. The General Directorate of Personal Data of Peru considered that search engines function as data controllers and, Google one such controller, was ordered to block not only specific URLs but any and all searches related to the incident that could appear by searching for the 35  See Resolución Directoral [2015] case no. 045-2015-JUS/DGPDP (Per.) . 36  See Cynthia Téllez Gutiérrez, ‘Derecho al Olvido en versión peruana 1.1’ (La Ley, 2016) . 37  See Resolución Directoral [2016] case no. 026-2016-JUS/DGPDP (Uru.) .

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

516   EDUARDO BERTONI

name of the plaintiff. Subsequently, on appeal, the General Directorate of Personal Data of Peru confirmed its decision and detailed a list of sixteen links subject to de-indexing.

2.5 Uruguay

While the Constitution of the Republic of Uruguay provides for the right to freedom of expression, it also provides that ‘the enumeration of rights, duties and guarantees made by the Constitution does not exclude others that are inherent to human personality or derive from the republican form of government’. The right to the protection of personal data is a right inherent to the human person, included in Article 72 of the Uruguayan Constitution. Reinforcing this constitutional protection, Law 18.331 was enacted in 2008 to regulate the Protection of Personal Data and Habeas Data. Moreover, Article 329 of Law 16.226 requires the National Registry of Judicial Records to eliminate all facts and data referring to the event that determined the prosecution in cases in which the criminal proceeding ended with revocation of the prosecution, dismissal, or acquittal. Despite these current and applicable Uruguayan regulations, in the case of AA and BB against CC a court rejected a request to eliminate data and personal information.38 In that case, a man had been accused ten years earlier, as a mentally incapacitated party, of two homicides and a theft. He was admitted to a mental hospital and, after release several years later, moved to live in his parents’ house, following the prescribed treatment and attending periodic consultations with the psychiatrists who took care of his mental condition. His parents, the plaintiffs, came to know that the defendant CC, for its TV programme ‘GG’ conducted by DD, was seeking information about the case involving their son. The plaintiffs therefore brought an amparo action against the defendant to enjoin the future release of any TV programme or information that might mention them or their son in connection with their son’s homicide and theft proceedings. In this instance, the court considered that the request fell within the area of prior censorship.
Nevertheless, that conclusion was not unanimous and the treatment of similar matters continues to be subject to constant misunderstandings. In response to a request to the Executive Council of the Personal Data Regulatory and Control Unit, Resolution 1040/012,39 determined that the best option to address the issue of content de-indexing would be to take preventive measures to control the spread of information and, once the person who published the content wants it to be removed, the original content should simply be removed from the internet and the most popular

38  See Juzgado Letrado de Primera Instancia en lo Civil de 8° Turno no. 55 [2008] case no. IUE 2-400000/2008 (Uru.).
39  See Executive Council of the Personal Data Regulatory and Control Unit [20 December 2012] Resolution no. 1040/012 (Uru.).


search engines should be asked to remove the content from their caches.40 In any event, the Council adds, metatags can be added in the HTML code of the web page to avoid indexing or caching of content.41

40  ibid. 3.
41  ibid.
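The metatags the Council refers to correspond, in practice, to the standard robots directives that a page owner can place in a page’s HTML. A minimal, illustrative fragment might look as follows (the directive names are the widely recognized robots conventions, not text taken from Resolution 1040/012 itself):

```html
<head>
  <!-- Ask search engine crawlers not to index this page at all -->
  <meta name="robots" content="noindex">

  <!-- Alternative: permit indexing but ask engines not to serve a cached copy -->
  <!-- <meta name="robots" content="noarchive"> -->
</head>
```

Compliance with such directives is voluntary on the part of the crawler, which is precisely why the Council pairs them with direct requests to remove content from search engine caches.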

3.  The Right to Privacy and the Right to Freedom of Expression Under the Inter-American System of Human Rights and its Impact on the Right to be Forgotten

One of the established standards in the Inter-American System of Human Rights is the prohibition of prior censorship, an issue that, as will be seen in the following, has an important influence when discussing content-filtering, content de-indexing, or the ‘right to be forgotten’. The text of Article 13 of the American Convention on Human Rights (ACHR) in relation to the prohibition of prior censorship is clear.42 In 1985, when Advisory Opinion no. 5 (OC-5) was issued, the Inter-American Court of Human Rights explained that ‘the abuse of freedom of expression cannot be subject to preventive control measures but [only constitute] the basis of responsibility for whoever committed it’.43 Secondly, the high value attached to the prohibition of prior censorship led the Court to understand that ‘Article 13 of the American Convention, which was partly modeled on Article 19 of the Covenant, contains a more reduced list of restrictions than the European Convention and the Covenant itself, if only because these do not expressly prohibit prior censorship’.44 Later, and reaffirming concepts already held in OC-5, the Inter-American Court went further by saying that:

Article 13(4) of the Convention establishes an exception to prior censorship, since it allows it in the case of public entertainment, but only in order to regulate access for the moral protection of children and adolescents. In all other cases, any preventive measure implies the impairment of freedom of thought and expression.45

42  See Organization of American States (OAS), American Convention on Human Rights (‘Pact of San Jose’), Costa Rica, 22 November 1969 (entered into force 18 July 1978) Art. 13.
43  See Advisory Opinion OC-5/85 Series A no. 5 (Inter-American Court of Human Rights, 13 November 1985) para.
39 (‘[e]l abuso de la libertad de expresión no puede ser objeto de medidas de control preventivo sino fundamento de responsabilidad para quien lo haya cometido’).
44  ibid.
45  La Última Tentación de Cristo (Olmedo Bustos y otros) v Chile Serie C no. 73 (Inter-American Court of Human Rights [IACHR], 2001) para. 70.


Note the force of this last sentence, which establishes that, in the Inter-American Court’s view, the prohibition of prior censorship is practically absolute. In this context, a question that is difficult to resolve in the digital age is raised by proposals that enable, for example, the implementation of the right to be forgotten as defined in this chapter (de-indexing of content). When the responsibility of intermediaries, whose link with the implementation of the right to be forgotten is direct, is mentioned in freedom of expression studies in the digital age, it pertains to the possibility of casting subsequent responsibility on those who do not intervene in the creation of the contents found on the internet.46 How does this relate to the just-mentioned standard of the Inter-American system and to the implementation of the right to be forgotten, as defined in the Costeja case or in the findings of the recently decided Delfi AS v Estonia case in the European Court of Human Rights?47 In both cases, intermediaries can avoid liability by applying some criterion of censorship. However, it is doubtful whether this would be possible in the context of the Inter-American System. First, as explained earlier, the prohibition of prior censorship would also apply to cases that seek to resolve the responsibility of intermediaries when they agree not to index (censor?) some content. In other words, establishing these guidelines could contradict Article 13(2) of the ACHR. On the other hand, leaving the possibility of censoring content to private companies can generate responsibility for the state precisely because those private parties would be violating freedom of expression. OC-5 reminds us that:

Article 13.3 not only deals with indirect governmental restrictions, but also expressly prohibits ‘particular . . . controls’ that produce the same result.
This provision must be read together with Article 1.1 of the Convention, where the States Parties ‘undertake

46  A 2011 Joint Declaration by international freedom of expression rapporteurs specifically addressed this matter by noting that: ‘2. a. No one who simply provides technical Internet services such as providing access, or searching for, or transmission or caching of information, should be liable for content generated by others, which is disseminated using those services, as long as they do not specifically intervene in that content or refuse to obey a court order to remove that content, where they have the capacity to do so (“mere conduit principle”). b. Consideration should be given to insulating fully other intermediaries, including those mentioned in the preamble, from liability for content generated by others under the same conditions as in paragraph 2(a). At a minimum, intermediaries should not be required to monitor user-generated content and should not be subject to extrajudicial content takedown rules which fail to provide sufficient protection for freedom of expression (which is the case with many of the “notice and takedown” rules currently being applied)’. See Organization for Security and Co-operation in Europe (OSCE), ‘International Mechanism for Promoting Freedom of Expression: Joint Declaration on Freedom of Expression and the Internet by the United Nations Special Rapporteur on Freedom of Opinion and Expression, the OSCE Representative on Freedom of the Media, the OAS Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information’ (2011) § 2(a)–(b). This Declaration was followed by the RELE-OEA in a 2013 report.
47  See Delfi AS v Estonia [GC] App. no. 64569/09 (ECtHR, 16 June 2015). See, for a detailed description of the case, Chapter 24.


to respect the rights and freedoms recognized (in the Convention) . . . and to guarantee their free and full exercise to any person subject to their jurisdiction . . .’ Therefore, the violation of the Convention in this area can be a product not only of the State itself imposing restrictions aimed at indirectly preventing ‘the communication and circulation of ideas and opinions’, but also of the State not having ensured that the violation does not result from the ‘particular . . . controls’ mentioned in paragraph 3 of Article 13.48

The ‘particular controls’ that may violate the exercise of freedom of expression, to which OC-5 referred in 1985, acquire an important meaning in the digital era. Are not ‘particular controls’ what the CJEU asked Google to apply in Costeja, or what the European Court of Human Rights asked news platforms to apply in Delfi? The Inter-American Court in OC-5 does not leave much doubt in answering this question, stating that ‘in the broad terms of the Convention, freedom of expression can also be affected without the direct intervention of state action’.49 Secondly, when it comes to expressions that may concern the public interest, OC-5 established the bases for the development of what some years later began to be called the ‘tripartite test’, which enables the imposition of subsequent responsibilities only for certain expressions. This test finds an initial formulation in the Advisory Opinion of the Inter-American Court, which determined that ‘for such a responsibility to be validly established, according to the Convention, it is necessary that several requirements be met, namely: a) the existence of previously established causes of liability, b) the express and exhaustive definition of these causes by law, c) the legitimacy of the aims pursued when establishing them, and d) that those grounds of liability are “necessary to ensure” the aforementioned purposes’.50 This test is important in the light of how the IACHR subsequently applied OC-5 in a good number of cases in which it interpreted Article 13. In those cases, the Court understood that:

freedom of expression constitutes one of the essential pillars of democratic society and a fundamental condition for its progress and the personal development of each individual.
This freedom should not only be guaranteed with regard to the dissemination of information and ideas that are received favorably or considered inoffensive or indifferent, but also with regard to those that offend, are unwelcome or shock the State or any sector of the population.51

Consequently, if some kind of responsibility is established for intermediaries as a consequence of not de-indexing content that they do not create, such responsibility should be established not only by respecting the tripartite test but also by considering that a good

48  Advisory Opinion OC-5/85 (n. 43) para. 38.
49  ibid. para. 56.
50  ibid. para. 39.
51  See Herrera Ulloa Serie C no. 107 (IACHR, 2004) para. 113; La Última Tentación de Cristo (n. 45) para. 69; Ríos y otros v Venezuela Serie C no. 194 (IACHR, 2009) para. 105; Caso Perozo y otros v Venezuela Serie C no. 195 (IACHR, 2009) para. 116.


number of expressions that are currently the object of questioning, and that are found on the internet, should not generate any kind of responsibility, either for their authors or for intermediaries. Although this is a very brief review of the jurisprudence of the IACHR, the standards summarized here provide evidence that adapting the ‘right to be forgotten’ in Latin American countries must be done with great care.

4. Conclusions

In view of the previous analysis, it might be important to underscore some preliminary conclusions about trends and future developments in Latin America.

(1) A frequent confusion must be avoided: speaking of ‘intermediaries’ while referring only to internet search engines, since decisions and public policies on ‘intermediaries’ can affect all types of intermediaries. If the ‘right to be forgotten’ were applied to any intermediary, the consequences for the exercise of freedom of expression would be even greater, and its practical implementation would be more problematic.

(2) It would seem that the trend in Latin America is to completely banish the idea of strict liability, as explained in the Argentine case of Belén Rodriguez, which at some point was proposed by some scholars through theories that interpreted the use of the internet as a ‘risky thing’.

(3) Although the decisions do not all go in one direction, the influence of the Inter-American System of Human Rights seems to be powerful, and for that reason there is a tendency to deny search engines’ responsibility.

(4) However, determining when content is ‘manifestly illegitimate’, and therefore when its censorship is legitimate, as proposed by some cases in Latin America and, in some ways, by the Costeja case, is an issue that must be treated with care. This thorny issue is aggravated when the power to determine what is ‘manifestly illegitimate’ is left in the hands of a private party. This is a concerning issue which is not yet resolved.
Perhaps, to stress some of the points already made in this chapter, the best way to wrap up this review of the state of intermediary liability in Latin America might be by quoting the 2016 annual report of the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights: [I]nternational human rights law does not protect or recognize the so-called ‘right to be forgotten’ in the terms outlined by the CJEU in the Costeja case. On the contrary, the Office of the Special Rapporteur is of the opinion that the application to the Americas of a private system for the removal and de-indexing of online content with such vague and


ambiguous limits is particularly problematic in light of the wide regulatory margin of the protection of freedom of expression provided by article 13 of the ACHR. The removal of content from the Internet constitutes a clear interference with the right to freedom of expression, in both its individual and social dimensions, as well as the right of access to information by the people. . . . A similar effect—albeit not identical because of its dimension—is the de-indexing of content, insofar as it makes the information more difficult to find and renders it invisible. Both have a limiting effect on the right to freedom of expression because they restrict the possibility to seek, receive and impart information and ideas regardless of national frontiers. In the Americas, after many years of conflict and authoritarian regimes, individuals and human rights groups have maintained a legitimate claim to access to information regarding governmental and military activity of the past and gross human rights violations. People want to remember and not to forget. In this sense, it is important to recognize the particular context of the region and how a legal mechanism such as the so-called ‘right to be forgotten’ and its incentive for de-indexation might impact the right to truth and memory. The protection of personal data to which the right to be forgotten refers cannot lead to the imposition of restrictions on information disseminated by media outlets that could affect the privacy rights or reputation of an individual. . . . Media digital platforms cannot be understood as personal data controllers. They are public sources of information and platforms for the dissemination of opinions and ideas on matters of public interest, and therefore cannot be subject to a de-indexing order nor to the suppression of online content regarding matters of public interest.52

52  Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, ‘Annual Report’ (15 March 2017) OEA/Ser.L/V/I Doc. 22/17.


PART V

INTERMEDIARY LIABILITY AND ONLINE ENFORCEMENT


Chapter 27

From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression

Aleksandra Kuczerawy

Since the emergence of the internet industry, liability for user content has been considered a problematic issue. Providers of intermediary services, such as access providers and hosts, quickly became aware of the potentially high risks resulting from content liability.1 In the light of developing case law and a lack of harmonization, the young internet industry launched a plea for immunity for third party content.2 The plea did not go unanswered. Policymakers around the world introduced limited liability regimes consisting of two basic principles: (1) a lack of liability of intermediaries for third party content, provided they do not modify that content and are not aware of its illegal character; and (2) the absence of a general obligation to monitor content.3 Such immunity was meant to stimulate the growth and innovation of the newly born technology and provide positive incentives for further development. Some regulatory instruments introduced an additional immunity condition, which requires intermediaries to act

1  See Organization for Economic Co-operation and Development (OECD), Directorate for Science, Technology and Industry, Committee for Information, Computer and Communication Policy, ‘The Role of Internet Intermediaries In Advancing Public Policy Objectives, Forging partnerships for advancing public policy objectives for the Internet economy’, Part II (22 June 2011) 11.
2  ibid. 11.
3  ibid. 6.

© Aleksandra Kuczerawy 2020.


expeditiously upon obtaining knowledge of the illegal character of content. The resulting mechanism, commonly referred to as ‘notice and takedown’, provides rightholders with an opportunity to call upon an internet intermediary directly to remedy a wrongdoing they believe they have been subject to. Limited liability regimes then gradually made their way into regulatory instruments at both the national and regional level. They could first be spotted in the US Digital Millennium Copyright Act (DMCA).4 In the EU, liability exemptions for internet intermediaries were incorporated in the e-Commerce Directive 2000/31, which was later implemented in all EU Member States.5 Despite the popular name, ‘notice and takedown’ is only one of a variety of mechanisms that can be adopted by an intermediary. A more appropriate and generic term is ‘notice and action’ (N&A), which encompasses the variety of mechanisms designed to eliminate illegal or infringing content from the internet upon the request of a rightholder.6 N&A is based on the relatively simple idea of a complaint mechanism that provides anyone who considers that their rights have been infringed (whether the rightholder, a third party, an organization, etc.) with an easy way to seek relief. N&A is always initiated by way of a notice. The intermediary can respond to such a notice in a number of ways. It can react immediately by taking down or blocking access to the content, or it can wait for a response from the content provider and react accordingly after hearing their defence. It can also act continuously against the content and any future infringements involving the same or merely similar content. The following sections describe the most commonly encountered notice-and-action mechanisms around the world, namely ‘notice and takedown’ (NTD), ‘notice and notice’ (NN), and ‘notice and stay down’ (NSD). The analysis of each mechanism refers to specific national implementations.
As all N&A mechanisms provide for the removal or blocking of content, each mechanism also constitutes a potential interference with the right to freedom of expression. The goal of this chapter is to examine how different types of N&A mechanisms amplify the risks to free expression and what safeguards they include to prevent such risks from manifesting themselves. The reader should note that the analysis provided here is not exhaustive. Rather, it serves to give an indication of how the presence of safeguards and their implementation can impact the right to freedom of expression.

4  See the Digital Millennium Copyright Act of 1998, 17 USC § 512 (US).
5  See Directive 2000/31/EC of the European Parliament and the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1 (e-Commerce Directive).
6  See more in European Commission, ‘Communication to the European Parliament, the Council, the Economic and Social Committee and the Committee of Regions, A coherent framework for building trust in the Digital Single Market for e-commerce and online services’ SEC(2011) 1640 final, 13, fn. 49.



1.  Impact on Freedom of Expression

Notice-and-action mechanisms are often driven by practical considerations, allowing for swift and effective relief, far quicker than the relief typically provided by the judiciary. By creating such a possibility, states have provided an efficient, direct, and accessible form of redress.7 Responses to complaints, however, have a direct effect on the right to freedom of expression, as they can lead to the removal of content from the internet (or, alternatively, a reduction of its visibility). N&A mechanisms place intermediaries in a situation where they are essentially required to decide between competing rights and interests. This is obviously problematic because, as private companies, intermediaries are not qualified to replace courts of law in such an important task. This approach has often been described as an ‘inappropriate transfer of juridical authority to the private sector’.8 An aggravating factor is that a refusal to take down content puts intermediaries at risk of being held liable. Obviously, the most cautious approach is to act upon any indication of illegality, without engaging in any sophisticated balancing of the rights in conflict. It is not surprising, therefore, that in many cases investigation of the illicit character of the content and balancing of the rights at stake are minimal at best.9 This often leads to the preventive overblocking of entirely legitimate content, or in other words, to ‘over-compliance’. Nevertheless, for reasons of practicality and efficiency, the involvement of intermediaries in content regulation seems inevitable. Involving intermediaries in making decisions as to whether or not content should be taken down is, in fact, ‘something that cannot practicably be forestalled, on pain of completely undermining the way the Internet operates’.10 If this is the case, however, sufficient safeguards to protect the right to freedom of expression must be put in place.
Many existing N&A mechanisms attempt to do so by incorporating various safeguards in their procedures. Whether or not these safeguards function properly depends very much on how they are legally framed and implemented. The provided safeguards can either achieve their goal or completely miss the point. For this reason, it is worth looking at a few of the most popular notice-and-action mechanisms

7  See also Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016) 64.
8  European Commission, ‘Summary of the results of the Public Consultation on the future of electronic commerce in the Internal Market and the implementation of the Directive on electronic commerce (2000/31/EC)’ 12.
9  See discussion in Christian Ahlert, Chris Marsden, and Chester Yung, ‘How Liberty Disappeared from Cyberspace: the Mystery Shopper Tests Internet Content Self-Regulation’ (2014).
10  Marcelo Thompson, ‘Beyond Gatekeeping: The Normative Responsibility of Internet Intermediaries’ (2016) 18(4) Vand. J. of Entertainment & Tech. L. 793.


and to examine how the different elements of their procedures impact freedom of expression and what safeguards they contain.

2.  Notice and Takedown

2.1 General

Notice and takedown (NTD) is a mechanism whereby an internet intermediary is called upon directly by a private entity (an individual, a company, a rightholders’ organization, etc.) to remove or disable access to information in response to a breach of that entity’s rights (or of the law more generally). It is the intermediary’s task to assess whether such a complaint is credible and whether the content is in fact infringing or illegal. Based on this assessment, the intermediary must decide either to remove the disputed content or to keep it available. It is, therefore, a two-stage process in which both rightholders and intermediaries are involved in the enforcement of rights on the internet.11 When looking at specific national implementations of NTD, it becomes clear that they all contain additional elements or formal conditions which have a substantial effect on the right to freedom of expression in the content-removal process.

2.2  Variations of the Mechanism

Notice-and-takedown mechanisms can be found in regulatory instruments at both the regional and national levels. The most well-known notice-and-takedown mechanism is included in the US DMCA. The procedure in the DMCA relates exclusively to copyright-infringing content. A crucial element of the DMCA is that it allows and expects internet intermediaries to disable access to material or activity claimed to be infringing, as long as they act in good faith in response to a claim, or based on facts or circumstances indicating that the material or activity is infringing.12 In the EU, a notice-and-takedown mechanism is implied, but not directly provided, in Article 14 of the e-Commerce Directive 2000/31/EC. Under this provision, hosting providers can benefit from a liability exemption if they act expeditiously to remove or disable access to information upon obtaining knowledge of its illegal character. The provision applies to any kind of illegal or infringing content. In practice, however, NTD mechanisms are often introduced in sector-specific regulations, often addressing only one type of content.

11  See Martin Husovec, ‘The Promises of Algorithmic Copyright Enforcement: Takedown or Staydown? Which is Superior? And Why?’ (2018) 42 Colum. J. of L. & Arts 53.
12  See DMCA (n. 4) s. 512(g)(1).


It is indicative that the majority of existing NTD mechanisms apply only to copyright infringements.13 Over time, some countries have broadened the reach of the provided mechanisms to other types of infringements.14 Hungary, for example, has extended the mechanism to cover the personal rights of minors in addition to copyright infringements.15 In 2017, Germany introduced a law, the NetzDG, requiring social media providers to remove manifestly unlawful content, for example hate speech, upon complaint.16 South Korea provides two versions of the mechanism, one for copyright infringements and another applying to privacy-infringing content, defamatory content, or content otherwise violating the rights of others.17 Not all countries, however, limit the scope of the NTD mechanism to a specific type of content. The NTD procedure implemented in France, for example, does not contain such a delineation.18 Removal can be requested with regard to any content in violation of national law.19

13  See ibid. s. 512. In Finland, the NTD procedure described in the Information Society Code applies specifically to content infringing copyright or neighbouring rights. See Tietoyhteiskuntakaari [The Finnish Information Society Code] no. 917 of 7 November 2014, entered into force on 1 January 2015, ch. 22 (Fin.). In Hungary, see Act CVIII of 2001 on certain issues of electronic commerce services and information society services, promulgated on 24 December 2001, Art. 13(1) (Hun.). In South Korea, see Copyright Act amended by Act no. 9625 of 22 April 2009; Act no. 10807 of 30 June 2011; Act no. 11110 of 2 December 2011; Act no. 14083 of 22 March 2016; Act no. 14634 of 21 March 2017 (Kor.).
14  South Korea provides another type of NTD, for content infringing rights other than copyright, in the Act on Promotion of Information and Communications Network Utilization and Information Protection, last amended by Act no. 11322 of 17 February 2012 (Kor.) (hereafter ICNA, Information and Communications Network Act).
15  The Hungarian NTD procedure foresees two procedures: (1) ‘hard’ notice and takedown for intellectual property rights and trade mark infringements; and (2) ‘soft’ notice and action for infringements of the personal rights of minors. ‘Hard’ NTD requires content deletion without assessing whether the request is justifiable or not. ‘Soft’ N&A allows the intermediary to examine the request and disregard those it considers unjustified. See Gábor Csiszér, Ministry of National Development, ‘Presentation of Hungary concerning national legislative approach to notice and action’ (EC Expert group on electronic commerce, 27 April 2017) E01636.
16  See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.). See also Aleksandra Kuczerawy, ‘Phantom Safeguards? Analysis of the German law on hate speech NetzDG’ (KU Leuven CiTiP Blog, 30 November 2017).
17  See ICNA, Information and Communications Network Act (n. 14).
18  See loi no. 2004-575 du 21 juin 2004 pour la confiance dans l’économie numérique (Fr.) (hereafter LCEN). Currently France is discussing a possible new law (the so-called ‘Avia Bill’) targeting hate speech. See more in Article 19, ‘France: Analysis of draft hate speech bill’ (3 July 2019).
19  See Swiss Institute of Comparative Law (SICL), Comparative Study on Filtering, Blocking and Take-Down of Illegal Content on the Internet—France Country Report, report commissioned by the Council of Europe (20 December 2015) 244.



2.3  Risks and Safeguards for Freedom of Expression

2.3.1 Foreseeability

The level of detail in the legal frameworks that introduce notice-and-takedown mechanisms varies considerably. In the EU, the e-Commerce Directive provided no guidelines on how the mechanism should look but left this to the discretion of the Member States.20 To this day, a significant number of countries have no specific legal framework for content removal but rely on general rules of law instead. Only a few EU countries opted for the opportunity provided by Article 14(3) of the Directive to introduce more detailed measures, most notably Finland, France, Hungary, Lithuania, Sweden, and, partially, the UK.21 Countries that further codified NTD mechanisms into their national law, both in the EU and beyond, describe the relevant procedures in detail. For example, they specify the time frames for different actions in the procedure and the formal requirements for a valid notice.22 Rules regulating the latter are particularly relevant because the validity of a notice often determines the existence of actual knowledge, which is necessary to decide on the service provider’s liability.23 The importance of a legal framework specifying detailed NTD procedures should not be underestimated. It improves foreseeability and legal certainty, as all stakeholders can easily inform themselves of what behaviour is expected of them, and when.

2.3.2  Abusive requests

Interferences with freedom of expression can also stem from abusive takedown requests. Certain NTD procedures include provisions aimed at discouraging abusive notifications by introducing penalties for misrepresentations. Examples can be found in the United States, which takes a strict approach with perjury penalties, and Finland, with a more lenient approach limited to compensation for damage.24 Penalties for misrepresentations in a notice, as implemented in the United States, are not a successful deterrent against abusive notices.25 Complaints against notification

20  See Directive 2000/31/EC (n. 5) recital 46. 21  See Thibault Verbiest and others, ‘Study on the Liability of Internet Intermediaries’ (12 November 2007) Markt/2006/09/E; Patrick Van Eecke and Maarten Truyens, ‘EU Study on the Legal Analysis of A Single Market for the Information Society—New Rules for a New Age?, Legal analysis of a Single Market’, study commissioned by the European Commission’s Information Society and Media Directorate-General (November 2009); SICL (n. 19) 797. 22  See, in Finland, Information Society Code (n. 13) s. 191; in France, LCEN (n. 18) art. 6-I-5; in Hungary, Act CVIII (n. 13) Art. 13(2). 23  See Verbiest and others (n. 21) 14 and 41. 24  See DMCA (n. 4) s. 512(f); Information Society Code (n. 13) s. 194. 25  See Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24(1) Harv. J. of L. & Tech. 178. See also Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown in Everyday Practice’, UC Berkeley Public Law Research Paper no. 2755628 (2017) 42 .


senders are rarely successful, although there are some notable exceptions to this trend.26 Perhaps the most famous example is the ‘Dancing Baby’ case.27 The Ninth Circuit Court of Appeals ruled, after ten years of litigation, that copyright owners must at least consider fair use before they issue a DMCA takedown request.28 The Finnish provision regarding misrepresentations is more light-handed in its approach. A person who gives false information in the notification or in the plea with objection will be liable to compensate for the damage caused.29 However, liability to compensate may be excluded or adjusted if the notifying party had reasonable grounds to assume that the information was correct or if the false information was only of minor significance, taking into account the entire content of the notification or the plea.

2.3.3  Notification and counter-notification

Due process and effective remedy are essential to the protection of fundamental rights. One way to introduce elements of due process is by requiring a notification to the content provider, to inform him of a complaint made against ‘his’ content. According to the DMCA, the service provider should take reasonable steps to promptly notify the content provider that it has removed or disabled access to content.30 Similarly, in Finland and in Hungary, when the challenged content is taken offline the host must notify the content provider.31 In Finland, additionally, the notification must state the reason for removal and provide information on the right to appeal in a court within fourteen days of receipt of the notification. By informing the content provider of the charges against him, the notifications bring the element of the right to a fair hearing into the process. The existing procedures allow, moreover, for a certain form of appeal, through objection or counter-notification.32 The possibility of a counter-notification allows the parties to respond to the complaint and put forward a defence for their use of the content. Counter-notifications must usually meet specified requirements and time frames. They are resolved by the hosting providers, who can effectively put the content back online. In Hungary, the service provider is required to expeditiously make the relevant information accessible again upon receiving an objection. In the United States, content is

26  See Online Policy Group v Diebold Inc., 337 F.Supp.2d 1195 (ND Cal. 2004) (US) (granting plaintiff’s claim); and Rossi v Motion Picture Assoc. of Am., 391 F.3d 1000 (9th Cir. 2004) (US) (dismissing plaintiff’s claim). In 2017, a claim under s. 512(f) of the DMCA survived a motion to dismiss. This, however, rarely happens and the claimant still faces a long road to a favourable judgment.
See Johnson v New Destiny Christian Center Church, 2017 WL 3682357 (MD Fla. 2017) (US). See also Eric Goldman, ‘Section 512(f) Complaint Survives Motion to Dismiss—Johnson v. New Destiny Church’ (Technology & Marketing Law Blog, 30 August 2017) . 27  See Lenz v Universal Music Corp., 801 F.3d 1126 (9th Cir. 2015) (US). 28  See Marc Randazza, ‘Lenz v. Universal: A Call to Reform Section 512(f) of the DMCA and to Strengthen Fair Use’ (2016) 18(3) Vand. J. of Entertainment & Tech. L. 103. 29  See Information Society Code (n. 13) s. 194. 30  See DMCA (n. 4) s. 512(g)(2)(A). 31  See Information Society Code (n. 13) s. 187; and Act CVIII (n. 13) Art. 13(4). 32  DMCA (n. 4) s. 512(g)(B); Information Society Code (n. 13) s. 193; Act CVIII (n. 13) Art. 13(6).


reinstated, but no earlier than ten days and no later than fourteen days, unless the service provider receives notice that the rightholder took the case to court.33 Research on the NTD mechanism of the DMCA has revealed that its counter-notice mechanism is rarely used in practice.34 The reason for this is that the content provider has to state, under penalty of perjury, that he has a good faith belief that the material was removed or disabled as a result of a mistake or misidentification.35 The process, in consequence, is intimidating to individual users responding without the benefit of legal counsel. As pointed out by Bridy and Keller, ‘the cost of error for a user if she is mistaken about her copyright defenses is much higher than the cost of error for a copyright owner who is mistaken about her claims’.36 As a result, while counter-notification in the DMCA is visible and concrete, it is perceived as being a ‘largely symbolic acknowledgment of the importance of users’ expressive rights’.37

3.  Notice and Notice

3.1  General

Under a notice-and-notice mechanism (NN), an intermediary receives a notification with a complaint, which he then forwards to the content provider.38 The notification to the content provider serves the purpose of a warning. The content provider is given an opportunity to correct his behaviour, which halts the procedure, or to defend it within a provided time limit, which may lead to further actions (notifications or sanctions). The notice-and-notice mechanism is a variation of notice and takedown. The main difference is that the remedy to the potential wrongdoing is not taken immediately but is spread over time. Usually, there are several notifications (warnings) to the content provider before the final response is delivered. The goal of this mechanism is to educate users and to deter them from wrongdoing by demonstrating that they cannot hide from detection.39 It is meant to persuade users to look for legal alternatives, for example to obtain legally purchased music.40 Several variations of the notice-and-notice mechanism exist, varying in how the conflict escalates or in the final outcome. The variations include, for example, notice and

33  DMCA (n. 4) s. 512(g)(2)(C). 34  See Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act’ (2006) 22 Santa Clara Computer and High Tech. L.J. 621, 625; Urban, Karaganis, and Schofield (n. 25) 42. 35  See DMCA (n. 4) s. 512(g)(3)(C). 36  Annemarie Bridy and Daphne Keller, ‘U.S. Copyright Office Section 512 Study: Comments in Response to Notice of Inquiry’, SSRN Research Paper no. 2920871 (31 March 2016) 29 . 37  ibid. 30. 38  See OECD (n. 1) 57. 39  ibid. 29. 40  ibid.


notice leading to a judicial takedown, and notice-wait-and-takedown.41 Other responses might involve the suspension or termination of service, capping of bandwidth, and blocking of sites, portals, and protocols.42 In its most extreme form, the content provider is disconnected from the internet after the final warning has been issued. This version of the mechanism is known as graduated response or the ‘three-strikes-and-you’re-out’ approach, although the number of strikes might differ.43 Administration of the responses may be entrusted to an internet service provider (ISP), an administrative authority, or a court of law, depending on the country. Examples of NN mechanisms can be found in Canada, Chile, and South Korea.44 The French NN mechanism, which pioneered the approach in the HADOPI Act, also deserves attention.45
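The graduated-response escalation described above can be modelled as a small state machine. The sketch below is purely illustrative: the class name, the response labels, and the three-step threshold are hypothetical choices of the author of this example and are not drawn from any statute discussed in this chapter.

```python
# Illustrative model of a graduated-response ('three strikes') mechanism:
# each forwarded notice moves the subscriber one step up an escalation
# ladder, ending at the most severe response. All names and thresholds
# here are hypothetical.

from dataclasses import dataclass

# Escalating responses delivered after successive notices; the final
# entry models the most severe variant (suspension/disconnection).
RESPONSES = ["first warning", "second warning", "suspension of access"]

@dataclass
class SubscriberRecord:
    notices: int = 0  # number of infringement notices received so far

    def receive_notice(self) -> str:
        """Record a forwarded complaint; return the escalation step it triggers."""
        step = min(self.notices, len(RESPONSES) - 1)
        self.notices += 1
        return RESPONSES[step]

record = SubscriberRecord()
print(record.receive_notice())  # first warning
print(record.receive_notice())  # second warning
print(record.receive_notice())  # suspension of access
```

Note how the model makes the chapter's comparative point concrete: the variations between Canada, Chile, South Korea, and France differ mainly in who administers `receive_notice` (ISP, court, or agency) and in how severe the final entry of the response ladder is.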

3.2  Variations of the Mechanism

Each of the provided examples of NN mechanisms applies solely to copyright infringements. It is interesting that a mechanism where the response is not delivered in one definitive step is preferred for copyright infringements. In France, initially, it was also applicable to situations where a user failed to properly secure his internet connection, creating a possibility for somebody else to commit a copyright infringement using his network. The law was criticized for not clearly defining the ‘sufficient security measures’

41  See Christina Angelopoulos and Stijn Smet, ‘Notice-and-Fair-Balance: How to Reach a Compromise Between Fundamental Rights in European Intermediary Liability’ (2016) 8(2) J. of Media L. 266, 294–300. 42  See Peter Yu, ‘The Graduated Response’ (2010) 62 Florida L. Rev. 1373, 1374. 43  See ibid. In February 2013, the Motion Picture Association of America (MPAA) and five major US internet service providers launched a ‘six strikes’ Copyright Alert system to deal with online copyright infringements. See Jyoti Panday and others, ‘Jurisdictional Analysis: Comparative Study of Intermediary Liability Regimes Chile, Canada, India, South Korea, UK and USA in support of the Manila Principles On Intermediary Liability’ (1 July 2015) 15 . See also Joel Hruska, ‘“Six Strikes” Programs from ISPs & MPAA Ignites in Nine Days: Here’s What You Need to Know’ (Extreme Tech, 19 November 2012) . 44  See Copyright Modernization Act of 2012, c. 20 (Can.) (hereafter CMA); Ley no. 20.435, modifica la Ley no. 17.336 sobre propiedad intelectual [Law no. 20.435, amending Intellectual Property Law, enacted on 4 May 2010] (Chl.) (hereafter LPI). In South Korea, graduated response is provided by the Copyright Act, Arts 133-2 and 133-3. The mechanism functions in addition to the NTD procedures for copyright and other types of infringing content. 45  See loi no. 2009-669 du 12 juin 2009 favorisant la diffusion et la protection de la création sur internet [Law no. 2009-669 of 12 June 2009, promoting the dissemination and protection of creative works on the internet] (HADOPI Act) (Fr.) .


it required. In July 2013, the French Ministry of Culture issued a decree lifting the penalty of internet access suspension for those who failed to secure access to their network.46 The NN mechanism exists in many variations.47 Canada took a light-handed approach whereby intermediaries are only required to forward the notice, while Chile opted for a moderate version where any decisions to take down or suspend an account are taken by the courts.48 South Korea chose a stricter version, where NN can lead to user account suspension (but not to the suspension of an email account).49 The most severe approach was taken in France, going all the way to internet disconnection with a prohibition on subscribing to a new ISP.

3.3  Risks and Safeguards for Freedom of Expression

3.3.1  Foreseeability

The examples of NN mechanisms mentioned earlier describe the relevant procedures in some detail. Yet, they are not entirely free from issues regarding interpretation and application. For example, in Canada notices have to be forwarded by the intermediary to the content provider. The rules describe what information the notice must contain, but there are no restrictions limiting the content of the notice. This has allowed copyright holders to add information designed to intimidate users in order to demand settlements.50 Intermediaries, even if aware of the false claims, cannot refuse to forward the notice. Users who receive such notices are often confused about their rights and obligations.51 A system which allows misleading people about their

46  See Décret no. 2013-596 du 8 juillet 2013 supprimant la peine contraventionnelle complémentaire de suspension de l’accès à un service de communication au public en ligne et relatif aux modalités de transmission des informations prévue à l’Article L 331-21 du code de la propriété intellectuelle [Decree no. 2013-596 of 8 July 2013 abolishing the additional complementary penalty of the suspension of the access to an online public communication service and relating to the modalities of transmission of information provided for in article L 331-21 of the Intellectual Property Code] (Fr.). 47  This chapter focuses on mechanisms provided by legislation. For a mechanism developed through jurisprudence, see the landmark case of Belén Rodríguez v Google and Yahoo! decided by the Argentinian Supreme Court. The case involved violation of copyright, reputation, and privacy rights. The Supreme Court ruled that judicial review is required for issuing a notice to take down content—except in a few cases of ‘gross and manifest harm’.
See Corte Suprema de Justicia de la Nación [National Supreme Court] Rodríguez, María Belén v Google Inc./daños y perjuicios [2014] CSJN Case no. 337:1174 (Arg.). See also Chapter 26; Giancarlo Frosio, ‘The Death of “No Monitoring Obligations”: A Story of Untameable Monsters’ (2017) 8 JIPITEC 199, 206–7. 48  See Alberto Cerda Silva, ‘Cyber Law in Chile’ in International Encyclopaedia of Laws (Kluwer Law Int’l 2017) 131; CDT, ‘Chile’s Notice-and-Takedown System for Copyright Protection: An Alternative Approach’ (CDT, August 2012) 2 . 49  See Copyright Act (n. 13) Art. 133-2(2). 50  See Michael Geist, ‘Misuse of Canada’s Copyright Notice System Continues: U.S. Firm Sending Thousands of Notices With Settlement Demands’ (Michael Geist Blog, 5 March 2015) . 51  See Sophia Harris, ‘“Feels Like Blackmail”: Canada Needs to Take a Hard Look at Its Piracy Notice System’ (CBC, 2 November 2016) .


situation can hardly pass as foreseeable. Such ‘education through fear’, which is currently a result of the missing restrictions, has been criticized by many, who urge the government to review the rules.52 The French law introducing the graduated response started with a firm idea of what it wanted to achieve and how. Over time, however, the law has grown weaker due to (justified) complaints about its constitutionality and respect for human rights. Early in its existence, the law was severely curtailed by the Conseil constitutionnel, which declared that the power to suspend internet access could not be exercised by an administrative body, as it constituted a disproportionate restriction on the freedom of expression and an unacceptable presumption of culpability.53 Later, the penalty of suspending access to the internet was lifted due to heavy criticism. From a severe enforcement tool, the law was reduced to a mere ‘pedagogical’ system.54 It eventually became clear that the HADOPI Act was not sufficiently foreseeable, neither for French internet users nor for the French legislator.

3.3.2  Decision-making bodies

Entrusting removal decisions to courts can increase the level of procedural fairness and, as a result, the legitimacy of the NN procedure. It also improves the quality of the decisions, as they are made by bodies competent to resolve conflicts. This option was chosen in Canada and Chile. In Canada, the intermediary is not obliged to take down the content without a court order, but only to assist the copyright holder in exercising the rights against the primary infringer. The only tasks of the ISPs are to forward the complaint, to retain records to be presented in court, and to identify the infringer.55 In Chile, a court order is required to compel blocking or removal of infringing content.56 The approach shifts the task of evaluating notices from intermediaries to courts.57 Still, these two countries added other elements of due process in their procedures, such as the forwarding of notifications to the content provider and the possibility to issue counter-notifications.58 Both these steps strengthen the fairness of the procedure by introducing elements of the right to a fair hearing, adversarial proceedings, and equality of arms. Court involvement, moreover, ensures compliance with the right to an effective remedy, as the right to appeal is generally available when the decisions are made directly by the courts.

52  ibid. See also Sophia Harris, ‘U.S. Cancels Internet Piracy Notices While Canadians Still Get Notices Demanding Settlement Fees’ (CBC, 1 February 2017) . 53  See Conseil constitutionnel [Constitutional Court], décision no. 2009-580 DC du 10 juin 2009 [case no. 2009-580 DC of 10 June 2009] (Fr.) as referenced by Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016) 148. 54  See Simon Columbus, ‘France to Disconnect First Internet Users under Three Strikes Regime’ (Opennet, 27 July 2017) . 55  See CMA (n. 44) Art. 41.26(1). 56  See CDT (n. 48) 2. 57  ibid. 5. 58  See CMA (n. 44) Art. 41.26(1); LPI (n. 44) Art. 85 U. However, the content provider in Chile has an opportunity to respond to the notice when the conflict reaches the court. See also Cerda Silva (n. 48) 131.


In countries where the decisions are made by public administration, the lack of due process is one of the main recurring criticisms. This is the case, for example, in South Korea, where the procedure is handled by the executive branch of the government and the decisions are not reviewed by the courts.59 This makes the process non-transparent and vulnerable to arbitrary decision-making. South Korea provides an opportunity for intermediaries and users to present their opinion in advance of fulfilling the order, but it is not clear whether the opinion is actually considered an appeal and whether it leads to a proper review.60 The South Korean NN mechanism provides for additional safeguards, including notification to the content provider. Their presence, however, does little to improve the opaque decision-making process and the absence of an effective remedy through judicial review. In France, the lack of judicial redress was one of the main reasons why the original HADOPI Act was challenged and eventually watered down significantly. The initial procedure contained elements introducing a certain amount of due process. For example, the first warning had to inform the user about his right to request further clarification regarding the charges.61 The second warning gave the user a possibility to respond and defend his behaviour.62 In the last phase, the user was able to challenge the decision in front of a judge.
Nevertheless, the Conseil constitutionnel ruled that the law allowed for disproportionate interference with the right to freedom of expression and an unacceptable presumption of guilt.63 The court effectively took away the power of the HADOPI agency to sanction users and confirmed that this type of punishment must be administered under judicial control.64 The law was supplemented with the ‘HADOPI 2’ Act,65 which gave the competence to the criminal court.66 Procedures that concentrate all the decision-making power exclusively with administrative authorities are generally problematic from the perspective of the right to an effective remedy.67

3.3.3  Severity of the response

The severity of existing NN mechanisms differs widely, from a light slap on the wrist to full-on disconnection from the internet. It is interesting to observe that, despite

59  The Minister of Culture, Sports and Tourism (MCST) has the power to order service providers to issue warnings to infringers and websites hosting infringing content and to order them to cease transmission or to delete infringing material. If the infringement continues after three warnings, the minister may order the service provider to suspend an account of the infringer or a website for up to six months. Copyright Act (n. 13) Art. 133-2 (Orders, etc. for Deletion of Illegal Reproductions, etc. through Information and Communications Network). 60  See Copyright Act (n. 13) Art. 133-2(7). 61  See HADOPI Act (n. 45) arts L 335-7 and L 335-7-1. 62  See Primavera de Filippi and Daniele Bourcier, ‘Three-Strikes Response to Copyright Infringement: The Case of HADOPI’ in Francesca Musiani and others (eds), The Turn to Infrastructure in Internet Governance (Palgrave-Macmillan 2016) 134. 63  See Conseil constitutionnel (n. 53). 64  See de Filippi and Bourcier (n. 62) 135. 65  See loi no. 2009-1311 du 28 octobre 2009 relative à la protection pénale de la propriété littéraire et artistique [Law no. 2009-1311 of 28 October 2009 on the criminal protection of literary and artistic property] (HADOPI 2) (Fr.). 66  See de Filippi and Bourcier (n. 62) 141. 67  See Chapter 30.


such varying degrees of severity, each version is considered an appropriate and proportionate response to copyright infringements in the respective country. In France, the NN mechanism took the most extreme form. When it was first introduced, the new law gave the HADOPI agency power to issue sanctions in the form of fines and temporary suspensions of internet connection. The latter penalty was strengthened with a prohibition on subscribing to any other ISP for the period of the punishment. The period of suspension could range from three months to one year.68 Between 2010 and 2017, the law led to more than 2,000 referrals to prosecutors and 189 criminal convictions.69 Initially, disconnection was also foreseen as a sanction for not securing one’s internet connection, which was used for copyright infringements by others. The latter penalty was eventually abolished by the Ministry of Culture.70 As a result of low effectiveness and recurring constitutional struggles, the law is considered to have failed to achieve its goals.71 With the intended severe response, France opted for the most restrictive measures available. Due to continuous doubts about its constitutionality, proportionality, and effectiveness, however, the law was gradually reduced to a mere shadow of its former self. The measures foreseen by the Korean legislation are also quite severe. After three warnings, the minister may order suspension of an account or a website for up to six months.72 The suspension does not apply to email accounts but includes other accounts provided by the relevant online service provider.73 Unlike in France, the sanction is not a complete exile from the internet, but it clearly creates an obstacle to the exercise of the right to freedom of expression and access to information. The disconnection measure, moreover, does not appear to be proportionate to the harm caused. Most of the suspended users were minor offenders.74 Half of those suspended were involved in infringement of content that would cost less than 90 US cents.75 This is in stark contrast to the

68  See de Filippi and Bourcier (n. 62) 134. 69  ‘Seven Years of Hadopi: Nine Million Piracy Warnings, 189 Convictions’ (Torrentfreak, 1 December 2017) . 70  By 2013, the law had resulted in conviction and a fifteen-day suspension of exactly one individual who, moreover, insisted that he did not commit the infringement. See Columbus (n. 54). 71  See Pierre Lescure, ‘Mission “Acte II de l’exception culturelle”—Contribution aux politiques culturelles à l’ère numérique’ (May 2013). See also ‘Seven Years of Hadopi: Nine Million Piracy Warnings, 189 Convictions’ (n. 69). 72  See Copyright Act (n. 13) Art. 133-2(2). 73 ibid. 74  See Centre for Law and Democracy, ‘Analysis of the Korean Copyright Act’ (June 2013) 6 . See also Paul Resnikoff, ‘Three Strikes: A Complete & Total Failure In South Korea . . .’ (Digital Music News, 1 April 2013) ; ‘Facts and Figures on Copyright Three-Strike Rule in Korea’ (Heesob’s IP Blog, 24 October 2010) . 75  See Danny O’Brien and Maira Sutton, ‘Korean Lawmakers and Human Rights Experts Challenge Three Strikes Law’ (EFF, 29 March 2013) .


intention of the law, which is supposedly aimed at ‘heavy uploaders’.76 The measure was also not very successful in curbing online piracy.77 As more users were suspended, the number of detected infringements kept increasing.78 Considering the low effectiveness of the approach, together with the generally trivial nature of the offences, it is hard to consider the mechanism proportionate.

4.  Notice and Stay Down

4.1  General

Under a notice-and-stay-down (NSD) mechanism, the intermediary receives a notification about the illegal or infringing character of hosted content, similar to NTD. In this case, however, the intermediary is required not only to remove the information, but also to take additional measures to ensure that it is not subsequently reposted, either by the same user or by other users.79 The identification of recurring postings of content previously notified as unlawful requires the implementation of systems that monitor all user-submitted information. Such systems can take the form of manual human supervision or automated systems.80 In both cases, however, the intermediaries must filter the entirety of content to detect a reposting of once-removed content.81 The mechanism, therefore, requires mandatory filtering initiated by the first notification.82 Notice and stay down goes further than ‘traditional’ notice and takedown. This is because a submitted notice not only concerns a one-time infringement, but starts an ongoing obligation on the side of the intermediary to prevent the same infringement from occurring in the future. It could, however, go even further and require prevention not only of the same but also of similar infringements. Until recently, the NSD mechanism had not been provided by any law but resulted from an extensive interpretation of the same provisions that constitute the basis of a NTD mechanism. Existing instances of NSD mechanisms can therefore be found in case law, rather than in the law on the books. Most instances of NSD have involved intellectual property infringements.
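The reason a stay-down obligation entails monitoring everything can be sketched in a few lines. This is an illustrative model only: the hash-based fingerprint is a hypothetical stand-in for real content-recognition technology, and none of the function names come from any law or case discussed in this chapter.

```python
# Illustrative sketch: why 'stay down' implies filtering every upload.
# Once a single notice adds a fingerprint to the blocklist, ALL subsequent
# uploads by ALL users must be screened against it, indefinitely.

import hashlib

blocklist: set[str] = set()  # fingerprints of content notified as unlawful

def fingerprint(data: bytes) -> str:
    """Compute a content fingerprint (here simply a SHA-256 hash)."""
    return hashlib.sha256(data).hexdigest()

def handle_notice(data: bytes) -> None:
    """A single valid notice starts an open-ended stay-down obligation."""
    blocklist.add(fingerprint(data))

def handle_upload(data: bytes) -> bool:
    """Every upload, from every user, is screened; accept only if unseen."""
    return fingerprint(data) not in blocklist

handle_notice(b"notified work")
print(handle_upload(b"notified work"))  # blocked re-upload -> False
print(handle_upload(b"other content"))  # passes the filter -> True
```

Note that exact hash matching would miss even trivially modified copies; this is why real systems rely on perceptual fingerprinting, and why duties extending to content that is merely 'similar in its core' demand even broader monitoring.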

76  See ‘Copyright Reform—Abolishing Three-Strikes-Out Rule from Copyright Law’ . 77  See O’Brien and Sutton (n. 75). 78 See ‘South Korea Sees Fewer Crackdowns on Copyright Infringement: Report’ (Yonhap News Agency, 13 August 2011) . 79  See Angelopoulos and Smet (n. 41) 288. 80 ibid. 81  See Elliot Harmon, ‘“Notice-and-Stay-Down” Is Really “Filter-Everything”’ (EFF, 21 January 2016) . 82 See Martin Husovec, ‘Accountable, Not Liable: Injunctions Against Intermediaries’, TILEC Discussion Paper no. 2016-012 (2016) 70 .


The situation has changed recently. The new Directive on copyright in the Digital Single Market (DSM)83 targets a category of online intermediaries which includes most user-generated content platforms. According to the Directive, to avoid liability for infringing materials uploaded by users on their networks, online intermediaries must not only act ‘expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from their websites, the notified works’ but also make ‘best efforts to prevent their future uploads’ (emphasis added). Effectively, the Directive introduced an NSD mechanism into EU law, although it remains to be seen how the Directive will be implemented in national legislation.

4.2  Judicial Construct

Before this very recent legislative development, NSD was primarily a judicial construct. The most prominent example of a judicial NSD mechanism can be found in the Court of Justice of the European Union (CJEU)’s eBay case, dealing with trade mark infringement on the online marketplace eBay. The CJEU stated that injunctions ex Article 11 of the Enforcement Directive can be issued not only to take measures that contribute to bringing infringements to an end, but also to preventing further infringements.84 However, the prevention of further infringements cannot be achieved through active monitoring of all the data of each of the users of an online intermediary.85 According to the CJEU, an online intermediary can be ordered to suspend the perpetrator of an IP infringement in order to prevent (1) further infringements of that kind (2) by the same seller (3) in respect of the same trade marks.86 Among national attempts at creating an NSD arrangement,87 a relevant application of NSD has been endorsed by the German jurisprudence. The German Federal Supreme Court introduced an NSD mechanism in its Internetversteigerung I judgment.88 The mechanism was achieved through a special notion of ‘disturbance liability’ (Störerhaftung), which is applied in the online context to hold hosting providers liable for third party illegal content, irrespective of their liability in tort.89 The mechanism

83  See Directive 2019/790/EU of the European Parliament and the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17. 84  See C‑324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474, para. 144. See Chapter 28. 85  ibid. para. 139. 86  ibid. para. 141. See also C-494/15 Tommy Hilfiger Licensing LLC v Delta Center as [2016] ECLI:EU:C:2016:528.
87  For a brief time, it also operated in France, but it was rather quickly brought to an end by the Cour de cassation. See Cour de cassation [Supreme Court] La société Google France v la société Bac films (L’affaire Clearstream) [12 July 2012] decision no. 831, 11-13669 (Fr.); Cour de cassation La société Google France v La société Bac films (Les dissimulateurs) [12 July 2012] decision no. 828, 11-13666 (Fr.); Cour de cassation La société Google France v André Rau (Auféminin) [12 July 2012] 11-15.165; 11-15.188 (Fr.). 88  BGH Internetversteigerung I [2004] I ZR 304/01 (Ger.). 89  See SICL (n. 19) 261.


requires that the host block any ‘clear infringements’ which are pointed out in a notification.90 Disturbance liability, furthermore, involves a duty to review (monitor) content to prevent future infringements. Interestingly, the duty does not apply only to identical copies of the content, or to copies uploaded by the same users.91 On the contrary, the duty extends to all subsequent infringing acts of a similar nature that are easily recognizable.92 In short, the infringements must be ‘similar in their core’ (the ‘Kerntheorie’).93

4.3  Risks and Safeguards for Freedom of Expression

4.3.1  General v specific monitoring

The NSD mechanism is not commonly used. The main reason is that most laws on intermediary liability, for example the e-Commerce Directive in its Article 15, prevent states from introducing general monitoring obligations.94 An obligation to prevent re-uploads means that a service provider must constantly monitor all uploads to catch future infringements.95 An obligation of this sort has been rejected by the CJEU. Next to the afore-mentioned eBay case, the CJEU addressed the issue in Scarlet Extended and Netlog. In the latter rulings, the CJEU declared that the filtering of all electronic communications applied indiscriminately to all users as a preventive measure, at the expense of the intermediary and for an unlimited period of time, should be understood as general, and therefore not permitted by the Directive.96 The Court explained that preventive monitoring of this kind would require active observation of almost all files stored by almost all users of the hosting service provider.97 In the more recent 2016 McFadden ruling, which concerned access providers, the CJEU declared that monitoring all of the information transmitted, as a measure, ‘must be excluded from the outset as contrary to Article 15(1) of Directive 2000/31’.98

90  See Husovec (n. 11) 10. 91 See Joachim Bornkamm, ‘E-Commerce Directive vs. IP Rights Enforcement—Legal Balance Achieved?’ [2007] GRUR Int. 642. 92 ibid. 93  Matthias Leistner, ‘Störerhaftung und mittelbare Schutzrechtsverletzung’ [2010] GRUR-Beil 1 as referenced by Angelopoulos (n. 53) 154. For a new take on equivalent content, in the context of defamatory statements, see C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd [2019] ECLI:EU:C:2019:821. 94  This argument was decisive in ending the NSD mechanism in France. 95  See CDT, ‘Cases Wrestle with Role of Online Intermediaries in Fighting Copyright Infringement’ (CDT, 26 June 2012) . 96  See C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771 and C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85. 97  ibid. para. 37. 98 C-484/14 Tobias Mc Fadden v Sony Music Entertainment Germany GmbH [2016] ECLI:EU:C:2016:689, para. 87.

In Germany, courts and many scholars believe that an obligation to detect future infringements that are merely similar, rather than identical, does not constitute a general monitoring obligation, but a specific one, which is allowed by the e-Commerce Directive.99 This view seems to stem from a distinction between the purpose of the monitoring (detecting specific infringements) and its subject (the entirety of the content that must be reviewed to achieve that purpose). The CJEU’s rulings in Scarlet Extended, Netlog, and McFadden have not convinced German courts that what they order would, on many occasions, qualify as general monitoring. This inconsistency and possible incompatibility with Article 15 of the e-Commerce Directive raises legitimate questions about the compatibility of the NSD mechanism with the original principles of the intermediary liability regimes. The newly adopted Directive on copyright in the DSM seems to follow the German line. In Article 17, the Directive requires online intermediaries to make best efforts to prevent future uploads of the notified works, in accordance with high industry standards of professional diligence. The Directive stipulates that the application of Article 17 ‘shall not lead to any general monitoring obligation’ (emphasis added). It is hard to imagine, however, how else service providers can ensure that copyrighted works are not re-uploaded. To effectively recognize infringing content, a technological tool must be used to systematically monitor the entirety of users’ uploaded content.100
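The structural point made above can be illustrated with a deliberately simplified, hypothetical sketch (all names and logic are the author’s illustration, not any provider’s actual system). Real content-recognition tools rely on perceptual fingerprinting rather than exact hashes, but even this naive ‘stay-down’ filter shows why the duty is hard to cast as specific monitoring: to catch any re-upload of a notified work, the filter must inspect every upload by every user, indiscriminately.

```python
import hashlib

# Hypothetical minimal 'notice-and-stay-down' filter (illustrative only).
notified_fingerprints = set()

def register_notice(work: bytes) -> None:
    """Record the fingerprint of a work identified in a takedown notice."""
    notified_fingerprints.add(hashlib.sha256(work).hexdigest())

def screen_upload(upload: bytes) -> bool:
    """Return True if the upload matches a notified work and should be blocked.

    The check necessarily runs on every upload by every user: the purpose
    of the monitoring is specific, but its subject is the entirety of the
    content passing through the service.
    """
    return hashlib.sha256(upload).hexdigest() in notified_fingerprints

register_notice(b"bytes-of-notified-video")
blocked = screen_upload(b"bytes-of-notified-video")    # exact re-upload
passed = screen_upload(b"bytes-of-unrelated-video")    # unrelated content
```

An exact-hash filter of this kind also misses trivially altered copies, which is why duties covering content that is ‘similar in its core’ presuppose considerably more invasive recognition technology.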

4.3.2  Clear and precise notifications

The key characteristic of NSD is that an intermediary, once notified about infringing content, must continuously monitor its platform to prevent recurrence of the infringement. Until the adoption of the Directive on copyright in the DSM, the NSD was interpreted in a way that required the future infringements to be ‘clear’. However, these clear infringements may apply not only to other works of the same kind by the same user but also to works of similar types infringed by a different user (if the service is particularly susceptible to infringements). As the Bundesgerichtshof acknowledged in Blog-Eintrag, it is not always possible for the hosting provider to immediately recognize whether an infringement has taken place.101 Accordingly, a host provider is required to act only if the notice it receives is sufficiently specific. The notice must enable identification of the infringement without excessive difficulty; that is, without an in-depth legal and factual review.102 It is interesting, however, that the original notification has to be detailed enough to indicate an infringement without the need for a thorough legal and factual examination, yet the 99  See Joachim Nordemann, ‘Liability for Copyright Infringements on the Internet: Host Providers (Content Providers)—The German Approach’ (2011) 2(1) JIPITEC 42. 100  See Aleksandra Kuczerawy, ‘EU Proposal for a Directive on Copyright in the Digital Single Market: Compatibility of Article 13 with the EU Intermediary Liability Regime’ in Bilyana Petkova and Tuomas Ojanen (eds), Fundamental Rights Protection Online: The Future Regulation of Intermediaries (Edward Elgar, forthcoming 2020). 101  See Bundesgerichtshof [Supreme Court] (BGH) Blog-Eintrag [25 October 2011] I ZR 93/10 (Ger.). 102  See Angelopoulos (n. 52) 154.

host is required to recognize future infringements of other works of the same or even a different kind itself, without any new notification. It is not obvious how the strict interpretation of the notification requirements by the German courts is consistent with the subsequent obligation to remove future, undefined infringements.103 Moreover, such a task will clearly require detailed legal and factual analysis to avoid overbroad removals. Such an application of the NSD mechanism does not properly reflect the nuances of copyright law, which foresees the possibility of the same content having different legal statuses when posted by different users, or of the status of content changing over time. These aspects of the NSD mechanism will inevitably play a role in upcoming discussions on the implementation of the Directive on copyright in the DSM. It remains to be seen what form the newly introduced NSD will take.

4.3.3  Appeal procedure

The existence of a redress mechanism, for example an appeal procedure, can greatly improve the legitimacy of an N&A mechanism. The newly adopted Directive on copyright in the DSM requires in Article 17(9) that online content-sharing service providers put in place an effective and expeditious complaint and redress mechanism available to their users in the event of disputes regarding uploaded content. Moreover, out-of-court redress mechanisms should be available for the settlement of disputes, but without depriving users of access to efficient judicial remedies. In particular, users should ‘have access to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright and related rights’. The provision of the new Directive sounds reasonable, but its effect in practice remains to be seen. In the meantime, useful lessons on redress mechanisms can be learnt from the NSD mechanisms currently in operation. The existing NSD mechanism has one positive aspect, namely that it is administered by courts of law. This ensures that due process rights are respected. Moreover, the fact that any order to remove current and future infringing content requires a court decision means, in theory, that judicial redress is possible. Nevertheless, an obligation to detect and prevent future infringements creates several issues that currently go unaddressed. Although the mechanism is administered by courts, the only parties that get to express their opinion are the plaintiffs (rightholders) and the intermediaries. The content providers do not take part in the process. They have, therefore, no say in a process that will effectively impact their rights by restricting their expression. The reason is that the Störerhaftung doctrine is a procedure aimed only at third parties who have not themselves committed an infringement, but who facilitate it and are able to provide relief.
It is a separate procedure from any proceedings against the actual wrongdoer. Moreover, providers of any future infringing content will have no opportunity to appeal the decisions. The views of the reposting individuals are at no point taken into account when deciding on removal of the reposted content. In their case, the decision will be made by the intermediary and not by the court. There is no judicial oversight of decisions made regarding future infringements. If the intermediary makes 103 ibid.

a wrong assessment, for example when the content has changed its status, content providers of the future infringing content have limited chances to exercise their right to an effective remedy.

5. Conclusions

All over the world, different types of notice-and-action mechanisms are used to regulate online content. The variations range from classic notice and takedown to far-reaching notice and stay down. Countries that choose to introduce a specific mechanism typically also introduce detailed procedures that often contain a number of formal conditions or other requirements. The requirements relate to different stages of the procedure, for example the moment of filing a notice or a later appeal. Undoubtedly, they influence the final outcome of the process by either amplifying or preventing excessive interference with the right to freedom of expression. If the requirements are properly defined and followed, they can act as important safeguards. Their actual effect, however, may vary tremendously, depending on the form they take and any accompanying restrictions. This can be seen in the example of counter-notifications and penalties for misrepresentations. Too strict an approach, as it turns out, may convert a reasonable due process element into an insignificant measure without much meaning. The same can be said about appeal procedures. If judicial redress is possible, the legitimacy and quality of the decision-making process increases. If the appeal procedure is limited to filing an objection with a public authority, the remedy may not be effective in practice. Involving intermediaries in content regulation may be inevitable. The legal framework on which it is based, however, should ensure a certain quality of the law. Only then will a notice-and-action mechanism provide an effective but proportionate and balanced redress mechanism. In particular, the legal framework governing notice-and-action mechanisms should be clear and foreseeable in order to advance legal certainty. Moreover, it should ensure proportionality of the response and availability of an effective remedy, preferably administered by courts.
These objectives can be achieved by introducing safeguards designed to ensure the effective exercise of the right to freedom of expression. Examples of what safeguards are needed, as well as how to implement them (and how not to implement them) in practice, can be found in multiple national jurisdictions around the world. It is therefore worth looking at the various national experiences as evidence-based illustrations of potential improvements.

chapter 28

Monitoring and Filtering: European Reform or Global Trend?

Giancarlo Frosio and Sunimal Mendis

Increasingly, proactive monitoring obligations have been imposed on intermediaries along the entire spectrum of intermediary liability subject matter. This has happened via voluntary measures, judicial decisions, and legislation, as in the case of the recent EU copyright law reform. Since its initial introduction, the proposal1 for an EU Directive on copyright in the Digital Single Market (C-DSM Directive) has been the subject of heated debate. A critical point of this controversy has been (and indeed continues to be) Article 17 (previously Art. 13) of the C-DSM Directive, which imposes a heightened duty of care and an enhanced degree of liability on online content-sharing service providers (OCSSPs) as regards copyright-infringing content that is posted on their services by users. It has been argued that, at least in practical terms, the avoidance of liability under the new regime would compel OCSSPs to engage in the monitoring and filtering of user-generated content (UGC). If this is the case, Article 17 would signal a transition of EU copyright law from the existing ‘negligence-based’ intermediary liability system—grounded on the principle of ‘no monitoring obligations’—to a regime that requires OCSSPs to undertake proactive monitoring and filtering of content, and would almost certainly lead to the widespread adoption of automated filtering and algorithmic copyright enforcement systems. In reviewing this magmatic legal framework, the chapter considers the implications of this regulatory shift for the preservation of users’ fundamental freedoms online and for maintaining a healthy balance between the interests of rightholders, users, and online intermediaries. 1  European Commission, ‘Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market’ COM (2016) 593 final (hereafter C-DSM Directive Proposal).

© Giancarlo Frosio and Sunimal Mendis 2020.

MONITORING AND FILTERING: EUROPEAN REFORM OR GLOBAL TREND?   545

1.  OSP as a ‘Mere Conduit’: ‘No Monitoring’ Obligations

Since the early days of the internet industry, determining the nature and scope of online service provider (OSP) liability for content posted by third parties on the online digital spaces and services they provide has been a pressing issue for judges and policymakers.2 In the EU, the e-Commerce Directive3 (ECD) provides the main legal framework4 for the regulation of intermediary liability, while in the United States it is primarily dealt with under section 512 of the Digital Millennium Copyright Act (DMCA)5 and section 230 of the Communications Decency Act (CDA).6 Both the EU and US legal frameworks are characterized by a ‘negligence-based’ approach to intermediary liability that exempts OSPs from any general obligation to monitor the information stored or transmitted by them, or to actively seek facts or circumstances indicating illegal activity. Under the safe harbour provisions, which condition liability on knowledge,7 OSPs may become liable only if they do not take down allegedly infringing material promptly enough upon gaining knowledge of its existence, usually through a notice from interested third parties.8 Although Article 14(3) read with recital 47 of the ECD does allow national law to provide for monitoring obligations ‘in a specific case’, it prohibits the imposition of general monitoring obligations.9 The ECD also acknowledges that Member States can impose duties of care on hosting providers ‘in order to detect and prevent certain types of illegal activities’.10 However, their scope should not extend to general monitoring obligations, if any meaning is to be given to the statement in recital 47 that only specific monitoring obligations are allowed.
Furthermore, recital 48 of the ECD emphasizes that the duties of care required from service providers should be of a standard which could be ‘reasonably expected’ from them.11 As a general monitoring obligation goes beyond what could be reasonably expected from service providers, these 2  See Davis Wright Tremaine, ‘The evolution of Internet service provider and host liability’ (Lexology, 28 February 2018) . 3  See Directive 2000/31/EC of the European Parliament and the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1. The ECD is supplemented by Directive 2001/29/EC of the European Parliament and the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10 and Council Directive 2004/48/EC of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L157/45. 4  See Directive 2000/31/EC (n. 3) Arts 12, 13, 14, and 15. 5  See the Digital Millennium Copyright Act of 1998, 17 USC § 512(m). 6  See Communication Decency Act [1996] 47 USC s. 230. 7  See e.g. Directive 2000/31/EC (n. 3) Arts 12–15; DMCA (n. 5) s. 512(c)(1)(A)–(C). 8  Note that there is no direct relation between liability and exemptions which function as an extra layer of protection that are intended to harmonize conditions to limit intermediary liability at the EU level. 9  Directive 2000/31/EC (n. 3) Art. 15(1). 10  ibid. recital 48. 11  ibid. (emphasis added).

are explicitly barred by the Directive. In order to distinguish general from specific monitoring obligations, it should be considered that: (1) as an exception, specific monitoring obligations must be interpreted narrowly; (2) both the scope of the possible infringements and the number of infringements that can reasonably be expected to be identified must be sufficiently narrow; and (3) it must be obvious which material constitutes an infringement.12 As Van Eecke noted: [i]f [clear criteria] are not defined, or only vague criteria are defined by the court (e.g. ‘remove all illegal videos’), or if criteria are defined that would oblige the hosting provider to necessarily investigate each and every video on its systems (e.g. ‘remove all racist videos’), or if the service provider were required also to remove all variations in the future (e.g. ‘remove this video, but also all other videos that belong to the same repertory’), a general monitoring obligation would be imposed.13

The ‘negligence-based approach’ to liability is founded on a perception of OSPs as passive players or ‘mere conduits’ that facilitate the storing and transmission of content created and uploaded by third party users. In this role, they are only required to adopt a ‘reactive’, as opposed to a ‘proactive’, role vis-à-vis illegal content that may be channelled through the digital spaces or services provided by them. As noted by Friedmann,14 the legal frameworks in the EU and the United States on intermediary liability were drafted around the beginning of the millennium, at a time when ‘electronic commerce was perceived as being “embryonic and fragile”, and Internet auctions and social media were just a fledgling phenomenon’. Therefore, it was assumed that limiting the liability of OSPs in relation to content hosted on their services would assist in nurturing this fledgling industry and ensure the continued improvement of the efficiency of the internet and the expansion of the variety and quality of internet services.15 This negligence-based approach to intermediary liability has been adopted by jurisdictions across the world and for a long time remained the prevalent standard for determining the liability of OSPs regarding copyright-infringing content disseminated over their services. Although imperfect because of considerable chilling effects,16 a negligence-based intermediary liability system has inherent built-in protections for fundamental rights. 12  See Patrick Van Eecke, ‘Online Service Providers and Liability: A Plea for a Balanced Approach’ (2011) 48 CML Rev. 1455, 1486–7. 13  ibid. 1487. 14  See Danny Friedmann, ‘Sinking the Safe Harbour with the Legal Certainty of Strict Liability in Sight’ (2014) 9 JIPLP 148. 15  ibid. citing Viacom Int’l Inc. v YouTube Inc., 718 F.Supp.2d 514, 519 (SDNY 2010) (US). 16  See e.g. Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24 Harv. J. of L.
& Tech. 171, 175–6; Center for Democracy & Technology, ‘Campaign Takedown Troubles: How Meritless Copyright Claims Threaten Online Political Speech’ (September 2010) 1–19. There is abundant empirical evidence of ‘over-removal’ by internet hosting providers. See e.g. Althaf Marsoof, ‘Notice and Takedown: A Copyright Perspective’ (2015) 5 Queen Mary J. of Intell. Prop. 183, 183–205; Daniel Seng, ‘The State of the Discordant Union: An Empirical Analysis of DMCA Takedown Notices’ (2014) 18 Va. J. of L. & Tech. 369; Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices under Section 512 of the Digital

The Court of Justice of the European Union (CJEU) has confirmed multiple times—at least with regard to copyright and trade mark infringement—that there is no room for proactive monitoring and filtering mechanisms in EU law.17 The Joint Declaration of the Special Rapporteurs on Freedom of Expression and the Internet likewise opposes the imposition of duties to monitor the legality of the activity taking place within intermediaries’ services.18

2.  From ‘Mere Conduits’ to ‘Gate-Keepers’? The Global Shift in Intermediary Liability

However, there is increasing evidence of a global shift towards the imposition of a heightened standard of liability on OSPs as regards content uploaded by users. This approach is underscored through the imposition of obligations on OSPs to proactively engage in the monitoring and filtering of content stored and transmitted by them. This signifies a change in the perception of OSPs from passive players or ‘mere conduits’ to active ‘gate-keepers’ with a duty to prevent the posting of illegal content over the digital spaces and online services managed by them. Although exceptions do apply, this transition is steadily gaining ground and evolving into a mainstream approach to intermediary liability. This shift is primarily reflected in developments in case law and appears to be rooted in the ‘internet threat’ discourse, characterized by the fear that OSPs are becoming untameable monsters19 likely to inflict imminent harm unless subdued through enhanced legal obligations and liability. It has been supplemented by automated content-screening and filtering software adopted by influential industry players. Millennium Copyright Act’ (2006) 22 Santa Clara Comp. and High Tech. L.J. 621; Lumen (formerly Chilling Effects—archiving takedown notices to promote transparency and facilitate research about the takedown ecology). However, recent US case law gave some breathing space to UGC creators from bogus takedown notices in cases of blatant misrepresentation of fair use defences by copyright holders. See Stephanie Lenz v Universal Music Corp., 801 F.3d 1126, 1131 (9th Cir. 2015) (US) (holding that ‘the statute requires copyright holders to consider fair use before sending takedown notifications’).
Case C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771 (restating the principles in favour of access providers); C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85 (confirming the principle in favour of hosting providers); C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474. 18  See Organization for Security and Co-operation in Europe (OSCE), Joint Declaration on Freedom of Expression and the Internet 2.b (1 June 2011) . 19  See Giancarlo Frosio, ‘The Death of “No Monitoring Obligations”: A Story of Untameable Monsters’ (2017) 8 JIPITEC 199.

2.1  Case Law

The decision delivered by the Brazilian Superior Tribunal de Justiça (STJ) in the Dafra20 case is a good starting point for illustrating this shift. In this decision, which concerned copyright-infringing videos posted on the YouTube platform (owned by Google) by users, the Brazilian STJ stressed the importance of imposing liability on intermediaries, stating that, ‘if Google created an “untameable monster”, it should be the only one charged with any disastrous consequences’. Accordingly, Google was required not only to remove the infringing video which was the object of the lawsuit but also to remove any similar and related unauthorized videos, even if they were uploaded by other users and bore a different title.21 The ‘technical impossibility’ defence raised by Google—that it was impossible to take down all videos because blocking filters capable of identifying all infringing materials do not currently exist22—was quashed by the court on the ground that the lack of a technical solution for fixing a defective new product does not exempt the manufacturer from liability, or from the obligation of providing a solution.23 By evoking the untameable monster, Justice Salomão echoes a recurrent narrative in recent intermediary liability—especially copyright—policy that focuses on the ‘threat’ posed by digitalization and internet distribution.24 This has led to an overreaching expansion of online enforcement.
The court in Dafra stressed the importance of imposing liability on intermediaries, saying that ‘violations of privacy of individuals and companies, summary trials and public lynching of innocents are routinely reported, all practiced in the worldwide web with substantially increased damage because of the widespread nature of this medium of expression’.25 This ‘internet threat’ discourse also emerges in the statement made by Judge Newman of the US Court of Appeals for the Second Circuit in the decision delivered in the Universal v Corley case.26 Responding to the request made by the defendants to refrain from using the DMCA as an instrument of censorship, Judge Newman replied as follows: ‘[h]ere, dissemination itself carries very substantial risk of imminent harm because the mechanism is so unusual by which dissemination of means of circumventing access controls to copyrighted works threatens to produce virtually unstoppable infringement of copyright.’27 In the Baidu case in China, the Beijing Higher People’s Court imposed proactive monitoring obligations on hosting providers based on the popularity of the infringed works and high-volume views/downloads.28 The High Court of Beijing determined that it was reasonable to impose on the OSP Baidu a duty to monitor and to examine the legal status of an uploaded work once it had been viewed or downloaded more than a 20  Superior Court of Justice Fourth Panel Google Brazil v Dafra [24 March 2014] Special Appeal no. 1306157/SP (Bra.). 21  ibid. para. 5.2. 22  ibid. para. 4. 23  ibid. para. 5.4. 24  See James Boyle, The Public Domain: Enclosing the Commons of the Mind (Yale U. Press 2008) 54–82. 25  Dafra (n. 20) para. 5.4. 26  Universal v Corley, 60 USPQ.2d 1953 (2d Cir. 2001) (US). 27  ibid. 1968. 28  See Beijing Higher People’s Court Zhong Qin Wen v Baidu [2014] Gao Min Zhong Zi no. 2045 (Ch.).

certain number of times.29 As per its duty to monitor, Baidu is required to inspect the potential copyright status of the work by contacting the uploader and checking whether the work was originally created by the uploader or legally authorized by the copyright owners.30 However, the court failed to clearly indicate the number of views or downloads that would be sufficient to trigger the duty, thereby giving rise to legal uncertainty regarding the exact scope of an OSP’s liability in relation to popular content. A notable exception to the global trend in enforcing proactive monitoring obligations is the decision delivered by the Supreme Court of Argentina in the Belén Rodriguez case.31 In this case, a well-known public figure—Belén Rodriguez—brought an action against the search engines Google and Yahoo for linking search results to third party content which, she claimed, violated her copyright and her right to honour and privacy. In the lower courts, the search engines were found strictly liable under Article 1113 of the Argentinian Civil Code, which imposes liability, regardless of knowledge or intention, on those performing risky acts such as indexing third party content, creating wider audiences for illegitimate content, or serving as the ‘guardians’ of the thing that generates the damage, such as the search engine’s software.32 However, the imposition of strict liability was repudiated by the Argentinian Supreme Court, which adopted a test based on actual knowledge and negligence and required judicial review for issuing a notice to take down content—except in a few cases of ‘gross and manifest harm’.
The Supreme Court further rejected the imposition of any filtering obligation in order to prevent infringing links from appearing in the future.33 In the rather extreme view taken by the Argentinian Supreme Court, as a default rule, actual knowledge—and possibly negligence—would only arise after a judicial review had upheld the issuance of the notice. In any event, this conclusion—and the transaction costs it implies—is mitigated by a category of cases exempted from judicial review that might ultimately be quite substantial. Apparently, the Argentinian Supreme Court believes that, if harm is not manifest, a balancing of rights may be necessary, which can only be done by a court of law rather than a private party.

2.1.1  The European experience

In Europe, too, the ‘internet threat’ discourse has been gaining ground. A 2015 decision of the European Court of Human Rights (ECtHR) almost echoed the statement made by Judge Newman of the US Court of Appeals in Universal v Corley (but this time in the context of hate speech rather than copyright infringement). The ECtHR 29 ibid. 30 ibid. 31  See Supreme Court Rodriguez M. Belén v Google y Otro s/ daños y perjuicios [29 October 2014] R.522.XLIX (Arg.). See also Pablo Palazzi and Marco Jurado, ‘Search Engine Liability for Third Party Infringement’ (2015) 10 JIPLP 244; Marco Rizzo Jurado, ‘Search engine liability arising from third parties infringing content: a path to strict liability?’ (2014) 9 JIPLP 718, 718–20. 32  See e.g. Cámara Nacional de Apelaciones en lo Civil de la Capital Federal S.M., M.S. v Yahoo de Argentina SRL y Otro s/ daños y perjuicios [6 November 2013] no. 89.007/2006 AR/JUR/XXXXX/2013 (Arg.); Cámara Nacional de Apelaciones en lo Civil de la Capital Federal Da Cunha, Virginia v Yahoo de Argentina S.R.L. and Google [10 August 2010] no. 99.620/2006, AR/JUR/40066/2010 (Arg.). 33 See Rodriguez Belén (n. 31).

noted that on the internet, ‘[d]efamatory and other types of clearly unlawful speech, including hate speech and speech inciting violence, can be disseminated like never before, worldwide, in a matter of seconds, and sometimes remain persistently available online.’34 In multiple decisions, the ECtHR has been asked to consider whether an internet news portal should be liable for user-generated comments and obliged to monitor and proactively filter its networks in order to avoid liability. The Delfi case concerned readers’ comments containing clearly unlawful hate speech which had been posted below a news article published by Delfi (an Estonian internet news provider) on its online news portal. The Estonian Supreme Court denied Delfi’s claim of immunity from liability under the ECD by designating it as a ‘provider of content services’ rather than as an ‘information service provider’ (ISP). Delfi finally sought redress from the ECtHR, claiming relief under its freedom to impart information.
Thus, the ECtHR was required to strike a balance between the freedom of expression granted under Article 10 of the European Convention on Human Rights and the preservation of personality rights of third persons under Article 8 of the same Convention.35 The ECtHR tackled this conundrum by delineating a narrowly construed scenario in which the liability imposed on OSPs supposedly does not interfere with freedom of expression.36 The ECtHR held that in a situation of higher-than-average risk of defamation or hate speech,37 if comments from non-registered users are allowed,38 a professionally managed and commercially based internet news portal should exercise the full extent of control at its disposal—and must go beyond automatic keyword-based filtering or ex post notice-and-takedown procedures—to avoid liability.39 The ECtHR therefore concluded that finding Delfi liable for anonymous comments posted by third parties on its online platform did not breach its freedom to impart information.40 In later cases, the ECtHR has revisited—or at best clarified—the issue of liability for internet intermediaries. In MTE, the ECtHR concluded that ‘the notice-and-take-down-system could function in many cases as an appropriate tool for balancing the rights and interests of all those 34  Delfi AS v Estonia App. no. 64569/09 (ECtHR, 16 June 2015) para. 110. 35  ibid. para. 59. 36  For detailed comments of each relevant principle stated in the decision, see Giancarlo Frosio, ‘The European Court Of Human Rights Holds Delfi Liable For Anonymous Defamation’ (CIS Blog, 25 October 2013) . 37 See Delfi (n. 34) paras 144–6. A strikingly similar standard was also adopted by an older decision of the Japanese Supreme Court. See Supreme Court Animal Hospital Case [7 October 2005] (Jap.) 
(finding Channel 2, a Japanese bulletin board, liable on the rationale that—given the large amount of defamatory and ‘unreliable’ content in threads found on its site—it was not necessary for Channel 2 to know that each thread was defamatory, but it was sufficient that Channel 2 had the knowledge that there was a risk that such transmissions/posts could be defamatory). 38 See Delfi (n. 34) paras 147–51. 39  ibid. paras 152–9. 40 See e.g. Lisl Brunner, ‘The Liability of an Online Intermediary for Third Party Content: The Watchdog Becomes the Monitor: Intermediary Liability after Delfi v Estonia’ (2016) 16 Human Rights L. Rev. 163, 163–74.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

MONITORING AND FILTERING: EUROPEAN REFORM OR GLOBAL TREND?   551

involved'.41 Therefore, if the specifics of Delfi do not apply and the comments to be removed are 'offensive and vulgar' rather than hate speech,42 the ECtHR saw 'no reason to hold that [the notice-and-takedown] system could not have provided a viable avenue to protect the commercial reputation of the plaintiff'.43 Similarly, in the case of Pihl v Sweden, the ECtHR confirmed its previous reasoning by rejecting the claims of an applicant who had been the subject of a defamatory online comment published on a blog.44 The ECtHR reasoned that no proactive monitoring à la Delfi was to be imposed on the defendant because, although the comment had been offensive, it had not amounted to hate speech or an incitement to violence; it had been posted on a small blog run by a non-profit association; it had been taken down the day after the applicant had made a complaint; and it had only been on the blog for around nine days.45 Still, proactive and automated monitoring and filtering—although narrowly applied—is singled out by the ECtHR as a privileged tool to tame the 'untameable monster' or the 'internet threat'.46 Similar to the approach adopted by the Beijing Higher People's Court in the Baidu decision, the ECtHR also seems to set a threshold for proactive monitoring based on the popularity of the content in question. In the Delfi decision, the ECtHR noted that Delfi could have anticipated the posting of negative comments based on the high degree of reader interest in the news article, as demonstrated by the above-average number of comments posted below it. At the national level, too, courts of the EU Member States have imposed proactive monitoring obligations on hosting providers in apparent conflict with the well-settled jurisprudence of the CJEU. These decisions span the entire spectrum of intermediary liability subject matter. In the Allostreaming47 case—a landmark decision in France—the Paris Court of Appeal imposed an obligation on OSPs to block the illegal movie-streaming website Allostreaming and affiliated enterprises. In addition, search engines, including Google, Yahoo, and Bing, were required to proactively expunge any link to those websites from their search results.48 The Court of Appeal remarked that rightholders are 'confronted

41  See Magyar Tartalomszolgáltatók Egyesülete and Index.hu v Hungary App. no. 22947/13 (ECtHR, 2 May 2016) para. 91.
42  ibid. para. 64.
43  ibid. para. 91.
44  See Rolf Anders Daniel Pihl v Sweden App. no. 74742/14 (ECtHR, 7 February 2017).
45  ibid. para. 37.
46  See Delfi (n. 34) para. 110.
47  See Cour d'appel Paris UPC et al. v Google, Microsoft, Yahoo!, Bouygues et al. [16 March 2016] (Fr.) (Allostreaming 2016), confirming Tribunal de grande instance [High Court] (TGI) Paris UPC et al. v Google, Microsoft, Yahoo!, Bouygues et al. [28 November 2013] (Fr.). See also Laura Marino, 'Responsabilités civile et pénale des fournisseurs d'accès et d'hébergement' (2016) 670 JCl. Communication 71, 71–9. But see Cour d'appel Paris TF1 v DailyMotion [2 December 2014] (stating that DailyMotion enjoys limitation of liability as a hosting provider and is not required to proactively monitor users' infringing activities). See also Giancarlo Frosio, 'France: DailyMotion Pays Damages for Late Removal of Infringing Materials' (CIS Blog, 8 December 2014).
48  See Allostreaming 2016 (n. 47) 7. In delivering this decision, the Court of Appeal confirmed in part the first instance decision delivered by the TGI Paris. Notably, the appellate decision reversed the first


with a massive attack' and are 'heavily threatened by the massive piracy of their works'.49 Hence, the Court of Appeal determined that it is 'legitimate and in accordance with the principle of proportionality that [ISPs and search engines] contribute to blocking and delisting measures' because they 'initiate the activity of making available access to these websites' and 'derive economic benefit from this access (especially by advertising displayed on their pages)'.50 In reviewing the matter, the Cour de cassation expanded intermediaries' obligations even further by establishing that all enforcement costs must be borne by access and hosting providers.51 German courts have found that—under the Telemedia Act52—hosting providers are ineligible for the liability privilege provided under Article 14 of the ECD if their business model is mainly based on copyright infringement. In two disputes involving the Swiss-based file-hosting service RapidShare, the Federal Supreme Court of Germany—the Bundesgerichtshof (BGH)—imposed monitoring obligations on RapidShare.53 According to the BGH, although RapidShare's business model was not primarily designed for violating copyright, it nevertheless provided incentives to third parties to illegally share copyrighted content.54 Therefore (as also affirmed by the BGH in the decision delivered in the case of Atari Europe v RapidShare55), RapidShare—and similar file-hosting services—are required to abide by more stringent monitoring duties. Thus, the BGH determined that a hosting provider is not only required to delete files containing copyrighted material as soon as it is notified of a violation by the rightholder but must also take steps to prevent similar infringements by other users in the future. File-hosting services are required to actively monitor incoming links to discover copyrighted files as soon as there is a specific reason to do so and then to ensure that those files become inaccessible to the public.56 In doing so, the service provider should use all possible resources—including search engines, Facebook, Twitter, or web crawlers—to identify links made accessible to the public by user-generated repositories of links.57 Furthermore, in the

48 (cont.)  instance decision on the issue of the allocation of costs. According to the Court of Appeal, all costs related to blocking and delisting sixteen Allostreaming websites should be sustained by the search engines, rather than being equally shared between the infringing websites and the search engines as previously decided by the TGI.
49  ibid.
50  ibid.
51  Cour de cassation SFR, Orange, Free, Bouygues télécom et al. v Union des producteurs de cinéma et al. [6 July 2017] no. 909 (Fr.).
52  See Telemediengesetz [Federal Telemedia Act] 2007 BGBl. I S 179 (Ger.). This Act implements the provisions of the ECD (including Art. 14(1)) in German law.
53  See BGH GEMA v RapidShare [15 August 2013] I ZR 79/12, [2014] GRUR-RR 136 (Ger.) (where the German copyright collecting society, GEMA, sued RapidShare in Germany, alleging that over 4,800 copyrighted music files were shared via RapidShare without consent from GEMA or the rightholder). An English translation is available online.
54  ibid.
55  See BGH Atari Europe v RapidShare [12 July 2012] I ZR 18/11, [2013] GRUR 370 (Ger.) (in this case, RapidShare neglected to check whether certain files violating Atari's copyright in the computer game 'Alone in the Dark' were stored on its servers by other users).
56  See GEMA v RapidShare (n. 53) para. 60.
57  ibid.


so-called 'internet auction' cases,58 which involved trade mark infringements taking place on internet auction platforms such as eBay, the BGH repeatedly decided that notified trade mark infringements imposed an obligation on those platforms to investigate future offerings, either manually or through software filters, if the necessary measures were possible and economically reasonable.59 The BGH based its decisions on the German property law doctrine of Störerhaftung which, as codified in section 1004 of the German Civil Code, grants a proprietor a right to (permanent) injunctive relief against any person who causes an interference with his property.60 However, since the doctrine prevents any person from being held liable as a Störer (i.e. an interferer) if that would entail an unreasonable burden on him, the BGH struggled to determine the exact scope of the duty of care to be imposed on auction platforms which would qualify as 'reasonable' or 'technically possible'. In the Internet Auction III case, the BGH determined that the investigation of clearly noticeable infringements—such as blatant counterfeit items—would pass the reasonableness test, whereas a filtering obligation which endangers the business model of the internet auction platform would be unreasonable. However, in a later decision delivered in a different case, the BGH determined that an obligation to manually check and visually compare each product offered in an online auction for infringement that was not clear or obvious would be unreasonable.61 On the other hand, offering filtering tools to trade mark holders—as eBay does—so that they can perform such manual checks themselves would be sufficient for the purpose of avoiding liability under the doctrine of Störerhaftung.62 In Italy, mixed case law has emerged. Some courts have imposed proactive monitoring obligations on intermediaries, whereas other courts have taken the opposite stance and confirmed that there is no monitoring obligation for intermediaries under European law.63 In a long-standing legal battle between Delta TV and YouTube being

58  See BGH Rolex v eBay/Ricardo (aka Internetversteigerung I) [11 March 2004] I ZR 304/01, [2004] GRUR 860 (Ger.) para. 31; BGH Rolex v eBay (aka Internetversteigerung II) [19 April 2007] I ZR 35/04, [2007] GRUR 708 (Ger.); BGH Rolex v Ricardo (aka Internetversteigerung III) [30 May 2008] I ZR 73/05, [2008] GRUR 702 (Ger.).
59  See Internetversteigerung I (n. 58) para. 46.
60  This doctrine has been extended by analogy to intellectual property law and was in fact applied in the RapidShare cases mentioned earlier (nn. 53 and 55).
61  See BGH Kinderhochstühle im Internet [22 July 2010] I ZR 139/08, [2011] GRUR 152 (Ger.).
62  ibid.
63  For case law confirming the safe harbour and no monitoring obligations, see Corte d'appello Milano [Milan Court of Appeal] Reti Televisive Italiane S.p.A. (RTI) v Yahoo! Italia S.r.l. (Yahoo!) et al. [7 January 2015] N RG 3821/2011 (It.) (reversing a previous decision regarding the publication of fragments of television programmes through the now-terminated Yahoo! Video service and clarifying that RTI had the obligation to indicate in a 'detailed, precise and specific manner' the videos that Yahoo! had to remove, and that the court of first instance could not 'impose on a hosting provider general orders or, even worse, general monitoring obligations, which are forbidden by Directive 2000/31/EC'); Tribunale Milano [Milan Tribunal] Mediaset Premium S.p.a. v Telecom Italia S.p.a. et al. [27 July 2016] (It.) (discussing a blocking injunction against Calcion.at and clarifying that mere conduit internet providers do not have an obligation to monitor their networks and automatically remove content). See also Tribunale Roma [Rome Tribunal] Reti Televisive Italiane S.p.A. (RTI) v TMFT Enterprises LLC–Break Media [27 April 2016] (It.) (confirming no monitoring obligations but stating that rightholders do not need to list the URLs where the videos are made available).


fought before the Tribunal of Turin, Delta TV sued Google and YouTube for infringement of its copyright in certain South American soap operas that users had uploaded to YouTube. In this case, Google complied with its notice-and-takedown policy, and the videos were removed as soon as the specific URLs were provided by Delta TV. In one interim decision, the court agreed with Delta TV's claims and ordered Google and YouTube to remove the infringing videos and to prevent further uploads of the same content through the use of its Content ID software (using as a reference the URLs provided by Delta TV).64 The court stressed that these proactive monitoring obligations derive from the fact that YouTube is a 'new generation' hosting service, a role that foists on it a greater responsibility to protect third parties' rights.65 In 2017, the Tribunal of Turin delivered a final decision on the matter, confirming the previous decision and an obligation on YouTube to partially monitor its network by preventing the re-uploading of previously removed content.66 The court noted that 'there subsists on YouTube an actual legal obligation to prevent further uploads of videos already flagged as infringing of third-party copyrights'.67 According to the court, this amounts to an ex post specific obligation or duty of care in line with recital 40 of the ECD. It is worth noting that multiple Italian cases have applied reasoning similar to that employed by the Brazilian STJ in the Dafra case, stating that hosting providers, whether active or passive, have an obligation to prevent the repetition of further infringements once they have actual knowledge of the infringement, according to the principle cuius commoda, eius et incommoda ('a party enjoying the benefits [of an activity] should also bear the inconveniences').68 This civil law principle refers to a form of extra-contractual (or tort) liability under which any person who benefits from a certain activity should be liable for any damage that that activity may cause. On the other hand, in Spain, the decision delivered by the Madrid Court of Appeal in the Telecinco case marks a departure from the trend towards the imposition of general

64  See Tribunale Torino [Tribunal of Turin] Delta TV v YouTube [23 June 2014] N RG 15218/2014 (It.) (revising en banc a previous decision rejecting Delta TV's request on the basis that there is no obligation on the part of Google and YouTube, as hosting providers, to assess the actual ownership of the copyright in videos uploaded by individual users). See also Eleonora Rosati, 'Italian court says that YouTube's Content ID should be used to block allegedly infringing contents' (IPKat, 21 July 2014).
65  See Delta TV v YouTube (n. 64) 12.
66  See Tribunale Torino [Tribunal of Turin] Delta TV v Google and YouTube [7 April 2017] N RG 38113/2013 (It.).
67  Eleonora Rosati, 'Italian court finds Google and YouTube liable for failing to remove unlicensed content (but confirms eligibility for safe harbour protection)' (IPKat, 30 April 2017).
68  See e.g. Tribunale Milano sez. Penale [Milan Tribunal, Criminal Section] David Drummond and others [24 February 2010] no. 1972/2010 (It.) (discussing the notorious Vividown case and convicting Google executives of violating data protection law in connection with the online posting of a video showing a disabled person being bullied and insulted). See also Giovanni Sartor and Mario Viola de Azevedo Cunha, 'The Italian Google-Case: Privacy, Freedom of Speech and Responsibility of Providers for User-Generated Contents' (2010) 18 Int'l J. of Law Info. Tech. 356, 373–4.


monitoring obligations.69 Indeed, it is in line with several decisions delivered by the national courts of EU Member States (including Italy, as mentioned earlier) that deny the application of general monitoring obligations in accordance with Article 15 of the ECD. In the Telecinco case, the Madrid Court of Appeal dismissed the request of Telecinco—a Spanish broadcaster owned by the Italian Mediaset—to issue an injunction to prevent the future posting of copyright-infringing content by users on the YouTube platform. In doing so, the Court of Appeal made a series of arguments to demonstrate how the imposition of a proactive monitoring obligation at the national level is pre-empted by EU law and jurisprudence. The Court of Appeal noted that although the CJEU interpreted Article 11 of the Enforcement Directive to mean that an ISP may be ordered 'to take measures which contribute, not only to bringing to an end infringements of those rights by users of that marketplace, but also to preventing further infringements of that kind',70 it also made it clear that this rule 'may not affect the provisions of Directive 2000/31 and, more specifically, Articles 12 to 15 thereof . . . which prohibits national authorities from adopting measures which would require a hosting service provider to carry out general monitoring of the information that it stores'.71 Accordingly, the Court of Appeal concluded that issuing an injunction to prevent future infringements would be in contravention of EU law, as it would result either in an order to proactively monitor UGC (contrary to the ECD) or in an obligation to implement a filtering system that, according to the CJEU, would seriously endanger ISPs' freedom to conduct business and users' fundamental rights, including data protection and freedom of information.

2.2  Private Ordering: Emerging Industry Practice

Proactive general monitoring is also being voluntarily adopted by OSPs as a private ordering mechanism, in response to pressure from rightholders and governments to purge the internet of allegedly infringing content or illegal speech, and as a means of protecting themselves from lawsuits over content uploaded by third parties. In 2008, Google launched its Content ID system—an automated content-screening and filtering mechanism—after being exposed to a major lawsuit brought against it by Viacom based on the unauthorized uploading and viewing of copyright-protected content by users on the YouTube platform (owned by Google).72 Similarly, in 2014, Vimeo adopted an analogous filtering system known as Copyright Match in the aftermath of a copyright-infringement lawsuit brought against it by Capitol Records and EMI concerning several

69  See Madrid Court of Appeal YouTube v Telecinco [2014] decision no. 11/2014 (Sp.).
70  C-324/09 (n. 17) para. 144.
71  See C-360/10 (n. 17) paras 32–3 and C-324/09 (n. 17) para. 139.
72  See Viacom Int'l v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) (US) (vacating summary judgment in YouTube's favour in the long-running legal battle with Viacom, on the ground that a reasonable jury could find that Google and YouTube had actual knowledge or awareness of specific infringing activity on the website). The lawsuit was subsequently settled.


music videos that users had uploaded to the Vimeo platform.73 Both technologies rely on digital fingerprinting to match an uploaded file against a database of protected works provided by rightholders. Google's Content ID system applies four possible policies: (1) muting matched audio in an uploaded video; (2) completely blocking a matched video; (3) monetizing a matched video for the copyright owner by running advertisements against it; and (4) tracking a matched video's viewer statistics.74 The Copyright Match system functions in a similar way. Tailoring of Content ID policies is also possible: rightholders can block content in some instances and monetize it in others, depending on the amount of copyrighted content included in the allegedly infringing uploaded file. The system also allows end-users to dispute copyright owners' claims on content.75 Private ordering by OSPs through the voluntary application of monitoring mechanisms is increasingly being promoted by governments. In the EU, the Commission's 'Communication on Online Platforms and the Digital Single Market' puts forward the idea that 'the responsibility of online platforms is a key and cross-cutting issue'.76 In other words, the Commission would like to impose an obligation on online platforms to behave responsibly by addressing specific problems.77 Hosting providers—especially platforms—would be called on to actively and swiftly remove illegal material, instead of reacting to complaints. As the Commission puts it, the goal is 'to engage with platforms in setting up and applying voluntary cooperation mechanisms'.78 Again, in a 2017 Communication, the Commission aims at promoting 'enhanced responsibility of online platforms' on a voluntary basis through 'proactive measures to detect and remove illegal content online'.79 As an umbrella framework, in 2016 the Commission agreed with all major online hosting providers—including Facebook, Twitter, YouTube, and Microsoft—on a code of conduct that includes a series of commitments to combat the spread of illegal hate speech online in Europe.80 Also, in partial response to this increased pressure from the EU regarding the role of intermediaries in the fight against online terrorism, major tech companies announced that they would begin sharing hashes of apparent terrorist propaganda.81 For some time, YouTube and Facebook have been using Content ID and

73  See Capitol Records LLC v Vimeo, 972 F.Supp.2d 500 (SDNY 2013) (US) (denying in part Vimeo's motion for summary judgment).
74  See YouTube, 'How Content ID Works'.
75  See YouTube, 'Dispute a Content ID Claim'.
76  European Commission Communication, 'Online Platforms and the Digital Single Market: Opportunities and Challenges for Europe' COM (2016) 288 final, 9.
77  ibid. 8.
78  ibid.
79  See European Commission, 'Tackling Illegal Content Online: Towards an Enhanced Responsibility of Online Platforms' COM (2017) 555 final, s. 3.3.1.
80  See European Commission Press Release, 'European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech' (31 May 2016).
81  See 'Google in Europe, Partnering to Help Curb the Spread of Terrorist Content Online' (Google Blog, 5 December 2016).


other matching tools to filter 'extremist content'.82 For this purpose, tech companies plan to create a shared database of unique digital fingerprints—known as hashes—that can identify images and videos promoting terrorism.83 When one company identifies and removes such a piece of content, the others will be able to use the hash to identify and remove the same piece of content from their own networks. The fingerprints will help to identify image and video content that is 'most likely to violate all of our respective companies' content policies'.84 Despite the collaboration, the task of defining removal policies will remain within the remit of each platform.85
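The hash-sharing arrangement just described can be sketched in a few lines of code. This is a minimal illustration only: the industry systems rely on proprietary perceptual fingerprints that survive re-encoding (not the plain cryptographic hash used below), and all names here (`SharedHashDatabase`, `Platform`, `fingerprint`) are hypothetical, not part of any actual implementation.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual hash; real systems use fingerprints
    # robust to re-encoding and minor edits.
    return hashlib.sha256(content).hexdigest()

class SharedHashDatabase:
    """Pool of fingerprints of content already removed by any participant."""
    def __init__(self):
        self.flagged = set()

    def add(self, fp: str):
        self.flagged.add(fp)

    def contains(self, fp: str) -> bool:
        return fp in self.flagged

class Platform:
    def __init__(self, name: str, db: SharedHashDatabase):
        self.name = name
        self.db = db

    def remove_and_share(self, content: bytes):
        # One company identifies and removes a piece of content,
        # then contributes its fingerprint to the shared database.
        self.db.add(fingerprint(content))

    def screen_upload(self, content: bytes) -> bool:
        # The other companies can use the shared fingerprint to block
        # re-uploads, while each still applies its own removal policy.
        return not self.db.contains(fingerprint(content))

db = SharedHashDatabase()
a, b = Platform('A', db), Platform('B', db)
video = b'...flagged propaganda clip...'
a.remove_and_share(video)
print(b.screen_upload(video))  # False: blocked on the other network too
```

The sketch reflects the division of labour noted in the text: the database is shared, but the decision of what to remove in the first place remains with each individual platform.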

3.  The EU Copyright Directive in the Digital Single Market: Legitimation through Legislation?

Article 17 of the C-DSM Directive is a significant step towards consolidating, through legislative means, the transformation of OCSSPs from passive neutral services into active 'gate-keepers'. The C-DSM Directive limits itself to OCSSP liability relating to content that infringes copyright and preserves the existing intermediary liability framework as regards other illegal content, such as defamatory statements, hate speech, and violations of privacy. The draft legislative provision was subject to numerous amendments that constantly reframed the scope of OCSSPs' liability. The first draft of the C-DSM Directive was proposed by the Commission in September 2016 (hereafter C-DSM Directive Proposal).86 In July 2018, the European Parliament rejected a mandate proposed by the Legal Affairs (JURI) Committee to enter into negotiations with the Council for the purpose of enacting the C-DSM Directive,87 following which (and after further negotiations) an amended version of the C-DSM Directive (including Art. 17—which

82  See Joseph Menn and Dustin Volz, 'Exclusive: Google, Facebook Quietly Move Toward Automatic Blocking of Extremist Videos' (Reuters, 25 June 2016) (apparently, the 'automatic' removal of extremist content is only about automatically identifying duplicate copies of videos that have already been removed through human review).
83  See Olivia Solon, 'Facebook, Twitter, Google and Microsoft Team up to Tackle Extremist Content' (The Guardian, 6 December 2016).
84  See 'Partnering to Help Curb Spread of Online Terrorist Content' (Facebook Newsroom, 5 December 2016).
85  ibid.
86  C-DSM Directive Proposal (n. 1).
87  European Parliament Press Release, 'Parliament to review copyright rules in September' (5 July 2018).


was then known as Art. 13) was adopted by the Parliament in September 2018 (hereafter 'version of September 2018'). Finally, in February 2019 a further amended version of the C-DSM Directive and Article 17 (then Art. 13) was agreed upon during the trilogue negotiations between the Parliament, the Commission, and the Council (hereafter 'agreed-upon text'). This agreed-upon text (with several inconsequential changes) was finally adopted by the Council in April 2019 (following its approval by the Parliament).88 Before proceeding to discuss the salient features of Article 17, a brief exposition of the aims and objectives of the C-DSM Directive will be helpful in understanding the context of Article 17 and in locating it within the 'internet threat' discourse. As noted in the explanatory memorandum to the C-DSM Directive Proposal of September 2016, a powerful trigger behind Article 17 is the attempt to close the so-called 'value gap' of the EU online digital economy, which refers to an alleged unfair distribution of the revenues generated from the online use of copyright-protected works among industry actors along the value chain.89 Working towards closing the 'value gap', the reform inter alia requires OCSSPs to engage in a more proactive role in preventing the availability of copyright-infringing content over the services provided by them, in order to enable rightholders to receive appropriate remuneration for the use of their works.90

3.1  Definition of an OCSSP

Article 2(6) of the C-DSM Directive defines an OCSSP as:

a provider of an information society service of which the main or one of the main purposes is to store and give the public access to a large amount of copyright-protected works or other protected subject-matter uploaded by its users, which it organises and promotes for profit-making purposes.91

Thus, the application of Article 17 is limited to UGC hosting providers.92 It is further required that the OCSSP plays an active role in organizing (including categorizing) the UGC content and promoting it for profit-making purposes, thereby excluding the

88  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92. In the absence of an indication to the contrary, all references to the C-DSM Directive, including Art. 17, made in the course of the following discussion refer to provisions of this final version.
89  See Part 1 of the Explanatory Memorandum to the C-DSM Directive Proposal (n. 1): 'Evolution of digital technologies has led to the emergence of new business models and reinforced the role of the Internet as the main marketplace for the distribution and access to copyright-protected content. . . . It is therefore necessary to guarantee that authors and rightholders receive a fair share of the value that is generated by the use of their works and other subject-matter.'
90  Directive 2019/790/EU (n. 88) recital 61 (recital 37 in the version of September 2018).
91  ibid. Art. 2(6) (emphasis added).
92  ibid. recitals 61 and 62.


application of the hosting liability exemption in Article 14 of the ECD.93 In doing so, the C-DSM Directive redeploys the language of the CJEU in L'Oréal v eBay,94 where the optimization and promotion of offers for sale hosted on its platform by eBay was linked to the active role played by eBay in respect of those offers for sale, thereby removing it from the protection offered under Article 14 of the ECD. As per recital 62 of the C-DSM Directive, the definition of an OCSSP is expected to target online services which play an important role in the online content market by competing with other online content services, such as online audio- and video-streaming services, for the same audiences. On the other hand, the definition specifically excludes services which have a main purpose other than enabling users to upload and share copyright-protected content for profit-making purposes, such as providers operating for non-profit motives (e.g. open-source software-development platforms and online encyclopedias), electronic communication services,95 online marketplaces whose main activity is online retail as opposed to giving access to copyright-protected content, and business-to-business cloud services (e.g. cyberlockers) that allow users to upload content for their own use.96 In sum, platforms like YouTube, Facebook, and DailyMotion would clearly fall within the scope of the definition of an OCSSP in Article 2(6). This further serves to underscore the primary aim of Article 17: closing the 'value gap' by compelling platforms which obtain commercial profit through the sharing of copyright-protected content uploaded by users to appropriately remunerate rightholders. Although this may constitute a legitimate objective, the means employed by Article 17 appear to overreach this aim and threaten to severely upset the balance between the interests of rightholders, intermediaries, and users in the dissemination and use of copyright-protected content online.

3.2  A General Monitoring Obligation?

Article 17 makes OCSSPs directly liable for copyright infringement based on the assumption that they perform an act of communication to the public97 or an act of making available to the public, by means of giving public access to copyright-protected content uploaded by users. This represents a radical shift from the prevailing regulatory framework under the ECD, which imputes secondary liability based on actual knowledge

93  ibid. Art. 17(3) explicitly precludes the limitation of liability established in Art. 14(1) of the ECD from applying to situations covered by Art. 17 of the C-DSM Directive. However, its application is preserved for uses of content that do not fall within the ambit of Art. 17 of the C-DSM Directive.
94  See C-324/09 (n. 17) para. 116.
95  As defined in Art. 2(4) of Directive (EU) 2018/1972 of the European Parliament and of the Council of 11 December 2018 establishing the European Electronic Communications Code [2018] OJ L321/36.
96  Directive 2019/790/EU (n. 88) Art. 2(6).
97  This constitutes an extension of the notion of 'communication to the public' under EU law, which traditionally stressed the 'indispensable' or 'essential' role of the user in giving access to copyright-protected work to an additional public. See e.g. C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644; C-162/10 Phonographic Performance (Ireland) Ltd v Ireland and Attorney General [2012] ECLI:EU:C:2012:141.


560   GIANCARLO FROSIO AND SUNIMAL MENDIS and ‘takedown’.98 Thus, under Article 17 OCSSPs would automatically be assumed to infringe copyright and be held directly liable for acts of copyright infringement that are materially committed by users who upload unauthorized content to online services ­provided by them. The imposition of direct liability for copyright infringement is combined with an elevated standard of care and due diligence that is exemplified through a three-tier framework of direct liability.99 First, OCSSPs are required to use ‘best efforts’ to obtain the authorization of the relevant copyright owners (rightholders) before communicating any copyright-protected content to the public.100 Secondly, they are required to use ‘best efforts’ to ensure the unavailability of specific works concerning which rightholders have provided the relevant and necessary information.101 Thirdly, on receiving a sufficiently substantiated notice from the rightholders, OCSSPs are required to act expeditiously to disable access or to remove from their websites the particular work and, furthermore, to use ‘best efforts’ to prevent the future upload of that content (i.e. ‘stay down’ of infringing content).102 As noted previously, the limitation of liability under Article 14 of the ECD is made expressly ­inapplicable to situations covered under Article 17.103 For the purposes of ensuring the unavailability of unauthorized content, the ‘best efforts’ of OCSSPs are assessed in accordance with ‘high industry standards of professional diligence’104 that take into account whether the OCSSP has taken all steps that would be taken by a diligent operator, in accordance with ‘best industry practices’.105 The effectiveness of the steps taken and their proportionality in achieving the relevant objective (i.e. to avoid and to discontinue the availability of unauthorized works) are also pertinent to the assessment. 
Furthermore, recital 66 of the C-DSM Directive notes that the assessment of 'best efforts' should inter alia consider '. . . the evolving state of the art of existing means, including future developments, for avoiding the availability of different types of content'.106 Thus, it is anticipated that the standard of due diligence expected of OCSSPs will increase in line with future technological innovations that offer more effective means of identifying and blocking unauthorized copyright-protected content on online platforms. It is argued here that the transformation effected by Article 17 on the nature and scope of OCSSPs' liability for copyright infringement within the EU reflects a legitimization, through legislative means, of the shift initially observed in case law towards perceiving OCSSPs as 'gate-keepers' with a proactive role in preventing the storage or transmission of copyright-infringing content over the online services they provide. On the one hand, Article 17 is unprecedented in its imposition of direct liability on OCSSPs for copyright infringement and in the high standard of care to which OCSSPs are required to adhere. Even CJEU case law that may be interpreted as reinforcing the position upheld by the C-DSM Directive has hitherto not reached so far.107 On the other hand, it is difficult to envision how an OCSSP could avoid liability under Article 17 without engaging in general monitoring of content. Ensuring the unavailability of specific works for which rightholders have provided relevant and necessary information would necessarily require the general and indiscriminate inspection of all content uploaded by users of an online service in order to promptly identify and block any potentially infringing content. Preventing future uploads of unauthorized content once it has been taken down would be especially difficult to achieve without engaging in general monitoring of all content uploaded to the online service so as to ensure the exclusion of specific 'blacklisted' works or subject matter. Through the decisions delivered in Scarlet, Netlog, and L'Oréal v eBay, the CJEU has authoritatively defined the distinction between general and specific monitoring obligations.

98  See Directive 2000/31/EC (n. 3) Art. 14.
99  Although a few limitations apply for services: (1) available in the EU for less than three years; and (2) having an annual turnover of less than €10 million and not exceeding 5 million unique visitors per month. See Directive 2019/790/EU (n. 86) Art. 17(6).
100  ibid. Art. 17(4)(a).
101  ibid. Art. 17(4)(b).
102  ibid. Art. 17(4)(c).
103  ibid. Art. 17(3).
104  ibid. Art. 17(4)(b).
105  ibid. recital 66, para. 2.
106  ibid. (emphasis added).

MONITORING AND FILTERING: EUROPEAN REFORM OR GLOBAL TREND?   561
In Netlog, referring specifically to hosting providers, the CJEU held that European law must be interpreted as precluding the requirement for a hosting provider to install a system for filtering: (1) information which is stored on its servers by the users of its service; (2) which applies indiscriminately to all of those users; (3) as a preventative measure; (4) exclusively at its expense; (5) for an unlimited period; and (6) which is capable of identifying electronic files containing musical, cinematographic, or audiovisual works.108 It is evident that any obligation to monitor all content uploaded by users to a website for the purpose of identifying specific works would qualify as a general monitoring obligation within the meaning of this definition. This is further substantiated by the decision in L'Oréal v eBay, where the CJEU made it clear that active monitoring of all data uploaded by users in order to prevent any future infringements would be precluded by EU law.109 As noted earlier, a monitoring obligation against a 'blacklist' of 'specific and duly notified copyright protected works' would still apply indiscriminately to all users, operate as a preventative measure over an unlimited time at the exclusive expense of the OCSSP, and apply to all kinds of infringements, thus remaining general rather than specific.
Therefore, although Article 17(8) explicitly states that '[t]he application of the provisions of this article shall not lead to any general monitoring obligation',110 in practical terms, by requiring OCSSPs to use their 'best efforts' to ensure the unavailability of specific unauthorized content over the services they provide, Article 17 (albeit indirectly) compels them to engage in general monitoring of content posted by users on their services.111 However, a monitoring obligation is never spelled out in the reform and, as long as monitoring and filtering schemes (to fulfil the obligation to use best efforts to ensure the unavailability of infringing content) are developed on a voluntary basis under the aegis of Article 17(10), the policy arrangement in Article 17 would effectively avoid formal non-compliance with Article 15 of the ECD. While the preclusion of a general monitoring obligation under Article 17(8) would prevent a court, or a Member State's legislation or administrative regulation, from imposing a general monitoring obligation on OCSSPs, it does not in any way preclude OCSSPs from voluntarily engaging in the general monitoring of content uploaded by users in order to avoid liability under Article 17. In view of the high standard of care required of them, it is natural to assume that many risk-averse OCSSPs would engage in the general monitoring of content uploaded on their services as a safeguard against copyright-infringement suits. A framework for defining this voluntary monitoring scheme will be developed through stakeholder dialogues, organized by the Commission 'to discuss best practices for cooperation between online content-sharing service providers and rightholders'.112 The results of the stakeholder dialogues will be crystallized in guidance issued by the Commission.113 Article 17 would thus impose monitoring obligations de facto but not ex lege, making the chances of success of a challenge on the basis of the afore-mentioned inconsistency with the ECD extremely low.

Of even greater concern is the high likelihood that the enhanced risk of liability for copyright infringement would compel OCSSPs to adopt automated filtering systems (e.g. content-recognition technologies) and algorithmic enforcement mechanisms (e.g. automated content blocking). Achieving the general and indiscriminate monitoring of all content uploaded by users through manual filtering would impose a considerable financial and logistical burden on OCSSPs. Thus, automated filtering and blocking tools would prove to be the most efficient and cost-effective means of ensuring the unavailability of unauthorized content over online services. In fact, the assessment of 'best efforts' in accordance with industry standards and evolving technologies implies that OCSSPs may even be legally required to employ algorithmic monitoring and enforcement systems if these are determined to be the most effective and proportionate means of achieving the unavailability of specific copyright-protected content over online services, and moreover reflect the prevailing industry standard, which is increasingly the case as powerful players such as Google resort to automated monitoring systems. It is also notable that, in defining best practices for OCSSPs in preventing the availability of unauthorized content, the C-DSM Directive Proposal promoted the use of effective technologies such as content-recognition tools.114 This reference to the use of effective technologies was expunged from the version of September 2018, which moreover included explicit counsel to avoid the use of automated content-blocking in defining best practices.115 Surprisingly, this reference to the avoidance of automated content-blocking systems was deleted from the final version, thereby leaving the door open for OCSSPs to use automated content-blocking in fulfilling their obligations under Article 17.

107  See Eleonora Rosati, 'The CJEU Pirate Bay Judgment and its Impact on the Liability of Online Platforms' (2017) 39(12) EIPR 737–48 (noting that '[i]n relation to the current EU policy discussion of the so called "value gap proposal", the judgment reinforces the position of the European Commission, especially the basic idea that the making available, by a hosting provider, of third-party uploaded copyright content may fall within the scope of the right of communication to the public').
108  See C-360/10 (n. 17) paras 26 and 52.
109  See C-324/09 (n. 17) para. 139.
110  Directive 2019/790/EU (n. 86) Art. 17(8) (emphasis added).
111  In fact, EU Commissioner Gunther Oettinger, who was a key proponent of the C-DSM Directive, admitted in 2019 that, 'as things stand, upload filters cannot be completely avoided'. See 'Uploadfilter laut CDU-Politikern nicht zu vermeiden' (Spiegel Online, 29 March 2019) (authors' translation).
112  Directive 2019/790/EU (n. 86) Art. 17(10).
113  ibid.

4.  Effect on Fundamental Rights

The implications of an increase in the general monitoring by OCSSPs of content uploaded to online services, and of the enhanced use of automated filtering and enforcement systems for that purpose, raise important questions relating to the preservation of users' fundamental rights to expression and information. The CJEU has emphasized that general monitoring and filtering measures would fail to strike a 'fair balance' between copyright and other fundamental rights.116 In particular, automatic infringement-assessment systems may undermine freedom of expression and information by preventing users from benefiting from exceptions and limitations granted under EU copyright law to make certain privileged uses of copyright-protected content.117 At the prevailing level of technological sophistication, automated systems are often unable to correctly appreciate the nuances between unauthorized uses of copyright-protected content and uses that are permissible by reason of falling within the ambit of copyright exceptions and limitations.118 Therefore, there is a high risk of false positives that may result in chilling effects and negatively impact users' fundamental rights to freedom of expression and information. In addition, complexities regarding the public domain status of certain works may escape the discerning capacity of content-recognition technologies. In the CJEU's own words, these measures 'could potentially undermine the freedom of information, since that system might not distinguish adequately between unlawful content and lawful content'.119 The redress mechanism under Article 17(9), which enables users to challenge the removal or blocking of access to content uploaded by them, falls short of adequately preserving users' interests in making privileged uses of copyright-protected content. An arrangement in which all duly notified specific works are blocked regardless of whether or not their use is privileged, with a redress mechanism operating only ex post, defies the fundamental goal of the ECD's liability system for hosting providers, which is intended to operate ex ante so as to minimize chilling effects, especially given the critical role of virality in online content distribution. In effect, the introduction of a complaints and redress mechanism,120 inter alia to prevent misuses of, or restrictions on, the exercise of exceptions and limitations, turns a traditionally ex ante review mechanism into an ex post one, while content is taken down proactively by automated algorithmic filtering regardless of the fairness of the use, the application of exceptions and limitations, or the public domain status of the works. Again, Article 17 confirms this departure from traditional procedural arrangements for the enforcement of intellectual property rights (IPRs) by providing that 'Member States shall ensure that users have access to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright and related rights.'121 Traditional IPR enforcement focuses on the merits of rightholders' claims of infringement, rather than on re-users asserting the use of an exception or limitation in court after their content has been blocked through private ordering. To do otherwise means placing a heavy burden on non-professional creators and UGC, as the transaction costs of litigation will usually be too high for these creators, who will predominantly choose not to seek any legal redress even where the blocking or takedown has apparently been bogus.122

114  See ibid. Proposal Art. 13(3).
115  See version of September 2018, Art. 13(3), 'When defining best practices, special account shall be taken of fundamental rights, the use of exceptions and limitations as well as ensuring that . . . automated blocking of content is avoided' (emphasis added).
116  See C-360/10 (n. 17) para. 52.
117  See e.g. Leron Solomon, 'Fair Users or Content Abusers? The Automatic Flagging of Non-Infringing Videos by Content ID on YouTube' (2015) 44 Hofstra L. Rev. 237; Corinne Hui Yun Tan, 'Lawrence Lessig v Liberation Music Pty Ltd—YouTube's Hand (or Bots) in the Over-zealous Enforcement of Copyright' (2014) 36 EIPR 347; Justyna Zygmunt, 'To Teach a Machine a Sense of Art—Problems with Automated Methods of Fighting Copyright Infringements on the Example of YouTube Content ID', Machine Ethics and Machine Law conference, Cracow, Poland (November 2016) 55.
118  See Giancarlo Frosio, 'COMMUNIA Final Report on the Digital Public Domain', report prepared for the European Commission on behalf of the COMMUNIA Network and the NEXA Center (2011) 99–103, 135–41 (discussing most of the relevant literature and major threats that technological protection measures pose for fair dealing and privileged and fair uses).

5.  Conclusions

The foregoing discussion demonstrates that the imposition of proactive monitoring obligations on OCSSPs under Article 17 of the C-DSM Directive is not a novel contrivance wrought by an inventive EU legislature but, rather, represents the legislative culmination of a global trend towards the imposition of proactive monitoring and filtering obligations on OSPs. This chapter has argued that it constitutes a legitimation by legislative means of an emerging shift in the perception of OSPs from passive players or 'mere conduits' to active 'gate-keepers' of content stored or transmitted through their services by third parties. It thereby reflects 'a broader move towards enlisting OSPs as the Internet police'123 and provides an impetus for the increased use of automated filtering and algorithmic enforcement systems. This has the potential to severely curtail the ability of users to benefit from legally granted exceptions and limitations that enable certain privileged uses of copyright-protected content, and may even curb the use of certain public domain content. It remains to be seen whether the legislative transformation of the role of OSPs under Article 17 as regards copyright-protected content will expand to other subject matter along the spectrum of intermediary liability, such as defamation, hate speech, and violations of privacy. In the meantime, it is vital to ensure that Article 17 is interpreted and enforced in a manner that preserves an equitable balance between the interests of users, intermediaries, and rightholders, especially in relation to the preservation of the fundamental rights to expression and information.

119  C-360/10 (n. 17) para. 50.
120  See Directive 2019/790/EU (n. 86) Art. 17(9).
121  ibid.
122  See Giancarlo Frosio, Reconciling Copyright with Cumulative Creativity: The Third Paradigm (Edward Elgar 2018) 220–5.
123  See Frosio (n. 19) 214.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Chapter 29

Blocking Orders: Assessing Tensions with Human Rights

Christophe Geiger and Elena Izyumenko*

Over the past few years, intellectual property (IP) enforcement by ordering internet access providers to block infringing websites has been rapidly evolving in Europe. The practice is increasing because pursuing direct infringers has proven ineffective and disproportionate, while targeting website operators is no easy task either, as these often run their services from another jurisdiction, can easily change location, or can conceal their identity. As a result, injunctions against internet access providers often remain the most efficient option left to rightholders. In the European Union, the legal basis for such injunctions rests on Article 8(3) of the Information Society Directive.1 According to this Article, 'Member States shall ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe a copyright or related right'. An almost identical provision, but with regard to all intellectual property rights, is enshrined in the third sentence of Article 11 of the Enforcement Directive.2 In addition, Article 12(3) of the e-Commerce Directive3 provides that the so-called 'mere conduit' liability exemption that would normally apply to internet access providers 'shall not affect the possibility for a court or administrative authority, in accordance with Member States' legal systems, of requiring the service provider to terminate or prevent an infringement'. Analogously, recital 45 of the e-Commerce Directive states that '[t]he limitations of the liability of intermediary service providers established in this Directive do not affect the possibility of injunctions of different kinds; such injunctions can in particular consist of orders by courts or administrative authorities requiring the termination or prevention of any infringement, including the removal of illegal information or the disabling of access to it.'4 At the same time, Article 15 of the e-Commerce Directive prohibits the imposition of a general monitoring obligation.

Even though authorized in principle by the European legislator,5 blocking injunctions have also proven problematic with regard to fundamental rights enshrined in the European legal order, such as users' right to freedom of expression and information6 and the ability of internet service providers (ISPs) to freely conduct their business. In response to this situation of legal uncertainty, two major European courts, the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR), have provided some guidance in two landmark decisions: the UPC Telekabel case and Akdeniz v Turkey. The case before the CJEU, UPC Telekabel,7 concerned a generic blocking order issued by the Austrian courts, at the request of two film production companies, against the website kino.to, which offered the companies' films for streaming or download without their consent. Before the CJEU, the Austrian referring court queried, inter alia, the compatibility with Union fundamental rights of result-oriented blocking injunctions that do not specify concrete means of preventing infringement (so-called outcome prohibitions).

*  This chapter draws for some parts on previous research published by the authors, in particular: 'The Role of Human Rights in Copyright Enforcement Online: Elaborating a Legal Framework for Website Blocking' (2016) 32(1) Am. U. Int'l L. Rev. 43.
1  See Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10.
2  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/16.
3  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.

© Christophe Geiger and Elena Izyumenko 2020.
The CJEU held that such injunctions were legitimate in principle, but made it necessary to reconcile: (1) copyright and related rights; (2) an internet access provider's freedom to conduct a business; and (3) the freedom of information of internet users. With regard to the internet users' right to information, it observed in particular that there would be no infringement of this right if the blocking was 'strictly targeted' and if the users were accorded an opportunity to assert their rights before the national court once the implementing measures taken by the ISP were known.8 No violation of the ISP's freedom to conduct a business was established either, as an 'outcome prohibition'9 left the ISP free to choose the blocking technique best adapted to its resources for implementation.10 Furthermore, the fact that the Austrian 'outcome prohibition' allowed the ISP to avoid liability by showing that it had taken all reasonable measures had, according to the Court, the effect that 'unbearable sacrifices' would not be required of the access provider.11 Finally, with regard to the right to intellectual property, the Court observed that the blocking had to be 'sufficiently effective to ensure genuine protection of the fundamental right at issue'.12 In practical terms, this meant that even the possibility of circumvention did not preclude the blocking, which only had to be 'reasonable'13 in discouraging users from accessing infringing content.14

Interestingly, just a few days before the Telekabel judgment was rendered, another supranational court in Europe, the ECtHR, likewise had to decide on the issue of website blocking and its effects on human rights. The case before the Strasbourg Court, Akdeniz,15 concerned the blocking of access in Turkey to the websites myspace.com and last.fm because they were disseminating musical works in violation of copyright. As a user of the blocked websites, the applicant complained about the collateral effects of the blocking which, according to him, were disproportionate.

The only issue raised before the Strasbourg Court thus concerned the freedom of information of internet users, as neither the websites in question nor their ISPs contested the blocking. Moreover, no provision analogous to the EU Charter's freedom to conduct a business is envisaged in the European Convention on Human Rights (ECHR), which largely remains an instrument for the protection of civil and political rights. Unlike the CJEU, which recognized users' standing in analogous suits, the ECtHR declared the application inadmissible ratione personae.16 The Court noted that the applicant was only indirectly affected by the blocking, alongside other Turkish users of the two music-sharing websites.17 The Court further observed that the websites were blocked because they did not comply with copyright legislation, and that the applicant had only been deprived of one means of listening to music among many (legitimate) others.18 Furthermore, it was not alleged by the applicant that the websites at issue distributed information of specific interest to him or that the blocking deprived him of an important source of communication.19 The Court also noted that the applicant acted as a 'mere' user, not as an owner of or contributor to the websites in question.20 In addition, neither the collateral effects of the blocking nor the commercial nature of the websites raised an important question of general interest.21 Finally, the need to balance freedom of information against the right to property of copyright holders (likewise protected by the Convention) left the national authorities with a particularly wide margin of appreciation in regulating the dispute.22

As the previous cases demonstrate, European courts advance several factors that inform, from the perspective of different fundamental rights, website-blocking practices for copyright enforcement in Europe.23 This chapter provides an overview of these factors, starting with the freedom of expression framework for website blocking and the rather revolutionary, at least for the European judiciary, concept of user rights that has been construed under it (Section 1). It then proceeds to discuss the limits on intermediaries' involvement in digital enforcement dictated by the EU-specific freedom to conduct a business (Section 2). The required efficacy of the blocking, resulting from the human right to property framework for IP, merits separate examination (Section 3). Potential effects of the recent EU copyright reform on website-blocking practices are then discussed before concluding (Section 4).

4  See, similarly, Information Society Directive (n. 1) recital 59.
5  Note though that not all European jurisdictions make website blocking available to rightholders. Notably, courts in Switzerland, which is not part of the EU and hence is not bound by EU law, have recently refused recourse to website blocking in cases of copyright infringement. This was done when the country's Supreme Court rejected a request by a Swiss film studio to order a local ISP to block its customers from accessing a website which made the works of the studio available on the internet without authorization. According to the Swiss Federal Supreme Court, the possibility to access copyright-protected works in such circumstances was covered by the Swiss private copy exception and thus did not amount to copyright infringement. See Federal Supreme Court of Switzerland, case No. 4A_433/2018, 8 February 2019 (Swi.) . See also, commenting on this case, Mirko Brüß, '31 Countries Offer "Site Blocking" in Cases of Copyright Infringement, But Switzerland is Not One of Them' (IPKat, 1 March 2019) <http://ipkitten.blogspot.com/2019/03/31-countries-offer-site-blocking-in.html>.
6  The right to freedom of expression under Art. 10 of the European Convention on Human Rights (to which Art. 11 of the EU Charter of Fundamental Rights corresponds) includes 'freedom to hold opinions and to receive and impart information and ideas'. Two aspects of this right can therefore be identified: the right to freedom of expression as such and the right to freedom of information. The European Court of Human Rights has over the years extended the latter right to include the right of access to information. See, notably, Magyar Helsinki Bizottság v Hungary [GC] App. no. 18030/11 (ECtHR, 8 November 2016) para. 149. On the freedom of information in the copyright context, see Christophe Geiger, Droit d'auteur et droit du public à l'information, approche de droit comparé (Litec 2004); and Dirk Voorhoof, 'Freedom of Expression and the Right to Information: Implications for Copyright' in Christophe Geiger (ed.), Research Handbook on Human Rights and Intellectual Property (Edward Elgar 2015) 331.
7  See C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192.
8  ibid. paras 56–7.
9  ibid. para. 60 (the CJEU explaining that 'the referring court describes the injunction so formulated as an "outcome prohibition" ("Erfolgsverbot"), by which is meant that the addressee of the injunction must prevent a particular outcome (namely access to the website) without the measures to be taken for that purpose by the addressee of the injunction being specified').
10  ibid. para. 52.
11  ibid. para. 53.
12  ibid. para. 62.
13  ibid. para. 59.
14  ibid. para. 62.
15  See Akdeniz v Turkey (dec) App. no. 20877/10 (ECtHR, 11 March 2014). For comment on the broader context of the case law of the ECtHR on freedom of expression and copyright law, see Christophe Geiger and Elena Izyumenko, 'Intellectual Property before the European Court of Human Rights' in Christophe Geiger, Craig Nard, and Xavier Seuba (eds), Intellectual Property and the Judiciary (Edward Elgar 2018) 48 ff.
16  See Akdeniz v Turkey (n. 15) para. 29.
17  ibid. para. 24.

1.  A Freedom of Expression Perspective on Website Blocking: The Emergence of User Rights 1.1  User Rights One prominent consequence of the free speech review of website blocking in copyright infringement cases rests on the idea of user rights as enforceable rights of equal value (and not mere interests to be taken into account).24 18  ibid. para. 25. 19  ibid. para. 26. 20  ibid. para. 27 (emphasis added). 21  ibid. para. 28. 22 ibid. 23  This chapter’s focus is on substantive fundamental rights. The procedural (due process) guarantees are also discussed, but only to the extent that they affect substantive rights that are relevant here. For a more substantial discussion of the fair trial aspects of enforcement, see Jonathan Griffiths, ‘Enforcement of Intellectual Property Rights and the Right to a Fair Trial’ in Geiger (ed.) (n. 6) 438; and Kimberlee Weatherall, ‘Safeguards for Defendant Rights and Interests in International Intellectual Property Enforcement Treaties’ (2016) 32(1) Am. U. Int’l L. Rev. 211. 24  On the emergence of user’s rights in the case law of the CJEU, see Christophe Geiger, ‘The Role of the Court of Justice of the European Union: Harmonizing, Creating and Sometimes Disrupting

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

570   CHRISTOPHE GEIGER AND ELENA IZYUMENKO In Telekabel, the CJEU explicitly recognized the cause of action for those ISP’s customers whose information rights might be affected by website blocking. It held in particular that, ‘in order to prevent the fundamental rights recognised by EU law from precluding the adoption of an injunction such as that at issue in the main proceedings, the national procedural rules must provide a possibility for internet users to assert their rights before the court once the implementing measures taken by the internet service provider are known’.25 Thereby obliging the national authorities to avail the users of the procedural opportunity to challenge the blocking before the courts, the CJEU advanced the idea that freedom of expression may be invoked not as a mere defence, but as a right on which an action in the main case was based. Although the CJEU envisaged this possibility within quite a limited context of result-tailored injunctions, some commentators have considered that such procedural standing or locus standi of users would ‘also apply . . . to national courts issuing specific orders, unless proportionality has also been reviewed from the users’ perspective’.26 An important further step towards strengthening user rights—this time in their material and not procedural context—has been taken by the CJEU in two judgments from July 2019, Funke Medien and Spiegel Online, both concerning the interaction between copyright protection and media freedom.27 In those cases, not only the rights of internet users (or customers) in copyright enforcement proceedings, but also the so-called ‘exceptions and limitations’ to copyright have been characterized by the Court of Justice as user rights. 
As stated by the CJEU in Funke Medien and Spiegel Online, ‘although Article 5 of Directive 2001/29 is expressly entitled “Exceptions and limitations”, it should be noted that those exceptions or limitations do themselves confer rights on the users of works or of other subject matter’.28 Such an unequivocal recognition by the Copyright Law in the European Union’ in Irini Stamatoudi (ed.), New Developments in EU and International Copyright Law (Kluwer Law Int’l 2016) 435. 25  C-314/12 (n. 7) para. 57 (emphasis added). 26  Pekka Savola, ‘Proportionality of Website Blocking: Internet Connectivity Providers as Copyright Enforcers’ (2014) 5(2) JIPITEC 116, 121 para. 36 (emphasis added). See also Martin Husovec, ‘CJEU Allowed Website Blocking Injunctions with Some Reservations’ (2014) 9(8) JIPLP 631, 633. More generally on the principle of proportionality and its importance in EU copyright law, see Orit Fischman Afori, ‘Proportionality: A New Mega Standard in European Copyright Law’ (2014) 45(8) IIC 889; and Tuomas Mylly, ‘Regulating with Rights Proportionality? Copyright, Fundamental Rights and Internet in the Case Law of the Court of Justice of the European Union’ in Oreste Pollicino, Giovanni Maria Riccio, and Marco Bassini (eds), Copyright Versus (Other) Fundamental Rights in the Digital Age. A Comparative Analysis in Search of a Common Constitutional Ground (Edward Elgar, forthcoming 2020). 27 C-469/17 Funke Medien NRW GmbH v Bundesrepublik Deutschland [2019] ECLI:EU:C:2019:623, and C-516/17 Spiegel Online GmbH v Volker Beck [2019] ECLI:EU:C:2019:625. 28  Funke Medien (n. 27) para. 70, and Spiegel Online (n. 27) para. 54 (emphasis added). For further discussion, see Christophe Geiger and Elena Izyumenko, ‘The Constitutionalization of Intellectual Property Law in the EU and the Funke Medien, Pelham and Spiegel Online Decisions of the CJEU: Progress, But Still Some Way to Go!’, Centre for International Intellectual Property Studies (CEIPI) Research Paper No. 
2019-09; IIC (forthcoming 2020).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

CJEU of copyright user rights has important consequences for the enforceability of copyright 'exceptions and limitations'. As a result of this conceptual change, which dictates that copyright 'exceptions and limitations' should no longer be understood as mere defences to infringement, user rights might be deemed enforceable, inter alia, vis-à-vis online contracts and TPM that impose additional restrictions on the legislatively envisaged 'exceptions and limitations' to copyright.29 The CJEU's stance on user rights has already influenced national courts. For example, in 2019 the Swedish Patents and Market Court of Appeal interpreted the Telekabel judgment to the effect that 'users should have the right to have their right to information considered in the event that a blocking injunction would leave room for an ISP to decide what measures should be taken in the event of infringement through its services. In such event, the Court should be presented with an opportunity to assess whether overblocking has occurred.'30 As for the ECtHR, its stance on user rights in copyright disputes is less clear. Although in a number of website-blocking cases the Court ruled positively on the victim status of the applicant-users, none of those cases concerned copyright.31 Moreover, unlike the CJEU, the Strasbourg Court distinguishes between 'active' users involved in not only receiving, but also imparting, information on the internet, and so-called 'mere' or 'simple' users32 acting as passive recipients thereof. 
For example, in the case of Cengiz and Others,33 which concerned the blocking of access to YouTube, the Court recognized the victim status of the applicants who used the platform not only for accessing videos in which they were interested but also actively, by downloading and sharing files from their YouTube accounts.34 Analogously, in the case of Yildirim, 29  Further on the possible consequences of the recognition of user rights, including in the context of contracts and TPM, see Christophe Geiger, ‘Copyright as an Access Right, Securing Cultural Participation through the Protection of Creators’ Interests’ in Rebecca Giblin and Kim G. Weatherall (eds), What if We Could Reimagine Copyright? (Australian National University (ANU) Press 2016) 94; Pascale Chapdelaine, Copyright User Rights: Contracts and the Erosion of Property (OUP 2017) 53–4; Jacques de Werra, ‘Moving Beyond the Conflict Between Freedom of Contract and Copyright Policies: In Search of a New Global Policy for On-Line Information Licensing Transactions. A Comparative Analysis Between U.S. Law and European Law’ (2003) 25 Colum. J.  of L.  & Arts 239, at 336; Thomas Riis and Jens Schovsbo, ‘User’s Rights, Reconstructing Copyright Policy on Utilitarian Grounds’ (2007) 1 European Intellectual Property Rev. 1; Tatiana-Eleni Synodinou, ‘The Lawful User and a Balancing of Interests in European Copyright Law’ (2010) 41(7) IIC 819, at 826; and Thom Snijders and Stijn van Deursen, ‘The Road Not Taken—the CJEU Sheds Light on the Role of Fundamental Rights in the European Copyright Framework—A Case Note on the Pelham, Spiegel Online and Funke Medien Decisions’ (2019) IIC, . 30  Svea hovrätt. Patent- och marknadsöverdomstolen [Swedish Patents and Market Court of Appeal] Telia Sverige AB v Aktiebolaget Svensk Filmindustri and others [6 February 2019] case no. PMÖ 9945-18, 10 (Swe.). 
See also, for a translation from Swedish, Nedim Malovic, ‘Swedish Patents and Market Court of Appeal Finds Request for Blocking Injunction Against ISP Disproportionate’ (IPKat, 8 February 2019) . 31 See Ahmet Yildirim v Turkey App. no. 3111/10 (ECtHR, 18 December 2012); and Cengiz and Others v Turkey App. nos 48226/10 and 14027/11 (ECtHR, 1 December 2015). 32  Akdeniz v Turkey (n. 15) para. 27; Cengiz and Others v Turkey (n. 31) para. 50 (emphasis added). 33  Cengiz and Others v Turkey (n. 31) para. 49. 34  ibid. para. 50.


locus standi was granted to the owner of the website blocked in the context of judicial proceedings unrelated to the applicant's site. By contrast, in the only copyright case on website blocking decided by the Strasbourg Court so far, the ECtHR found the applicant to lack standing because, inter alia, the use concerned was qualified as passive.35 In substance, however, the approaches of the CJEU and the ECtHR to the rights of users might not be that different. The decision on victim status was linked by the ECtHR to the assessment on the merits—an approach that essentially boils down to a proportionality review akin to that of the CJEU when it weighs the right to property of copyright holders against the freedom of information of users.36

1.2  Collateral Effects of Blocking: The Risk of Overblocking

Collateral effects of blocking constitute, alongside the manner of site usage, an important factor to be taken into account under the freedom of expression framework. The CJEU stressed in Telekabel that the blocking should be 'strictly targeted',37 meaning that 'the measures adopted by the internet service provider . . . must serve to bring an end to a third party's infringement of copyright or of a related right but without thereby affecting internet users who are using the provider's services in order to lawfully access information'.38 The CJEU thereby confirmed the principle first set down by the ECtHR in Yildirim and further reiterated in Akdeniz. In accordance with this principle, 'any measure blocking access to a website [has] to be part of a particularly strict legal framework ensuring both tight control over the scope of the ban and effective judicial review to prevent possible abuse, because it [can] have significant effects of "collateral censorship" '.39 Nevertheless, this does not mean that any collateral 'overblocking' of legitimate content precludes an injunction. In the majority of cases, it would suffice that a substantial proportion of the website was infringing, despite certain pieces of legitimate content also being affected. It is for this reason that the blocking of Newzbin2, for example, was found justified in the UK case of Twentieth Century Fox v BT,40 even after admitting that that measure potentially prevented non-infringing uses, which were, however, 35 See Akdeniz v Turkey (n. 15) para. 27. 36 On the proportionality review carried out by the ECtHR in copyright cases, see Geiger and Izyumenko (n. 15) 37 ff, and, comparing approaches of the ECtHR and CJEU to balancing copyright and freedom of expression, see Elena Izyumenko, 'The Freedom of Expression Contours of Copyright in the Digital Era: A European Perspective' (2016) 19(3)–(4) J. of World Intellectual Property 115. 
More generally on the principle of proportionality, see Jonas Christoffersen, ‘Human Rights and Balancing: The Principle of Proportionality’ in Geiger (ed) (n. 6) 19. 37  C-314/12 (n. 7) para. 56. 38  ibid. (emphasis added). See also C-484/14 Tobias Mc Fadden v Sony Music Entertainment Germany GmbH [2016] ECLI:EU:C:2016:689, para. 93. 39  Akdeniz v Turkey (n. 15) para. 28 citing Ahmet Yildirim v Turkey (n. 31) paras 64–6 (the translation from French draws on the Legal Summary of the case prepared by the Registry of the ECtHR, see Information Note on the Court’s case law no. 173, April 2014). 40 See Twentieth Century Fox Film Corp. & Others v British Telecommunications Plc [2011] EWHC 1981 (Ch) (UK).


minimal.41 In the same vein, the German Federal Supreme Court of Justice has held that 'the block . . . cannot only be accepted if solely unlawful information is made available on the website'.42 According to this court, it is 'necessary, in the course of the balancing exercise, to look not at an absolute amount of legitimate content on the respective site, but at the overall ratio of lawful and unlawful content and whether the former constitutes a non-significant amount in comparison with the latter'.43 In Denmark, the blocking of the online shop Interior Addict (on which infringing replica products were sold) was allowed for similar reasons.44 In Norway, the blocking of seven file-sharing sites, the estimated amount of infringing content on which varied from 75 to 100 per cent, was authorized by the Oslo District Court.45 Swedish courts likewise place importance on the need to assess, '[a]s part of the proportionality evaluation, the extent to which copyright-protected material is made available through the relevant websites and domain names and how large a share of this material is, regard being had to all the content available on the platform.'46 Where the blocking of a website with substantial legitimate content is at stake, the national courts are likely to require a more targeted form of order.47 Thus, in Italy, the Court of Appeals of Rome overturned an order requiring local ISPs to block access to the video-streaming platform Filmakerz.org in its entirety, considering that the order was too broad.48 According to the Court, the partial blocking of specific URLs was to be preferred in that event over the blocking of an entire site.49 Likewise, the Court of Milan rejected the implementation of a 'blank' blocking injunction which concerned not only the infringing website but also any potential top-level domain or any other platform which could be related to an infringing website.50 41  ibid. 
[185]–[186]. 42  Bundesgerichtshof [Supreme Court] (BGH) [26 November 2015] I ZR 3/14, ECLI:DE:BGH:2015:26 1115UIZR3.14.0, para. 44 (Ger.) (authors’ translation from German). See also, to the same effect, BGH [26 November 2015] I ZR 174/14, DE:BGH:2015:261115UIZR174.14.0, para. 55 (Ger.). 43  ibid. (emphasis added). 44  See Maritime and Commercial Court in Copenhagen Fritz Hansen A/S and Others v Telia Danmark [11 December 2014] no. A-38-14 (Den.). 45  See Oslo Tingrett [Oslo District Court] Warner Bros Entertainment Norge AS and others v Telenor Norge AS and others [1 September 2015] no. 15-067093TVI-OTIR/05 (Nor.). For comment, see Josef Ohlsson Collentine, ‘Norway Blocks the Pirate Bay and Other Sites’ (Pirate Times, 3 September 2015) . 46  Telia Sverige AB (n. 30) 11 (authors’ translation from Swedish). See also, mutatis mutandis, Swedish Patent and Market Court of Appeal Universal Music Aktiebolag and Others v B2 Bredband AB [13 February 2017] case no. PMT 11706-15 (Swe.); Stockholms Tingsrätt/Patent- och marknadsdomstolen [Stockholm Patent and Market Court] Svensk Filmindustri and Others v Telia Sverige [25 October 2018] (Swe.). 47  See Savola (n. 26) 126 para. 72. 48  Ernesto Van der Sar, ‘Court Orders ISPs to Unblock “Pirate” Site’ (TorrentFreak, 3 April 2014) . 49 ibid. 50  See Tribunale di Milano [Milan Court of First Instance] Mediaset Premium v Orlando and Others [27 July 2016] RG no. 31892/2016 (It.). See also Eleonora Rosati, ‘Milan Court of First Instance Rejects Application for “Blank” Blocking Order’ (IPKat, 31 August 2016) .



1.3  The 'Value' of Content

The next factor which needs to be taken into account in the balancing process from a freedom of expression perspective is the general public interest in the information affected by the blocking measure.51 Unlike the CJEU, which was not unduly concerned with this criterion in Telekabel, the ECtHR provided some guidance on its potential implications for copyright enforcement. The Strasbourg Court pointed out in Akdeniz that 'the applicant had not alleged that the websites in question disseminated information which could present a specific interest for him or that the blocking of access had had the effect of depriving him of a major source of communication'.52 It thus followed that 'the fact that the applicant had been deprived of access to those websites had not prevented him from taking part in a debate on a matter of general interest'.53 What follows from this statement is that blocking can be found contrary to the Convention's free speech provision in other cases, where a greater public interest is at stake. 
The ECtHR singles out, in particular, political speech and speech on matters of general public interest as prioritized fields of freedom of expression protection.54 To give just a few examples, in Cengiz and Yildirim the transmission of academic materials (thus serving educational and teaching purposes) constituted a factor favouring a finding for freedom of expression.55 By contrast, the use of 'pirate' websites to access music, film, and other analogous content, when other means of access were available without entailing a breach of copyright, was ruled by the ECtHR to be outside the sphere of general public interest.56 Alongside education, criticism and news reporting also rank highly on the balancing scale for freedom of expression.57 As applied to other areas of IP, such as trade marks and designs, matters of public interest were ruled to include, in addition, artistic use of a famous fashion brand to 51  On the nature of the information at stake and its importance during the balancing exercise carried out by the courts, see Christophe Geiger and Elena Izyumenko, 'Copyright on the Human Rights' Trial: Redefining the Boundaries of Exclusivity through Freedom of Expression' (2014) 45(3) IIC 316, 325 ff. 52  Akdeniz v Turkey (n. 15) para. 26 (emphasis added). 53  ibid. (emphasis added). 54 See Sürek v Turkey (No. 1) [GC] App. no. 26682/95 (ECtHR, 8 July 1999) para. 61; Lindon, Otchakovsky-Laurens and July v France [GC] App. nos 21279/02 and 36448/02 (ECtHR, 22 October 2007) para. 46; Axel Springer AG v Germany [GC] App. no. 39954/08 (ECtHR, 7 February 2012) para. 90; Morice v France [GC] App. no. 29369/10 (ECtHR, 23 April 2015) para. 125; and Bédat v Switzerland [GC] App. no. 56925/08 (ECtHR, 29 March 2016) para. 49. See Geiger and Izyumenko (n. 51) 325 ff. 55 See Cengiz and Others v Turkey (n. 31) para. 50; Yildirim v Turkey (n. 31) para. 51. 56 See Neij and Sunde Kolmisoppi v Sweden (dec) App. no. 
40397/12 (ECtHR, 19 February 2013); Akdeniz v Turkey (n. 15) paras 25 and 26. 57 See Pedersen and Baadsgaard v Denmark [GC] App. no. 49017/99 (ECtHR, 17 December 2004) para. 71; De Haes and Gijsels v Belgium App. no. 19983/92 (ECtHR, 24 February 1997) para. 37.


criticize the culture of consumerism,58 parody of a well-known cigarette brand to draw attention to the health risks of smoking,59 parodic uses of companies' logos to highlight risks to the environment resulting from those companies' activities or products,60 criticism of the social policy of those players in the economy,61 or simply playing ironically with the companies' trade marks and advertising campaigns in order to advance the user's own marketing interests.62

1.4  Alternative Means of Accessing Information

Another factor under the freedom of expression framework is the availability of alternative means of accessing information. In Cengiz, in particular, a violation of Article 10 ECHR was established where the blocking rendered inaccessible a website with information of specific interest that was not otherwise easily available and for which there was no equivalent.63 Analogously, by noting in Akdeniz that many alternative (legal) means of access to music were available to the applicant,64 the Strasbourg Court left open the possibility of refusing a blocking injunction in a situation where (more and better?) legitimate offerings are lacking. As was highlighted in the UK Ofcom report, 'the relative attractiveness of legal alternatives' impacts '[t]he extent to which consumers and site operators will seek to circumvent blocking'.65 Accordingly, blocking has to 'form . . . part of a broader package of measures to tackle infringement'.66

58  See Rechtbank ’s-Gravenhage [District Court of The Hague] ‘Darfurnica [4 May 2011] no. 389526/ KG ZA 11-294 (Neth.) (translation from Dutch by Kennedy Van der Laan at ). 59  Cour de cassation [French Supreme Court] Comité National contre les Maladies Respiratoires et la Tuberculose v Societe JT International GmbH, no. 1601, 19 October 2006 (Fr.). 60  See Cour de cassation Sté Esso v Greenpeace France [8 April 2008] no. 06-10961 (Fr.); Cour de cassation Associations Greenpeace France et Greenpeace New-Zealand v la Société Areva [8 April 2008] no. 07-11251 (Fr.). 61  See Cour d’Appel de Paris, Assoc. Le Réseau Voltaire pour la liberté d’expression v Sté Gervais Danone [30 April 2003] (Fr.). 62  See BGH ‘Lila Postkarte’ [3 February 2005] no. I ZR 159/02. See on most of these cases Christophe Geiger, ‘Trade Marks and Freedom of Expression—The Proportionality of Criticism’ (2007) 38(3) IIC 317. See also Martin Senftleben, ‘Free Signs and Free Use: How to Offer Room for Freedom of Expression Within the Trademark System’ in Geiger (ed.) (n. 6) 354. 63 See Cengiz and Others v Turkey (n. 31) paras 51–2. 64 See Akdeniz v Turkey (n. 15) para. 25. 65  Ofcom, ‘“Site Blocking” to Reduce Online Copyright Infringement: A Review of Sections 17 and 18 of the Digital Economy Act’ (27 May 2010) 6 . See also Cormac Callanan and others, Internet Blocking: Balancing Cybercrime Responses in Democratic Societies (Aconite/Open Society Institute, October 2009) . 66  Ofcom (n. 65) 5.


Viewed in this light, the alternative-means factor is capable of remedying the current 'lack of structural incentives for improving access'67 by partially shifting the onus to the rightholders. The latter have in the past often preferred deterrence models of enforcement to the introduction of more attractive alternatives in the marketplace. In this vein, Fred von Lohmann has argued that in the context of proceedings relating to Article 8(3) of the Information Society Directive there is only an accuser demanding reasonable measures from a defendant, whereas the most effective measure might be something that the accuser himself is capable of bringing to the marketplace.68 Indeed, while 'the court in the context of Article 8(3) lacks the ability to weigh all the interests and improvise a complete set of remedies',69 the emphasis on freedom of information, rights of users, and—more specifically—on the 'alternative means' factor, opens the way to a broader perspective.

2.  A Freedom to Conduct a Business Perspective on Website Blocking: The (Rising) Role of the ISPs in Digital Copyright Enforcement

2.1  Costs and Complexity of Blocking

As noted in Telekabel, '[t]he freedom to conduct a business includes, inter alia, the right for any business to be able to freely use, within the limits of its liability for its own acts, the economic, technical and financial resources available to it.'70 This is consonant with Article 3(1) of the Enforcement Directive, which requires that 'the measures, procedures and remedies necessary to ensure the enforcement of the intellectual property rights . . .  [are not] unnecessarily complicated or costly'. The costs and complexity of blocking were one of the focal points of the proportionality assessment in Telekabel. There, it was admitted that the outcome prohibition 'constrains its addressee in a manner which restricts the free use of the resources at his disposal because it obliges him to take measures which may represent a significant cost for him, 67  Rebecca Giblin, 'When ISPs Become Copyright Police' (2014) 18(2) IEEE Internet Computing 84, 85. 68  See Fred von Lohmann, 'Filtering away Infringement: Copyright, Injunctions and the Role of ISPs', Information Influx: the 25th anniversary International Conference of the Institute for Information Law, Amsterdam (3 July 2014). 69 ibid. 70  C-314/12 (n. 7) para. 49. More generally on freedom to conduct a business in the IP context, see Gustavo Ghidini and Andrea Stazi, 'Freedom to Conduct a Business, Competition and Intellectual Property' in Geiger (ed.) (n. 6) 410.


have a considerable impact on the organisation of his activities or require difficult and complex technical solutions'.71 These constraints did not, nevertheless, in the Court's reasoning, 'seem to infringe the very substance of the freedom of an internet service provider . . . to conduct a business'.72 This was for two main reasons. First, the outcome prohibition '[left] its addressee to determine the specific measures to be taken in order to achieve the result sought, with the result that he [could] choose to put in place measures which are best adapted to the resources and abilities available to him'.73 Secondly, the Court's conclusion was couched in the particular procedural form of the order at issue, which allowed the ISP to avoid liability by showing that it had taken all reasonable steps to comply with the injunction. That 'possibility of exoneration', according to the Court, 'clearly [had] the effect that the addressee of the injunction [would] not be required to make unbearable sacrifices'.74 The Court thus appeared to suggest that result-tailored injunctions encroach less on the freedom to conduct a business than their specific counterparts, as long as the former leave ISPs free to make their enforcement choices. Some commentators have disputed this conclusion. It was observed in particular that 'ISPs typically want specific conditions to be stated', because, if not, they risk penalties for non-compliance.75 Moreover, under the possibility of exoneration envisaged in the Austrian outcome prohibition, ISPs taking 'all reasonable measures' also have to respect users' information rights—a situation that leaves an intermediary navigating between possible liability for breach of the order and a potential dispute with its customers on freedom of expression grounds. 
All in all, despite the particular conclusion reached by the Court in Telekabel, the costs and complexity of blocking were highlighted as an important factor in the proportionality evaluation. In the Sabam cases,76 this criterion even led the CJEU to outlaw an injunction that required an ISP to install, as a preventive measure and exclusively at its own expense, a permanent system for filtering all electronic communications passing via its services.77 It is notable that both the Sabam and Telekabel judgments seem to assume that an ISP is the one to carry the costs of implementation.78 71  C-314/12 (n. 7) para. 50. 72  ibid. para. 51 (emphasis added). 73  ibid. para. 52. 74  ibid. para. 53. 75  Pekka Savola, ‘Website Blocking in Copyright Injunctions: A Further Perspective’ (The 1709 Blog, 28 March 2014) (emphasis added). See also in this sense, Husovec (n. 26) 632; Christina Angelopoulos, ‘CJEU in UPC Telekabel Wien: A Totally Legal Court Order . . . To Do the Impossible’ (Kluwer Copyright Blog, 3 April 2014) ; Steven James, ‘Digesting Lush v Amazon and UPC Telekabel: Are We Asking Too Much of Online Intermediaries?’ (2014) 25(5) Entertainment L. Rev. 175, 177. 76  See C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771 and C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85. 77  See C-70/10 (n. 76) para. 48; and C-360/10 (n. 76) para. 46. See also, in the trade mark context, C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474, para. 139. 78  UPC Telekabel Wien (n. 7) para. 50; C-70/10 (n. 76) para. 48; C-360/10 (n. 76) para. 46.


This is indeed the position taken in France, where the country's Supreme Court held that non-liable intermediaries should bear the costs of blocking injunctions, 'even if such measures may represent a significant cost for them',79 simply because such intermediaries are better placed to fight copyright infringement.80 In other words, French courts operate on the assumption of a full allocation of costs to intermediaries, unless the latter prove, on the circumstances of each particular case, that 'the execution of the ordered measures would require unbearable sacrifices from them' or 'would jeopardize their economic viability'.81 Interestingly, such a default allocation of costs to ISPs was rejected in the UK, albeit in the context of a blocking order targeting online trade mark infringement. According to the June 2018 ruling of the UK Supreme Court, '[a]s a matter of English law, the ordinary principle is that unless there are good reasons for a different order an innocent intermediary is entitled to be indemnified by the rights-holder against the costs of complying with a website-blocking order.'82 The court stressed, however, that this position was limited to the UK, noting that 'the incidence of costs, whether of compliance or of the litigation, is a matter for national law'.83 Thus, each jurisdiction is free to decide on the allocation of costs, in conformity with the Information Society Directive's principle that '[t]he conditions and modalities relating to . . . injunctions should be left to the national law of the Member States.'84 Such allocation should not, however, lead to disproportionate burdens on the intermediaries' freedom to conduct a business. Some clarification of how to assess, in practice, the reasonableness of the allocation of costs to the ISPs was provided by the Regional Court of Munich. 
In its decision of February 2018, the court expressed the opinion that 'the costs must be looked at in relation to the total revenues of the [access provider]'.85 Notably, the costs of the first-time installation of a block are not taken into the equation when the implementation of one particular block is at issue.86 Rather, they should be looked at within the broader context 79  Cour de cassation [6 July 2017] judgment no. 909, ECLI:FR:CCASS:2017:C100909 (Fr.) (authors' translation from French). 80 ibid. 81 ibid. 82  Cartier International AG and others v British Telecommunications Plc and another [2018] UKSC 28 [31] (UK) (hereafter Cartier 2018). The Supreme Court thereby overturned the position taken at first instance and on appeal, in accordance with which the costs are to be allocated with the ISPs, who are then in a position to pass on such costs to their customers in the form of higher subscription fees. See Cartier International AG and others v British Sky Broadcasting Ltd and others [2014] EWHC 3354 (Ch) [252]; confirmed on appeal: [2016] EWCA Civ 658 (hereafter Cartier 2016). 83  Cartier 2018 (n. 82) [28]. 84  Directive 2001/29/EC (n. 1) recital 59. 85  Landgericht [District Court] (LG) Munich, 'Fack Ju Göhte 3 via Kinox.to' [1 February 2018] case no. 7 O 17752/17, para. 6 (Ger.) (English translation of the original German judgment available at ). For comment on this case, see Benjamin Lotz, 'First Blocking Order in Germany to Prevent Access to Copyright Infringing Website' (Kluwer Copyright Blog, 10 May 2018) . 86  'Fack Ju Göhte 3' (n. 85) para. 6.


of the overall income of the ISP. In the court's opinion, '[o]therwise, any legal action against the ISP by a first rightholder would always fail due to the fact that the [ISP] could hold the initial set up costs against them.'87

2.2  Availability of Reasonable Alternatives (Subsidiarity)

Another important factor to take into account when it comes to balancing in website-blocking cases is the proportionality of going after an internet access provider instead of trying to put an end to the infringement at its source, by suing, for example, direct infringers or website operators. The Information Society Directive seemed to have already answered that question back in 2001 when it stated (in recital 59) that '[i]n the digital environment, in particular, the services of intermediaries may increasingly be used by third parties for infringing activities. In many cases such intermediaries are best placed to bring such infringing activities to an end.'88 The Information Society Directive left open, however, whether rightholders are first required to try all other reasonable remedies before having recourse to the internet access provider. Neither was this issue addressed by the Telekabel court, which only highlighted that the blocking was justified in the light of the objective of the Information Society Directive, 'as shown in particular by Recital 9 thereof, which is to guarantee rightholders a high level of protection'.89 The question of reasonable alternatives is nevertheless important, as the proportionality principle of human rights law90 requires that any restriction of a fundamental right (here, the ISPs' freedom to conduct a business) be the least intrusive measure that would effectively protect the counterbalanced interest (i.e. the rightholders' property). Although both the Information Society Directive and the Enforcement Directive envisage the possibility of applying for injunctions '[w]ithout prejudice to any other measures, procedures and remedies available',91 certain national courts have taken the subsidiarity criterion on board in their assessments of the proportionality of blocking. 
The judgments by the German Federal Supreme Court of Justice on two cases brought by the German music industry and the German collecting society GEMA are illustrative in this respect. In those cases, the court refused to order the local internet access provider to block its customers’ access to the file-sharing website goldesel.to.92 This was 87 ibid. 88  Directive 2001/29/EC (n. 1) recital 59 (emphasis added). 89  C-314/12 (n. 7) para. 31 (emphasis added). 90  For a discussion of this principle, see e.g. Christoffersen (n. 36). 91  Directive 2004/48/EC (n. 2) recital 23; and, analogously Directive 2001/29/EC (n. 1) recital 59. 92  See BGH I ZR 3/14 and I ZR 174/14 (n. 42). For a comment on these cases, see Martin Schaefer, ‘ISP Liability Finally Achieved in Germany’ (Kluwer Copyright Blog, 22 December 2015) ; Martin Husovec, ‘BGH Accepts Website Blocking Injunctions’ (Huťko’s Technology Law Blog, 29 November 2015) .


because the claimants failed to show that they had first tried, with due diligence, to sue the primary or secondary infringers.93 According to the court, blocking injunctions against the internet access provider 'can only be considered if the rightholder has initially made reasonable efforts to take action against those parties who committed the infringement themselves (such as the owner of the website) or who have contributed to the infringement by the provision of services (such as the hosting provider). Only when recourse to those parties fails or lacks any prospect of success, thereby creating a lacuna in legal protection, can recourse to the access provider be considered reasonable. Operators and host providers are much closer to the infringement than those who only generally provide access to the Internet.'94 Consequently, 'when determining the priority of those against whom the claim should be brought, the rightholder has, to a reasonable extent, to make enquiries'.95 In the court's view, those enquiries could comprise 'hiring a detective agency or having recourse to the state investigation authorities'.96 The relevance of the subsidiarity principle has also been confirmed in a decision of the Regional Court of Munich from February 2018.97 The Advocate General's Opinion in Telekabel likewise pointed out that 'the originator must, as a matter of priority, so far as possible, claim directly against the operators of the illegal website or their ISP'.98 Only in situations where such a claim is not possible can recourse be made to the ISP of end-users. A different approach was adopted, however, by the High Court of Paris, when it ruled that, 'if the text of Article L.336-2 of the Code of Intellectual Property [the national implementation of Art. 
8(3) of the Information Society Directive] is addressed to anyone who can contribute to remedy the violations of protected rights, no legal provision requires calling in the same instance the hosting providers and no principle of subsidiarity is envisaged’.99 The same approach is followed in Belgium, where the Antwerp Court of Appeals likewise struck out the subsidiarity factor considering that it was not required by Article 8(3) of the Information Society Directive.100 As Pekka Savola reports, targeting the direct infringers was also not required on an application of national law by the Helsinki Court of Appeals.101

93  BGH I ZR 3/14 (n. 42) paras 68–75; BGH I ZR 174/14 (n. 42) paras 81–7. 94  BGH, ‘Press Release no. 194/2015 on cases I ZR 3/14 and I ZR 174/14’ (authors’ translation from German). 95 ibid. 96 ibid. 97 See ‘Fack Ju Göhte 3’ (n. 85) para. 3. 98 C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH and Wega Filmproduktionsgesellschaft mbH [2013] ECLI:EU:C:2013:781, Opinion of AG Villalón, para. 107. 99  TGI Paris APC and Others v Auchan Telecom and Others [28 November 2013] no. 11/60013 (Fr.) (authors’ translation from French) (emphasis added). 100  See Hof van Beroep Antwerpen [Antwerp Court of Appeals] VZW Belgian Anti-Piracy Federation v NV Telenet [26 September 2011] no. 2011/8314 (Bel.) cited in Savola (n. 26) para. 59. 101  See Helsinki Court of Appeals Elisa [15 June 2012] no. S 11/3097 (Fin.) cited in Savola (n. 26) para. 55.


BLOCKING ORDERS: ASSESSING TENSIONS WITH HUMAN RIGHTS   581

3.  A Right to Property Perspective on Website Blocking: Effectiveness of the Blocking

In Telekabel, the CJEU admitted that it was possible that ‘a means of putting a complete end to the infringements of the intellectual property right [did] not exist or [was] not in practice achievable, as a result of which some measures taken might be capable of being circumvented in one way or another’.102 Nonetheless, this did not preclude every blocking order, which, in view of the objective of guaranteeing rightholders a high level of protection, only had to be ‘sufficiently effective to ensure genuine protection of the fundamental right at issue’.103 According to the CJEU, this meant that an injunction ‘must have the effect of preventing unauthorised access to the protected subject-matter or, at least, of making it difficult to achieve and of seriously discouraging internet users . . . from accessing the subject-matter made available to them in breach of . . . [the] fundamental right [to intellectual property]’.104 The issue of effectiveness was, however, addressed somewhat differently in the Netherlands, where the Court of Appeal of The Hague refused (in a decision rendered two months prior to the CJEU judgment in Telekabel) to issue an order requiring two major Dutch ISPs to block access to The Pirate Bay.105 The court reached this conclusion on consideration of the evidence (notably, an empirical study from the Institute for Information Law (IViR) of the University of Amsterdam) suggesting that ‘blocking access to file-sharing platforms seem relatively ineffective to reduce [the overall level of] unauthorised file-sharing’.106 This was due, first, to the ease of circumvention and, secondly, to the availability of alternative torrent sites. Consequently, the blocking did not achieve its purpose of protecting IP rights and was therefore disproportionate with regard to the ISP’s freedom to conduct a business.107 This conclusion was not invalidated by the contention made by the claimant (the Dutch collecting society BREIN in that case) that the blocking would not have cost the ISP anything.108 According to the court, the blocking was, after all, an interference with the ISP’s freedom to act on its own

102  C-314/12 (n. 7) para. 60. 103  ibid. para. 62. This was also the position of AG Villalón. See C-314/12, Opinion of AG Villalón (n. 98) paras 100–1. 104  C-314/12 (n. 7) para. 60. See also C-484/14 (n. 38) para. 99 (finding that requiring the provider of an open wi-fi to password-protect its internet connection in order to prevent copyright infringement was ‘necessary in order to ensure the effective protection of the fundamental right to protection of intellectual property’) (emphasis added). 105  See Gerechtshof Den Haag [Court of Appeal of The Hague] Ziggo and XS4ALL v BREIN [28 January 2014] no. 200.105.418/01 (Neth.). 106  Joost Poort and others, ‘Baywatch: Two Approaches to Measure the Effects of Blocking Access to The Pirate Bay’ (2014) Telecommunications Policy 9 (emphasis added). 107  Ziggo (n. 105) para. 5.22. 108 ibid.


discretion, which, in view of the blocking’s inefficacy, could not be justified by the need to protect intellectual property.109 All the more so, the court reasoned, since the ISP was not itself committing an infringement.110 Following the CJEU judgment in Telekabel, however, the Dutch Supreme Court had to reject the Court of Appeal’s evaluation of effectiveness.111 It held, with references to Telekabel, that the Court of Appeal had erred in assessing the blocking’s efficacy on the basis of the measure’s overall effect on illegal file-sharing on the internet.112 Thereby, according to the Supreme Court, the lower court had failed to recognize that even if certain measures did not lead to a complete cessation of all copyright infringements, they could still be compatible with the requirement of proportionality in Article 52(1) of the EU Charter by at least making unauthorized access difficult or seriously discouraging it.113 The ‘reasonable effectiveness’ standard was likewise followed in a number of UK cases, including some decided prior to Telekabel,114 and in France, where considerations of reasonable effectiveness informed the decision of the High Court of Paris ordering four of the country’s largest ISPs to block access to The Pirate Bay.115 As stated in that judgment, ‘[w]hile it is true that any blocking measure can be circumvented by some users, it is not established, first, that the vast majority of Internet users who are accustomed to free communications and to a number of Internet services, have a strong will to participate in global piracy on a large scale. Second, the requested measures target the largest number of users, who do not necessarily have the time and skills to discover the means that specialists find and retain in memory’.116 The court cited Telekabel, maintaining that the impossibility of ensuring complete enforcement of the orders was not an obstacle to blocking and did not have to result in a lack of recognition of the rights of IP holders by the courts.117 This was also the position taken by some courts in Germany, where the Regional Court of Munich found ‘unremarkable’ the fact that ‘ultimately all protection options can be circumvented with a little specialist knowledge’.118 According to the court, ‘it cannot be expected that legal action against an internet access provider finally prevents the dissemination of copyright infringing content on the internet’.119 It is sufficient for the rightholders to demonstrate that such action can attain at least some efficacy.120

109 ibid. 110 ibid. 111  Hoge Raad [Dutch Supreme Court] Ziggo and XS4ALL v BREIN [13 November 2015] no. 14/02399 (Neth.). 112  ibid. para. 4.4.2. 113  ibid. with references to C-314/12 (n. 7) para. 62. 114  Twentieth Century Fox (n. 40) [198]; Cartier 2016 (n. 82) [64]. 115  TGI Paris SCPP v Orange, Free, SFR et Bouygues Télécom [4 December 2014] no. 14/03236 (Fr.). 116  ibid. 14 (authors’ translation from French). 117 ibid. 118  ‘Fack Ju Göhte 3’ (n. 85) para. 4. 119 ibid. 120  In the circumstances of this particular case, e.g., the rightholder ‘has shown credibly that in countries in which access to comparable sites has been blocked—also by foreign sister companies of the Respondent—the illegal download has considerably reduced’. ibid.


It is worth noting, however, that the ‘reasonable effectiveness’ justification for blocking has been questioned by some commentators.121 All in all, in the absence of evidence-based decisions by the European judiciary, the ‘reasonable effectiveness’ considerations advanced by the CJEU in Telekabel might still be called into question.

4.  Recent EU Copyright Reform and its Effects on Website Blocking and Fundamental Rights

As the previous considerations demonstrate, a general trend can be observed in European law (with some deviations) to shift enforcement burdens onto intermediaries. In that sense, Telekabel contributes to the growing number of laws and judicial decisions that put the burden of IP enforcement, previously allocated primarily to rightholders, on ISPs. The Directive adopted by the European Parliament in 2019 on copyright in the Digital Single Market contributes to this development.122 Article 17 of the new Directive (former Art. 13) states that ‘Member States shall provide that an online content-sharing service provider performs an act of communication to the public or an act of making available to the public for the purposes of this Directive when it gives the public access to copyright-protected works or other protected subject matter uploaded by its users’.123 This implies that ISPs will be liable for infringing content made available through their services unless they can demonstrate that they have: (a) used best efforts to obtain authorization from rightholders; (b) used best efforts to ensure the unavailability of the infringing works; and (c) acted expeditiously, on receiving notice from the rightholders, to disable access to, or to remove from, their websites the notified works or other subject matter, and used best efforts to prevent their future uploading.124 It is therefore expected that ISPs will adopt filtering measures, such as automated upload filters, in order to avoid liability for the activity of their users.125 Although this requirement concerns content-hosting platforms, and not the internet access providers that were the focus of Telekabel, it adds to the general phenomenon of shifting IP enforcement competences to intermediaries. The main concern of

121  See e.g. Savola (n. 26) para. 75. 122  See Directive 2019/790/EU of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92. 123  ibid. Art. 17(1). 124  ibid. Art. 17(4). 125 See e.g. Martin Senftleben, ‘Bermuda Triangle—Licensing, Filtering and Privileging User-Generated Content Under the New Directive on Copyright in the Digital Single Market’, SSRN Research Paper no. 3367219 (2019) 5 (noting that ‘[a]lthough the provision contains neutral terms to describe this alternative scenario, there can be little doubt in which way the “unavailability of specific works and other subject matter” can be achieved. As today’s internet users upload a myriad of literary and artistic works every day, the employment of automated filtering tools to ensure that unauthorized protected content does not populate UGC platforms seems inescapable’).


those opposing this tendency usually deals with the argument that once intermediaries are asked to do more in their new role as active IP enforcers, ‘it may be hard to identify a plausible cut-off point’126 at which ‘impartial enforcement’ would not be at risk.127 Martin Senftleben notes, for example, that Article 17 of the new Directive, among others, ‘[is] likely to pose substantial difficulties in practice and make far-reaching inroads into the freedom of expression and information which the current configuration of platform liability allows’.128 Concerns have also been raised with regard to the possible incompatibility of Article 17 with the e-Commerce Directive’s ‘safe harbour’ provision for hosting providers and its prohibition on general monitoring.129 Article 17 poses further risks for the implementation of certain exceptions and limitations to copyright. In recognition of this risk, the EU legislator incorporated a paragraph envisaging the need to ensure that users in each Member State are able to rely, when uploading and making available content generated by them on online content-sharing services, on the exceptions for the purposes of quotation, criticism, and review, or caricature, parody, and pastiche.130 However, these built-in safeguards for copyright exceptions and freedom of expression in the EU may prove of little practical use, as the content filters arguably will not be able to recognize what is or is not permitted.131

5. Conclusions

In conclusion, it appears that, even in the light of the guidance provided by the European Courts, national judges will continue to have a difficult time identifying Europe’s standards applicable in the field of blocking orders and their collateral effects on fundamental rights. This is all the more so considering the additional enforcement obligations that the recent EU copyright reform imposes on online intermediaries. Nevertheless, existing

126  Maurice Schellekens, ‘Liability of Internet Intermediaries: A Slippery Slope?’ (2011) 8(2) SCRIPTed 154, 154. 127  Niva Elkin-Koren, ‘After Twenty Years: Revisiting Copyright Liability of Online Intermediaries’ in Susy Frankel and Daniel Gervais (eds), The Evolution and Equilibrium of Copyright in the Digital Age (CUP 2014) 48. 128  Senftleben (n. 125). 129  See European Copyright Society, ‘General Opinion on the EU Copyright Reform Package’ (24 January 2017) 7 . See also, suggesting that Europe is currently witnessing the death of ‘no-monitoring obligations’, Giancarlo Frosio, ‘The Death of “No Monitoring Obligations”: A Story of Untameable Monsters’ (2017) 8(3) JIPITEC 212. 130  Directive 2019/790/EU (n. 122) Art. 17(7). 131  The Art. 17 incorporation of the freedom of expression safeguards in the text of copyright legislation somewhat mirrors the CJEU’s attempts in Telekabel to find a balance with fundamental rights, albeit on the level of judicial lawmaking. For the suggestion of a workable solution on how to incorporate and internalize the currently largely external freedom of expression safeguards in European copyright law, see Christophe Geiger and Elena Izyumenko, ‘Towards a European “Fair Use” Grounded in Freedom of Expression’ (2019) 35(1) Am. U. Int’l L. Rev. 1.


national and European Courts’ practice still allows identification of some criteria that can serve to untangle this complex puzzle. In this chapter, those criteria were summarized as follows (with respect to the right to freedom of expression): (1) the manner in which the website is used (active or passive); (2) the adverse effects of blocking on legitimate content; (3) the general public interest in information; and (4) the availability of alternative means of accessing the information. Insofar as the (EU-specific) freedom to conduct a business is concerned, the factors that can tilt the scales of the proportionality evaluation encompass: (1) the costs and complexity of blocking; and (2) subsidiarity. Finally, the right to property perspective on website blocking dictates a required degree of efficacy of the blocking. On the whole, it is notable, on the one hand, that the importance of proportionality analysis in balancing enforcement and competing fundamental rights has been confirmed.132 The CJEU was even prepared to go as far as to mandate that ‘user rights’ be enforceable in court, thereby being arguably more observant of freedom of expression and information than was the ECtHR when it first ruled on copyright website blocking. On the other hand, the CJEU (unlike its Advocate General or the ECtHR) shifted a considerable part of the human-rights-sensitive enforcement choices onto the intermediaries, taking a rather delicate policy decision. The Luxembourg Court seems to consider that intermediaries cannot be entirely passive but must ‘do something’ to prevent infringement, without knowing exactly what and in which cases. Arguably, this is not an easy task, and it is difficult to blame the CJEU for trying to come up with some kind of solution to a very delicate issue with the legal tools it has at hand.
The same can largely be said of the ECtHR, which is not accustomed to resolving issues of copyright enforcement.133 However, this should be understood as an invitation to the legislator to take responsibility and define the rules of the game of the information society. Whether the EU legislator has succeeded in this task when passing the recent Directive on copyright in the Digital Single Market is very uncertain, as numerous questions are left open by the new provisions regarding the liability of intermediaries. Without doubt, several referrals to the CJEU will be needed to understand their full legal implications. Thus, it is foreseeable that courts will continue to play a central role in balancing the interests at stake in the future. 132  On the increasing influence of human and fundamental rights on the resolution of IP disputes, see Christophe Geiger, ‘“Constitutionalising” Intellectual Property Law? The Influence of Fundamental Rights on Intellectual Property in Europe’ (2006) 37(4) IIC 371; Christophe Geiger, ‘Reconceptualizing the Constitutional Dimension of Intellectual Property’ in Paul Torremans (ed.), Intellectual Property and Human Rights (3rd edn, Kluwer Law Int’l 2015), 115; Christophe Geiger, ‘Implementing Intellectual Property Provisions in Human Rights Instruments: Towards a New Social Contract for the Protection of Intangibles’ in Geiger (ed.) (n. 6) 661; Christophe Geiger, ‘“Fair Use” through Fundamental Rights: When Freedom of Artistic Expression allows Creative Appropriations and Opens up Statutory Copyright Limitations’ in Shyam Balganesh, Wee Loon Ng-Loy, and Haochen Sun (eds), The Cambridge Handbook of Copyright Limitations and Exceptions (CUP, forthcoming 2020). 133  The last few years have seen, however, a notable increase in the Strasbourg Court’s activity in this sphere: see further on this, Izyumenko (n. 36); Geiger and Izyumenko (n. 15); and Geiger and Izyumenko (n. 131).


Chapter 30

Administrative Enforcement of Copyright Infringement in Europe

Alessandro Cogo and Marco Ricolfi*

In the last few decades, the involvement of administrative bodies in the enforcement of copyright has expanded in several countries. However, administrative enforcement is by no means an absolute novelty; on the contrary, it has long been familiar in offline, analogue environments.1 There, its perimeter has usually remained quite limited, to take into account the fact that the enforcement of copyright as a rule involves conflicting private rights rather than a public interest. In recent years, however, the number of countries which have opted for the adoption of administrative measures has increased as a result of the remarkable difficulties that the enforcement of copyright faces in a digital environment.2 The drivers behind this development deserve closer scrutiny. To begin with, technological developments may have played a role. Indeed, in recent decades internet service

*  Sections 1 to 3 were written by Marco Ricolfi; Section 4 by Alessandro Cogo. 1  For exemplifications, see Yaniv Benhamou, ‘Website Blocking Injunctions under Swiss Law. From civil and administrative injunctions to criminal seizure or forfeiture’ (2017) Expert Focus 885, 888; Carmelita Camardi, ‘Inibitorie amministrative di attività’ [2012] AIDA 268, mentioning Art. 157 of Law No. 633 of 1941 (the Italian Copyright Law, hereafter ICL), to which Arts 171 ff and 174-quinquies(2) should also be added. 2  For an overview, see Maria Lillà Montagnani, Internet, contenuti illeciti e responsabilità degli intermediari (Egea 2018) 131 and Giancarlo Frosio, ‘Why Keep a Dog and Bark Yourself? From Intermediary Liability to Responsibility’ [2017] 25 IJLIT 1, 16. A broad comparative discussion, also encompassing China, is Bruno Tonoletti, ‘La tutela dei diritti di proprietà intellettuale fra giurisdizione ed amministrazione’ [2013] AIDA 335.

© Alessandro Cogo and Marco Ricolfi 2020.


providers (ISPs) were required in many jurisdictions to put in place filtering and blocking systems for different purposes, including enabling parental control of internet content accessed by minors or fighting phenomena such as child pornography.3 The availability of this tool may have suggested granting public administrations the power to require ISPs to extend the use of the newly acquired technological tools to also combat online copyright piracy. This explanation is incomplete, though, as it still leaves open the question of why some systems have considered courts’ intervention sufficient for the task, while others have preferred to open up a ‘double track’ and also enlist the public administrations, in spite of the difficulties the latter option may entail. As countering online copyright infringement may draw on a variety of tools, including soft law instruments,4 it stands to reason that the mix adopted in each case depends to some extent on the overall context. Thus, it has been suggested that resort to public administration powers may in some countries have been prompted by failures in self-regulation.5 Also, geopolitics may go a long way towards explaining choices in favour of administrative involvement. Diplomatic sources indicate that the US administration put considerable pressure on certain states to resort to administrative action where those states were perceived as laggards in combating copyright piracy.6 This is particularly true in the case of Spain, Italy, and Greece, which in fact are the European jurisdictions that in the last decade resorted to administrative enforcement of copyright.
A quite different story lies behind the adoption in France of the measure known as HADOPI, named after the administrative authority set up to implement it, the ‘Haute autorité pour la diffusion des oeuvres et la protection des droits sur Internet’.7

3  For a comprehensive discussion, see the chapter on content regulation in Milton Mueller, Networks and States. The Global Politics of Internet Governance (MIT Press 2010) 185 and the Swiss Institute of Comparative Law (SICL), ‘Comparative Study on Filtering, Blocking and Take-down of Illegal Content of the Internet’, study commissioned by the Council of Europe (20 December 2015) 3, 7, 13–15 . A clear outline of the reasons that led to the adoption of automated blocking measures by ISPs in these specific areas is in Cartier International AG and others v British Sky Broadcasting Ltd and others [2014] EWHC 3354 (Ch) (UK) (hereafter Cartier v BSkyB 2014). 4  For comprehensive reviews, see SICL, ‘Comparative Study’ (n. 3) 13, 17 (noting the complementarity of soft law and the general legal framework in several jurisdictions), 21 (on the need of compliance with human rights standards), 24 and 30 (on the impact of EU law on self-regulatory blocking and notice-and-takedown schemes) and Frosio (n. 2) 17, 20. 5  On the various publicly sponsored self-regulation initiatives in Italy and their quite limited success, see Montagnani (n. 2) 163. On the rationale behind private ordering initiatives, unilateral or based on self-regulation, see Giancarlo Frosio, ‘The Death of “No Monitoring Obligations”: A Story of Untameable Monsters’ (2017) 8(3) JIPITEC 199, 202 and Frosio (n. 2) 10, 21. 6  Italy was removed from the Watch List in the 2014 Special 301 Report adopted by the US Trade Representative. For comments, see Andrea Stazi, ‘La tutela del diritto d’autore in rete: bilanciamento degli interessi, opzioni regolatorie europee e “modello italiano” ’ [2015] Dir. inf. e informatica 89.
For Spain and Greece, see Office of the US Trade Representative April 2016, 10, 56 . 7  Law No. 2009-669 of 12 June 2009, Promoting the Dissemination and Protection of Creative Works on the Internet, as amended, after the intervention of the Constitutional Court, by Law No. 2009-1311 of 28 October 2009, on the Criminal Protection of Literary and Artistic Property on the Internet (Fr.)


This measure was adopted in 2009, arguably as a result of direct pressure from rightholders. This form of administrative intervention is of limited interest in this chapter, which primarily discusses orders directed towards ISPs, as the ‘three strikes’ contemplated in the French measure were intended to discourage ‘piracy’ by the public.8 Accordingly, HADOPI focuses on action directed towards end-users accessing illegal content and adopts an approach which has not been followed by other European countries, as it has proved both unpopular and legally questionable.9 Administrative enforcement of copyright raises several concerns. Typically, online enforcement of copyright by courts entails jurisdictional safeguards. In fact, prior notice to the alleged infringer, due process, and transparency are essential components of judicial involvement.10 In all these regards, algorithmic enforcement of copyright online11 is located at the opposite end of the spectrum: to the extent that it entails automated blocking, it gives short shrift to the alleged infringer’s right to be heard. Administrative enforcement is located at some point on the continuum between the two extremes of judicial and algorithmic enforcement. Where this point is to be found largely depends on the institutional arrangements adopted from time to time, to which we now turn our attention.

1.  The European Landscape: Spain, Italy, and Greece

Spain was the first European jurisdiction to opt for the administrative enforcement of copyright.12 This step was taken by the Ley Sinde, adopted on 4 March 2011 and amended

(empowering the administrative authority to send warnings to identified infringers and to transfer the case to a court in cases of repeat infringement; the judge may order a range of penalties, including a thirty-day account suspension). See also, for a detailed discussion of HADOPI, Chapter 27. 8  The literature on the so-called ‘graduated response’ adopted by HADOPI is extensive: see among the many Montagnani (n. 2) 136; Frosio (n. 2). 9  In specific connection with end-users’ internet freedoms, see the amendment to Art. 3a of the Framework Directive for electronic communication networks and services contained in Directive 2009/140/EC of the European Parliament and of the Council of 25 November 2009 amending [Directives 2002/19–21/EC] [2009] OJ L337/37, Art. 1, para. 1, and Regulation 2015/2120/EU of 25 November 2015 laying down measures concerning open internet access . . . [2015] OJ L310/1, recital 13 and Art. 3(3)(a). 10  See, for a thoughtful discussion, Jack Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 UC Davis L. Rev. 1149. In specific connection with online infringement of IPRs, see Martin Husovec, Injunctions against intermediaries in the EU. Accountable but not liable? (CUP 2017) and Christina Angelopoulos, European Intermediary Liability in Copyright. A Tort-Based Analysis (Wolters Kluwer 2016). 11 On which, see Daniel Friedmann, Trademarks and Social Media. Towards Algorithmic Justice (Edward Elgar 2015); Giancarlo Frosio, ‘To Filter, or Not to Filter? That is the Question in EU Copyright Reform’ (2017) 36 Cardozo Arts & Entertainment L.J. 331 and Frosio (n. 2) 20, 25.
12  Actually, the frontrunner in this specific regard was Turkey, which in the fourth year of President Erdogan’s rule, in 2007, gave internet-blocking powers to the Prime Minister. See Council of Europe, ‘Comparative Study’ (n. 3) 14 and also for references Montagnani (n. 2) 141.


by Law 2/2012 of 29 June 2012, which has since been incorporated into the Spanish Copyright Act by Law 2/2019.13 The Spanish Copyright Commission, an administrative body set up in 2011 under the oversight of the Ministry of Culture, currently acts with the powers conferred on it by Article 193(2) of Law 2/2019.14 Italy promptly followed suit. The Italian Authority for Communication Guarantees (Autorità per le garanzie nelle comunicazioni, AGCOM)15 adopted, by decision no. 680/13, a Regulation concerning copyright enforcement on electronic communications networks and the corresponding implementing procedures. This initial measure was amended by decision no. 490/18.16 The departments in charge are the Directorate for Media Services and, for final decisions, the ‘Commissione per i servizi e prodotti’, a specific Board Committee of the same Directorate. Finally came Greece. Article 52 § 1 of Law No. 4481/2017 introduced a new mechanism to fight online copyright infringement. A three-member administrative Committee on Internet Violations of Intellectual Property (CIPIV) was set up by Decree of the Minister of Culture and Sports.17 The Committee, which consists of the Chairman of the Hellenic Copyright Organization, a representative of the Hellenic Telecommunications and Post Commission, and a representative of the Hellenic Data Protection Authority, commenced operations on 3 September 2018.18 These three sets of provisions exhibit a number of common features. They empower administrative, non-judicial bodies to take measures intended to combat online copyright infringement. The addressees of the orders contemplated by the relevant legislation are primarily ISPs.19 These intermediaries are inevitable accessories in another’s online infringement, as online infringement cannot be carried out without resorting to hosting or access providers.20 Nevertheless, this does not mean that under

13  Ley 2/2019, de 1 de marzo, por la que se modifica el texto refundido de la Ley de Propiedad Intelectual, aprobado por el Real Decreto Legislativo 1/1996, de 12 de abril, y por el que se incorporan al ordenamiento jurídico español la Directiva 2014/26/UE del Parlamento Europeo y del Consejo, de 26 de febrero de 2014, y la Directiva (UE) 2017/1564 del Parlamento Europeo y del Consejo, de 13 de septiembre de 2017 (Sp.). . 14  The Commission was established by Royal Decree 1889/2011 implementing the provisions of the Ley Sinde (hereafter Implementing Regulation of the Ley Sinde). 15  Established by the legge 31 luglio 1997, n. 249 [Act no. 249 of 31 July 1997] (It.) setting up the Authority (hereafter AGCOM Act 1997). 16  The consolidated text is available at (hereafter 2013 AGCOM Regulation as amended). The Italian provisions mirror to a large extent the text of the Ley Sinde. 17 Established by Ministerial Decree 196/2018 (Gr.). See Charis Tsigou, ‘Notice-and-Takedown Procedure under Greek Intellectual Property Law 4481/2017’ (2018) 9 JIPITEC 201. 18  The Committee adopted its first three decisions blocking access to several infringing sites by orders of 7 November 2018 . 19  However, the whole of Chapter IV of the Italian Regulation concerns administrative measures directed towards audiovisual and radio services as defined in letters m)–o) of Art. 1 of the Regulation itself. Unsurprisingly, resort to the provisions of that chapter has been so limited (see s. 5) that only occasional references will be made to it. 20  It should be noted that access providers usually have contractual links with end-users rather than with infringing web operators. However, their services are also used by the latter, which explains why

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

590   ALESSANDRO COGO AND MARCO RICOLFI

the legal rules currently prevailing in these jurisdictions, they themselves are also necessarily deemed either direct or contributory infringers.21 Even when they are not deemed infringing, they may still receive injunctive remedies.22 This outcome can be accounted for in different ways, for example by holding that ISPs have an equitable positive duty to cooperate in righting the wrong. However, to the extent that they are not wrongdoers themselves, their position is that of nominal, rather than effective, defendants in infringement proceedings.23

The remedies adopted by the legislative systems under review encompass a variety of measures. Usually, these measures are described by resorting to a dichotomy: access-blocking and takedown orders, the former directed to access providers and operating ex ante, the latter directed to hosting providers and operating ex post.24 In practice, the distinction between the two is not always as clear-cut as it would appear from this characterization.25 While takedown is normally understood to mean removal of the infringing content from the website, this is not necessarily the case. In fact, the remedy may also include orders directed towards hosting providers to selectively disable or block access to websites and web pages carrying infringing material.26 Also, takedown orders to

20 (cont.)  access providers may be addressees of injunctions. As indicated by C-557/07 LSG-Gesellschaft zur Wahrnehmung von Leistungsschutzrechten GmbH v Tele2 Telecommunication GmbH [2009] ECLI:EU:C:2009:107, para. 43 (‘access providers who merely enable clients to access the Internet . . . provide a service capable of being used by a third party to infringe a copyright or related right, inasmuch as those access providers supply the user with the connection enabling him to infringe such rights’); the same conclusion was reached in C-314/12 UPC Telekabel Wien GmbH v Constantin Filmproduktionsgesellschaft mbH [2014] ECLI:EU:C:2014:192, paras 34–5. For a thorough discussion of the issue, see Cartier v BSkyB 2014 (n. 3), para. 155.
21  For an overview of the legal position of ISPs in connection with direct and indirect copyright infringement, inclusive of current legislative developments, see Frosio (n. 11) 331 and Martin Husovec, ‘Injunctions against Innocent Third Parties: The Case of Website Blocking’ (2012) 4 JIPITEC 116. A change in the characterization of the hosting providers’ role is under way: online content-sharing service providers storing and giving access to large amounts of works and other subject matter uploaded by their users are now legislatively deemed to be engaging in ‘communication to the public’ under Directive 2019/790/EU of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17; a similar result had already been derived from stretching the interpretation of the previous notion in EU case law: see C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300, para. 32; C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456, paras 26–32. Also, a referral by the Bundesgerichtshof (German Supreme Court, BGH) to the Court of Justice of the European Union (CJEU) is pending. See Nemo Studios v YouTube and Google 13 September 2018 (order).
22  See C‑324/09 L’Oréal SA and others v eBay International AG and others [2011] ECLI:EU:C:2011:474, paras 128, 134, 144.
23  This is the expression appropriately used by Saulius Lukas Kalėda, ‘The Role of the Principle of Effective Judicial Protection in Relation to Website Blocking Injunctions’ (2017) 8 JIPITEC 216, 220.
24  See Balkin (n. 10) 1176; SICL, ‘Comparative Study’ (n. 3) 5.
25  It is often remarked that the remedies referred to in the text may to a large extent overlap, also because the terminology used to describe them is not always consistent. See in this connection Daphne Keller, ‘A Glossary of Internet Content Blocking Tools’ (Stanford CIS Blog, 29 January 2018).
26  See Sections 4.4 and 4.5. For a case of a transfer of the domain name by means of a release of the ‘authinfo code’, also known as EPP code (Extensible Provisioning Protocol) or transfer key, from one


ADMINISTRATIVE ENFORCEMENT IN EUROPE   591

hosting intermediaries may become stay-down orders when the intermediary is asked to ensure that the infringer does not repeat the same violation.27 For their part, it is access providers who are normally on the receiving end of selective disabling and blocking orders. In either case, all these orders raise concerns when they deny access to IP addresses even though these are shared by both infringers and legitimate users. Indeed, if a website or a web page is found to be infringing, the provider may be ordered to ‘filter’ all access requests to that IP address or domain name (DNS) and redirect the end-user towards a page indicating that the requested address or page has been disabled. This approach may have several advantages over alternative options. First, it steers clear of direct attacks against end-users,28 which may prove unpopular and raise all too obvious fundamental rights concerns.29 Secondly, while resort to mechanisms such as notice and takedown (NTD) may in itself prove ineffective, since the owner of the website or web page may well keep infringing simply by opening a new website, a blocking order may be more difficult to circumvent, particularly if complemented by the practice of ‘dynamic blocking’. More specifically, the orders may include provision for rightholders to notify additional IP addresses or URLs to the ISP in respect of the websites which have been ordered to be blocked. This allows rightholders to respond effectively to efforts made by website operators to circumvent the orders by changing their IP addresses or URLs.30 Thirdly, administrative enforcement is often resorted to in preference to action before the courts, where jurisdictional proceedings may be deemed to involve shortcomings in terms of speed, formalities, and costs. However, several questions loom large over administrative enforcement.

To begin with, rightholders’ claims are directed to protecting their private property against infringement by other private players, while administrative bodies, in all the systems considered, are supposed to act in the public, rather than the private, interest. This tension raises the question whether, and to what extent, administrative enforcement of copyright

26 (cont.)  registrar to another, see the judicial order to that effect in Tribunale [Court] Milano (order) Rolex Italia spa v ITM Sagl Trgovina doo, Innovative Technology & Materials Sagl in liquidazione and Aruba spa [13 June 2017] in Giur. Ann. Dir. Ind. 6544.
27  Whatever ‘the same’ may mean in this context. On this issue, see Patrick Van Eecke, ‘Online Service Providers and Liability: A Plea for a Balanced Approach’ (2011) CML Rev. 1455, 1479–80. It is also conceivable that the order mandates the provider to look for copies of the file (‘information’) that are identical to the file which triggered the complaint: for German judicial precedent to this effect, see the decision of the BGH of 12 July 2012 in [2013] GRUR 370, ‘RapidShare’.
28  Such as the three-strikes mechanism established in France by the HADOPI I and II Acts of 2009.
29  Reference is made here both to privacy and to end-users’ internet freedoms.
30  As a rule, responsibility falls on the rightholders to identify the IP addresses and URLs to be notified to ISPs in this way. For this purpose, the rightholders may engage a third party to monitor server locations and domain names used by the target websites. For criticism of this approach, which relies on findings made out of court, on the basis of the provisions mandating ‘quality of law’ for interferences with rights, see Husovec, ‘Injunctions’ (n. 10), 124, who, at 122, characterizes dynamic blocking orders as ‘open’ (as opposed to ‘fixed’). A predecessor to dynamic blocking may be found in the so-called ‘list system’, often used to update the list of blocked URLs or domain names that are found to be making available child pornography and to be involved in other serious crimes. See SICL, ‘Comparative Study’ (n. 4) 4, 17.


(and of intellectual property at large) is admissible under the legal principles of the system relevant from time to time. Even where the reply is in the affirmative, a positive outcome opens up a number of consequential questions. More specifically, the question is bound to arise whether the applicable administrative proceedings are amenable to the kind of safeguards, characteristic of jurisdictional actions, mandated by supranational norms and by EU and domestic constitutional provisions. Against this background, it is appropriate to turn now to the legal context in which the administrative enforcement of copyright is rooted.

2.  The Legal Context

2.1  International and EU Legislative Provisions

2.1.1  TRIPs

In this regard, the most relevant international provisions are to be found in the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs Agreement),31 which in Part III deals with the enforcement of intellectual property rights. While the TRIPs Agreement provides that intellectual property rights (IPRs) may also be protected via administrative procedures, that option is available on the condition that those procedures conform to principles substantially equivalent to those governing judicial proceedings.32 This precept has important ramifications, requiring inter alia the opportunity for judicial review of administrative decisions,33 the mandate of ‘fair and equitable procedures’34 and of ‘safeguards against abuse’,35 both in connection with actions on the merits and with interlocutory relief.36

31  For comment on the relevant TRIPs provisions, see Justin Malbon, Charles Lawson, and Mark Davison, The WTO Agreement on Trade-Related Aspects of Intellectual Property Rights: A Commentary (Edward Elgar 2014) 613. For a discussion, see Michele Bertani, ‘Internet e la “amministrativizzazione” della proprietà intellettuale’ [2012] AIDA 129, 151 and Angelo Maria Rovati, ‘Il dibattito sulla legittimità del regolamento AGCOM: le opinioni in campo’ in Luigi Carlo Ubertazzi (ed.), Il regolamento AGCOM sul diritto di autore (Giappichelli 2014) 194.
32  Art. 49: ‘To the extent that any civil remedy can be ordered as a result of administrative procedures on the merits of a case, such procedures shall conform to principles equivalent in substance to those set forth in this Section’. See Malbon, Lawson, and Davison (n. 31) 670.
33  See TRIPs Agreement (15 April 1994) Marrakesh Agreement Establishing the World Trade Organization, Annex 1C, 1869 UNTS 299, 33 ILM 1197, Art. 41(4) (‘Parties to a proceeding shall have an opportunity for review by a judicial authority of final administrative decisions’). According to Malbon, Lawson, and Davison (n. 31) 632, the provision read together with Art. 42 may suggest that civil judicial review is mandated; even if administrative, rather than civil, judicial review were admissible, it is submitted that it should extend to the merits.
34  TRIPs Agreement (n. 33) Art. 42.
35  ibid. Art. 41(1), last para.
36  ibid. Art. 50(8) (‘to the extent that any provisional measure can be ordered as a result of administrative procedures, such procedures shall conform to principles equivalent in substance to those set forth in this Section’).


2.1.2  The EU

At the EU level, the relevant provisions result from the interplay of three separate sets of provisions. First, the last paragraph of each of the three provisions of the e-Commerce Directive37 granting immunity to ISPs ends by stating that ‘this Article shall not affect the possibility for a court or administrative authority, in accordance with Member States’ legal systems, of requiring the service provider to terminate or prevent an infringement’. This language clearly opens the way for setting up administrative enforcement or, more precisely, for a ‘double track’, as administrative remedies may complement, rather than replace, jurisdictional ones.38 Secondly, the Enforcement Directive details specific rules concerning both interlocutory and final injunctions against intermediaries.39 Article 11 indicates in its third sentence that Member States shall ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe IPRs. It is true that the rules for the operation of these injunctions, relating to the conditions to be met and to the procedure to be followed, are a matter for national law. However, the principles mandated by European law set clear limits on the enforcement powers conferred by these provisions.40 The Enforcement Directive therefore raises the question of determining to what extent the principles and rules set out by the EU lawmakers are binding on the Member States when the enforcement of IPRs—and here specifically of copyright—is entrusted to administrative bodies rather than to courts. This is again an issue of ‘equivalence in substance’, as suggested by Article 49 of the TRIPs Agreement. The question has in turn significant ramifications, considering that the Enforcement Directive lays down quite

37  Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, Arts 12–14. Even though Art. 12 refers to the somewhat broader notion of ‘mere conduit’ (‘that consists of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network’), the current debate concentrates on access providers. Therefore, we will mainly refer to access providers.
38  It was initially suggested that EU lawmakers had subsequently backtracked in that regard by adopting the Enforcement Directive 2004/48/EC of 29 April 2004, which, in dealing with jurisdictional proceedings, would indicate a choice of courts over administrative bodies. See Tonoletti (n. 2) 362 and Nexa Center. This position (which we originally backed) is nowadays no longer followed, as it is supported neither by the case law nor by the literature.
39  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/16, Arts 9 and 11. The very fine—but at the same time very crucial—point concerning the difference between injunctions against intermediaries as stipulated in the first sentence of Art. 11 and in the third sentence of the same provision is discussed in C‑324/09 (n. 22) para. 128.
40  C‑324/09 (n. 22) para. 135; Cartier Int’l AG and others v British Telecommunications Plc and another [2018] UKSC 28 (UK) (hereafter Cartier 2018) (in connection with trade marks); Cour de cassation [French Supreme Court] SFR, Orange, Free, Bouygues v Union des producteurs de cinéma and others [6 July 2017] no. 909, ECLI:FR:CCASS:2017:C100909 (Fr.) (hereafter Allostreaming).


exacting requirements for enforcement ‘measures, procedures and remedies’.41 Under Article 3, paragraph 1, these must, inter alia, ‘be fair and equitable and . . . not . . . unnecessarily complicated . . ., or entail unreasonable time-limits or unwarranted delays’ and, under paragraph 2 of the same provision, ‘shall also be effective, proportionate and dissuasive and shall be applied in such a manner as to avoid the creation of barriers to legitimate trade and to provide for safeguards against their abuse’. Moreover, a combined reading of the Enforcement and e-Commerce Directives indicates that Member States may not impose on ISPs a general obligation to monitor.42 The third—and final—piece of the EU puzzle is to be found in the Information Society Directive,43 which chronologically comes in between the two Directives we discussed earlier and, notably, uses in its Article 8(3) exactly the same wording that was to be adopted in the third sentence of Article 11 of the Enforcement Directive a few years later.44

2.1.3  The EU Charter of Fundamental Rights

In relation to the EU Charter of Fundamental Rights,45 there is no doubt that public bodies entrusted with the task of copyright enforcement are included in the notion of ‘institutions, bodies and agencies’ of the Member States, and that these are set up with the specific purpose of ‘implementing Union law’,46 as copyright is to a very large extent harmonized through EU Directives. Therefore, such public bodies are bound to respect, inter alia, the principles of freedom of expression and information (Art. 11), the right to conduct a business (Art. 16), and intellectual property (Art. 17(2)). Administrative enforcement should also comply with Article 47, the right to an effective remedy and to a fair trial.47 Most relevant, in the present context, is Article 52, the first paragraph of

41  See Directive 2004/48/EC (n. 39) Art. 3.
42  According to Art. 15(1) of the e-Commerce Directive, ‘Member States shall not impose a general obligation on providers, when providing the services covered by Articles 12, 13 and 14, to monitor the information they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity.’ On the current debate on the continued viability of this principle, see Frosio (n. 5) 202.
43  See Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10.
44  Except, of course, that it refers to ‘copyright or related right’ instead of ‘intellectual property right’. We abstain here from a discussion of the so-called internet freedom provisions (on which see n. 9). Those provisions are usually understood as dealing with restrictions of access to the internet by end-users, as exemplified by the HADOPI legislation; it is, however, submitted that they may also impact the assessment of restrictions targeting hosting and access providers, as these are bound to affect, at least indirectly, end-users’ access.
45  Charter of Fundamental Rights of the European Union [2012] OJ C326/391. It was originally proclaimed by the European Parliament, Council, and Commission at Nice in December 2000. As amended in December 2007, it became legally binding with the coming into force of the Lisbon Treaty in December 2009.
46  Charter (n. 45) Art. 51(1). See also Directive 2004/48/EC (n. 39) recital 32.
47  See Kalėda (n. 23) 220; Daniele Domenicucci and Fabio Filpo, ‘La tutela giurisdizionale effettiva nel diritto dell’Unione Europea’ in Roberto Mastroianni, Oreste Pollicino, Silvia Allegrezza, Fabio Pappalardo and Orsola Razzolini (eds), Carta dei diritti fondamentali dell’Unione Europea (Giuffrè 2017) 864.


which aptly provides that ‘any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.’48 This is a quite emphatic restatement of the classic rule of law, the continued relevance of which we now have a chance to revisit by turning our attention to the legal basis of the domestic provisions on which the administrative enforcement of copyright is based.

2.2  Domestic Legal Basis

The legal basis of much of the legislation setting up the administrative enforcement of copyright has been, at best, very shaky. To begin with Italy, it is most doubtful that back in 2013 AGCOM had the implied powers necessary to adopt a Regulation concerning copyright enforcement on electronic communications networks.49 However, the issue has in the meantime become moot, as the lack of rule-making power that undermined the original Regulation has arguably been cured by a (belated) legislative measure.50 It would appear that Greek administrative enforcement does not fare much better in terms of legal basis. As indicated in the specialist literature, decisions by CIPIV would appear to be rather vulnerable. The Greek copyright office (OPI) is governed by private law; CIPIV, which is not even an emanation of the OPI, being hosted on the premises of the latter but composed of members coming from other public administrations,

48  See also the ‘prohibition of abuse of rights’ set out in the Charter (n. 45) Art. 54. Only for brevity’s sake do we not deal here with the European Convention on Human Rights (ECHR) and with the questions raised by its interface with intellectual property, on which see, however, Council of Europe, ‘Comparative Study’ (n. 3) 17; Husovec (n. 10) 123 and, in connection with freedom of expression under Art. 10 of the ECHR and the liability of a web portal for posting a link, Magyar Jeti Zrt v Hungary App. no. 11257/16 (ECtHR, 4 December 2018). On the relationship of the ECHR with intellectual property, see Annette Kur and Thomas Dreier, European Intellectual Property Law: Text, Cases and Materials (Edward Elgar 2013) 78. See also Chapter 29.
49  See, however, Constitutional Court Altroconsumo v Confindustria and others [3 December 2015] (It.) (not reaching the merits of the question because of the failures of the referring lower court in framing the questions submitted to the court); Tribunali Amministrativi Regionali [Regional Administrative Court] (TAR) Lazio Altroconsumo, Assoprovider and others v AGCOM [30 March 2017] (It.). It is arguable whether in Italy the need for administrative intervention was originally felt with special urgency because the legislation implementing the e-Commerce Directive required that notice of infringement should take the form of a ‘communication of the competent authority’ (Art. 16, letter b) of Legislative Decree no. 70 of 2002 (It.)), so that a notice, however detailed and complete, sent by a private party could not be deemed sufficient to terminate ISPs’ immunities. However, in the meantime the Corte di Cassazione in Reti Televisive Italiane spa v Yahoo! Inc and Yahoo! Italia srl [29 March 2019] no. 7708 (It.) has clarified that such a communication of the public authority is not required for hosting providers (as opposed to caching providers, on which see the decision of the same court no. 7709 of the same day).
50  See Act no. 167 of 20 November 2017, provisions for the implementation of EU provisions, Art. 2 (It.). Decision no. 490/18 was adopted pursuant to this legislative mandate.


would not appear to meet either the formal or the functional criteria to qualify as a public sector body.51

3.  The Essential Features

Against this background, we can now turn to the essential features of the administrative enforcement of copyright adopted in the three jurisdictions relevant here.52

3.1  The Independence of the Public Bodies Entrusted with the Task

To the extent that the applicable legal provisions require a minimum of independence of the public body in charge, this threshold would not appear to be met in Spain.53 The Greek CIPIV is also arguably bedevilled by a possible conflict of interest: the Copyright Office, which hosts CIPIV and contributes one member to it, is financed by the associations representing one of the parties to potential disputes (i.e. the collective management organizations), and it also receives a fee from complainants for taking action.54 For its part, AGCOM is classified among the independent authorities.

3.2  Protected Subject Matter and Violations

Administrative enforcement, on the basis of the national rules reviewed earlier, covers copyright and related rights.55 The Italian rules expressly envisage intervention in connection with the violation of the exclusive right of making available works and protected materials on a web page;56 however, the language of the same provision is expanded to

51  See Tsigou (n. 17) 204. It should be added that it appears most doubtful whether the legal basis for the legislation setting up CIPIV could plausibly be found in Art. 36 of EU Directive 2014/26/EU on collective management of copyright and related rights, which deals with ‘competent authorities’ to be set up for the entirely different task of monitoring ‘the compliance by collective management organisations . . . with the provisions of national law adopted pursuant to the requirements laid down in this Directive’.
52  While reference will here be made mainly to the Italian rules, Greek and Spanish rules will also be considered where appropriate.
53  See Law 2/2019, Art. 193(4) and, previously, Ley Sinde, Art. 158(4), s. 4 and Implementing Regulation of the Ley Sinde (n. 14) Art. 14, on which see Miquel Peguera, ‘The Spanish Experience’, EIPIN Congress, Alicante (2016).
54  See Tsigou (n. 17) 206.
55  See 2013 AGCOM Regulation as amended (n. 16) Art. 6(1); Law 2/2019, Art. 193(2)(b). See also Section 4.3.
56  See 2013 AGCOM Regulation as amended (n. 16) Art. 6(1).


encompass the more amorphous notion of the ‘offer of products, components and services’ in ‘violation of copyright and related rights’, which may be construed as reaching physical products which can be ordered via the web, as well as acts of ‘advertising, promoting, and describing’ in violation of the same rights. Accessing streaming services is also covered by the provision.57 The wording of the Regulation raises several questions, two of which are worthy of mention. While the language of the Regulation does not spell out whether the violations considered are only those directly committed by the addressees of an order by AGCOM, the context indicates that they also encompass indirect violations as well as third-party violations in which ISPs happen to be involved, even though the ISPs may not be infringing either directly or indirectly.58 As a matter of fact, very often the ISPs themselves may not infringe the rights considered by the Regulation; rather, they may be involved in violations by third parties. If this is the case, the intermediary must be summoned in the proceedings,59 but is merely a nominal defendant.60

In principle, enforcement should never target situations that are covered by limitations of or exceptions to copyright exclusivity. Article 2(2) of the Regulation indicates that AGCOM is bound to take into account limitations and exceptions as provided by copyright legislation, as well as other defences based on fundamental rights. Therefore, these defences are in theory available to the interested parties, who can raise them within the procedure.61 However, as we will presently see, the design of the procedure itself in practice discourages and even prevents taking said defences into account.

3.3  Parties

The Italian Regulation gives standing to sue to rightholders and their licensees.62 Collective rights management organizations and industry associations set up by rightholders, as well as independent management entities,63 also have standing to sue if duly authorized by rightholders.64

57  See also the reference to the communication to the public of works via link or torrent or by other means in ibid. Art. 1, letter ff), subject to selective removal under Art. 8(3).
58  These situations are also considered in Art. 8(3) of the Information Society Directive and in Arts 9 and 11(1), final sentence, of the Enforcement Directive.
59  See 2013 AGCOM Regulation as amended (n. 16) Art. 7(1).
60  See Kalėda (n. 23) 220; see Section 1.
61  See 2013 AGCOM Regulation as amended (n. 16) Art. 7(4).
62  Exclusive or otherwise. The inclusion of non-exclusive licensees is unusual in other fields of intellectual property; not necessarily so in the field of copyright (see Arts 156 and 167 ICL).
63  As defined in Art. 3 of Directive 2014/26/EU of the European Parliament and of the Council of 26 February 2014 on collective management of copyright and related rights and multi-territorial licensing of rights in musical works for online use in the internal market [2014] OJ L84/72.
64  For Spain, see Law 2/2019, Art. 195(3) para. 2. Greek Law no. 2281/2017, Art. 52 s. 1.1 also gives standing to collective rights management organizations. See Tsigou (n. 17) 204.


In terms of standing to be sued, letter aa) of Article 1 of the Regulation mentions in this connection the uploader of the file alleged to infringe; website and web page operators are also mentioned in letters g) and f). In turn, ISPs engaged in mere conduit and hosting (i.e. access and hosting providers) are referred to in letter f);65 audiovisual media and radio operators in letters m) and n).66 This determination of the perimeter of persons and entities with standing to be sued raises a few interesting issues. In the mechanism set up by Article 7 of the Italian Regulation, ISPs are necessary parties; website and web page operators and uploaders are not. The former must be summoned; the latter may be summoned before the Authority only to the extent that they can be traced.67 This rule is in line with the Italian case law developed before the civil courts: joint tortfeasors need not be co-joined.68 Also from a practical perspective, it makes perfect sense to dispense with a requirement (of notice to third parties which all too often are out of reach) that is usually very difficult to meet. This approach, however, has a clear downside: as fair use and other defences are normally raised by uploaders and by the entities running websites and web pages, rather than by ISPs, hosting ISPs, lacking any input from uploaders and website holders, are not in a position to challenge a violation complaint or a notice adopted by AGCOM. While this possibility is theoretically open to them,69 ISPs tend to err on the side of caution. If they insist on providing their services after receiving a notice, they risk losing their immunity and becoming liable for copyright violation. If, on the contrary, they desist, their only risk is incurring a contractual violation towards uploaders and website operators. This risk is slight: operators normally do not have an incentive to litigate the matter against ISPs.

3.4  Procedure

The shortcoming indicated in the preceding subsection is exacerbated by the fact that the Italian rules do not even require that, to file a complaint, a rightholder must show that he has given prior notice to the third parties alleged to have infringed; even when a complaint is filed, website and web page operators, as well as uploaders, must be summoned only if they can be traced.70 Therefore, the party that is alleged to be directly or indirectly

65  It is arguable, however, whether the third category of ISPs, those engaged in caching, is encompassed in the broader text of Art. 2(1) of Act no. 167 of 20 November 2017, which only deals with protective orders.
66  Audiovisual media operators are considered in a separate chapter, Chapter IV, of the AGCOM Regulation, which deals with violations in a much milder way.
67  See 2013 AGCOM Regulation as amended (n. 16) Art. 7(1). On the similar provision in Greek law, see Tsigou (n. 17) 206.
68  See the order of the Milan Court of 8 May 2017, Mediaset Premium spa v Quasi networks Ltd, Telecom Italia spa, Vodafone Italia spa, Fastweb spa, Tiscali Italia spa, Wind Tre spa, Assotelecomunicazioni—Asstel (It.) in [2019] AIDA forthcoming.
69  Under the 2013 AGCOM Regulation as amended (n. 16) Art. 7(4).
70  See ibid. Art. 5. On the requirement under Spanish law that an attempt is made to have the infringing content removed or disabled by the infringer prior to requesting a blocking order, see Art. 195(3) of

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

ADMINISTRATIVE ENFORCEMENT IN EUROPE   599
infringing may have no inkling as to the initiation of proceedings which may lead to the selective disabling or blocking of the information it has uploaded or made available, and is thereby deprived of the chance to oppose the charge of infringement or to raise a defence. Therefore, the provision of Article 8(2) of the Italian Regulation, whereby orders must respect the principles of ‘graduation, proportionality and adequacy’, may occasionally ring hollow. To begin with, it is doubtful whether an administrative body is equal to the task of undertaking a balancing exercise between competing values and, even more so, of adjudicating a conflict between private parties.71 This is even more so if one considers that blocking orders, and even orders that mandate selective disabling, may come at the expense of unrelated, innocent third parties. This is the case when the order targets an entire website, domain name, or IP address, thereby preventing access to both illegitimate and legitimate information. This practice may be found to violate two fundamental rights simultaneously: the right to be heard and to fair process for the innocent third party (EU Charter, Art. 47) and the freedom of expression and information of the innocent content providers and of the public at large (EU Charter, Art. 11).72 Moreover, orders adopted by a Member State’s administrative authority must, as indicated, comply with the requirements for enforcement ‘measures, procedures and remedies’ mandated by Article 3 of the Enforcement Directive. It is submitted that an order extending the blocking to unrelated innocent third parties also entails a ‘barrier to legitimate trade’ forbidden by the second paragraph of that provision.

3.5 Abbreviated Proceedings and Protective Orders
In principle, it makes sense that an abbreviated procedure is provided for by Article 9 of the Regulation. This is particularly so when the loss to rightholders is especially significant.73 Also, interlocutory relief may be granted by protective orders under Article 9-bis, which are issued ex parte. It is, however, likely that the extremely short time allowed in the abbreviated procedure to submit remarks (three days, rather than five),74 and the lack of prior notice in protective orders, may further erode both the procedural and substantive safeguards mandated by overarching EU and international provisions.
Law 2/2019 and previously Peguera (n. 53); similarly, for Greece, see Law no. 2281/2017, Art. 52 s. 1.4.c) on which Tsigou (n. 17) 205. On the importance of prior notice to the alleged infringer in this context, see Balkin (n. 10) 1197 and Husovec (n. 10) 123. Also Italian courts dispense with the requirement of notice to the alleged infringer, arguing that the situation does not require a joinder: see in this connection Mediaset Premium spa v Quasi networks Ltd (n. 68). 71  See Kalėda (n. 23) 220 and Tonoletti (n. 2) 547. But see C-314/12 (n. 20) para. 46 (obiter). 72  See Kalėda (n. 23) 221. According to EU case law ‘the measures adopted by the internet service provider’—or, a fortiori, required of it—‘must be strictly targeted, in the sense that they must serve to bring an end to a third party’s infringement of copyright or of a related right but without thereby affecting internet users who are using the provider’s services in order to lawfully access information. Failing that, the provider’s interference in the freedom of information of those users would be unjustified in the light of the objective pursued’. C-314/12 (n. 20) para. 56. See also C-484/14 Tobias Mc Fadden v Sony Music Entertainment Germany GmbH [2016] ECLI:EU:C:2016:689, para. 93. With reference to the ECtHR see Yildirim v Turkey App no 3111/10 (ECtHR, 18 December 2012) (holding that wholesale blocking of access to Google sites was arbitrary and the judicial review of the measure was insufficient). 73  See 2013 AGCOM Regulation as amended (n. 16) Art. 9(2), letters a) and c). See also Section 4.4.

3.6 Costs
Costs are inevitably involved both in taking part in proceedings and in complying with the orders issued at their close. This is a very relevant matter,75 which clearly belongs to national (rather than EU) law.76 However, the Italian Regulation does not mention the issue. In the absence of any provision in this regard, the result is that each party bears its own costs, both for legal fees and for compliance with the order. This outcome has been challenged before an Italian administrative court, to no avail: it was held that it is only natural that ISPs should bear the costs of the ‘negative externalities’ they generate.77
74  cf. ibid. Art. 9(1), letter b) to Art. 7(4) of the same. 75  For the reasons convincingly shown by Husovec (n. 10) 125. 76  See Cartier 2018 (n. 40) [30]. 77  See Altroconsumo v AGCOM 2014 (n. 49) para. 15. Also C-314/12 (n. 20) para. 50 assumes obiter that a telco which is at the receiving end of an injunction bears its own costs; a full discussion of the issue is in Cartier 2018 (n. 40), differentiating, under applicable English law, between rules applicable to access and hosting providers, at [37], and holding the former entitled to be reimbursed for their own costs by claimant rightholders; on the contrary, Allostreaming (n. 40) comes to the opposite conclusion under French law by arguing that the basis is not the liability of the ISP, which may well be lacking, but its duty of cooperation to avoid violation of private legal rights. On the other hand, the possibility for rightholders to recover costs incurred in giving notice to ISPs is considered in C-484/14 (n. 72) paras 72–9.

3.7 Remedies
If, at the end of the proceedings, the Authority finds an infringement, it can order the ISP to prevent the violation or to put an end to it, and to adopt the measures detailed by the Regulation, as well as any other measures appropriate to avoid further infringement specified in the order.78 If the server hosting the work in violation is located in Italy, the measures consist of selective removal of the infringing items and adoption of the steps appropriate to prevent uploading.79 An order blocking access ‘to the infringing works’ is provided for in the event of ‘massive infringement’.80 Blocking orders are the only alternative available if the server is located outside Italy. AGCOM has a duty to list additional websites to which the blocking order must be extended.81 In terms of territorial reach, no order is directed to foreign hosting providers; the addressees here are the access providers, which are under a responsibility to make the works (or the links to them) inaccessible from Italian territory. These orders remain subject to the prohibition on Member States imposing a general monitoring obligation on ISPs.82 Failure to comply with an order carries penalties and may lead to the temporary shutting down of the operations of the party in breach of the order.83
78  See 2013 AGCOM Regulation as amended (n. 16) Art. 8(2). For an illustration of remedies under Greek law, see Tsigou (n. 17) 206. In this connection, see C‑324/09 (n. 22) paras 131–4. 79  See 2013 AGCOM Regulation as amended (n. 16) Art. 8(3); under Art. 1, letter ff) selective removal means the deletion from the web page of the infringing works or of the connection to them via link or torrent or by other means. 80  2013 AGCOM Regulation as amended (n. 16) Art. 8(3).

3.8 Transparency
Public administrations are subject to a mandate of transparency. This obligation appears to be complied with by AGCOM, which regularly publishes on its website both statements of objections and decrees.84 This is a welcome feature, considering that in Greece the publication of decisions adopted by CIPIV is left to the discretion of the Committee itself.85

3.9 Double Track
The Italian Regulation provides for a double track: the same alleged infringement may be brought to the attention both of the administrative authority and of the courts. The coordination between the jurisdictional and the administrative path is laid down in Article 6(3) and (7) of the Regulation. No complaint can be filed if proceedings on the same matter are pending before a court; in turn, administrative proceedings are terminated by the initiation of court proceedings.
In Spain, the double track is more limited. Courts are a complement, rather than a substitute, for the administrative procedure. Originally, no measure affecting freedom of expression and information as protected by Article 20 of the Spanish Constitution could be adopted by the Copyright Commission unless authorized by a court; but this requirement was waived by the corresponding provision of the current Law 2/2019.86
81  ibid. Art. 8(4); such provision may be seen as a parallel to dynamic injunctions, except that its responsibility rests with the authority rather than with the rightholder; it is the same authority that, in the event of repeat infringers, is tasked with updating the list under Art. 8-bis of the Regulation. Art. 8(5) provides that in the event of blocking, automated redirecting is also to be provided by the intermediary. Under Art. 8(2)bis, the authority may also turn the file over to the police. 82  On which see, also for references, Frosio (n. 2) 22. 83  See AGCOM Act 1997 (n. 15) Art. 1(31) and (32). Spanish legislation provides for a fine between €150,000 and €600,000: Art. 195(6) of Law 2/2019; see also Peguera (n. 53); the possibility of a temporary shutdown is established by the Implementing Regulation of the Ley Sinde (n. 14) Art. 24. On the fine of €500 to €1,000 per day in Greek law, see Tsigou (n. 17) 206. 84  See AGCOM, ‘Interventi dell’Autorità a tutela del diritto d’autore e dei diritti connessi’ (hereafter AGCOM Copyright Decisions); see also, however, the recent—and unfortunate—developments discussed in Section 4.1. 85  See Ministerial Decree 196/2018, Art. 5 s. 5; Tsigou (n. 17) 206. Also in Spain the cases are not public according to Peguera (n. 53).

3.10 Review
Under general principles of Italian administrative law, as well as general constitutional and international provisions,87 orders must spell out the grounds on the basis of which they are taken. Article 8(2) of the Italian Regulation provides that orders, as earlier indicated, must respect the principles of ‘graduation, proportionality and adequacy’. According to Article 17 of the Regulation, the orders adopted by AGCOM are subject to judicial review by an administrative court. However, this review is restricted to the limits of the administrative purview, which can take into account only three grounds: lack of jurisdiction by the authority issuing the order, misuse of powers, or violation of law. More fundamentally, review by the administrative judge concerns the issue of whether the administrative powers were used correctly, while it has no say over the substance of conflicts between the competing rights of private parties.88 Even at this stage, defences originating from uploaders and website operators have no chance of being taken up, which, again, is to be seen as a failure if considered against EU standards.89 Not only does the provision establishing review by an administrative court appear unfortunate; it also appears to run counter to the specific provisions of the TRIPs Agreement90 and the EU Charter.91

3.11 Safeguards against Abuse
Both the TRIPs Agreement and the EU Enforcement Directive mandate safeguards against abuse; in turn, the EU Charter proclaims the prohibition of abuses as a general principle.92 The Italian Regulation is altogether silent on the matter.

86  See Law 2/2019, Art. 195(5) para. 6, adopted after a quite tortuous legislative itinerary. For the previous regime, see Peguera (n. 53). 87  See Italian Constitution, Art. 24, EU Charter (n. 45) Art. 47 and Art. 6 ECHR. A duty to state reasons is also foreseen in Spain, see Peguera (n. 53). 88  The point is forcefully made by Bertani (n. 31) 129, 155 and, in connection with review of the decisions by another administrative authority, AGCM, by the same administrative court, see Michele Bertani, Pratiche commerciali scorrette e consumatore medio (Giuffrè 2016) 17. 89  See C-314/12 (n. 20) para. 57. 90  See TRIPs Agreement (n. 33) Art. 41(2); for interlocutory injunctions, see also Art. 50(8). For similar remarks Tonoletti (n. 2) 352; Bertani (n. 31) 156. 91  See Art. 47; see in this connection Domenicucci and Filpo (n. 47) 869. 92  See TRIPs Agreement (n. 33) Art. 42, Directive 2004/48/EC (n. 39) Art. 3(2) and EU Charter, Art. 54. See also, in this connection, Husovec (n. 10) 123.



4. The AGCOM Regulation in Practice: A Case Study
The administrative enforcement of copyright and related rights became effective in Italy in 2014. A factsheet periodically updated by AGCOM shows that, so far,93 the authority has received 1,338 applications, almost all of them aimed at online intermediaries; only seven were directed against audiovisual and radio services.94 One-third did not make it through the preliminary phase, having been withdrawn by applicants (54) or dismissed immediately by AGCOM’s Directorate for Media Services (180).95 The rest resulted in the institution of 950 proceedings:96 290 ended with termination of proceedings due to voluntary compliance (278)97 or withdrawal by the applicant (12); 650 led to an adjudication by the Commission on Services and Products.98 On 600 occasions the Commission found in favour of the applicant, issuing 665 blocking orders aimed at access providers operating in Italy.99

4.1 Transparency
Until May 2016, AGCOM published, in a specific section of its website,100 information on the outcome of all applications lodged under the Regulation. Then it discontinued the service, which has been replaced by a searchable database101 covering only adjudications by the Commission on Services and Products.102 As a matter of fact, information about cases that ended in the preliminary phase,103 or were dismissed by the Directorate for Media Services once the proceeding had started,104 became unavailable. This is unfortunate, as it makes the system less transparent and predictable, particularly if one considers that some interesting inferences about its overall functioning can be drawn precisely from the source of information that has been discontinued. In fact, the Directorate for Media Services (the decisions of which have in the meantime become unavailable) is supposed to dismiss at the outset applications not fulfilling the formal requirements or lacking essential information,105 those not substantiated by
93  As of 18 January 2019. 94  See 33/16/CSP and 180/18/CSP (the acronym designates decisions adopted by the Commission on Services and Products). The analysis in the following sections is limited to proceedings aimed at ISPs. Remedies against audiovisual and radio services follow different rules, which are not considered here. 95  See 2013 AGCOM Regulation as amended (n. 16) Art. 6(4). See Section 4.1. 96  See 2013 AGCOM Regulation as amended (n. 16) Art. 7(6). 97  ibid. Art. 7(3), (3)-bis, and (6). 98  ibid. Art. 8(1) and (2). 99  ibid. Art. 8(4). 100  See Section 3.8 and AGCOM Copyright Decisions (n. 84). 101  ibid. 102  See 2013 AGCOM Regulation as amended (n. 16) Art. 8. 103  ibid. Art. 6(4), letters a) to d). 104  ibid. Art. 7(3) and (6). 105  Commonly, for failure to submit a copy of the allegedly infringed work (75/14/DDA; 86/15/DDA; 92/15/DDA; 102/15/DDA; 103/15/DDA; 109/15/DDA) or for failure to identify precisely the targeted web


documents demonstrating possession of the relevant rights by the applicant,106 or which have been pre-empted by pending judiciary proceedings,107 as well as those falling outside the scope of application of the Regulation or which are manifestly unfounded.108 Ostensibly, decisions based on the last two grounds provide valuable insights both into the availability of remedies offered by the Regulation and the level of protection of third parties’ interests.109 Moreover, if proceedings end due to voluntary compliance, that outcome has to be reflected in a decision by the same board. It is submitted that knowledge of these decisions would help to understand the dynamics of the interaction between all the parties concerned.110

4.2 Protected Subject Matter
The Directorate for Media Services has made quite extensive use of its power to filter out applications falling outside the scope of the Regulation. It did so by considering the Regulation subject to (sometimes inherent or implied) limitations as to subject matter, violations, remedies, and addressees of the orders requested. On the one hand, the Directorate took the view that only ‘digital works’, as defined by the Regulation itself, can be protected. This has several implications. First, applications must refer to literary, musical, audiovisual, and photographic works, including video games and computer programs. Any other subject matter protected by copyright or related rights does not qualify.111 Secondly, the work needs to be protected by the Italian Copyright Law, which means that the Directorate checks whether the work meets the
page (8/14/DDA; 91/15/DDA; 9/16/DDA; 21/16/DDA) (the acronym DDA, ‘diritto di autore’ for short, indicates decisions adopted by the Directorate). 106  See 53/15/DDA; 7/16/DDA. In a few cases, the Directorate relied on general knowledge (4/14/DDA) or information provided by the applicant himself (95/15/DDA and 96/15/DDA) to dismiss due to lack of standing to sue. 107  Invariably, the Directorate found the relevant information in the news and did not consider relevant whether the applicant was party to the judiciary proceeding: see 8/14/DDA; 39/14/DDA; 71/14/DDA. 108  AGCOM Regulation as amended (n. 16) Art. 6(4), letters a) to d). 109  A major issue in the system put in place by the Regulation. See Sections 3.3, 3.4, and 3.8. 110  In particular, it helps to understand how ISPs behave (see Section 3.3). In this regard, it is worth noting that, unsurprisingly, access providers have so far never voluntarily blocked a website because of a statement of objections published by AGCOM.
On the contrary, hosting providers tend to be more cooperative and sometimes take down infringing content voluntarily, particularly if they are located in Italy, arguably because they are easier targets for follow-on damage claims. It remains to be seen whether they do so following procedures that take into account the interests of the content providers. See n. 144. 111  As was easily predictable, applications claiming infringement of trade marks (see 4/14/DDA and 23/16/DDA) or rights of publicity (53/15/DDA) were turned down. More surprisingly, the Directorate also dismissed applications regarding a TV format (8/16/DDA), contractual terms and conditions (20/16/DDA), and a press conference (3/14/DDA). Arguably, applications referring to databases, whether or not creative, will be doomed to dismissal. However, the Directorate has admitted applications claiming protection for websites, apparently on the assumption that they could somehow qualify as a combination of literary and photographic works.


relevant requirements.112 Thirdly, the work needs to be one that has been made available on electronic communication networks. It remains unclear whether this should be considered a requirement of the work for which protection is sought or a reference to the kind of infringing uses that can be repressed under the Regulation.113 On the other hand, the Directorate has assumed that the Regulation does not apply to cases of alleged plagiarism.114 This assumption is rather dubious, at least considering the wording of the relevant provision,115 and would have called for an explanation, which is entirely lacking in the relevant decisions.116 Similarly, it is hard to understand why the Regulation has been deemed inapplicable to links that have been published by search engines. The Directorate holds that this follows from the fact that search engines are caching providers, and caching providers are not ‘service providers’ for the purposes of the Regulation.117 However, this reasoning is somewhat puzzling, since search engines come into consideration here not as addressees of orders, as do ‘service providers’ under the Regulation, but as owners of the web page where the link can be found.118

4.3 Scope of Violations
It could be argued that, within the system set out by the Regulation, protection of third parties’ interests depends, to a great extent and for a number of reasons,119 on the exercise of a certain amount of decisional discretion by AGCOM. One such power consists in the dismissal of applications which appear manifestly unfounded. The Directorate assumes that its task entails, first and foremost, fact checking, to make sure that the work is actually available on the allegedly infringing website,120 and, secondly, a preliminary assessment as to the infringing nature of the use in question. Usually this assessment is performed with caution. On the one hand, the Directorate appears reluctant to allow applications regarding less than perfect copies of the allegedly infringed work, also dismissing several cases in which differences appeared ‘substantial’121 and applying the exception on quotations in a few more instances.122 On the other hand, the Directorate has allowed a broad interpretation of the concept of making a work available on a web page in violation of Italian copyright law,123 including in the notion from the outset the mere provision of links to works and torrent files,124 and more recently pirate internet protocol television (IPTV).125 Following the amendment of the Regulation adopted in October 2018,126 the same concept has been further extended to include the advertisement and offer of products and components (software) principally designed to circumvent technological protection measures applied to video games and corresponding consoles.127
112  On this ground, the Directorate has dismissed applications regarding photographs of objects (5/14/DDA, 31/14/DDA, and 33/14/DDA), a brief description of rules and tricks for online games (17/16/DDA), a title for a radio programme (23/16/DDA, Viva la radio), and a catalogue (32/15/DDA). Little to no explanation is given for the reasons supporting such findings. 113  A few applications have been dismissed for lack of relevant information on the work to be protected. At least one of them raises the doubts expressed in the text, particularly Det. 75/14/DDA. However, the form that has to be filled in to prompt AGCOM’s intervention suggests that such a requirement does not exist (see ). 114  See 85/15/DDA and 7/16/DDA, in which (apparently) the applicant claimed a violation of moral rights alone. 115  AGCOM Regulation as amended (n. 16) Art. 6(1). 116  On the contrary, it is obvious that the Regulation does not apply to printing. See 8/16/DDA. 117  See 25/14/DDA and 67/14/DDA. 118  See Section 3.3. 119  See Sections 3.3 and 3.4. 120  It makes perfect sense that proceedings should not even start if the allegedly infringing work is not (or is no longer) accessible at the URL mentioned in the application (see e.g. 15/14/DDA and 82/14/DDA), if the work is only mentioned (Det. 97/15/DDA), if the website is unavailable (94/14/DDA), or access from Italy is denied (58/15/DDA).

4.4 Remedies
Applications that make it through the preliminary phase lead to the institution of formal proceedings, which end with adjudication by the Commission on Services and Products unless voluntary compliance occurs. In this regard, some very clear trends seem to emerge. First and foremost, cases resulting in blocking orders present recurrent characteristics. The application usually128 comes from a collecting society,129 an association or other entity representing rightholders in a specific industry,130 or a company offering online surveillance and enforcement services.131 The application contains a list of works which have been made available on the allegedly infringing website, together with the relevant URLs.132 However, there have been cases in which apparently only works, without an indication of the corresponding URLs, were mentioned,133 and which ended with a blocking order all the same. The request often claims that the list is just a sample, the infringement extending to many other protected works. In view of all this, AGCOM typically agrees to deal with a case according to the rules on abbreviated proceedings.134 The holder of the domain name, the hosting provider, and the owner of the server are summoned by email.135 As the servers are usually located abroad, access providers
121  See 40/15/DDA; 80/15/DDA; 88/15/DDA. 122  See 90/15/DDA, 101/15/DDA, 5/16/DDA, 6/16/DDA, all of them regarding the exception for quotations. 123  See AGCOM Regulation as amended (n. 16) Art. 6(1). 124  See e.g. 50/14/CSP. 125  See e.g. 28/18/CSP. 126  See Section 3.2. 127  See 331/18/CSP. 128  For an individual application, see e.g. 212/2018/CSP. 129  See e.g. 265/18/CSP. 130  See e.g. 334/18/CSP. 131  See e.g. 332/18/CSP. 132  See e.g. 287/18/CSP. 133  e.g. 3/18/CSP, 15/18/CSP, 47/18/CSP, 68/18/CSP and 147/18/CSP. When this is the case, the resulting statement of objections issued by the Authority might arguably fall short of compelling the notice recipient hosting provider to take down infringing content under Art. 14(1)(b) of the e-Commerce Directive. See RTI v Yahoo! (n. 49), which distinguishes on this matter between passive and active hosting providers. 134  See Section 3.4. 135  To the extent that they can be identified, see Section 3.3 and later in this section.


operating in Italy also have to be summoned. However, this does not take place via email. Taking advantage of the law governing administrative proceedings in general, AGCOM holds that a communication on its own website suffices, considering the large number of service providers that would otherwise have to be given notice individually.136 Holders of domain names normally hide behind anonymity service providers.137 When this is the case, it is all too obvious that they prefer to remain in the shadows138 to avoid liability, particularly if one considers that they have better options available to them than appearing in the proceedings. They can simply take down infringing content if they wish to avoid a blocking order,139 or, if they do not care, they can rely on the fact that AGCOM’s orders until recently140 were of limited efficacy. Since these orders operate at the level of the domain name system,141 they can be easily circumvented either by creating aliases or by letting users take advantage of virtual private networks. Hosting providers and providers of servers never reply. As for access providers, they might not know (considering how they are summoned) and certainly do not seem to care, as there is no trace of their intervention in the proceedings. The outcome of all this is a blacklist of more than 700 domain names, according to information available at the beginning of 2019. This list is likely to grow at a faster pace in the future, considering that the 2018 amended Regulation enables AGCOM to adopt dynamic blocking orders.142

136  It has been so from the very beginning. See 41/14/CSP. 137  See e.g. 332/18/CSP. If they do not, they seem to be more willing to reply and cooperate. However, it is not clear what they have to do to avoid a blocking order. If, by way of example, the application claimed that the infringement was massive and the Authority found that there were many works on the website other than those listed in the application, AGCOM did not consider the voluntary removal of the works in the list sufficient. See 179/18/CSP. It is easy to argue that significantly more has to be done to avoid the order, so much so that the residual infringement, if any, could be deemed neither grave nor massive. This approach raises obvious concerns if the domain name holder qualifies as a hosting provider under Art. 14 of the e-Commerce Directive. Blocking access to its services from the Italian territory on the basis of its failure to remove all infringing content from its servers would run counter to the prohibition on imposing a general obligation to monitor provided for in Art. 15(1) of the same Directive (see Section 2.1.2). 138  Exceptions are very rare and usually insignificant. E.g. in 231/18/CSP the company that registered the domain name revealed itself and replied to the Authority’s communication, claiming that the allegedly infringing content had been deleted. However, AGCOM found that the content was still available and added the domain name to its blacklist. 139  If they do, and the Directorate comes to know, the proceedings end with a dismissal. See e.g. 63/14/DDA. 140  See Section 3.7 and n. 142. 141  So far, AGCOM has never blocked IP addresses. 142  This opportunity was promptly seized by the Authority (see e.g. 334/18/CSP), as it allows considerable saving of time and resources. Previously, to block an alias of a website already included in the blacklist, the whole proceedings had to be repeated, although following the swifter rules provided for abbreviated proceedings (Art. 9). See e.g. 6/18/CSP; and 283/18/CSP, which also offers a sample of the characteristics considered relevant by the Authority to qualify a website as an alias. Now, the Directorate for Media Services, on request from the rightholder, can extend the effects of an already issued blocking order if it deems that the violation denounced is actually the same. Next, all concerned parties are summoned so they can appeal to the Commission for Products and Services. See AGCOM Regulation as amended (n. 16) Art. 9-bis.


As expected, quite the opposite happens when the infringement is clearly fortuitous or episodic. In this situation, rightholders apply individually, as each case is different from the others. People or companies involved in the alleged infringement can be identified and summoned. Very often, they either reply stating their case143 or comply with the request to take down infringing content.144 In the very few cases in this category that AGCOM has adjudicated, no blocking order has ever been issued. Most of the time, the Commission on Services and Products has held the contested websites to be non-infringing145 or has resorted to the traditional rule in dubio pro reo.146 In the remaining decisions, AGCOM has relied on the principle of proportionality to dismiss the case.

4.5  Relevance of the Principle of Proportionality A tiny group of cases show that the Regulation is unfit to provide protection if the applicant does not claim or demonstrate that the infringement is sufficiently relevant to justify a blocking order.147 When this happens, AGCOM dismisses the case and transmits its file to 143  In 110/15/CSP, the respondent persuaded AGCOM that the works had been made under a research contract and that it was owner or co-owner of the copyright. In 243/16/CSP, the respondent showed that the work had been made available on the request of the very same applicant. 144  Most of the time, content is removed by the owner of the website (see e.g. 6/14/DDA and 24/14/ DDA, both regarding newspapers), although there are a few cases in which the owner of the web page (2/15/DDA), the uploader (52/14/DDA; 91/14/DDA; 26/15/DDA), or the hosting provider (41/14/DDA; 78/15/DDA; 87/15/DDA) do so. Not unfrequently, the website hosts a blog, a forum, or a video-sharing platform. When this is the case, it often remains unclear whether its owner or the hosting provider consulted the owner of the web page or the uploader before removing the allegedly infringing contents (see 14/14/DDA; 50/14/DDA; 87/14/DDA; 14/15/DDA; 27/15/DDA; 64/15/DDA; 74/15/DDA; 27/16/DDA), although they certainly did so in the proceedings ended by 91/14/DDA; 82/15/DDA 33/15/DDA; 50/15/ DDA (in all these cases the uploader agreed). It is worth mentioning that hosting providers may help AGCOM in reaching the owner of the website, which then complies without necessarily revealing its identity (see e.g. 54/14/DDA; 2/15/DDA; 43/15/DDA). 145  e.g. in 67/14/CSP AGCOM took note that the website operator had modified its links in such a way as to enable it to resort to the ruling of the CJEU in the Svensson case. 
See also 144/16, in which AGCOM found that the link published on the contested website did not work but, considering that it was made available on a subscription service, gave notice to the judiciary police and to AGCM, the Italian Authority for Competition and Market Guarantees, claiming a violation of the laws against unfair and misleading commercial practices. 146  See 64/14/CSP, in which a contractual dispute between the applicant and the website operator left unclear whether or not content had to be taken down. In 65/14/CSP, the allegedly infringing website made available links pointing to external resources protected by a paywall. In this case, AGCOM considered relevant the fact that, according to blogs and fora, files hosted on the linked website were systematically corrupted. 147  As already mentioned (see Sections 1 and 3.7), blocking orders aimed at access providers are the only alternative available when the server is located outside Italy. If, on the contrary, the server is located within the national borders, AGCOM can order the hosting provider to selectively remove infringing content or, in the event of massive infringement, to disable access to the infringing works. However, in 64/18/CSP the Authority considered that a hosting provider which confines itself to storing information, without playing any role in the transmission or management of content hosted on its network, cannot disable access to individual works due to technical reasons (without any further explanation) and, therefore, can only be ordered to disable access to an entire website. This assumption expands the relevance of the principle of proportionality, which can be relied on to deny immediate redress in a larger number of cases. Its adverse impact on the interests of the rightholders should not be overestimated, though.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

ADMINISTRATIVE ENFORCEMENT IN EUROPE   609 the judiciary police; in other words, no immediate redress will be granted.148 At the core of AGCOM's decisions is the principle of proportionality, which has been used by the authority to weigh different factors, such as the dimension of the infringement, its effect on the economic interests of the applicant, and the nature of the service provided by the infringing website.149 On the one hand, as already mentioned, if the application was made by an association of rightholders or by a major content provider and is aimed at a website providing access to a large repertoire of copyrighted works,150 AGCOM normally considers the case as one of massive infringement and issues a blocking order. There are exceptions, though. It is not easy to make sense of them, particularly because AGCOM's decisions do not provide a precise description of the cases or extensive grounds.151 However, it would appear that the authority refuses to block services that are not clearly meant to infringe or to facilitate infringement. This applies to services used for substantial non-infringing purposes, even though they may be extensively used to infringe.152 For instance, AGCOM has refused to block a web portal,153 a cyberlocker,154 a web writing platform,155 an online messaging service,156 and a URL-shortening service.157 It has also declined to block a website that featured, as illustrations of the products offered, non-creative photographs copied from the outlet of a competitor, on the rather shaky assumption
After receiving a notice from AGCOM, hosting providers probably become 'aware of facts or circumstances from which the illegal activity or information is apparent' and, therefore, become exposed to claims for damages in view of Art. 14 of the e-Commerce Directive. It is submitted that this effect might explain the far more cooperative behaviour of Italian hosting providers within AGCOM's proceedings, which, in turn, makes it extremely rare for AGCOM to adjudicate cases involving their services. 148  See AGCOM Regulation as amended (n. 16) Art. 8(2-bis). 149  AGCOM does not seem to consider relevant whether the applicant has, or has not, tried voluntary notice-and-takedown procedures before asking for the Authority's intervention. From time to time, its decisions let it slip that attempts have been made. See e.g. 38/18/CSP. 150  Applicants usually provide illustrative lists of ten to thirty works and URLs. In the vast majority of its decisions, AGCOM explicitly bases its blocking order on the finding that the works illegally made available on the contested website appear to be many more. There are cases in which the same statement cannot be found. However, this might well be an oversight. Therefore, it would probably be inaccurate to draw conclusions from decisions, e.g. 254/18/CSP, in which the Authority issued a blocking order apparently based on the mere presence on the contested website of eleven audiovisual works. 151  Most of the time, the description of the alleged infringement, as well as the explanation offered to support the finding in favour of the applicant, rely on stereotyped formulas. This might be seen as a violation of the principle that the order must spell out the grounds on the basis of which it has been taken (see Section 3.10). 152  It remains to be seen whether the balancing of interests underlying this approach gets past the concerns raised earlier, in the last paragraph or in Section 3.4. 153  See 5/18/CSP. 154  See 333/18/CSP.
155  See 321/18/CSP. 156  See 72/18/CSP, 73/18/CSP, 248/18/CSP, 306/18/CSP, and 314/18/CSP, all of which concerned public channels of an online messaging service used to distribute newspapers and magazines. 157  See 276/17/CSP.


that it would be disproportionate to 'close' an online shop to protect related rights on pictures shown therein.158 On the other hand, if the application relates to a limited number of works, AGCOM normally dismisses the case. It is unclear precisely where the threshold sits. Making available one159 to four,160 or 'some'161 literary or editorial works, one162 or two163 photographs, a video game,164 twelve tracks from the same album,165 one166 or two167 audiovisual works, or even an entire TV series168 has been deemed insufficient. Overall, it could be argued that, if the infringement is not massive, the number of works that are illegally made available on a website will never be big enough to convince AGCOM that a blocking order is proportionate, unless other factors weigh in the balance. In the case of audiovisual works, it has been considered sufficient that a movie had been made available online while it was still running in cinemas.169 In respect of music recordings, football matches, and TV subscription services, blocking orders have been issued if recordings,170 matches,171 and programmes were made available systematically.172

158  See 174/17/CSP and 175/17/CSP. 159  See 40/15/CSP; 48/15/CSP; 142/15/CSP; 10/16/CSP; 30/16/CSP; 62/16/CSP; 111/16/CSP. 160  See 99/15/CSP. 161  See 218/15/CSP. 162  See 9/16/CSP. 163  See 49/17/CSP. 164  See 68/14/CSP. 165  See 151/17/CSP. 166  See 79/14/CSP; 43/15/CSP; 58/15/CSP; 59/15/CSP; 63/15/CSP. 167  See 24/15/CSP, in which AGCOM did not consider that the list had been provided by the applicant exempli gratia. 168  See 109/14/CSP. 169  See 212/18/CSP. On the same note, 44/17/CSP considered sufficient the availability of a 'significant quantity' of audiovisual works, taking into account that they were new, and that the website had already come to the attention of the Authority for being linked by other websites already included in the blacklist. 170  See 3/18/CSP, 15/18/CSP, 47/18/CSP. 171  See e.g. 126/18/CSP-129/18/CSP. 172  See 61/18/CSP, 62/18/CSP, 63/18/CSP, 66/18/CSP and 67/17/CSP.


Part VI

INTERMEDIARY RESPONSIBILITY, ACCOUNTABILITY, AND PRIVATE ORDERING


Chapter 31

Accountability and Responsibility of Online Intermediaries

Giancarlo Frosio and Martin Husovec

Legal theory is increasingly shifting the discourse from liability to enhanced 'responsibilities' for intermediaries under the assumption that the capacity of online service providers (OSPs) to affect the informational environment is exceptional.1 Hence, academia, policymakers, and society increasingly ascribe a public role to online intermediaries.2 According to Shapiro, 'in democratic societies, those who control the access to information have a responsibility to support the public interest . . . . these gatekeepers must assume an obligation as trustees of the greater good.'3 The discourse focuses more and more on the moral responsibilities of OSPs in contemporary societies and aims at building ethical frameworks for understanding OSPs' responsibilities, grounded, for example, in corporate social responsibility or human rights. Responsible behaviour beyond the law finds justification in intermediaries' corporate social responsibilities and

1 See e.g. European Commission Communication, 'Tackling Illegal Content Online. Towards an enhanced responsibility of online platforms' COM(2017) 555 final, s. 6 (noting 'the constantly rising influence of online platforms in society, which flows from their role as gatekeepers to content and information, increases their responsibilities towards their users and society at large'). 2  Pressure comes increasingly from users as well as from recent lawsuits against platforms allegedly liable for fomenting extremism and radicalization—and related terrorist acts. See 'Orlando nightclub victims' families sue Twitter, Google, Facebook' (CNBC, 21 December 2016) . 3  Andrew Shapiro, The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World We Know (Public Affairs 2000) 225.

© Giancarlo Frosio and Martin Husovec 2020.


614   GIANCARLO FROSIO AND MARTIN HUSOVEC their role in implementing and fostering human rights.4 In the introduction to The Responsibilities of Online Service Providers, Mariarosaria Taddeo and Luciano Floridi noted that—given their prominent role in present-day society—online intermediaries are increasingly expected to act according to current social and cultural values, which raises 'questions as to what kind of responsibilities OSPs should bear, and which ethical principles should guide their actions'.5 Policy approaches might be returning towards implementing moral theories of intermediary liability, rather than utilitarian or welfare theories. In this case, justification for policy intervention would be based on responsibility for the actions of users, as opposed to efficiency or the balance between innovation and harm.6 In Europe, the 'Communication on Online Platforms and the Digital Single Market' puts forward the idea that 'the responsibility of online platforms is a key and cross-cutting issue'.7 In another Communication, the Commission made this goal even clearer by openly pursuing 'enhanced responsibility of online platforms' on a voluntary basis.8 Online platforms would be invested with a duty to 'ensure a safe online environment' against illegal activities.9 As the Commission puts it, the goal is 'to engage with platforms in setting up and applying voluntary cooperation mechanisms aimed at depriving those engaging in commercial infringements of intellectual property rights (IPRs) of the revenue streams emanating from their illegal activities, in line with a "follow the money" approach'.10 Hosting providers—especially platforms—would be called on to actively and swiftly remove illegal materials, instead of merely reacting to complaints.
They would be called on to adopt effective voluntary 'proactive measures to detect and remove illegal content online'11 and are encouraged to do so by using automatic detection and filtering technologies.12 Again, 'online platforms must be encouraged to take more effective voluntary action to curtail exposure to illegal or harmful content' such as incitement to terrorism, child sexual abuse, and hate speech.13 Looking at legal liability rules always tells only half of the story. Legal rules are often only basic expectations which are further developed through market transactions, business decisions, and political pressure. Therefore, the real responsibility landscape is equally determined by a mixture of voluntary agreements, self-regulation, 4  See Emily Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (CUP 2015); Dennis Broeders and others, 'Does Great Power Come with Great Responsibility? The Need to Talk About Corporate Responsibility' in Mariarosaria Taddeo and Luciano Floridi (eds), The Responsibilities of Online Service Providers (Springer 2017) 315–23. 5  Mariarosaria Taddeo and Luciano Floridi, 'New Civic Responsibilities for Online Service Providers' in Taddeo and Floridi (eds) ibid. 1. 6  For further discussion on justifications for intermediary liability, see Chapter 3, Section 4. 7 European Commission Communication, 'Online Platforms and the Digital Single Market: Opportunities and Challenges for Europe' COM(2016) 288 Final, 9. 8  See Communication (n. 1).    9  ibid. s. 3. 10  See Communication (n. 7) 8. 11  Communication (n. 1) s. 3.3.1 (noting that adopting such voluntary proactive measures does not lead online platforms to automatically lose the hosting liability exemption provided by the e-Commerce Directive). 12  ibid. s. 3.3.2. 13  Communication (n. 7) 9.


ACCOUNTABILITY AND RESPONSIBILITY OF ONLINE INTERMEDIARIES   615 corporate social responsibility, and ad hoc deal-making. Accountability schemes can differ significantly, ranging from legal entitlements to request assistance in enforcement to entirely voluntary private-ordering schemes. In this chapter, we try to provide a mapping of these approaches in order to illustrate the richness and trade-offs associated with such measures. Miscellaneous policy and enforcement tools, such as monitoring and filtering, graduated response, payment blockades and follow-the-money strategies, private DNS content regulation, and online search manipulation, are discussed to complement the typical legal liability view of the regulation of intermediaries.

1.  Tools for Increasing Responsibility

Miscellaneous forms of 'responsible' behaviour beyond the law—such as codes of conduct, three-strike schemes, voluntary filtering, online search manipulation, follow-the-money strategies, and private domain name system (DNS) content regulation—reflect a globalized, ongoing move towards privatization of law enforcement online through proactive actions and algorithmic tools that span all subject matter relevant to intermediary liability online. Their common denominator is that they go beyond the baseline legal expectations created by the legal liability framework. Inherently, their common trajectory is towards a more proactive tackling of illegal or otherwise objectionable content. However, these policies often differ in the way in which they come about. Even the same type of enforcement arrangement, such as graduated response, can be the result of a private-ordering scheme, of ad hoc governmental policy administered by agencies, or of the application of legal claims to assistance in enforcement. In this section, we provide a brief mapping of different arrangements which have been developed over the years. In the next section, we then highlight how the mechanisms behind these arrangements have significant consequences for the parameters of the rule of law, due process, transparency, or cost allocation.

1.1  Graduated Response

So-called 'graduated response' or 'three-strike' regulations—seeking to cut off the household internet connections of repeat infringers—emerged as an early form of 'responsible' behaviour of OSPs. In some instances, graduated response arrangements have been judicially or legislatively mandated. Often, they result from voluntary arrangements. Although little is known regarding the specifics of these agreements, rightholders might attempt to leverage their content and would only license if the providers implemented a disconnection strategy. France, through the HADOPI Act, and other countries such as New Zealand, South Korea, Taiwan, and the UK have mandated graduated response schemes,


actually managed by administrative agencies, rather than intermediaries.14 However, industry-led self-regulation makes up the largest part of graduated response schemes, as in the case of the 'six strikes' Copyright Alert System (CAS), discontinued in January 2017.15 CAS implemented a system of multiple alerts. After a fifth alert, ISPs were allowed to take 'mitigation measures' to prevent future infringement.16 Mitigation measures included 'temporary reductions of Internet speeds, temporary downgrade in Internet service tier or redirection to a landing page until the subscriber contacts the ISP to discuss the matter or reviews and responds to some educational information about copyright, or other measures (as specified in published policies) that the ISP may deem necessary to help resolve the matter'.17 In Australia, an industry-negotiated graduated response code was submitted to the Australian Communications and Media Authority (ACMA) for registration as an industry code, requiring ISPs to pass on warnings to residential fixed account holders who were alleged to have infringed copyright.18 In Europe, Eircom was one of the first European ISPs to implement a voluntary Graduated Response Protocol under which Eircom would issue copyright infringement notices to customers, after a settlement had been reached between record companies and Eircom.19 The Irish Supreme Court later upheld the validity of the scheme against an Irish Data Protection Commissioner's enforcement notice requiring Eircom to cease its operation of the Protocol.20 More recently, an agreement has been negotiated between major British ISPs and rightholders—with the support of the UK government—under the name of Creative Content UK.21 This voluntary scheme would implement four educational-only notices or alerts sent by the ISPs to their subscribers based on IP addresses supplied by the rightholders, where the IP address is alleged to have been used to transmit
infringing content.22 14  See Law no. 2009-669 of 12 June 2009, promoting the dissemination and protection of creative works on the Internet (aka HADOPI Act) (Fr.); Law no. 2009-1311 of 28 October 2009, on the criminal protection of literary and artistic property on the Internet, Arts 7 and 10 (Fr.) (providing internet suspension sanctions for those using the internet to commit infringement and obligations for the owners of internet access to secure their internet access); Copyright (Infringing File Sharing) Regulations 2011 (NZ); Copyright Act as amended on 22 January 2014, Art. 90-4(2) (Taiwan); Digital Economy Act 2010 (UK) (however, the ‘obligations to limit Internet access’ have not yet been implemented). 15  David Kravets, ‘RIP, “Six Strikes” Copyright Alert System’ (ArsTechnica, 30 January 2017). 16  See Center for Copyright Information, Copyright Alert System (CAS) . 17 ibid. 18  See Communications Alliance Ltd, ‘C653:2015—Copyright Notice Scheme Industry Code’ (April 2015) . 19  See EIR, ‘Legal Music—Frequently Asked Questions’ . 20 See EMI v Data Protection Commissioner [2013] IESC 34 (Ire.). 21  Creative Content UK . 22 ibid.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi


1.2  Changes to Online Search Results

Online search manipulation—and so-called demotion—enforces sanitization of presumptively illicit activities online through voluntary measures and private ordering. Demotion spans multiple subject matters and allegedly illicit online activities. Starting from the most recent effort of this kind, under the aegis of the UK Intellectual Property Office, representatives from the creative industries and leading UK search engines developed a Voluntary Code of Practice dedicated to the removal of links to infringing content from the first page of search results.23 However, Google has been demoting allegedly pirate sites for some time now. In 2012, Google altered its PageRank search algorithm to take into account the number of Digital Millennium Copyright Act (DMCA)-compliant notices for each website.24 Shortly thereafter, in 2014, Google started to demote autocomplete predictions returning search results containing DMCA-demoted sites.25 Voluntary measures have traditionally been implemented with regard to manifestly illegal content, such as child pornography.26 In addition, Google has adopted specific self-regulatory measures for revenge porn, which Google delists from internet searches.27 Other major platforms followed Google's lead. After being ordered by a Dutch court to identify revenge porn publishers in the past,28 Facebook decided to introduce photo-matching technology to stop revenge porn and proactively filter its reappearance.29 Finally, search manipulation and demotion began to be applied to curb extremism and radicalization. Plans have also been revealed for a pilot scheme to tweak searches to make counter-radicalization videos and links more prominent.30

23  See Intellectual Property Office, 'Press Release: Search Engines and Creative Industries Sign Anti-Piracy Agreement' (20 February 2017) . 24 See Annemarie Bridy, 'Copyright's Digital Deputies: DMCA-plus Enforcement by Internet Intermediaries' in John Rothchild (ed.), Research Handbook on Electronic Commerce Law (Edward Elgar 2016) 200. 25 ibid. 26 See Anchayil Anjali and Arun Mattamana, 'Intermediary Liability and Child Pornography: A Comparative Analysis' (2010) 5 J. of Int'l Comm. L. Tech. 48. 27  See Joanna Walters, 'Google to Exclude Revenge Porn from Internet Searches?' (The Guardian, 21 June 2015) . 28  See Agence France-Press, 'Facebook Ordered by Dutch Court to Identify Revenge Porn Publisher' (The Guardian, 26 June 2015) . 29  See Emma Grey Ellis, 'Facebook's New Plan May Curb Revenge Porn, but Won't Kill It' (Wired, 6 April 2017). 30  See Ben Quinn, 'Google to Point Extremist Searches Towards Anti-radicalization Websites' (The Guardian, 2 February 2016) .



1.3  Payment Blockades and Follow the Money

Payment blockades—notice-and-termination agreements between major rightholders and online payment processors—and 'voluntary best practices agreements' have been applied widely as part of 'a long-term, evolving strategy on the part of corporate copyright and trademark owners'.31 In its Joint Strategic Plan for Intellectual Property Enforcement, the US government backed up voluntary measures and fully endorsed a 'follow the money' strategy.32 In the Communication 'Towards a Modern, More European Copyright Framework', the European Commission endorsed a similar 'follow the money' approach.33 According to the Commission, follow-the-money mechanisms should be based on a self-regulatory approach through the implementation of codes of conduct, such as the Guiding Principles for a 'Stakeholders' Voluntary Agreement on Online Advertising and IPR'.34 As stated by the principles, 'the purpose of the agreement is to dissuade the placement of advertising on commercial scale IP infringing websites and apps (eg on mobile, tablets, or set-top boxes), thereby minimising the funding of IP infringement through advertising revenue'.35 Payment processors like MasterCard and Visa have been pressured to act as IP enforcers, extending the reach of IP law to websites operating from servers and physical facilities located abroad.36 In 2011 and 2012, American Express, Discover, MasterCard, Visa, PayPal, PULSE, and Diners Club entered into a best practice agreement with thirty-one major rightholders.37 The voluntary agreement was implemented through the launch of the Payment Processor Initiative run by the International Anti-Counterfeiting Coalition (IACC).38 There are a number of instances where payment intermediaries' terms of service have been used as pressure points against protected speech.
Inter alia, payment blockades deprived WikiLeaks of 95 per cent of its revenue when PayPal, Moneybookers, Visa, and MasterCard stopped accepting public donations. No legal proceedings were ever actually initiated against WikiLeaks. In Backpage v Dart, Tom Dart—the Sheriff of Cook County, Illinois—sent letters to Visa and MasterCard demanding that they cease doing business with Backpage.com due to content in the 'adult services' section of the classified

31  See Annemarie Bridy, 'Internet Payment Blockades' (2015) 67 Fla L. Rev. 1523. 32  See Office of the Intellectual Property Enforcement Coordinator, 'Supporting Innovation, Creativity & Enterprise: Charting a Path Ahead, US Joint Strategic Plan for Intellectual Property Enforcement FY 2017–2019' (2017). 33 See European Commission Communication, 'Towards a Modern More European Copyright Framework' COM (2015) 260 final 10–11. 34  European Commission, 'The Follow the Money Approach to IPR Enforcement—Stakeholders' Voluntary Agreement on Online Advertising and IPR: Guiding Principles' . 35 ibid. 36  ibid. 1523. 37  See 'Best Practices to Address Copyright Infringement and the Sale of Counterfeit Products on the Internet' (16 May 2011). 38  See Bridy (n. 31) 1549.


ads site.39 The credit card companies both complied, cutting off services to the entire site's worldwide operations.40 Backpage claimed that the sheriff's informal censorship pressure amounted to a prior restraint on speech.41 The case was finally appealed to the Seventh Circuit, which reversed the previous decision and upheld Backpage's prior restraint claim.42 As it turned out, however, a single action from a governmental official—lacking any due process scrutiny—was potentially capable of jeopardizing an online business operating worldwide. The Backpage case highlights how intermediaries often face business incentives that make them more likely to yield to pressure. In the United States, a prior restraint claim would not be directly actionable against intermediaries—if governmental pressure cannot be proved—as it applies only against public actors. Under European law, there might be room to claim interference with online providers' freedom to conduct a business,43 but this is costly and uncertain to prove. Other tools which were developed to tackle money flows are information disclosures against payment providers. The European system of information disclosures against third parties has been used in litigation to unveil the identity of potential infringers by invoking these measures against banking institutions. Although the courts recognized the interests in secrecy and data protection as important, these interests have to be balanced against IP rightholders' right to an effective remedy.44

1.4  Private DNS Content Regulation

Domain hopping evades law enforcement by moving from one ccTLD (country code top-level domain) or gTLD (generic top-level domain) registrar to another, thus driving up the time and resources spent on protecting IP rights. In this context, responsible behaviour would rely on stewards of the internet's core technical functions, such as ICANN, and implicates internet infrastructure and governance.45 ICANN appears to be increasingly and directly involved in online content regulation through its contractual facilitation of a 'trusted notifier' copyright-enforcement programme. ICANN's contractual architecture for the new gTLDs embeds support for private, DNS-based


content regulation on behalf of copyright holders—and, potentially, other 'trusted' parties—imposing on registry operators and registrars an express prohibition on IP infringement and obligations including suspension of the domain name.46 Through this contractual framework, ICANN facilitated voluntary enforcement agreements between DNS intermediaries and rightholders, such as the DNA's Healthy Domains Initiative. The registry operator agrees, if 'the domain clearly is devoted to abusive behaviour . . . in its discretion [to] suspend, terminate, or place the domain on registry lock, hold, or similar status' within ten business days of the complaint.47 In general, as Bridy explained, 'in creating that architecture, ICANN did nothing to secure any procedural protections or uniform substantive standards for domain name registrants who find themselves subject to this new form of DNS regulation'.48

1.5  Standardization

The European Commission has also increased pressure through its soft law by creating a set of expectations that intermediaries should follow to avoid further regulation. In the Communication 'Tackling Illegal Content Online', this point is reinforced by endorsing the view that:

In order to ensure a high quality of notices and faster removal of illegal content, criteria based notably on respect for fundamental rights and of democratic values could be agreed by the industry at EU level. This can be done through self-regulatory mechanisms or within the EU standardisation framework, under which a particular entity can be considered a trusted flagger, allowing for sufficient flexibility to take account of content-specific characteristics and the role of the trusted flagger. Other such criteria could include internal training standards, process standards and quality assurance, as well as legal safeguards as regards independence, conflicts of interest, protection of privacy and personal data, as a non-exhaustive list.49

In addition, especially in the domain of terrorist propaganda, extremism, and hate speech, as mentioned, the European Commission 'encourages that the notices from trusted flaggers should be able to be fast-tracked by the platform' and promotes user-friendly anonymous notification systems.50 The goal of these mechanisms is to standardize procedures, relationships with the notifying parties, and the technologies used to

46  See ICANN-Registry Agreement (2013) s. 2.17 Specification 11; ICANN Registrar Accreditation Agreement (2013) s. 3.18. 47  See Donuts.Domains, 'Characteristics of a Trusted Notifier Program' . 48  Annemarie Bridy, 'Notice and Takedown in the Domain Name System: ICANN's Ambivalent Drift into Online Content Regulation' (2017) Wash. & Lee L. Rev. 1345, 1386. 49  See Communication (n. 1) s. 3.2.1. 50  ibid. ss. 3.2.1. and 3.2.3.


implement them in order to further increase the efficiency of law enforcement within the existing legal framework.

1.6  Codes of Conduct

In the aftermath of the refugee crisis, the fight against online hate speech became an important political issue. In a wave of regulatory euphoria, German political leaders were perhaps the most inclined to regulate the removal of hate speech. Because the self-regulation expected by governments appeared to lag behind, Germany, and later some other EU Member States, threatened to bring in laws imposing heavy fines on platforms failing to take down hate-based criminal content.51 In response to this, the European Commission acted swiftly by coordinating EU-wide self-regulatory efforts through which online platforms should be directed to fight hate speech and incitement to terrorism, and to prevent cyber-bullying.52 As an immediate result of this new policy trend, in 2016 the Commission agreed with all major online hosting providers—including Facebook, Twitter, YouTube, Microsoft, Instagram, Snapchat, and DailyMotion—on a code of conduct that endorses a series of commitments to combat the spread of illegal hate speech online in Europe.53 The code spells out commitments such as faster notice and takedown for illegal hate speech, which will be removed within twenty-four hours, or special channels for notices from governments and NGOs to remove illegal content.54 In partial response to this increased pressure from the EU regarding the role of intermediaries in the fight against online terrorism, major tech companies—Facebook, Microsoft, Twitter, and YouTube—announced that they will begin sharing hashes of apparent terrorist propaganda.55 The Code of Conduct for hate speech is not the only EU-brokered self-regulatory mechanism increasing responsibility. Historically, the first was the Memorandum of

51  See e.g. the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.). 52  See Communication (n. 7) 10. Several other documents coming out of the EU on anti-radicalization and countering extremism, including the UK Counter Extremism Strategy and the EU Parliament’s ‘Civil Liberties committee draft report on anti-radicalization’, emphasize a stronger role for intermediaries in policing online content. See European Commission, ‘Proposal for a Directive on Combating Terrorism and Replacing Council Framework Decision 2002/475/JHA on Combating Terrorism’ (2  December 2015) COM(2015) 0625 final; European Parliament, ‘Draft Report on Prevention of Radicalization and Recruitment of European Citizens by Terrorist Organizations’ (1 June 2015) 2015/2063(INI); Home Department (UK), Counter-Extremism Strategy (Cmd 9148, 2015). 53  See European Commission Press Release, ‘European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech’ (31 May 2016) . 54 ibid. 55  See ‘Google in Europe, Partnering to Help Curb the Spread of Terrorist Content Online’ (Google Blog, 5 December 2016) .


622   GIANCARLO FROSIO AND MARTIN HUSOVEC

Understanding in the area of trade mark infringements.56 In 2018, the European Commission adopted a new Code of Practice against disinformation.57

1.7  Filtering

Filtering and proactive monitoring have increasingly been sought—and deployed—as an online enforcement strategy.58 Proactive monitoring emerged first—and largely—as a private-ordering approach following pressure from rightholders and governments to purge the internet of allegedly infringing content and illegal speech. In the midst of major lawsuits launched against them,59 YouTube and Vimeo felt compelled to implement filtering mechanisms on their platforms on a voluntary basis. Google launched Content ID in 2008.60 Vimeo adopted Copyright Match in 2014.61 Both technologies rely on digital fingerprinting to match an uploaded file against a database of protected works provided by rightholders.62 Technologies from these initiatives inspired part of the solutions debated within the 2019 EU Copyright Reform. According to its Article 17, selected providers are subject to a preventive obligation if they fail to conclude licensing agreements and are given the necessary information to trigger such technologies. According to some, this effectively means the imposition of filtering content-recognition technologies to prevent the availability of infringing content.63 The relationship between automated or algorithmic filtering and fair use is heavily debated in the literature. Julie Cohen and Dan Burk argued that fair use cannot be programmed into an algorithm, so that institutional infrastructures will always be required instead.64 In general, it was noted that ‘the design of copyright enforcement robots encodes a series of policy choices made by platforms and rightsholders and, as a result, subjects online speech and cultural participation to a new layer of private ordering and private control’.65 According to Matthew Sag, automatic copyright-filtering systems

56  See European Commission, ‘Memorandum of Understanding on online advertising and IPR’ (May 2011) . 57  See European Commission, ‘Code of Practice on Disinformation’ (28 September 2018) .
58  See Chapters 28 and 29. 59 See Viacom Int’l v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) (US); Capitol Records LLC v Vimeo, 826 F.3d 78 (2d Cir. 2015) (US). 60  See YouTube, ‘How Content ID Works’ . 61  See Chris Welch, ‘Vimeo Rolls Out Copyright Match to Find and Remove Illegal Videos’ (The Verge, 21 May 2014) . 62  YouTube (n. 60). 63  See Chapter 28, Section 4.2. 64 See Dan Burk and Julie Cohen, ‘Fair Use Infrastructure for Copyright Management Systems’, Georgetown Public Law Research Paper 239731/2000 (2000) . 65  See Matthew Sag, ‘Internet Safe Harbors and the Transformation of Copyright Law’ (2017) 93 Notre Dame L. Rev. 499, 538.


‘not only return platforms to their gatekeeping role, but encode that role in algorithms and software’, and fair use only nominally applies online.66 On the other hand, Niva Elkin-Koren67 and Husovec68 argued that technologies might be the only way to address the concerns of overblocking on a large scale and with the necessary speed. YouTube and Facebook have been using other matching tools to filter ‘extremist content’.69 In this context, tech companies plan to create a shared database of unique digital fingerprints—known as hashes—that can identify images and videos promoting terrorism.70 When one company identifies and removes such a piece of content, the others will be able to use the hash to identify and remove the same piece of content from their own networks.71 Similar initiatives, equally relying on hashing technologies, have also been implemented in the area of child abuse material. PhotoDNA, Microsoft’s technology, has been widely used to find such images and stop their distribution.72 The technology is used by the Internet Watch Foundation, which operates its own dedicated internet crawler,73 and by private firms, such as Microsoft, Twitter, Google, and Facebook, for some of their own products.74 The European Commission would like to provide a regulatory framework for these initiatives, with special emphasis on tackling the dissemination of terrorist content online. In a recent Recommendation, the Commission singled out automated filtering as the optimal policy solution:

Hosting service providers should take proportionate and specific proactive measures, including by using automated means, in order (1) to detect, identify and expeditiously remove or disable access to terrorist content and (2) to immediately prevent content providers from re-submitting content which has already been

66 ibid. 67  See Niva Elkin-Koren, ‘Fair Use by Design’ (2017) 64 UCLA L. Rev. 22. 68 See Martin Husovec, ‘The Promises of Algorithmic Copyright Enforcement: Takedown or Staydown? Which is Superior? And Why?’ (2018) Colum. J. of L. & Arts 53. 69  See Joseph Menn and Dustin Volz, ‘Exclusive: Google, Facebook Quietly Move Toward Automatic Blocking of Extremist Videos’ (Reuters, 25 June 2016) . 70  Olivia Solon, ‘Facebook, Twitter, Google and Microsoft Team up to Tackle Extremist Content’ (The Guardian, 6 December 2016) . 71 See ‘Partnering to Help Curb Spread of Online Terrorist Content’ (Facebook Newsroom, 5 December 2016) . 72  See Microsoft, ‘PhotoDNA’ . 73 See ‘Using Crawling and Hashing Technologies to Find Child Sexual Abuse Material—the Internet Watch Foundation’ (NetClean, 11 February 2019) . 74  See Wikipedia, ‘PhotoDNA’ .


removed or to which access has already been disabled because it is considered to be terrorist content.75

A proposal for a Regulation on preventing the dissemination of terrorist content online endorses similar principles and is under consideration before the EU Parliament.76
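The shared hash-database arrangement described above can be sketched in a few lines of code. This is a toy illustration only: the class and names are hypothetical, and real deployments such as PhotoDNA or the industry hash-sharing database rely on perceptual fingerprints that survive re-encoding and cropping, whereas this sketch uses exact SHA-256 digests.

```python
import hashlib


class SharedHashDatabase:
    """Toy model of a cross-platform hash-sharing scheme (hypothetical).

    One platform flags a piece of content; every participating platform
    can then check uploads against the shared set of fingerprints.
    """

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Exact cryptographic hash, used here only for illustration.
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes) -> str:
        """Called by the platform that first identifies the content."""
        digest = self.fingerprint(content)
        self._hashes.add(digest)
        return digest

    def is_flagged(self, content: bytes) -> bool:
        """Called by other platforms at upload time."""
        return self.fingerprint(content) in self._hashes


db = SharedHashDatabase()
db.flag(b"known propaganda video bytes")

# A second platform screening the identical file gets a match ...
print(db.is_flagged(b"known propaganda video bytes"))  # True
# ... but any byte-level change defeats an exact hash, which is why
# production systems use perceptual hashing instead.
print(db.is_flagged(b"slightly re-encoded copy"))      # False
```

The gap between the last two calls is precisely the policy-relevant detail: whether a filter catches only identical copies or also transformed ones determines how much lawful, transformative material risks being swept up.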

1.8  Website-Blocking

Another enforcement tool popularized in the area of IP law is the website-blocking measure.77 These injunctions have led to considerable case law in some of the Member States. Over the years, the courts have tried to flesh out the conditions and the legal and technical modalities under which such orders should be available at the national level in the European Union.78 These measures were then sometimes adopted in national law by means of administrative regulations entrusting authorities with special powers to block websites under specific conditions.79 Even before these court-imposed injunctions entered the European landscape, a number of providers were engaging in voluntary website-blocking schemes. Perhaps the most prominent of these was the anti-child abuse programme operated by the Internet Watch Foundation (IWF).80 In 2002, the IWF started distributing its URL list for the purposes

75  European Commission, ‘Recommendation on measures to effectively tackle illegal content online’ C(2018) 1177 final. 76  See European Parliament, ‘Legislative resolution of 17 April 2019 on the proposal for a regulation on preventing the dissemination of terrorist content online’ [2019] P8_TA-PROV(2019)0421. 77  See, inter alia, Chapters 4, 16, 20, and 29. 78  See Martin Husovec and Lisa Van Dongen, ‘Website Blocking, Injunctions and Beyond: View on the Harmonization from the Netherlands’ [2017] 7 GRUR Int. 580; Pekka Savola, ‘Proportionality of Website Blocking: Internet Connectivity Providers as Copyright Enforcers’ (2014) 5(2) JIPITEC 116, 116–38. 79 See e.g. AGCOM Regulations regarding Online Copyright Enforcement, 680/13/CONS, 12 December 2013 (It.) (providing AGCOM with administrative power to enforce online copyright infringement); Royal Legislative Decree no. 1/1996, enacting the consolidated text of the Copyright Act, 12 April 1996 (as amended by the Law No. 21/2014, 4 November 2014) (Sp.)
(creating an administrative body—the Second Section of the Copyright Commission (CPI)—which orders injunctions against information society services who infringe copyright); Omnibus Bill no. 524 of 26 June 2013, amending provisions in various laws and decrees including Law no. 5651 ‘Regulation of publications on the internet and suppression of crimes committed by means of such publications’, Law No. 5809 ‘Electronic Communications Law’ and others (Tur.) (empowering the Presidency of Telecommunications and Communications with broad administrative enforcement prerogatives online); Federal Law no. 139-FZ, on the protection of children from information harmful to their health and development and other legislative acts of the Russian Federation (aka ‘Blacklist law’), 28 July 2012 (Rus.) (putting the Roskomnadzor in charge of the Registry and site-blocking enforcement); Act on the establishment and operation of Korea Communications Commission (KCC) last amended by Act no. 11711 of 23 March 2013 (Kor.) (establishing the KCC implementing deletion or blocking orders according to the request and standards of the Korea Communications Standards Commission, also instituted by the same law). For further in-depth discussion of administrative enforcement of IP rights online, see Chapter 30. 80 See IWF, ‘URL List Policy’ .


of implementing blocking or filtering solutions.81 Next to internet access providers, a number of other technology companies voluntarily subscribe to the list.82
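The URL-list mechanism just described can be illustrated with a minimal sketch. All entries and names below are hypothetical; in practice the IWF list is distributed confidentially to subscribing providers and applied at the network level, often alongside whole-domain blocks.

```python
from urllib.parse import urlsplit

# Hypothetical blocklist entries in the style of a distributed URL list.
BLOCKED_URLS = {
    ("bad.example.com", "/abuse/page1"),
    ("bad.example.com", "/abuse/page2"),
}
BLOCKED_HOSTS = {"entirely-blocked.example.net"}


def is_blocked(url: str) -> bool:
    """Return True if a provider subscribing to the list should
    refuse to serve this URL."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if host in BLOCKED_HOSTS:
        return True  # whole-site block
    return (host, parts.path) in BLOCKED_URLS  # page-level block


print(is_blocked("http://bad.example.com/abuse/page1"))  # True
print(is_blocked("http://bad.example.com/other"))        # False
```

The distinction between page-level and whole-site blocking in the sketch mirrors the proportionality debate in the case law: the narrower the match, the less lawful content is collaterally suppressed, but the easier the block is to evade.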

2.  Mechanisms and Legal Challenges

Having reviewed the most significant ways in which the landscape of responsibilities is shifting beyond the baseline imposed by legal liability, it is time to highlight how these shifts result from different mechanisms.

2.1  Market and Private Ordering

In the area of IP in particular, much of the increased responsibility is a result of private ordering achieved through markets. Broadly speaking, this happens for two reasons: either it is in an intermediary’s self-interest to implement such enforcement tools, or doing so appears rational given its business dealings with rightholders. As for the first category, a number of factors contribute to a self-interest in increasing one’s own responsibility.83 First is user experience. Illegal content often misleads users or attempts to defraud them. For instance, it makes commercial sense for a newspaper to remove abusive or spam comments because they can hurt users’ feelings or expose them to fraud. If an environment is dominated by offensive comments, many readers are discouraged from contributing themselves,84 and this is bad for the intermediaries’ business. Second is credibility and reputation, which services often strive to achieve. More accurate user content is more competitive and has better potential for attracting advertising or other investment. For instance, Yelp, despite having no legal obligation to do so, incorporated a right-to-reply into its review service after public pressure from the business community.85 Perhaps more typical are situations in which increased responsibility results from market transactions. Some rightholders might be in a position to cut deals with platforms, or to leverage their existing business relationships. To give an example, Amazon, which asks its users to review their purchasing experience with sellers, is in a business relationship

81 ibid. 82  IWF, ‘IWF URL List recipients’ . 83  See Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But Not Liable? (CUP 2017) 13. 84 For an overview of industry practices and their corresponding business reasons, see Emma Goodman, Online Comment Moderation: Emerging Best Practices (WAN-IFRA 2013) .
85  See Claire Miller, ‘The Review Site Yelp Draws Some Outcries of Its Own’ (New York Times, 3 March 2009); Claire Miller, ‘Yelp Will Let Businesses Respond to Web Reviews’ (New York Times, 10 April 2009).


with both sellers and users (buyers). Sellers will certainly voice their concerns about fraudulent reviews in negotiations over conditions of sale, and failure to respond to such demands could drive them away from Amazon to its competitors. Provided that the market is competitive, Amazon must internalize the harm to its customers by taking action. A different type of rightholder is the existing business partner, which has other leverage points. It has been observed in many countries that voluntary enforcement schemes were usually initiated when intermediaries such as internet access providers tried to vertically integrate into markets where they had to do business with major rightholders and license their content (e.g. video on demand). Licence agreements then often served as a tool for negotiating greater enforcement efforts by the same intermediaries for their other services.86 The measures discussed earlier, especially filtering and changes to online searches, could be said to have resulted from these incentives. YouTube’s Content ID was a win-win solution for YouTube, which was not interested in continuous takedown of content and needed a way credibly to monetize videos and increase collaboration with rightholders. The downside of these privately agreed solutions is that they happen entirely in the dark, and thus the public has very little information about them. The terms of their operation are often confidential. For human rights law especially, this creates a challenge because, without governmental intervention, subjecting these arrangements to legal safeguards is more difficult. Human rights were, after all, designed to protect against the state.

2.2  Corporate Social Responsibility

Corporate social responsibility theory has been ported to cyberspace to extend human rights principles to non-public bodies, which operate largely outside the remit of traditional human rights law.87 Arguments have been made that obligations pertaining to states—such as those endorsed by the UN Human Rights Council’s declaration of internet freedom as a human right88—should be extended to online platforms as well.89

86  e.g. in the Netherlands, US rightholders, such as Disney and Warner, attempted to leverage their rights to content when some Dutch providers decided to start providing video on demand. Rightholders were reported to license only if the providers implemented some form of disconnection strategy. See Door A. Vermeer, ‘Vrije internettoegang ook in Nederland onder vuur’ (Bits of Freedom, 4 January 2011) (Dutch provider @Home, currently Ziggo, in a press release from 2006 stated that, in the course of a VOD-deal, they also agreed to a three-strike regime). 87  See Laidlaw (n. 4) (noting that ultimately, however, the largely voluntary nature of corporate social responsibility instruments makes it a problematic candidate as a governance tool for international internet gateways (IIGs) and freedom of speech). 88 See UN Human Rights Council, ‘Resolution on the Promotion, Protection and Enjoyment of Human Rights on the Internet’ (2012). 89  See Florian Wettstein, ‘Silence as Complicity: Elements of a Corporate Duty to Speak out Against the Violation of Human Rights’ (2012) 22(1) Business Ethics Quarterly 37, 37–61; Stephen Chen, ‘Corporate Responsibilities in Internet-Enabled Social Networks’ (2009) 90(4) J. of Business Ethics 523, 523–36.


Other international instruments to that effect have been identified in the Declaration of Human Duties and Responsibilities,90 the preamble to the UN Norms on the Responsibilities of Transnational Corporations and Other Business Enterprises,91 and the UN Guiding Principles on Business and Human Rights.92 In 2014, the UN Human Rights Council adopted a resolution on the promotion, protection, and enjoyment of human rights on the internet, which also addressed a legally binding instrument on corporations’ responsibility to ensure human rights.93 In the EU, the Directive on the disclosure of non-financial and diversity information by certain large undertakings and groups prescribes a certain level of transparency and accountability for large companies, at least where it concerns ‘environmental, social and employee matters, respect for human rights, anti-corruption and bribery matters’.94 Recital 9 directly references the UN Global Compact and the Guiding Principles on Business and Human Rights implementing the UN ‘Protect, Respect and Remedy’ Framework. This non-financial performance information should help investors, consumers, policymakers, and other stakeholders to evaluate large companies in the economy and potentially, if indirectly, encourage them to develop a responsible approach to business.95 Corporate social responsibility is sometimes hard to distinguish from another reason why intermediaries have increased their responsibility—the desire to avoid regulation. A good example here is Facebook’s increased focus on tackling the spread of disinformation.96 Unlike in other areas, the risk of Facebook being held liable for disinformation is often not too severe because such information is not always illegal. At the same time, not acting could provide grounds for legislative intervention.
Facebook, however, sells its efforts as part of its ambition to be a good citizen—its corporate social responsibility. The obvious downside of the corporate social responsibility approach is that it does not prescribe any specific steps and, rather, tries to create an environment in which companies will act in a responsible way. The expectations are often very vague and thus hard to measure or evaluate.

90  See UNESCO, Declaration of Human Duties and Responsibilities (Valencia Declaration) (1998). 91 See UN, Norms on the Responsibilities of Transnational Corporations and Other Business Enterprises (13 August 2003). 92  See UN, Human Rights, Office of the High Commissioner, ‘Guiding Principles on Business Human Rights: Implementing the United Nations “Protect, Respect, and Remedy” Framework’ (2011) [hereinafter UN GPBHRs]. 93  See UN Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ A/HRC/RES/26/13 (20 June 2014). 94  Directive 2014/95/EU of the European Parliament and of the Council of 22 October 2014 amending Directive 2013/34/EU as regards disclosure of non-financial and diversity information by certain large undertakings and groups [2014] OJ L330/1, Art. 29a (emphasis added). See also ibid. Art. 19a. 95  This information should be available in company reports starting from 2018. 96  See e.g. Adam Mosseri, ‘Working to Stop Misinformation and False News’ (Facebook for Media, 7 April 2017) .



2.3  Involuntary Cooperation in IP Rights Enforcement

Increasingly, governments—and interested third parties such as intellectual property rightholders—try to coerce online intermediaries into implementing voluntary measures and bearing much of the risk of online enforcement. Husovec has argued that European Union law increasingly forces internet intermediaries to work for the rightholders by making them accountable even if they are not tortiously liable for the actions of their users.97 According to Husovec, the shift from liability to accountability has occurred by derailing injunctions from the tracks of tort law.98 The practical outcome is that rightholders can potentially ask for all sorts of help in enforcing their rights without having to argue about what the intermediaries did wrong, because the sole reason for their involvement is that they are in a position which attracts responsibility as such. From the examples discussed, we can see a number of enforcement tools originating in this mechanism; website-blocking orders are the primary example. The approach of adding duties of responsibility beyond those provided in the liability framework is increasingly present in policy debates. The UK government’s Online Harms White Paper published in 2019 reinforces this discourse by proposing a new duty of care towards users, holding companies to account for tackling a comprehensive set of online harms, ranging from illegal activity and content to behaviours which are harmful but not necessarily illegal.99 The goal of the proposal is to set out ‘high-level expectations of companies, including some specific expectations in relation to certain harms’.100 Violation of the duty of care would be assessed separately from liability for particular items of harmful content.
This ‘systemic form of liability’ essentially superimposes a novel duty to cooperate on the providers and turns it into a separate form of responsibility, which is enforceable by public authorities by means of fines.101 At the same time, it leaves the underlying responsibility for individual instances of problematic content intact.

2.4  Public Deal-Making

One of the most prevalent mechanisms observed in the responsibility landscape is the phenomenon of deal-making with public authorities. As a result of liability safe harbours, providers are partially freed from responsibility for their users’ content. Thus, they effectively have the power to decide about the content which users post. However, this power is not supplemented by a responsibility towards their users to respect their speech rights in some particular form. This has famously led Tushnet to call it ‘power without

97  See Husovec (n. 83). 98 ibid. 99  See Department for Digital, Culture, Media & Sport and Home Department, Online Harms White Paper (CP 59, 2019). 100  ibid. 67. 101  ibid. 59.


responsibility’.102 This responsibility gap103 then invites governments to press for the removal of information without following a proper process. Again, public deal-making is closely intertwined with platforms’ interest in avoiding new forms of regulation. In Against Jawboning, Derek Bambauer discusses government pressure on internet intermediaries spanning a large variety of content types and subject matter.104 Bambauer cites Representative James Sensenbrenner, pressing the US Internet Service Provider Association to adopt a putatively voluntary data-retention scheme in the following terms: ‘if you aren’t a good rabbit and don’t start eating the carrot, I’m afraid we’re all going to be throwing the stick at you’.105 A cost–benefit analysis would most likely suggest that online intermediaries play along and adopt the solutions being pushed. Many of the enforcement tools presented earlier result, at least in part, from such governmental pressure.106 The EU Code of Conduct for hate speech is perhaps the most prominent European example. The problem with these solutions concerns due process, prior restraint, and, generally, the applicability of human rights safeguards.107

2.5  Circulation of Solutions

As demonstrated by a number of examples, solutions that first originated in private ordering or injunction cases sometimes eventually inspired changes in the law. For instance, Content ID inspired European plaintiffs to ask for filtering, and that inspired the European Commission to propose it in the law. The same cycle, however, also runs in reverse. For instance, the HADOPI Acts inspired private plaintiffs to demand similar solutions in countries where legislation was absent, for example Ireland,108 and similar private-ordering schemes in the United States.109

102 Rebecca Tushnet, ‘Power Without Responsibility: Intermediaries and the First Amendment’ (2008) 76(4) George Washington L. Rev. 986, 986. 103  See ibid.; Daphne Keller, ‘Who Do You Sue? State and Platform Hybrid Power over Online Speech’, Aegis Series Paper no. 1902 (2019) . 104  See also Derek Bambauer, ‘Against Jawboning’ (2015) 100 Minnesota L. Rev. 51 (discussing federal and state governments increasing regulation of online content through informal enforcement measures, such as threats, at the edge of or outside their authority). 105  ibid. 51–2. 106  See Danielle Keats Citron, ‘Extremist Speech, Compelled Conformity, and Censorship Creep’ (2018) 93 Notre Dame L. Rev. 1035. 107  See Evelyn Aswad, ‘The Role of US Technology Companies as Enforcers of Europe’s New Internet Hate Speech Ban’ (2016) 1(1) Colum. Human Rights L. Rev. Online 1, 6. 108 See Sony Music Entertainment (Ireland) Ltd v UPC Communications Ireland Ltd (No. 1) [2015] IEHC 317. 109  See Kerry Sheehan, ‘It’s the End of the Copyright Alert System (as We Know It)’ (Electronic Frontier Foundation, 6 February 2017) .



3.  Conclusions

The responsibility of intermediaries has emerged as a powerful slogan for policymakers. The European Commission has plainly admitted in recent documents that the forthcoming Digital Single Market has been shaped according to the idea that ‘responsibility of online platforms is a key . . . issue’, stressing that the path is set ‘towards an enhanced responsibility of online platforms’. The new terminology, however, does represent a substantial shift in intermediary liability theory, which will apparently move away from a well-established utilitarian approach towards a moral approach by rejecting negligence-based intermediary liability arrangements. In turn, this theoretical approach portends the enhanced involvement of private parties in online governance and a broader move towards private enforcement online. Public enforcement, lacking the technical knowledge and resources to address an unprecedented challenge in terms of global human semiotic behaviour, would coactively outsource enforcement online to private parties. The deployment of miscellaneous self-regulation and voluntary measures—such as graduated response, monitoring and filtering, website-blocking, online search manipulation, payment blockades and follow-the-money strategies, and private DNS content regulation—reflects this change in perspective. This development poses plenty of challenges. First, enforcement through private ordering and voluntary measures moves the adjudication of lawful and unlawful content out of public oversight. In addition, private ordering—and the retraction of the public from online enforcement—pushes an amorphous notion of responsibility that incentivizes intermediaries’ self-intervention to police allegedly infringing activities on the internet.
Further, enforcement would be looking once again for an ‘answer to the machine in the machine’.110 By enlisting online intermediaries as watchdogs, governments would de facto delegate online enforcement to algorithmic tools—with limited or no accountability.111 Finally, tightly connected to the points above, transferring regulation and adjudication of internet rights to private actors highlights the inescapable tensions with fundamental rights—such as freedom of information, freedom of expression, freedom of business, or the fundamental right to internet access—by limiting access to information, causing chilling effects, or curbing due process.

110  Charles Clark, ‘The Answer to the Machine is in the Machine’ in Bernt Hugenholtz (ed.), The Future of Copyright in a Digital Environment (Kluwer Law Int’l 1999) 139. See also Christophe Geiger, ‘The Answer to the Machine Should Not Be the Machine, Safeguarding the Private Copy Exception in the Digital Environment’ (2008) 30 EIPR 121. 111  See Joshua Kroll and others, ‘Accountable Algorithms’ (2017) 165 U. Pa. L. Rev. 633.


Chapter 32

Addressing Infringement: Developments in Content Regulation in the US and the DNS

Annemarie Bridy

From a public law perspective, surprisingly little has changed in the US regulatory environment for online service providers since the internet’s early days. Apart from the eternal tug of war over net neutrality, the regulatory framework governing internet intermediaries has remained relatively stable for two decades. Section 230 of the Communications Decency Act (CDA) of 19961 and section 512 of the Digital Millennium Copyright Act (DMCA) of 19982 remain the pillars of US internet law. Together, they define how internet service providers handle users’ content and the associated risk of legal liability.3 In the years between the resounding defeat of the Stop Online Piracy Act (SOPA)4 in 2012 and the enactment of the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA)5 in 2018, US policymakers showed little interest in legislation that would alter the regulatory status quo for online intermediaries. Following FOSTA, which rode a powerful wave of anti-tech sentiment to easy adoption, the United States may be entering

1  Communications Decency Act 1996, s. 230 (US). 2  Digital Millennium Copyright Act of 1998, s. 512 (US). 3  See Annemarie Bridy, ‘Remediating Social Media: A Layer-Conscious Approach’ (2018) 24 BU J. of Sci. & Tech. L. 193, 205–13. 4  Stop Online Piracy Act 2011 (US). 5  Allow States and Victims to Fight Online Sex Trafficking Act 2017 (US).

© Annemarie Bridy 2020.


632   ANNEMARIE BRIDY

a new era in the public law of intermediary liability. As now-dominant social media platforms like Facebook and YouTube continue to lose lustre amid controversies over the viral spread of hate speech, junk news, and state-sponsored propaganda, they face a realistic prospect that the legal protections they have long enjoyed will erode.6 FOSTA’s inroads on CDA immunity with respect to user speech ‘facilitating’ prostitution may be just the beginning of the backlash. Seeking to capitalize on public disenchantment with Silicon Valley, copyright industry trade groups have intensified their long-running campaign against the DMCA safe harbours. In the EU, the outlook for tech companies is even worse, as controversial mandatory filtering and ‘link tax’ proposals from the Digital Single Market Platform Consultation become legislative reality.7 At the same time, governments throughout the EU are demanding that intermediaries remove hate speech8 and terrorism-related content9 within a short time of receiving notice, on penalty of steep fines. Globally, the scales are tipping quickly in favour of legislatively mandated, on-demand takedown for a wide range of user-generated online content. It would be a mistake, however, to conclude that the twenty-year stretch between the DMCA and FOSTA was a quiet time in the world of intermediary liability and online content regulation. The contrary is true.
Over the course of the last decade, in response to significant pressure from the US and other governments, service providers have assumed private obligations to regulate online content that have no basis in public law.10 For US tech companies, a robust regime of ‘voluntary agreements’ to resolve contentrelated disputes has grown up on the margins of the DMCA and the CDA.11 For the most part, this regime has been built for the benefit of intellectual property rightholders attempting to control online piracy and counterfeiting beyond the territorial limits of the United States and without recourse to judicial process. The reach of privately ordered online content regulation is wide and deepening. It is wide in terms of the range of service providers that have already partnered with cor­por­ ations and trade associations to block sites, terminate accounts, and remove or demote content without court orders. That range now includes payment processors and digital advertising networks in addition to internet access providers, search engines, and social 6  See Alan Rozenshtein, ‘It’s the Beginning of the End of the Internet’s Legal Immunity’ (Foreign Policy, 2017) . 7  See Directive 2019/790/EU of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Arts 15 and 17. 8 See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in ­sozialen Netzwerken, NetzDG) (Ger.). 9  See European Commission, ‘State of the Union 2018: Commission Takes Action to Get Terrorist Content off the Web—Questions and Answers’ . 10  See Natasha Tusikov, Chokepoints: Global Private Regulation on the Internet (University of California Press 2017); Annemarie Bridy, ‘Internet Payment Blockades’ (2015) 67 Fla L. Rev. 1523. 
11 See Annemarie Bridy, ‘Copyright’s Digital Deputies: DMCA-Plus Enforcement by Internet Intermediaries’ in John Rothchild (ed.), Research Handbook on Electronic Commerce Law (Edward Elgar 2016).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

DEVELOPMENTS IN CONTENT REGULATION IN THE US AND THE DNS   633 media platforms.12 It is deepening with reference to the internet’s protocol stack, migrating downward from the application layer into the network’s technical infrastructure, specifically, the Domain Name System (DNS), which is commonly described as the internet’s address book.13 While enforcement of intellectual property rights is the purpose for which these agreements exist, the notice-and-action procedures they institutionalize are readily adaptable for use in censoring all kinds of disfavoured content. Recent private agreements between DNS intermediaries and intellectual property rightholders cross the Rubicon. Such agreements, which are the subject of this chapter, are cause for special concern among open internet advocates, because they transform technical network intermediaries into content regulators in an unprecedented way. They expand the remit of domain name registrars and registry operators beyond their raison d’être, which is the administration of the DNS and the maintenance of its op­er­ ation­al security and stability. As these private, under-the-radar agreements multiply, they are taking a tangible but hard-to-measure toll on the global environment for freedom of speech and access to information online.

1.  ICANN, the DNS, and DNS Intermediaries

Administration and operation of the DNS are under the control of the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is a private-sector, non-profit corporation that was created in 1998 to manage the internet's DNS under contract with the US government, which at the time more or less owned the internet. ICANN was created for one reason: to perform the internet's IANA (Internet Assigned Numbers Authority) functions. The IANA functions are, as ICANN describes them, 'key technical services critical to the continued operations of the Internet's underlying address book, the DNS'.14 ICANN's mandate is technical and administrative; it has not historically included enforcement of national or international law governing the content that is available on the internet.

The DNS is the technical infrastructure that allows users to access the sites and services they use on the internet every day. A domain name is a string of letters (e.g. Amazon.com) that corresponds to a string of numbers called an Internet Protocol (IP) address (e.g. 205.251.242.54). Every piece of hardware connected to the internet, including every server that acts as a website host, has a unique IP address. IP addresses are hard to remember, but domain names are not. The DNS obviates the need for users to keep track of long lists of IP addresses and the websites to which they correspond. The DNS accomplishes this by means of a collection of databases called domain name registries. For each of the internet's generic top-level domains (gTLDs) (e.g. .com, .org, and .gov), there is a separate registry. For every domain name in a given gTLD, there is an entry in the registry that links the domain name to its corresponding IP address. When a user enters the domain name of a website into the address bar of her web browser, the browser automatically queries the DNS to look up the associated IP address and then connects to the server at that address. This process is called resolving a domain name.

Each gTLD registry is administered and controlled by a registry operator. Individuals and businesses that want to register domain names do so through domain name registrars. Both registry operators and registrars must be accredited by ICANN and are governed in their operations by a web of contracts specifying their rights and duties to ICANN, each other, registrants, and third parties. Registry operators are bound by the ICANN-Registry Agreement. Registrars are bound by both the ICANN Registrar Accreditation Agreement and separate contracts with individual gTLD registry operators. The terms and conditions in these agreements are set by ICANN through a multistakeholder governance process, in which registrars and registry operators participate as stakeholders. Other stakeholders that participate in ICANN governance include national governments, commercial internet users (including intellectual property rightholders), and non-commercial internet users (including digital civil liberties advocates and other civil society groups).

12  ibid. 188; Tusikov (n. 10) 66.
13  See Milton Mueller, Networks and States: The Global Politics of Internet Governance (MIT Press 2010) 197; Annemarie Bridy, 'Notice and Takedown in the Domain Name System' (2017) 74 Wash. & Lee L. Rev. 1343.
14  ICANN, 'Welcome to ICANN!' accessed 20 September 2018.
The voluntary agreements that are the subject of this chapter are facilitated and encouraged by ICANN, but they operate on the margins of ICANN's contractual infrastructure for DNS intermediaries and its multistakeholder governance process.15

Without the DNS and the intermediaries that maintain it, navigating the internet with the efficiency we take for granted would be impossible. When DNS intermediaries block a domain name from resolving, users lose access to all of the content hosted at the associated IP address. For some domains, that can amount to thousands of unique websites and tens (or even hundreds) of thousands of individual web pages (uniform resource locators, or URLs). Domain name blocking is trivially easy as a technical matter; all it requires to make a domain globally unavailable is a simple edit to a database. That simplicity leads proponents of DNS blocking as a tool for regulating content to insist that it is neither burdensome nor unreasonable. From a human rights point of view, however, site blocking by DNS intermediaries can have profound effects on the right to receive and impart information, especially if such blocking becomes routine.
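The lookup-and-edit mechanics just described can be sketched in a few lines. The sketch below is purely illustrative: the domain names and addresses are hypothetical, and a real registry stores nameserver delegation records served over the DNS protocol rather than a Python dictionary. The principle, however, is the same: resolution is a database lookup, and blocking is a one-line change to a record.

```python
from typing import Optional

# Illustrative toy registry: domain names mapped to addresses.
# (Hypothetical names and documentation-range addresses; a real gTLD
# registry holds delegation data, but blocking works the same way.)
registry = {
    "news.example": "203.0.113.10",
    "shop.example": "203.0.113.11",
}

def resolve(domain: str) -> Optional[str]:
    """Look up a domain; return its address, or None if it does not resolve."""
    return registry.get(domain)

def block(domain: str) -> None:
    """'Suspend' a domain by deleting its record. Every site reachable only
    under this name becomes unreachable, however many pages it hosts."""
    registry.pop(domain, None)

assert resolve("news.example") == "203.0.113.10"
block("news.example")                   # the 'simple edit to a database'
assert resolve("news.example") is None  # the whole domain stops resolving
```

As the surrounding text notes, the ease of the edit says nothing about its proportionality: a single deletion can cut off access to every site and page hosted under the domain.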

15  See Bridy (n. 13) 1371.


2.  The History of Intellectual Property Enforcement in the DNS

Historically speaking, intellectual property rights have been enforced in the DNS in a very tightly circumscribed way—only in the field of trade marks, and only to address the practice known as cybersquatting, which is defined as bad faith registration and use of a domain name.16 In the early days of the internet, cybersquatters pre-emptively registered domain names containing famous trade marks and then offered to transfer the registrations to later-arriving trade mark holders for exorbitant prices.17 Under tremendous pressure from trade mark rightholders, a perennially powerful force within ICANN's universe of stakeholders, ICANN in 1999 adopted a binding policy requiring all domain name registrars to contractually require all domain name registrants to participate in an alternative dispute-resolution (ADR) system designed specifically to adjudicate disputes involving alleged cases of cybersquatting. The system, which applies to all registrants in all gTLDs, is called the Uniform Dispute Resolution Policy (UDRP). Trade mark infringement, counterfeiting, and dilution claims fall outside the UDRP's subject matter scope. The UDRP's remedial scope is also narrow; its only available remedy is cancellation or transfer of the disputed domain name from the registrant to the complainant.

16  See ICANN, Uniform Dispute Resolution Policy, s. 4(a).
17  See Panavision International LP v Toeppen, 141 F3d 1316 (9th Cir. 1998).

2.1  The UDRP

Under the UDRP, ICANN-accredited arbitrators decide cybersquatting disputes via a streamlined, online process. Once a UDRP complaint is filed by a complainant (who can choose from a list of ICANN-approved providers), a registrant must participate in the UDRP process until its conclusion. If either party to a UDRP proceeding is dissatisfied with the result, that party can file a claim contesting the result in a court of competent jurisdiction. A registrant seeking judicial recourse following an adverse UDRP decision has ten business days to file a claim in court and produce evidence that she has done so to the registrar. If the registrant files a lawsuit in a timely manner, the prevailing complainant's remedy is stayed pending the outcome of the litigation. If the losing registrant fails to file suit within the ten-day window, the domain name is cancelled or transferred.

UDRP outcomes have historically skewed heavily in favour of complainants. The World Intellectual Property Organization (WIPO) reports that for all years the UDRP has been active, 89 per cent of disputes have resulted in cancellation or transfer of the disputed domain name to the complainant.18 Registrants have prevailed in only 11 per cent of cases.19 Critics of the process point to these numbers and to the fact that a small number of providers handle the vast majority of UDRP complaints as evidence that the system has created strong structural incentives for providers to rule in favour of complainants.20 A provider whose results do not demonstrably favour complainants can easily find itself without any customers—as happened to eResolution, an accredited provider that went out of business in the early years of the UDRP for lack of a sustainable caseload.21

3.  ICANN's New gTLD Programme and IP Stakeholder Demands

Beginning in 2013, in a long-anticipated and much-ballyhooed move, ICANN created over 1,200 new gTLDs in the DNS. In the rollout of the new gTLD programme, rightholders saw an opportunity to lobby within ICANN to extend the reach of IP enforcement in the DNS beyond the UDRP and cybersquatting. The MPAA, which represents Hollywood movie studios, and the RIAA, which represents major record labels, demanded that ICANN and its new gTLD contractors promote a 'safe internet ecosystem' by enforcing their members' copyrights in films and music.22 Their appeal to 'safety' conflates copyright piracy with the distribution of malware, strategically blurring an otherwise clear line between the protection of physical network integrity—a classic IANA concern—and the protection of intellectual property rights. From the rightholders' perspective, a 'safe internet ecosystem' is one in which ICANN's contracts with DNS intermediaries are revised to require registrars to block domain names upon notice of infringement. Rightholders also wanted ICANN, through its formal contractual compliance process, to discipline any registrar that demands a court order before taking action against a registrant accused of infringing copyrights.23 All registrars, they argued, should be compelled to implement notice-based domain blocking for copyright infringement on pain of losing their accreditation and, consequently, their ability to do business.

18  See WIPO Statistics, Case Outcome (Consolidated): All Years.
19  ibid.
20  See Orna Rabinovich Einy, 'The Legitimacy Crisis and the Future of Courts' (2015) 17 Cardozo J. of Conflict Resolution 23, 54.
21  See Michael Froomkin, 'ICANN's Uniform Dispute Resolution Policy—Causes and Partial Cures' (2002) 67 Brook L. Rev. 605, 608.
22  Letter from Victoria Sheckler, Deputy Gen. Counsel, RIAA, to Steve Crocker, Chairman of the Bd., ICANN, and Fadi Chehade, CEO, ICANN (5 March 2015).
23  See Bridy (n. 13) 1368.


There was considerable opposition to these demands within ICANN's multistakeholder community—most notably from domain name registrars and civil society groups.24 They opposed expanding ICANN's historically limited authority over the DNS into the field of online content regulation, for IP enforcement or any other purpose. Such an expansion, they correctly argued, is incompatible with ICANN's limited technical role as the manager of the IANA functions. Most registrars understandably do not want to be in the law enforcement and claim adjudication business.

4.  Expanding IP Enforcement in the DNS: Within and Without ICANN

Rightholders achieved a partial victory in the battle within ICANN over domain blocking in the new gTLDs: a legal scaffolding within ICANN's new gTLD registry contracts for a notice-and-takedown programme that can function, with the aid of willing registry operators, as a workaround in cases where registrars refuse to block domains without a court order. ICANN would not leverage its contractual compliance process to compel registrars to play ball with rightholders, but it did provide a means for rightholders to strike private deals with registry operators to bypass or override recalcitrant registrars. The key to this workaround is a provision in the 2013 ICANN-Registry Agreement known as Specification 11—Public Interest Commitments.25 Specification 11 is a pass-along or flow-down provision requiring registry operators to include in their contracts with registrars a provision requiring registrars to include in their contracts with registrants 'a provision prohibiting Registered Name Holders from . . . piracy, trade mark or copyright infringement, . . . and providing (consistent with applicable law and any related procedures) consequences for such activities including suspension of the domain name'.26 Through this provision, an express prohibition on copyright infringement and the identification of (unspecified) consequences for it are pushed down the DNS hierarchy from ICANN to registry operators to registrars. This is the same legal mechanism that makes the UDRP binding on all gTLD registrants.

The endpoint in this cascade of contractual obligations is, of course, the registrant, whose registration is conditioned on her acceptance of the prospect that her domain name may be suspended if she is found to have engaged in copyright (or trade mark) infringement. Notably, Specification 11 does not condition suspension of a registrant's domain name on receipt of a court order or other valid legal process. The elegance of the flow-down provision from a rightholder's perspective is that it need not be enforced by registrars at all. It can also be enforced by registry operators that are willing to act on rightholders' notices without any prior adjudication. As keepers of the DNS zone files for the gTLDs they control, registry operators can block or otherwise disable any domain name within their zones.

New gTLD registry operators supportive of Specification 11 explain their willingness to cooperate with rightholders in terms of defensive or pre-emptive self-regulation.27 If we do not regulate ourselves, they say, the government will step in and impose onerous obligations on us. This self-regulatory rhetoric obscures the plain fact that the real regulatory targets of intellectual property enforcement within the DNS are domain name registrants and the websites they operate. Registry operators that partner with rightholders to block domains are acting not as regulators of their own content but as regulators of third party content, without the benefit of judicial process. They appear to regard the websites underlying domain names as their own premises to police. It seems disingenuous, however, to characterize Specification 11 as the foundation for a self-regulatory framework. It is equally questionable to hang the imprimatur of 'public interest' on the privately ordered enforcement of private property rights through the DNS.

24  See Jeremy Malcolm and Mitch Stoltz, 'Shadow Regulation: the Back-Room Threat to Digital Rights' (EFF Deeplinks, 29 September 2016); 'ICA Deeply Concerned with Proposal to Enable Domain Transfers Based upon Copyright Claims' (Internet Commerce Association, 10 February 2017).
25  See ICANN, Registry Agreement, s. 2.17.
26  ibid. 97.

4.1  Present Arrangements: 'Trusted Notifier' Agreements

In 2016, the MPAA announced that it had entered into 'trusted notifier' agreements with two new gTLD registry operators—Donuts and Radix.28 Together, they control hundreds of new gTLDs, with Donuts operating the lion's share. In 2018, the registry operator EURid announced a similar agreement with the International Anti-counterfeiting Coalition (IACC), which counts the MPAA and RIAA among its members.29 EURid controls the .eu country code TLD (ccTLD). The MPAA's agreements with Donuts and Radix are copyright-focused. The IACC's agreement with EURid presumably includes both trade mark and copyright enforcement; however, EURid declined to release any information about the agreement beyond a press release announcing the fact of its existence.

Lack of transparency, a hallmark of privately ordered inter-industry intellectual property enforcement, is a perennial challenge for researchers and civil society groups trying to assess the impact of voluntary agreements on the openness of the internet and the environment for free expression online. Because these agreements are private, and generally include non-disclosure provisions, the public usually learns nothing about them beyond what can be learned from the occasional press release. To its credit, the MPAA did release a fact sheet in connection with the Donuts agreement, but it has since been taken offline.30 The following paragraphs describe the programme outlined in that fact sheet.

The fact sheet defines 'trusted notifier' very broadly to mean 'an industry representative trade association that represents no single company, a recognized no[t]-for-profit public interest group dedicated to examining illegal behavior, or a similarly situated entity with demonstrated extensive expertise in the area in which it operates and ability to identify and determine the relevant category of illegal activity'.31 The document does not specify what makes any given group 'recognized' for 'examining illegal behavior' or what counts as 'demonstrated extensive expertise' when it comes to qualifying as a trusted notifier. Judgments about eligibility for trusted notifier status appear to be left to the discretion of the participating registry operator. It is also worth noting that the definition includes a reference to 'illegal activity' writ large, which suggests possible scope for such programmes beyond the realm of intellectual property enforcement.

As an operational matter, the registry operator agrees to treat the trusted notifier's complaints 'expeditiously and with a presumption of credibility'. 'Expeditiously' means, 'absent exceptional circumstances', that the '[r]egistry will coordinate with the applicable registrar' and render a final decision within ten business days of the complaint.

27  See Meeting Transcript, 'Marrakech—Industry Best Practices—the DNA's Healthy Domains Initiative' (6 March 2016) 12–13.
28  'Radix and the MPAA Establish New Partnership to Reduce Online Piracy' (MPAA News, 16 May 2016).
29  See David Goldstein, 'EURid and IACC Team Up to Fight Cybercrime in .EU and .ЕЮ' (Domainpulse, 28 June 2018).
Notably, the registry operator has no obligation under the agreement to independently investigate the complaint before imposing a sanction, though it 'may conduct its own investigation' if it is inclined to do so. The fact sheet outlines a workflow in which the notifier complains to the registry, the registry coordinates with the registrar, and 'as appropriate' either the 'registrar (or [r]egistry if registrar declines) may provide' the complaint to the registrant with a 'reasonable deadline' for a response. The provision for bypass of uncooperative registrars is the heart of the framework. The 'as appropriate' and 'may provide' terms signal that the registrant will not necessarily receive notice of the complaint or be given an opportunity to respond. If the registrant does get the chance to respond, how much time is 'reasonable' is an open question, but given the ten-day window for the registry's ultimate decision, the registrant's deadline must necessarily be a very short one—no more than a matter of a few days. For a registrant whose business depends on her active domain, such a short response window strains the definition of reasonableness. And the consequences for failing to meet a registry's tight deadline are potentially final, because a registrant has neither a right to appeal the registry's adverse decision nor a claim for breach of contract against the registry operator. The registrant's only contractual relationship is with the registrar.

If the registry operator agrees with the notifier that 'the domain clearly is devoted to abusive behavior', then 'the [r]egistry, in its discretion, may suspend, terminate, or place the domain on registry lock, hold, or similar status'. Because the programme does not require the registry operator to actually investigate the complaint or to solicit a response from the registrant, there is a high risk that participating registries will default to a rubber-stamp approach. Indeed, the programme is designed to have DNS intermediaries intrinsically trust and quickly execute the notifier's legal judgments. A registrant who disagrees with the registry operator's unilateral decision could try to seek redress from ICANN through its contractual compliance process, but that registrant would likely be rebuffed in the same way that rightholders have been—on the ground that ICANN has no jurisdiction over intellectual property disputes concerning content on websites.32

After one year of operation, Donuts released a high-level summary of actions taken under the Donuts–MPAA agreement.33 No information about enforcement under the Radix–MPAA agreement has ever been published. Donuts reported that the MPAA sent notices involving twelve domain names. Seven of those were suspended or cancelled by their respective registrars. Three (i.e. a quarter of all cases) were suspended by Donuts, presumably because the registrars declined to take action. One was addressed by the website's hosting provider, and the remaining one was found to warrant no action at all. The summary did not specifically identify any of the domains that were the subject of notices, making further inquiry into the facts surrounding the complaints impossible. All we can know is that the MPAA and Donuts believed that the sites in question were pervasively infringing.

30  See 'Characteristics of a Trusted Notifier Program' (on file with author).
31  This concept circulates in the EU policy conversation concerning voluntary measures under the term 'trusted flaggers'. See Giancarlo Frosio, 'Why Keep a Dog and Bark Yourself: From Intermediary Liability to Responsibility' (2017) 26 IJILT 1, 14 (citing a communication from the European Commission to the EU Parliament on 'tackling illegal content online').
Neither Donuts nor the MPAA has made any subsequent public disclosures about the operation of the programme or the number of domains it has affected. The low volume of notices in the programme's first year is likely attributable to a provision in the agreement banning the use of bots to identify putative infringements and generate notices. In the context of DMCA takedowns and notices to ISPs reporting alleged peer-to-peer file-sharing activity, the growing use of bots has led to significant abuse and over-claiming.34 To the extent that the MPAA is exercising restraint in its notice-sending, greater transparency on that point could help quell concerns about over-enforcement and concomitant harm to free expression.

32 See Letter from Stephen  D.  Crocker, Chair of the Board, ICANN, to Greg Shatan, President, Intellectual Property Constituency (30 June 2016) . 33 See Andrew Allemann, ‘11 Domains Affected by Donuts’ Trusted Notifier Deal with MPAA’ (Domain Name Wire, 28 February 2017) . 34  See ‘Topple Track Attacks EFF and Others with Outrageous DMCA Notices’ (Electronic Frontier Foundation Takedown Hall of Shame, 22 July 2018) .


4.1.1  The trusted notifier model and the UDRP compared

Unlike the UDRP, the trusted notifier programme is ICANN-enabled but not ICANN-developed or sponsored, meaning that participating registries and rightholders were free to negotiate a deal mutually agreeable to them, without vetting the terms through ICANN's multistakeholder policy development process.35 With the trusted notifier programme, ICANN has facilitated a programme of private, DNS-based content regulation for which it now disclaims responsibility and oversight.36 The resulting set of procedures is loosely defined and heavily biased in favour of complainants. It altogether lacks uniform, substantive standards for determining what constitutes 'clear and pervasive abusive behavior' that will justify a registry operator in cancelling or suspending a registrant's domain name. The publicly released document describing the programme is completely generic with respect to what qualifies as actionable conduct. The programme gives trusted notifiers an open invitation to provide a '[n]on-exhaustive [i]dentification of the law(s) being violated'. Surprisingly, given the copyright-specific nature of the MPAA's interests, the document makes no reference to copyright infringement at all—a worrying sign that copyright may simply be the thin edge of the wedge when it comes to notice-driven content regulation through the DNS. The UDRP, in contrast, begins and ends with cybersquatting, the elements of which the policy's procedures clearly define.

The UDRP and the trusted notifier programme also differ significantly in terms of the nature and amount of information complainants must provide to initiate a claim. A trusted notifier's complaint can consist of little more than identification of a law allegedly being violated, a 'clear and brief' description of how the site is violating the law, and evidence of alleged illegality in the form of sample URLs and screenshots. The UDRP, by contrast, requires a complainant to plead a case based on specific factors for determining a narrowly defined category of wrongful conduct—bad faith registration and use of a domain name.

The trusted notifier programme also differs from the UDRP in that the UDRP guarantees the registrant an opportunity to respond and, if the outcome is unfavourable, to bring a claim for declaratory judgment in a court of competent jurisdiction. An opportunity to respond and a right to appeal an adverse judgment are basic to fair process. They are especially necessary in a programme like the trusted notifier programme, which calls on registry employees with no particular expertise or training in the law to make domain-wide determinations about the legality of content under an unspecified range of laws from an unspecified range of jurisdictions, some of which may have conflicting laws on the same subject matter. The risk of error by inexpert decision-makers when so much content and so many legal variables are in play is obviously high. For all its asserted shortcomings and intimations of pro-complainant bias, the UDRP is at least run by accredited legal professionals who are tasked with applying uniform legal standards neutrally to the cases before them. Unlike registries participating in the trusted notifier programme, UDRP adjudicators are never expected to give complainants the benefit of the doubt when deciding the merits of a complaint. Whether they do or not in practice is subject to debate, but there is certainly no pro-complainant bias written into the UDRP, as there is written into the trusted notifier programme.

35  See Meeting Transcript (n. 27) 2.
36  ibid.

4.2  Future Plans: A Copyright-Specific UDRP?

The idea of a UDRP for online copyright disputes has long been discussed in the academic literature, but without real impact.37 The door to copyright enforcement through the DNS is now open, however, thanks to Specification 11 and the new gTLD trusted notifier agreements it spawned. In 2017, the Domain Name Association (DNA), an industry trade group to which both Donuts and Radix belong, proposed such a programme in a white paper on 'Registry/Registrar Healthy Practices'.38 As introduced, the so-called Healthy Domains Initiative consisted of four goals, the last two of which relate to expanded intellectual property enforcement: addressing threats to network security (e.g. malware, phishing, and pharming); eliminating child pornography and child abuse imagery; streamlining the handling of complaints related to so-called rogue pharmacies; and establishing a UDRP-like voluntary system for handling copyright infringement.39 Of the four, only the first is related in any meaningful way to the IANA functions.

The DNA branded its 'UDRP-like system' the Copyright Alternative Dispute Resolution Policy (Copyright ADRP).40 Coinciding with the DNA's announcement, the Public Interest Registry (PIR) announced plans to adopt a Systemic Copyright Infringement Alternative Dispute Resolution Policy (SCDRP), adhering to the principles laid out in the DNA's 'Healthy Practices' white paper.41 PIR, which controls the .org gTLD, is listed on the DNA's website as a 'strategic member'.42 It appears that the SCDRP is simply PIR's branding of the DNA's Copyright ADRP. PIR did not release a separate description of its programme.

37  See Steven Tremblay, 'The Stop Online Piracy Act: The Latest Manifestation of a Conflict Ripe for Alternative Dispute Resolution' (2014) 15 Cardozo J. of Conflict Resol. 819; Mark Lemley and Anthony Reese, 'A Quick and Inexpensive System for Resolving Peer-to-Peer Copyright Disputes' (2005) 23 Cardozo Arts & Ent. L.J. 1; Andrew Christie, 'The ICANN Domain-Name Dispute Resolution System as a Model for Resolving Other Intellectual Property Disputes on the Internet' (2002) 5 J. of World Intell. Prop. 105.
38  DNA Healthy Domains Initiative, 'Registry/Registrar Healthy Practices' (2017) (hereafter 'Healthy Practices' White Paper).
39  ibid. 2.
40  ibid. Appendix D.
41  See Kevin Murphy, 'The Pirate Bay Likely to Be Sunk as .org Adopts "UDRP for Copyright"' (Domain Incite, 7 February 2017).
42  DNA Membership List <https://thedna.org/current-dna-members/>.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

DEVELOPMENTS IN CONTENT REGULATION IN THE US AND THE DNS   643

This section summarizes the publicly available information about the two plans, both of which were abruptly put on hold almost as soon as they became publicly known.43 The model was developed in private consultations outside ICANN’s multistakeholder process. One commentator described the coordinated pullback as a response to objections from groups representing domain name registrants (i.e. the Internet Commerce Association) and free speech interests (i.e. the Electronic Frontier Foundation), which objected to the closed development process, among other substantive concerns.44

4.2.1  The DNA’s Copyright ADRP

The DNA’s Copyright ADRP is modelled on the UDRP. According to the DNA, ‘[t]he system would be available for voluntary participation by registries and registrars who would like to work with content owners to combat illegal activity on a more efficient and cost-effective basis, but still adhering to key tenets of due process.’45 The promise of adherence to principles of due process distinguishes the DNA proposal from the trusted notifier programme described earlier, in which the registry operator functions as an unconstrained, all-in-one investigator, adjudicator, and enforcer. Participation in the Copyright ADRP would become binding on registrants through provider-drafted terms of service, as with the UDRP. The system would thus be voluntary only for registries and registrars; it would be mandatory for registrants. Like the MPAA’s trusted notifier programme, the Copyright ADRP would be designed to address alleged ‘pervasive’ infringement or infringement on sites whose ‘primary purpose’ is allegedly copyright infringement.46 The white paper does not quantitatively define ‘pervasive infringement’ or give examples of specific practices that would make domains suitable targets for complainants. Registry operators could not be named as parties; nor could registrars, although registrars would be permitted to intervene in disputes at their discretion. The programme would be administered by an ADR provider and staffed by third party neutrals. Substantive rules of decision would be those of the jurisdiction in which the registry operator or registrar is located. Decisions rendered by neutrals could be appealed by either the complainant or the registrant to a court of competent jurisdiction. The only available remedy would be—as under the UDRP—suspension or transfer of the domain name.
Defaults by non-responsive registrants would result in automatic suspension or transfer of the domain name as long as a complaint sets forth a prima facie claim of ‘pervasive copyright infringement’.47

43 See Philip Corwin, ‘ICA Concerns Heard—Copyright UDRP on Indefinite Hold’ (Internet Commerce Association, 25 February 2017) .
44  See Kevin Murphy, ‘Angry Reactions to UDRP for Copyright’ (Domain Incite, 10 February 2017) .
45 ‘Domain Name Association Unveils Healthy Domains Initiative Practices’ (Business Wire, 8 February 2017) .
46  ‘Healthy Practices’ White Paper (n. 38) Appendix D.
47 ibid.


644   ANNEMARIE BRIDY

It is not clear from the DNA’s high-level programme description what the elements of the prima facie claim would be or what qualifications would be required for third party neutrals. All of the same concerns about structural pro-complainant bias that have been raised over the years about the UDRP apply with equal force to the DNA’s proposed Copyright ADRP. As with the UDRP, complaining rightholders under the Copyright ADRP would be the ADR provider’s paying customers. Those providers would thus have an existential incentive to make and keep rightholders happy.

4.2.2  PIR’s SCDRP

The DNA’s evangelism for ‘healthy domains’ has been directed at legacy gTLD registry operators as well as new gTLD intermediaries. Because .org is the current ‘home’ of the Pirate Bay, PIR has been the target of public demands from the RIAA to cancel the website’s domain name.48 In keeping with its terms of service, the Pirate Bay’s registrar, Ontario-based EasyDNS, has declined to take action against the site’s operators without a court order.49 PIR’s general counsel has said publicly that she would like to see the site eliminated from the .org namespace, but she is uncomfortable with the lack of due process inherent in the MPAA’s trusted notifier model.50 PIR appears to view the SCDRP as a pragmatic compromise between the full due process afforded by public courts and the complete lack of due process embodied in the MPAA deals with Donuts and Radix. In a bipolar enforcement environment where zero due process has been floated as the replacement for full due process, some due process takes on the appearance of middle ground. Considering the timing of the initiatives, it is not inconceivable that the MPAA’s trusted notifier programme was introduced strategically—to redefine the Overton window for DNS takedown practices in preparation for the bigger win represented by the SCDRP. The reach of the SCDRP would dwarf that of the MPAA’s trusted notifier agreements, and the benefits would accrue to all rightholders, not just Hollywood studios. PIR administers 10.3 million domain names in the .org gTLD alone.51 Because .org is a legacy gTLD, PIR’s current Registry Agreement with ICANN does not include Specification 11 and its express prohibition on copyright and trade mark infringement. The agreement does, however, contain flow-down provisions relating to dispute resolution. ICANN requires PIR to require registrants to:

acknowledge and agree that PIR reserves the right to deny, cancel or transfer any registration . . . or place any domain name(s) on registry lock, hold, or similar status,

48  See Letter from Bradley Buckles, Executive Vice-President for Anti-Piracy, RIAA, to Elizabeth Finberg, General Counsel, PIR (2 June 2016) .
49  See Ernesto Van der Sar, ‘RIAA Fails to Take Down Pirate Bay Domain, For Now’ (TorrentFreak, 6 June 2016) .
50  See Murphy (n. 41).
51  See PIR, Frequently Asked Questions .


that it deems necessary, in its discretion . . . to comply with any applicable laws . . . or any dispute resolution process . . . .52

In addition, PIR must require registrants to ‘acknowledge and agree that . . . PIR also reserves the right to place upon registry lock, hold or similar status a domain name during the resolution of a dispute’.53 These provisions, which specifically reference ADR, appear in an appendix to the ICANN–PIR Registry Agreement executed in 2013. They would allow PIR to implement the SCDRP at some future date. Under its current terms of service, PIR will not take action against registrants in response to complaints from private actors: ‘When [non-governmental] third parties make demands for take down of .org . . . domains, Public Interest Registry responds to the orders of courts with jurisdiction over Public Interest Registry.’54 PIR’s list of abusive practices for which it reserves the right to cancel or suspend domains does not currently include copyright or trade mark infringement.55 If PIR were to implement the SCDRP, the RIAA would almost certainly succeed in having the Pirate Bay quietly shut down without subjecting itself to the expense and publicity associated with a lawsuit against its long-time nemesis. Any resistance from EasyDNS would be moot, because the Pirate Bay would be bound by PIR’s dispute-resolution flow-down provision and thereby subject to the SCDRP and its penalty of domain transfer. PIR has ‘paused’ its implementation of the SCDRP until further notice, and the DNA has revised its ‘Healthy Practices’ white paper to delete all references to the Copyright ADRP.56 It is all but certain, however, that these plans will be reintroduced in the future. When they are, open internet and human rights advocates will (and should) be quick to respond. In the meantime, it is unclear exactly what the DNA and PIR are waiting for, or whether they are making revisions to the withdrawn model and/or overtures to stakeholders excluded from prior planning.

5. Conclusions

In the United States, in the seemingly quiet years between the defeat of SOPA and the passage of FOSTA, voluntary agreements between corporate intellectual property rightholders and online intermediaries proliferated. These agreements were formed and are operating in the shadow of democratic process, without the checks and balances that are built into public lawmaking and law enforcement. Because of the internet’s global nature, private law enforcement actions taken under these agreements reach far beyond the jurisdictions of the participating parties. The most recent, and most troubling, development in this regulatory space has been the penetration of privately ordered site-blocking into the internet’s technical layers through agreements between rightholders and DNS intermediaries. The adoption and normalization of privately ordered domain blocking within the DNS is troubling because DNS infrastructure has been, until now, dedicated to addressing and navigational functions that are independent of content-related considerations. There are compelling reasons to insist that content regulation be confined to the internet’s application layer, and that any content-related law enforcement action by DNS intermediaries against domain name registrants be subject to court orders. Privately ordered intellectual property enforcement in the DNS should be resisted, whether it comes in the form of trusted notifier agreements or mandatory ADR policies that are developed on the margins of multistakeholder governance institutions.

52  See ICANN, .org Registry-Registrar Agreement, Appendix 8 s. 3.6.5 (October 2015) .
53 ibid.
54  PIR, Takedown Policy .
55 ibid.
56 PIR, ‘Systemic Copyright Infringement Alternative Dispute Resolution Policy (SCDRP)’ (23 February 2017) . The version of the ‘Healthy Practices’ White Paper cited in n. 38 is the original version. The version that was available on the DNA’s website at the time of writing was the edited version.


chapter 33

Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet

Sergei Hovyadinov

The internet is not an ethereal substance. Its infrastructure (cables, servers, routers, computers, etc.) exists in a physical form, located in real places, operated by real companies and people. Telecom companies like Verizon and Deutsche Telekom operate the backbone infrastructure and connect (or disconnect) people online; search engines like Google and Yandex index web pages and display search results in an order determined by their proprietary algorithms; online storage companies like Dropbox store personal data ‘in the cloud’, on their servers that exist in multiple undisclosed locations; and employees of social network companies like Facebook and VKontakte block user-generated content that allegedly violates their companies’ policies or local law. These private companies act as a ‘gateway for information and an intermediary for expression’.1 They can make some websites easier to access than others, direct our attention to specific materials based on our interests and location, or block access to specific websites (or to the internet altogether) based on government demands.

1  David Kaye, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression’ (2016) A/HRC/32/38, 3.

© Sergei Hovyadinov 2020.


For many years, optimists used to regard intermediaries as a source of ‘liberation technology’2 and ‘accelerators of social transformation’.3 Others have critiqued them for failing to protect user privacy rights and have raised concerns about their increasing power, lack of transparency, and financial incentives to cooperate with governments.4 These intermediaries do not operate in isolation; their real role is specific to the regulatory environment in which they operate. States have used them as ‘online proxy agents’5 to help enforce intellectual property (IP) rights, data protection, and state security. Most recently, the scope of intermediary liability—which might be better described as ‘intermediary accountability’6—has been changing. Governments around the world are increasingly asking online platforms to take on more significant roles in content moderation and assisting in online surveillance. Russia has often been cited as a regime that has been successful in establishing a tight grip on the media.7 Censorship and control have been part of the country’s lived experience for centuries.8 The freedom of Russian citizens on the internet was slowly decreasing until 2011–12, when anti-government demonstrations in Moscow swiftly transformed the Kremlin’s attitude towards the internet in general—and US internet companies specifically—from relative indifference into a cyberphobia that is driven by national security imperatives and undergirded by a more general paranoia against the west.9 Since then, a series of new laws have imposed ever greater restrictions. One of the key features of how the Kremlin established control over the internet was the role of internet businesses enlisted by the government to conduct a range of censorship and surveillance activities. The state has come to rely on the cooperation of the internet private sector and enforces compliance as it deems necessary.
This has equipped the government with previously unexplored, but very effective, forms of power over information flow and user activity.

2   Larry Diamond and Marc Plattner (eds), Liberation Technology: Social Media and the Struggle for Democracy (Johns Hopkins U. Press 2012).
3   Ekaterina Stepanova, ‘The Role of Information Communication Technologies in the “Arab Spring”’ [2011] Ponars Eurasia Policy Memo no. 159, 2.
4   See Rebecca MacKinnon and others, Fostering Freedom Online: The Role of Internet Intermediaries (UNESCO Publishing 2015); Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom (Public Affairs 2012).
5  Seth Kreimer, ‘Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link’ (2006) 155 U. Pa. L. Rev. 11; Jonathan Zittrain, ‘A History of Online Gatekeeping’ (2005) 19 Harv. J. of L. & Tech. 253.
6   Marin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But Not Liable? (CUP 2017).
7   See e.g. ‘Country Reports on Human Rights Practices for 2017’ .
8   See Katherine Ognyanova, ‘In Putin’s Russia, Information Has You: Media Control and Internet Censorship in the Russian Federation’ in Mika Merviö (ed.), Management and Participation in the Public Sphere (IGI Global 2015) 62–79.
9   See Natalie Duffy, ‘Internet Freedom in Vladimir Putin’s Russia: The Noose Tightens’, 12 American Enterprise Institute (2015) .


Previous research on Russian state control over the internet has addressed legislative requirements,10 presented quantitative measurements,11 evaluated the transparency of intermediaries12 and the effect of public opinion on state control,13 reported human rights violations online,14 and provided historical background about the Russian segment of the internet (often referred to as ‘RuNet’) and its regulation.15 However, we know little about the role of local and global platforms in shaping and participating in the enforcement of the state’s policies. In this chapter, we attempt to demonstrate the evolution of intermediary liability in Russia since 2011–12, when the government drastically changed its stance on internet regulation. We describe how this regulation is followed and enforced with respect to content removal and online surveillance, and how private companies have become an integral part of the state’s internet control apparatus. We examine the legislative framework, enforcement statistics, and most prominent cases for two areas of government control: content and surveillance. ‘Content’ refers to the types of information the state seeks to restrict online, while ‘surveillance’ refers to how the state collects user data and online activity. Within both categories, we focus on the obligations of intermediaries (telecom operators, web-hosting providers, and social media platforms). We further complement the findings with quotations from semi-structured interviews that were conducted with executives of telecom and web-hosting companies, telecom lawyers, internet activists, and representatives of industry associations.16

1.  Russian Internet

In the last decade, Russia has seen a phenomenal growth in the number of internet users, from 18 per cent in 2006 to 76 per cent in 2017.17 Despite the increasing prominence of

10   See Duffy (n. 9); Julien Nocetti, ‘Russia’s “Dictatorship-of-the-Law” Approach to Internet Policy’, Internet Policy Review (2015) ; Ognyanova (n. 8).
11   See ‘Indeks Svobody Interneta’ (no date) .
12   See MacKinnon and others (n. 4); Ranking Digital Rights, ‘2018 Corporate Accountability Index’ .
13   See Gregory Asmolov, ‘Welcoming the Dragon: The Role of Public Opinion in Russian Internet Regulation’, Internet Policy Observatory Research Paper 1 (2015) .
14   See ‘Country Reports on Human Rights Practices for 2017’ (n. 7); Freedom House, ‘Russia Country Report: Freedom on the Net 2017’ (14 November 2017) .
15   See Andrei Soldatov and Irina Borogan, The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries (Public Affairs 2015).
16   All but one of our interviewees asked to remain anonymous. Thus, when providing quotations, we cite by the reference number we assigned to an interviewee and a communication method (e.g. phone, Skype). Quotations have been edited for grammar and clarity.
17   World Bank, ‘Individuals Using the Internet (% of Population): Data’ .



Table 33.1  Most popular websites in Russia

Website                    Category
1. YouTube.com             Video hosting
2. Yandex.ru               Search engine; news portal
3. Vkontakte (Vk.com)      Social network
4. Google.ru               Search engine
5. Mail.ru                 Search engine; news portal
6. Google.com              Search engine
7. Avito.ru                Classified advertisements
8. Ok.ru                   Social network
9. Aliexpress.com          Online marketplace
10. Wikipedia.org          Online encyclopaedia

Source: ‘Top Sites in Russia—Alexa’ .

global internet companies such as Facebook, Twitter, Google, and Microsoft, Russia’s local internet ecosystem remains very strong: VKontakte and Odnoklassniki continue to dominate social networking, and Yandex—the biggest RuNet portal with a daily audience of 27 million users—is the most widely used online portal (see Table 33.1).18 The public cloud services market is largely dominated by international players (e.g. Microsoft, SAP, Amazon), but local players including CROC, SKB Kontur, and DataLine are gaining ground, supported by the government’s strong push towards import substitutions in the IT sector.19 The country’s four largest mobile operators (based on number of subscribers, revenue, and network coverage) are Mobile TeleSystems (MTS), MegaFon, Beeline, and Tele 2. Over the last decade, they have established a nationwide presence; together, they now control nearly 83 per cent of all Russian wireless subscribers.20 In the broadband market, Rostelecom (a company in which the government holds more than 48 per cent of the shares) and TransTelecom (part of the state-run Russian Railways) hold almost 50 per cent of the market. Rostelecom controls nearly all key elements of the broadband value chain (access, backbone, and international connectivity), and it operates a 500,000km-plus digital fibre-optic backbone network that transmits voice, data, and video across the entire Russian Federation. While international companies remain largely independent of Russian authorities, the state has initiated changes in regulation and ownership structure of major local internet actors, similar to what has been done to traditional media outlets, seeking to establish both de jure and de facto control over them. This strengthening of control is often framed

18   ‘WEB-Index: The Audience of Internet Projects’ .
19   ‘Russia Cloud Services Market: 2018–2022 Forecast and 2017 Analysis’ (IDC: The premier global market intelligence company) .
20  ‘Russia Wireless’ (2017) .


as ‘measures to increase national security as well as to safeguard the individual security of Russian citizens’, and it has been implemented in stages, starting with the telecom and broadband providers and followed by the most popular platforms and services.21 The state’s control over internet service providers (ISPs) is instituted largely via legal mechanisms, such as telecom licensing, mandating that the providers identify users, surveil their online activities, and cooperate with law enforcement. Failure to comply with these requirements can lead to fines and a possible revocation of the provider’s licence. Another, less straightforward source of leverage is the government’s dominance in backbone infrastructure through government control of Rostelecom and TransTelecom. Together, these companies hold almost 50 per cent of the broadband market. As any mobile operator or ISP needs to rely on backbone infrastructure, the government’s overall dominance in this market segment gives it enormous power over the key network infrastructure and its operations.22 Among the most notable instances of ‘government interest’ in internet companies is the priority share in Yandex held by a leading state bank, Sberbank, giving it veto power on the sale of more than 25 per cent of Yandex’s shares.23 Other examples include Mail.ru, VKontakte, and Odnoklassniki, which as of this writing are controlled by an oligarch closely affiliated with the Kremlin.24 When establishing control over global platforms was not feasible through ownership, the Kremlin ensured that regulatory restrictions and the accompanying liability for non-compliance applied broadly to all platforms, regardless of their country of residence.
The nexus of Russian jurisdiction could include the mere accessibility of the platform in Russia, the processing of personal data of Russian citizens, or the availability of a particular function to Russian users.25 The government has also come to rely on the policies of these global companies to remove or block sensitive content. For instance, according to the latest transparency report from Twitter, the company withheld twenty-three accounts and 630 tweets for violation of Russian law in response to requests submitted by the Russian government during the second half of 2017, and removed ‘some content’ in 224 accounts for alleged violations of Twitter’s terms of service.26

21   Carolina Vendil Pallin, ‘Internet Control through Ownership: The Case of Russia’ [2016] Post-Soviet Affairs 1.
22   See Carlo Maria Rossotto and others, A Sector Assessment: Broadband in Russia (World Bank Group 2015); ‘Russia Wireless’ (n. 20).
23   Yandex NV, ‘Form 20-F (SEC Filings)’ (sec.gov, 2017) .
24   ‘VKontakte CEO Sells Stake to Billionaire Usmanov’s Partner’ (Bloomberg News, 25 January 2014) ; Amar Toor, ‘How Putin’s Cronies Seized Control of Russia’s Facebook’ (The Verge, 31 January 2014) .
25   For example, the package of anti-terrorism amendments passed in July 2016, known as ‘Yarovaya’s Law’, mandates that online services offering encryption must assist the Federal Security Service (FSB) with decoding encrypted data.
26   Twitter, ‘Removal Requests’ .



2.  The Evolution of Internet Regulations and ISP Liability

For a long time, the Russian internet remained relatively free compared to the media sector, with the government and the presidential administration adopting a liberal, hands-off approach to its regulation.27 Key policy stakeholders publicly denied any plans to regulate the internet or censor content. Similarly, the regulation of intermediary liability was scarce. It was only in 2006 that the Law On Information, Information Technologies and Protection of Information (‘Law On Information’) introduced an EU-like approach to intermediary liability by exempting ISPs involved in transmitting, storing, and facilitating access to information from liability for third party content and actions.28 However, the law excluded liability for IP infringements, which was left to the courts to adjudicate. In 2008, in the landmark case Kontent i Pravo v Masterhost, the Supreme Arbitration Court of the Russian Federation outlined key principles under which an ISP can be exempt from liability for hosting copyright-infringing content: a provider is not liable if it does not initiate transmission, select the addressee, or modify the transmitted content.29 That regulatory gap was finally closed in July 2013, when the Russian Parliament (the Duma) adopted amendments to the Civil Code that extended the rules on intermediary liability to IP matters and introduced a notice-and-takedown system.30 Since 2011–12, while trying to cover its desire for control with a patina of legitimacy, the government pushed forward new regulatory restrictions—first under the pretext of protecting children online, and then under the rationale of combating terrorism and protecting national security.
On a global level, it focused on promoting ‘internal stability, as well as sovereignty over its “information space”’,31 and even considered the possibility

27  See Michael Birnbaum, ‘Russian Blogger Law Puts New Restrictions on Internet Freedoms’ (Washington Post, 31 July 2014) ; Duffy (n. 9); Vendil Pallin (n. 21).
28   Federal’nyy zakon ot 27.07.2006 N 149-FZ ‘Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii’//‘Sobraniye zakonodatel’stva RF’, 31.07.2006, N 31 (1 ch), st. 3448. See also Federal Law No. 149-FZ, on Information, Information Technologies and Protection of Information .
29   Postanovleniye Prezidiuma VAS RF ot 23.12.2008 N 10962/08 po delu N A40-6440/07-5-68 . See also ‘Kontent I Pravo v. Masterhost’ .
30   See Federal’nyy zakon ot 02.07.2013 N 187-FZ ‘O vnesenii izmeneniy v otdel’nyye zakonodatel’nyye akty Rossiyskoy Federatsii po voprosam zashchity intellektual’nykh prav v informatsionno-telekommunikatsionnykh setyakh’//‘Sobraniye zakonodatel’stva RF’, 08.07.2013, N 27, st. 3479. See also Federal Law No. 187-FZ .
31   See US Department of State, Christopher Painter, Statement Before the Senate Foreign Relations Subcommittee on East Asia, the Pacific, and International Cybersecurity Policy, ‘International Cybersecurity Strategy: Deterring Foreign Threats and Building Global Cyber Norms’ (25 May 2016) .


of isolating the national segment of the internet by imposing limitations on local and international traffic (see Table 33.2).32 When the government first introduced the Blacklist law in 2012, it was met with fierce resistance from the industry and public outcry from users. Local and global companies like Google, Yandex, Mail.ru, Wikimedia, and many others collectively launched a public campaign against the government’s proposal to use IP blocking to censor online content. The Russian-language version of Wikipedia shut down its page for one day, Google

Table 33.2  Key regulatory developments that have affected online intermediaries since 2011

Year   Development

2011   A government-affiliated advocacy group called ‘The League for Safe Internet’ begins lobbying for a law that will ‘protect’ children from harmful information online.33

2012   The Duma adopts a so-called ‘Blacklist law’ that introduces a unified register of websites that contain information about drug usage, advocate suicide or describe methods of suicide, or contain child pornography, as well as prescribes a mechanism for blocking those sites.34

2013   The Duma adopts a general framework for intermediary liability for IP infringements and introduces a notice-and-takedown system, as well as a preliminary injunctive relief mechanism to block websites accused of copyright infringement.35

2013   The so-called ‘Lugovoi Law’ introduces a simplified blocking procedure for websites that contain calls for extremist activities or participation in unsanctioned mass protests, giving the General Prosecutor and his deputies the power to initiate the blocking of such websites within 24 hours and without a court order.36

(continued)

32  Anastasia Golitsyna and Pavel Kantyshev, ‘Runet Budet Polnost’yu Obosoblen K 2020 Godu’ (Vedomosti, 13 May 2016) .
33  ‘Russia’s Internet Defense League: A “Grassroots Initiative” Created By The Government’ (RadioFreeEurope/RadioLiberty, 18 July 2012) .
34   Federal’nyy zakon ot 28.07.2012 N 139-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “O zashchite detey ot informatsii, prichinyayushchey vred ikh zdorov’yu i razvitiyu” i otdel’nyye zakonodatel’nyye akty Rossiyskoy Federatsii’//‘Sobraniye zakonodatel’stva RF’, 30.07.2012, N 31, st. 4328. See also ‘Federal Law No. 139-FZ of 28 July 2012 (“Blacklist law”) on the protection of children from information harmful to their health and development and other legislative acts of the Russian Federation’ .
35   Federal Law No. 187-FZ (n. 30).
36   Federal’nyy zakon ot 28.12.2013 N 398-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii”’//‘Sobraniye zakonodatel’stva RF’, 30.12.2013, N 52 (chast’ I), st. 6963. See also Federal Law No. 398-FZ of 28 December 2013 On amending the federal law ‘on information, information technologies and information protection’ .



Table 33.2  Continued

Year   Development

2014   The so-called ‘Bloggers’ Law’ mandates that bloggers with more than 3,000 daily visitors refrain from using their blogs for illegal activities, ensure correctness of published information, publish their contact details, and comply with all applicable laws (electoral, mass media, privacy, defamation, etc.). The law also introduces the broad concept of ‘organizers of distribution of information’ (ODI), defined as those who ‘enable the receipt, transmission, delivery and/or processing of electronic messages of Internet users’. ODIs are required to notify the Roskomnadzor, a government telecom watchdog, before commencing the provision of services; to retain and store in Russia certain user data; and to cooperate with law enforcement authorities. Failure to comply may result in legal fines and the blocking of ODI’s service or website.37

2014   A new ‘Data Localization Law’ requires that the processing of personal data of Russian citizens is conducted only with the use of servers located in Russia. Failure to comply with this requirement may result in blocking of the operators’ services or websites.38

2014   Amendments to the Law On Information introduce a procedure for the ‘perpetual blocking’ of websites involved in ‘repeated’ IP infringements, with no right of reinstatement.39

2015   The Russian version of the ‘right to be forgotten’ law requires that search engines operating in Russia remove search results listing information on individuals when such information is unlawfully disseminated, untrustworthy, outdated, or irrelevant.40

37  Federal’nyy zakon ot 05.05.2014 N 97-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii” i otdel’nyye zakonodatel’nyye akty Rossiyskoy Federatsii po voprosam uporyadocheniya obmena informatsiyey s ispol’zovaniyem informatsionno-telekommunikatsionnykh setey’//‘Sobraniye zakonodatel’stva RF’, 12.05.2014, N 19, st. 2302. See also Federal Law No. 97-FZ of 5 May 2014 ‘on amending the federal law on information, information technologies and protection of information’ and other legislative acts of the Russian Federation concerning putting in order information exchange using information and telecommunication networks.
38  Federal’nyy zakon ot 21.07.2014 N 242-FZ ‘O vnesenii izmeneniy v otdel’nyye zakonodatel’nyye akty Rossiyskoy Federatsii v chasti utochneniya poryadka obrabotki personal’nykh dannykh v informatsionno-telekommunikatsionnykh setyakh’//‘Sobraniye zakonodatel’stva RF’, 28.07.2014, N 30 (Chast’ I), st. 4243. See also Federal Law No. 242-FZ of 21 July 2014 ‘on amending certain legislative acts of the Russian Federation as to the clarification of the processing of personal data in information and telecommunications networks’.
39  Federal’nyy zakon ot 24.11.2014 N 364-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii” i Grazhdanskiy protsessual’nyy kodeks Rossiyskoy Federatsii’//‘Sobraniye zakonodatel’stva RF’, 01.12.2014, N 48, st. 6645. See also Federal Law No. 364-FZ of 24 November 2014 ‘On Amending Federal Law “On Information, Information Technologies, and Data Protection” and Civil Procedural Code of the Russian Federation’.
40   Federal’nyy zakon ot 13.07.2015 N 264-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii” i stat’i 29 i 402 Grazhdanskogo protsessual’nogo kodeksa Rossiyskoy Federatsii’//‘Sobraniye zakonodatel’stva RF’, 20.07.2015, N 29 (chast’ I), st. 4390. See also Federal Law No. 264-FZ of 13 July 2015 amending the Federal Law ‘On


Intermediary Liability in Russia and the Role of Private   655

2016  The bundle of anti-terrorism amendments known as ‘Yarovaya’s Law’ mandates that telecom operators and online service providers store users’ metadata and content locally, and provide that information to law enforcement authorities on request. The Law also requires that online services offering encryption must assist the Federal Security Service (FSB) with decoding encrypted data.41

2016  LinkedIn becomes the first global online service to be blocked in Russia for failing to comply with data localization requirements.42

2016  The ‘News Aggregators Law’ mandates that news aggregators operating in Russia can only be owned by Russian companies or citizens and should be responsible for the reliability of the information that appears on their services.43

2017  Government agency Roskomnadzor blocks BlackBerry, Line, and Imo messengers, as well as the video service Vchat, for failing to comply with data localization requirements.44

2017  The communication app Zello is blocked for failing to register as an ‘organizer of distribution of information’.45

2017  The ‘Anti-VPN Law’ requires that operators of search engines, virtual private networks (VPNs), and internet anonymizers cooperate with Roskomnadzor and block access to websites on Roskomnadzor’s blacklist of banned websites.46 (continued)

Information, Information Technologies, and Information Protection’ and Articles 29 and 402 of the Civil Procedural Code of the Russian Federation’ (aka Right to be Forgotten Law).
41  Federal’nyy zakon ot 06.07.2016 N 374-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “O protivodeystvii terrorizmu” i otdel’nyye zakonodatel’nyye akty Rossiyskoy Federatsii v chasti ustanovleniya dopolnitel’nykh mer protivodeystviya terrorizmu i obespecheniya obshchestvennoy bezopasnosti’//‘Sobraniye zakonodatel’stva RF’, 11.07.2016, N 28, st. 4558. See also Federal Law No. 374-FZ of 7 July 2016 on amending the Federal Law ‘on combating terrorism and certain legislative acts of the Russian Federation regarding the establishment of additional counter-terrorism measures and public security’.
42  See Ingrid Lunden, ‘LinkedIn Is Now Officially Blocked in Russia’ (TechCrunch, 17 November 2016).
43  Federal’nyy zakon ot 23.06.2016 N 208-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii” i Kodeks Rossiyskoy Federatsii ob administrativnykh pravonarusheniyakh’//‘Sobraniye zakonodatel’stva RF’, 27.06.2016, N 26 (Chast’ I), st. 3877.
44  See ‘Russia Bans Several Messaging Apps for “not Complying with Law”’ (RT International).
45  See Isaac Webb, ‘Russia Blocks Walkie-Talkie App Zello As Truckers Strike’ (Global Voices, 10 April 2017).
46  Federal’nyy zakon ot 29.07.2017 N 276-FZ ‘O vnesenii izmeneniy v Federal’nyy zakon “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii”’//‘Sobraniye zakonodatel’stva RF’, 31.07.2017, N 31 (Chast’ I), st. 4825.


Table 33.2  Continued

Year  Development

2017  The ‘Messengers Law’ requires that companies providing instant messaging services in Russia identify users based on their mobile phone numbers, store Russian users’ data locally, restrict the transmission of messages containing prohibited information, and restrict the transmission of certain messages at the request of the government.47

2018  The messaging service Telegram is banned by court order for rejecting the FSB’s request to provide access to encrypted user communications. Roskomnadzor enforces the ban by blocking more than 15 million IP addresses, including ones associated with Amazon and Google services.48

published a blogpost criticizing the law, and Yandex posted a black banner on its homepage.49 In response, government officials and members of parliament set up consultations with multiple stakeholders but, according to our interviewees, these consultations only created an appearance of seeking feedback from the industry and did not have a meaningful impact. Currently, few industry representatives dare to challenge the government’s initiatives for fear of being called a foreign agent or an ‘enemy of the children’. For others, any consultation with the government is doomed to fail, either because officials use it as a cover to legitimize new restrictions or because the industry is unable to speak with one voice. As some anonymous study subjects said:

Everything is decided somewhere at the top. There are no consultations with the industry. Because of this, they adopt insane laws that cannot be fulfilled. Regarding consultations with the industry, the state only creates an appearance to be able to say later ‘we consulted with the industry’.50

In short, government doesn’t care about ISPs’ opinion. ISPs from their side prefer to avoid public criticism of government initiatives.51

There is a public discussion of laws as such, but nobody pays any attention to it. The position within the industry is not completely unified either, and many companies prefer to remain in the shadows without raising a public profile. . . . it is a complete mess, there is no single voice within the industry, although a significant part of the industry understands the futility and meaninglessness of such participation . . . someone from the industry may support new control measures, someone is against, someone is neutral and says that nothing will help.52

47  Federal’nyy zakon ot 29.07.2017 N 241-FZ ‘O vnesenii izmeneniy v stat’i 10.1 i 15.4 Federal’nogo zakona “Ob informatsii, informatsionnykh tekhnologiyakh i o zashchite informatsii”’//‘Sobraniye zakonodatel’stva RF’, 31.07.2017, N 31 (Chast’ I), st. 4790.
48  See Natasha Lomas, ‘Telegram Hit with Block in Russia over Encryption’ (TechCrunch, 13 April 2018).
49  See ‘Russian Duma Adopts “Web Blacklist” Bill despite SOPA-Style Censorship Outcry’ (RT International, 11 July 2012).
50  Sergei Hovyadinov, Interview with I5, Skype.
51  ibid., Interview with I1, Email.
52  ibid., Interview with I2, Skype.


One notable exception was the telecom industry’s unified, albeit unsuccessful, pushback against a set of draft laws known as ‘Yarovaya’s Law’ in 2016, which required telecom companies to store users’ metadata and the content of their communications locally. However, these companies’ main concern was not the restriction of users’ freedom and privacy online, but the significant financial investment needed to ensure compliance. As a spokesperson for one of Russia’s largest mobile operators said, the bill ‘put the industry on the brink of collapse’.53

3.  How the Government Relies on Intermediaries

To operationalize the new restrictions, the government and legislature imposed regulatory obligations on intermediaries focused on content blocking, data storage, and surveillance requirements. For example, when a government agency or a court determines any content to be illegal, internet access providers are required to promptly block user access to that content. On the surveillance side, telecom operators are obliged, among other responsibilities, to cooperate with law enforcement authorities and to install equipment that permits the authorities to intercept online communication directly, without going through any other proxy.54 Email communication providers are required to store user content locally and, if requested, provide law enforcement with access to it. In effect, the introduction of these regulations expanded intermediary liability from a traditional exemption from liability for third-party content and actions into a set of mandatory obligations designed to assist the state with informational control.

3.1  Content

The Law On Information lists content categories that are subject to a centralized blocking procedure and grants authority to specified administrative agencies and courts to make decisions on blocking such content. While the criteria for most categories are defined by law, there is also a vague category, ‘other illegal information’, which is left to the discretion of the judges and public prosecutors who oversee these types of cases (see Table 33.3).

53  Mansur Mirovalev, ‘“This Is 100% a Step toward an Iron Curtain.” Russia Weighs Anti-Terrorism Law Criticized as Draconian’ (Los Angeles Times, 27 June 2016).
54  This technical system for search and surveillance is also known as ‘SORM’. See e.g. Wikipedia contributors, ‘SORM’ (Wikipedia, 3 October 2018).

Table 33.3  Content categories that are subject to the centralized blocking mechanism under the Law On Information

Prohibited content and the authority that decides on the legality of the content:

(1) Materials with pornographic images of minors and/or announcements about engaging minors as performers for participation in entertainment events of a pornographic nature.
Authority: Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roskomnadzor). Register: Unified Register.

(2) Information about ways and methods of developing, making, and using narcotic agents, psychotropic substances, and their precursors; places where such agents, substances, and their precursors can be acquired; methods and places for cultivating plants containing narcotic agents.
Authority: Federal Drug Control Service (until 25 November 2016); Ministry of Internal Affairs (since 25 November 2016). Register: Unified Register.

(3) Information on ways to commit suicide, as well as calls to commit suicide.
Authority: Federal Service for Surveillance on Consumer Rights Protection and Human Wellbeing (Rospotrebnadzor); Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roskomnadzor). Register: Unified Register.

(4) Information about minors being victims of unlawful actions (omission to act) whose dissemination is prohibited by federal law.
Authority: To be designated. Register: Unified Register.

(5) Information that violates the provisions of laws on the organization and conduct of gambling and lotteries.
Authority: Federal Tax Service. Register: Unified Register.

(6) Information with proposals for the retail sale of alcohol products when their retail sale is restricted or prohibited by applicable laws.
Authority: To be designated. Register: Unified Register.

(7) Any other information declared by a court as information the dissemination of which is prohibited in the Russian Federation.
Authority: District courts. Register: Unified Register.

(8) Information that contains calls for mass disorder, pursuance of extremist activities, or participation in mass (public) events conducted in breach of established procedure; information or materials of a foreign or international non-governmental organization whose activities are deemed undesirable in the territory of the Russian Federation.
Authority: General Prosecutor, deputies of the General Prosecutor. Register: Register 398-FZ.

(9) Copyright-infringing materials.
Authority: Moscow City Court. Register: NAP Register.

State registers: the ‘Unified Register’ is the Unified Register of Domain Names, Internet Website Page Locators, and Network Addresses that Allow the Identification of Internet Websites Containing Information Prohibited for Distribution in the Russian Federation; ‘Register 398-FZ’ is the Register of Domain Names, Internet Website Page Locators, and Network Addresses that Allow the Identification of Internet Websites Containing Riotous Statements and Appeals to Resort to Extremism or to Participate in Mass (Public) Events Held in Violation of Applicable Laws; the ‘NAP Register’ is the Register of Domain Names, Internet Website Page Locators, and Network Addresses that Allow the Identification of Internet Websites Containing Information Distributed in Violation of Exclusive Rights.

Type of intermediary and obligations:
Hosting provider:
• on notification from Roskomnadzor, inform a site owner and request the deletion of a web page containing prohibited information;
• in case of non-compliance by the site owner, block access to the website.
Telecom provider:
• monitor the state registers;
• block access to the website for its users.
Operators of search engines, internet anonymizers, and virtual private networks:
• block access to websites from the Unified Register; exclude these resources from search results.

Source: Law On Information; Government Decree No. 1101 of 26 October 2012.55

55  Postanovleniye Pravitel’stva RF ot 26 oktyabrya 2012 N 1101 ‘O yedinoy avtomatizirovannoy informatsionnoy sisteme “Yedinyy reyestr domennykh imen, ukazateley stranits saytov v informatsionno-telekommunikatsionnoy seti ‘Internet’ i setevykh adresov, pozvolyayushchikh identifitsirovat” sayty v informatsionno-telekommunikatsionnoy seti “Internet”, soderzhashchiye informatsiyu, rasprostraneniye kotoroy v Rossiyskoy Federatsii zapreshcheno’//Sobraniye zakonodatel’stva Rossiyskoy Federatsii ot 29 oktyabrya 2012 g, N 44, st. 6044.

In general, there is a standard process for blocking content. When allegedly illegal content is reported to the state telecom watchdog Roskomnadzor,56 one of the designated administrative agencies is tasked to establish its legality. If the content is found to be illegal, Roskomnadzor orders the hosting provider, regardless of its country of residence, to initiate removal or to block Russian users from accessing the identified websites or materials hosted on its platform. If the content is not promptly blocked or removed, the website’s IP address is added to one of three state registers (see Table 33.3), which triggers an obligation for telecom operators throughout the country to prevent their users from accessing the website. The process is more expedited for ‘extremism’ and ‘calls for public protests’, which can be blocked immediately based on a decision by the General Prosecutor or his deputies.

Roskomnadzor adds not only specific URLs to its registers but also their domain names and IP addresses, leaving the operator to decide which blocking method to deploy.57 Blocking by URL is the least intrusive method and, according to our interviews, is preferred by operators. However, this method requires the use of additional equipment, which is costly and often unavailable to many small and medium-sized operators. In addition, if web content is transmitted using the encrypted HTTPS protocol, ISPs and telecom operators cannot block individual web pages within a domain and have no choice but to block either an entire domain or an IP address. Both actions result in the same consequence: the collateral blocking of adjacent legitimate resources.58 This happened, for example, in August 2015, when Russian telecom operators temporarily blocked the entire Reddit platform following an order from Roskomnadzor to block a thread related to recreational drug use.59 According to RosKomSvoboda, more than 4 million legitimate resources remain blocked by operators that employ such ‘wholesale’ methods of blocking.60

Some local web-hosting companies try to shield their clients from this collateral damage and, at their own initiative, move ‘controversial content’ to separate IP addresses. This way, blocking a ‘blacklisted’ IP address does not affect other hosted domains and web pages:

56  The full name of Roskomnadzor is the Federal Service for Supervision of Communications, Information Technology, and Mass Media.
57  For a good overview of the key terms related to online blocking, see Daphne Keller, ‘Appendix 3: Glossary: Internet Content Blocking Options and Vocabulary’ in Daphne Keller (ed.), Law, Borders, and Speech: Proceedings and Materials (Stanford Law School CIS 2017) 51.
58  See ‘Russia Country Profile’ (2016) Freedom on the Net 2016 1; ‘Country Reports on Human Rights Practices for 2017’ (n. 7); Freedom House, ‘Russia Country Report’ (n. 14).
59  See Andrew Griffin, ‘The Whole of Russia Can’t Get on Reddit because of Magic Mushrooms’ (The Independent, 13 August 2015).
60  See ‘Reyestr Zapreshchennykh Saytov’.

It is very simple. The request [from Roskomnadzor] comes in—we send it to a client. If they don’t respond, we look at that content and remove the URL—or if I see


that the content is controversial, I can allocate to that client a separate IP address and leave this URL up, in case the client wants to appeal the Roskomnadzor request . . . Usually, none of our clients go into disputes . . . Our clients that are potentially subject to blocking (generally, sites with music and movies) are hosted on separate IPs so that [their blockage does] not affect the other clients.61
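The collateral effect of such ‘wholesale’ blocking can be illustrated with a short sketch (the domain names and IP addresses below are invented for the example, not taken from any register): when several independent sites share one IP address, blacklisting that address cuts them all off, which is exactly the separate-IP hosting practice described in the quote above is designed to avoid.

```python
# Hypothetical illustration of IP-level blocking: one flagged site shares
# an IP address with unrelated, legitimate sites.
hosting = {
    "203.0.113.10": ["flagged-site.example", "news.example", "shop.example"],
    "203.0.113.11": ["blog.example"],
}

def ip_block(blacklisted_ips, hosting_map):
    """Return every domain cut off when entire IP addresses are blocked."""
    blocked = []
    for ip, domains in hosting_map.items():
        if ip in blacklisted_ips:
            blocked.extend(domains)
    return blocked

# Only one domain is actually targeted, but its whole shared IP is blocked.
blocked = ip_block({"203.0.113.10"}, hosting)
collateral = [d for d in blocked if d != "flagged-site.example"]
print(blocked)     # all three domains behind the shared IP
print(collateral)  # the two legitimate sites blocked as collateral damage
```

Moving the flagged site to its own IP address, as the interviewed hosting provider describes, removes the legitimate domains from the blocked set while leaving the register entry effective against the flagged site alone.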

Roskomnadzor uses a combination of technical and legal mechanisms to ensure compliance. In December 2016, it ordered all telecom operators to install a monitoring system called ‘Revizor’. The system now covers nearly all licensed operators that provide internet access and allows Roskomnadzor to monitor the status of the blocking of prohibited information remotely and around the clock.62 When operators fail to consult the registers or to block content, Roskomnadzor initiates administrative proceedings and brings infringers to court. According to its latest public report, in 2017 alone the agency brought 3,481 such administrative cases against violators.63

3.2  Surveillance

A state’s capacity to conduct surveillance may depend on the extent to which business enterprises cooperate with or resist such surveillance.64 Russian law does not prohibit anonymity online, but the government has introduced a set of measures that require intermediaries to identify internet users. For example, providers of internet access, including wi-fi access in public places, have to collect user IDs;65 instant messaging services have to identify the phone numbers of their users;66 and bloggers with 3,000 or more daily readers must register with Roskomnadzor and publish their contact information (see Table 33.4).67

Under the Law On Communications and the conditions of telecom licences, telecom operators must cooperate with law enforcement authorities and install SORM equipment that permits law enforcement to intercept online communications directly, without going through another proxy. Formally, online surveillance should be accompanied by a court order in each case, but the law prohibits the disclosure of information about SORM measures, and the content of court orders is not public. With so little oversight and such a lack of transparency, the process is easily susceptible to abuse. There have been several reported cases where law enforcement agencies tried to obtain user data or intercept communications outside a valid legal process. Pavel Durov, the founder and former

61  Sergei Hovyadinov, Interview with I3, Skype.
62  Roskomnadzor, ‘Public Report of the Federal Service for Supervision of Communications, Information Technology, and Mass Media 2017’ (2018).
63  ibid.
64  See Kaye (n. 1).
65  See Postanovleniye Pravitel’stva Rossiyskoy Federatsii ot 10 sentyabrya 2007 g N 575 ‘Ob utverzhdenii Pravil okazaniya telematicheskikh uslug svyazi’//Sobraniye zakonodatel’stva Rossiyskoy Federatsii ot 2007 g, N 38, st. 4552.
66  See Federal Law 241-FZ (n. 47).
67  See Federal Law 97-FZ (n. 37).


Table 33.4  The main obligations imposed on intermediaries in order to facilitate online surveillance

User identity

Operators providing internet access, including public wi-fi:
• Identify users and users’ equipment.
• Provide user information to law enforcement authorities.

Any online intermediary that collects personal data of Russian citizens:
• Store personal data of Russian users locally.
• Provide user information to law enforcement authorities.

Providers of instant messaging services:
• Identify users based on their mobile phone number.
• Collect and store Russian users’ data locally.
• Provide user information to law enforcement authorities.

User activity (metadata and content)

Telecom operators:
• Install equipment that permits law enforcement to intercept online communication (‘SORM’).
• Collect and store metadata for three years.
• Collect and store the content of user communication for six months.
• Provide metadata and content of user communication to law enforcement authorities at their request.

Any intermediary that enables the receipt, transmission, delivery, and/or processing of electronic messages of internet users (‘organizers of distribution of information’):
• Collect and store metadata for one year.
• Collect and store the content of user communication for six months.
• Provide metadata and content of user communication to law enforcement authorities at their request.
• Provide encryption keys and assist law enforcement in decoding user communication.

CEO of the social network VKontakte, alleged that during public protests in Ukraine in 2013–14, Russian security services sought from his company the private user data of members of several Ukrainian protest groups.68 In another case, the Anti-Corruption Foundation, led by prominent opposition leader Alexei Navalny, accused the Russian mobile operator MTS of helping law enforcement hack into its accounts on the messaging application Telegram.69

68  See Brian Ries, ‘Founder of “Russia’s Facebook” Says Government Demanded Ukraine Protestors’ Data’ (Mashable, 16 April 2014).
69  See Max Seddon, ‘Activists Say Russian Telecoms Group Hacked Telegram Accounts’ (Financial Times, 6 May 2016).


When operators fail to fulfil their obligations—identifying users, installing SORM equipment, or locally storing the personal data of Russian users—Roskomnadzor initiates administrative proceedings and brings them to court, seeking to impose administrative penalties or block their service. According to the Roskomnadzor public report, in 2017 it audited more than 30,000 wi-fi access points and identified more than 2,000 violations of the requirement to identify network users; in many of those instances, Roskomnadzor brought administrative cases against infringers.70

Our interviews uncovered an interesting phenomenon of compliance with SORM obligations, or what one interviewee called ‘compliance on paper’. While major national telecom operators strive to follow all the licensing conditions thoroughly, smaller operators, which are not prominent targets for the FSB, may avoid additional costs by either installing cheaper, less powerful equipment or creating an outward appearance of compliance.

. . . It’s easier to pay and install equipment from a ‘recommended’ supplier that will not work properly anyway. It’s a bit like racketeering.71

You can always ‘make a deal’, as practice shows. They may tell you, ‘here is a company, you should pay them.’ . . . Everyone who pays this entity has SORM ‘on paper’. In different regions, there is a different level of ‘serving the motherland’. Someone is more accommodating, someone requires doing everything ‘to the letter’. Nobody pays much attention to small operators.72

In November 2016, the professional networking platform LinkedIn became the first victim of the new data localization law. Roskomnadzor blocked its website after the company failed to store Russian user data on servers located on Russian territory. That case set a clear precedent for global companies, which are now reviewing their compliance models in Russia. ‘Everyone needs to abide by the law . . . everything will be as it should be for sure,’ said the head of Roskomnadzor, Aleksander Zharov, clearly hinting at larger platforms like Facebook, Google, and Twitter.73

4.  Compliance Dilemma

The new set of requirements has created a compliance dilemma for private companies. They have to strike the right balance between full obedience to government requests on the one hand and, on the other, the prioritization of their users, their obligations under human rights law, and their role as a trusted guardian of users’ data.

70  Roskomnadzor (n. 62).
71  Sergei Hovyadinov, Interview with I4, Skype.
72  ibid., Interview with I5, Skype.
73  Clare Sebastian, ‘Russia Threatens to Block Facebook’ (CNNMoney, 26 September 2017).


That choice is much simpler, or nearly absent, for local companies. These companies are controlled by local shareholders, their main assets and employees are based in Russia, they were created and operate under local laws, and they are enmeshed in the local legal culture. Currently, the idea of openly opposing the government’s demands while putting human rights first is nearly unheard of among these actors. The state has many formal and informal mechanisms to compel compliance both from a company and from its decision-makers.

One scenario in which local companies still care about their popular perception is when they expand internationally and start trading publicly on one of the major international stock markets. At this point, their reporting obligations expand significantly and their practices start attracting global attention from regulators and internet activists. In 2011, in the run-up to its listing on Nasdaq, Yandex had to address challenging questions about the disclosure of data on Russian political activists to the FSB, while acknowledging in its listing prospectus that it may be subject to ‘aggressive application of contradictory or ambiguous laws or regulations, or to politically-motivated actions’ in Russia.74

For global companies, this choice is much harder. On the one hand, submissive compliance with government requests may trigger a negative backlash among shareholders,75 policymakers,76 users,77 and employees at home.78 On the other hand, being a standard-bearer for values such as freedom of speech in an autocratic environment like Russia, and pushing back on excessive government requests while at the same time pursuing revenue maximization, might be unpalatable to the local regulatory environment. See Figure 33.1.

When foreign companies fail to cooperate, the government is not shy about launching public relations (PR) campaigns to put additional pressure on their management and employees. In early 2013, when YouTube went to a local court to invalidate the government’s decision to block an allegedly suicidal video (which was actually an instructional video for putting on Halloween make-up), the authorities launched a PR

74  Ekaterina Drobinina, ‘Yandex Passed Information to FSB’ (BBC, 3 May 2011).
75  In 2007 and 2008, during annual shareholder meetings, Google’s executives were criticized by shareholders for operating a version of its search engine in China that allegedly complied with China’s censorship rules. See Google Annual Shareholder Meeting, 2007 and 2008.
76  In 2006, Google, Yahoo, Microsoft, and Cisco Systems came under fire at Congressional hearings for their alleged collaboration with the Chinese government. See Tom Zeller Jr, ‘Web Firms Are Grilled on Dealings in China’ (New York Times, 16 February 2006).
77  See e.g. an appeal to global internet companies not to move their users’ personal data to Russia: ‘Don’t move personal data to Russia!’.
78  See Kate Conger and Daisuke Wakabayashi, ‘Google Employees Protest Secret Work on Censored Search Engine for China’ (New York Times, 16 August 2018).


Figure 33.1  Google’s revenue in Russia, 2012–17 (vertical axis: revenue in millions of rubles, 0–50,000; horizontal axis: years 2012–2017). Source: Audit-it.ru.79

campaign chastising YouTube and publicly naming and shaming its local employees for not complying with the law and ‘compromising’ the authorities’ fight against dangerous videos that promote suicide.80 The authorities have also repeatedly issued threats to block global platforms in Russia for their lack of compliance with data localization or content-removal requests. In May 2014, Maksim Ksenzov, then the deputy head of Roskomnadzor, plainly outlined the government’s approach to dealing with global platforms. ‘We can shut down Twitter or Facebook tomorrow within several minutes,’ Ksenzov said. ‘We do not see big risks in that. If we estimate that the consequences of “turning off” the social networks will be less significant compared to the harm to Russian society caused by the non-constructive position of the leadership of these global companies, we will do what is required by law.’81

Transparency reports published by global companies operating in Russia suggest that the government has been trying to remove more and more content from these platforms, and the companies have largely complied with the government’s demands. For instance, between 2012 and 2017, the number of content-removal requests Google received from the Russian authorities increased almost 200-fold, from 120 in 2012 to

79  ‘Bukhgalterskaya otchetnost’ i fin. analiz GUGL za 2012-2017 gg. (INN 7704582421)’ (Audit-It.ru).
80  ‘Rospotrebnadzor obvinil YouTube v podryvnoy rabote protiv zashchity detey’ [‘Rospotrebnadzor accused YouTube of subversive work against the protection of children’] (Forbes.ru, 20 March 2013).
81  Alena Sivkova, ‘My Ne Vidim Bol’shikh Riskov v Blokirovke Twitter v Rossii’ (Izvestiya, 16 May 2014) (author’s translation).


23,823 in 2017.82 For Twitter, that number increased from 0 in 2012 to 2,505 in 2017.83 However, the two companies’ levels of compliance with the requests for the last reported period differed: while Google pushed back on 22 per cent of the government’s requests, Twitter did so in 49 per cent of the cases.84
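As a quick arithmetic check, using only the request counts quoted above, the ‘almost 200-fold’ characterization follows directly from the reported figures:

```python
# Content-removal request counts for Google reported in the transparency
# reports cited above (Russian authorities, 2012 vs 2017).
google_2012 = 120
google_2017 = 23823

growth = google_2017 / google_2012
print(f"{growth:.1f}-fold increase")  # prints "198.5-fold increase"
```

The ratio works out to roughly 198.5, which the text rounds to ‘almost 200-fold’.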

5.  Transparency and Compliance with Human Rights

In his 2016 report to the UN Human Rights Council, the Special Rapporteur on freedom of expression, David Kaye, highlighted the need for the private sector to develop and implement policies that take into account their potential impact on human rights, ensuring ‘the greatest possible transparency in their policies, standards and actions that implicate the freedom of expression and other fundamental rights’.85 Similarly, in March 2018, the Council of Europe, of which Russia is a member, issued a recommendation on the roles and responsibilities of internet intermediaries, calling for them to ‘respect the internationally recognised human rights and fundamental freedoms of their users and of other parties who are affected by their activities’, regardless of the abilities or willingness of the states in which they operate ‘to fulfil their own human rights obligations’.86
In autocratic environments like Russia, where governments increasingly pressure companies to monitor user activity online and restrict access to information, the willingness of intermediaries to build their practices around international human rights standards can help keep government influence under control and provide some minimum protection for users. Being more transparent can help, not hurt. Without transparency, users are unable to understand the restrictions placed on their freedoms and to challenge those restrictions when appropriate. Several interviewees highlighted the difference between global and local companies in their response to government requests for user data and removal requests: global companies are more transparent. They must satisfy the requirements of both their country of origin and those of Russia, and must follow a more diligent, formalized review

82   ‘Google Transparency Report’. 83   ‘Removal Requests’ (n. 26). 84   However, according to Roskomsvoboda, no Twitter URLs have been added to the Unified Register, which suggests that Roskomnadzor, for some reason, has been satisfied with Twitter’s removal practices. 85   Kaye (n. 1). 86   Council of Europe, ‘Recommendation of the Committee of Ministers to Member States on the Roles and Responsibilities of Internet Intermediaries’ (2018) CM/Rec(2018)2.


process. In contrast, local companies often demonstrate the very opposite approach. According to the Ranking Digital Rights 2018 Corporate Accountability Index, which evaluates the world’s most powerful internet, mobile, and telecommunications companies on their disclosed commitments and policies affecting freedom of expression and privacy, the Russian companies Yandex and Mail.ru scored poorly. Compared to their foreign counterparts, they disclose very little information about policies that affect their users’ freedom of expression and privacy, and almost nothing about how they handle government demands to remove content or to hand over user data. Moreover, these companies lack clear disclosure of users’ options for controlling what information they collect and share, and of whether and how they track users across the internet using cookies, widgets, or other tracking tools.87
Another study, by RosKomSvoboda and the Internet Protection Society, which evaluated the transparency practices of major Russian mobile operators, showed similar findings.88 None of the telecommunications operators inform users that their equipment is connected to the FSB as part of their compliance with SORM requirements, publish any information about government requests for user information, describe how they restrict access to allegedly illegal content, or explain how users can appeal their decisions. These findings reflect the rather submissive stance these operators take with respect to cooperating with law enforcement, which is consistent with what we found in our interviews. Most smaller ISPs and telecom operators rarely conduct due diligence on law enforcement requests, preferring not to question their validity. According to our interviewees, many telecom operators and hosting providers have direct channels of communication with the police and FSB and provide the requested information without delay:

This is law. There is no point in appealing [law enforcement requests]. More so, operators have personnel responsible for communication with the authorities. In our company, it is me. They have my phone number, and they call me, so I am confident that all requests are valid.89

Some [hosting providers] do not check the requirements or may not require official documents . . . These are the requests for user data, user activity, from which IP address user came to the site—here there is no need for a court decision. I usually do a check—if they [law enforcement agencies] call me, I then give them a call back to at least confirm that they are calling from where they say. Law enforcement agencies are afraid to submit official requests, sign, and put seals . . . Other companies can provide information without an official request or based on an email with an attached scan of the request—no one knows whether such a request is valid.90

87   Ranking Digital Rights, ‘2018 Corporate Accountability Index’ (n. 12).
88   Sarkis Darbinyan, Mikhail Klimarev, and Sergei Hovyadinov, Ranking Transparency of Mobile Network Operators in Russia (Roskomsvoboda and OZI 2018).
89   Sergei Hovyadinov, Interview with I5, Skype.
90   ibid., Interview with I3, Skype.


668   Sergei Hovyadinov

6. Conclusions

The rapid expansion of the internet in Russia, combined with the Russian government’s paranoia about its potential to instigate a social uprising, has led to the adoption of significant regulatory restrictions on online content and anonymity. Faced with new technical challenges, the Kremlin has enlisted competent and technically capable internet actors—search engines, social networks, hosting providers, and network operators—to help implement these restrictions and control the flow of information.
This indirect method of state control of the internet is not unique to Russia. Other, more democratic governments around the world also rely on intermediaries as their ‘online proxy agents’ to assist with the enforcement of IP rights, data protection, and state security. What is different in Russia is how closely these private companies, especially local ones, have become integrated into the state’s internet control apparatus, without much transparency or public accountability. The trajectory of the Russian government’s actions against online freedoms since 2014 indicates that governmental control will strengthen, not weaken. Not only does the state rely on online intermediaries to implement its control measures; internet users and civil society also rely on these companies to protect their online freedoms and to provide checks and balances against government malpractice. Prioritizing human rights over the state’s requests in an autocratic environment like Russia is not an easy task. The state can use many levers, official and unofficial, to mandate compliance from both local and global companies.
Yet while Russia remains bound by its international human rights commitments, companies can deploy a combination of greater transparency, a robust internal process for reviewing government requests, and reliance on international human rights law, including the case law of the European Court of Human Rights, in order to counter excessive governmental demands. The UN Guiding Principles on Business and Human Rights provide a solid framework for greater transparency and predictability in that respect. Both local and global internet companies operating in Russia will only benefit from implementing its key principles. In the end, maximum transparency and adherence to universal human rights standards when dealing with autocratic governments will only help these platforms earn the trust of Russian users while staying within the framework of applicable law.


Chapter 34

Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law

Niva Elkin-Koren and Maayan Perel*

Online intermediaries have become a focal point of content moderation. They may enable or disable access by removing or blocking controversial content, or by terminating users’ accounts altogether. Consequently, governments, rightholders, and users around the world are pressing online intermediaries to hone their gatekeeping functions and censor content amounting to hate speech, inciting materials, or copyright infringement.1 The rising pressure on online platforms to block, remove, monitor, or filter illegitimate content is fostering the deployment of technological measures to identify potentially objectionable content, which may expose platforms to legal liability or raise a public outcry. As a result, online intermediaries are effectively performing three roles at the same time: they act like a legislature, in defining what constitutes legitimate content
*  This research was supported by the Israel Science Foundation (grant no. 1820/17).
1  An increasing number of laws encourage online intermediaries to moderate online content: one classic example is the notice-and-takedown regime established by the US Digital Millennium Copyright Act, which requires platforms expeditiously to remove allegedly infringing copyright materials upon receiving a notice; 17 USC § 512. Another example is the 2017 Network Enforcement Act (NetzDG) in Germany, which requires intermediaries to delete content which appears to be evidently unlawful within twenty-four hours of a complaint being filed. See the Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.). Similarly, a proposal by the European Commission would require hosting service providers to remove terrorist content online, or disable access to it, within one hour of receipt of a removal order; European Commission Press Release, ‘State of the Union 2018: Commission proposes new rules to get terrorist content off the web’ (12 September 2018).

© Niva Elkin-Koren and Maayan Perel 2020.


on their platform, like judges who determine the legitimacy of content in particular instances, and like administrative agencies which act on these adjudications to block illegitimate content.2 Thus, in content moderation, public law enforcement power and adjudication power converge in the hands of a small number of private mega-platforms.
Existing platform-liability regimes have demonstrated that platforms, in weeding out illegitimate content, often sweep up legitimate content as well, and therefore holding them liable for users’ content may result in effectively silencing lawful speech.3 Yet pervasive power must be restrained in order to ensure civil liberties and the rule of law. Hence, we argue that even if they are not liable for illegitimate content made available by their subscribers, online intermediaries must still be held accountable for content moderation. Traditional legal rights and processes, however, are ill equipped to oversee the robust, non-transparent, and relatively effective nature of algorithmic content moderation by online intermediaries. We currently lack sufficient safeguards against over-enforcement of protected speech as well as under-enforcement of illicit content. Allowing unchecked power to escape traditional schemes of constitutional restraints is potentially game-changing for democracy, as it raises serious challenges to the rule of law as well as to notions of trust and accountability.
This chapter describes three ways in which content moderation by online intermediaries challenges the rule of law: it blurs the distinction between private interests and public responsibilities; it delegates the power to make social choices about content legitimacy to opaque algorithms; and it circumvents the constitutional safeguard of the separation of powers. The chapter further discusses the barriers to accountability in online content moderation by intermediaries, including the dynamic nature of algorithmic content moderation using machine learning; barriers arising from the partialness of data and data floods; and trade secrecy, which protects the algorithmic decision-making process. Finally, the chapter proposes a strategy to overcome these barriers to accountability of online intermediaries, namely ‘black box tinkering’: a reverse-engineering methodology that could be used by governmental agencies, as well as social activists, as a check on private content moderation. After describing the benefits of black box tinkering, the chapter explains what regulatory steps should be taken to promote the adoption of this oversight strategy.

1.  Content Moderation by Platforms and the Rule of Law

When we ask online intermediaries to engage in content moderation, we assume that discourse on these platforms constitutes our public sphere, and therefore platforms can
2  See Maayan Perel and Niva Elkin-Koren, ‘Accountability in Algorithmic Enforcement’ (2016) 19 Stan. Tech. L. Rev. 473, 482–3. 3  See Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown: Online Service Provider and Rightsholder Accounts of Everyday Practice’ (2017) 64 J. Copyright Soc’y 371.


(and according to some, should)4 exercise power to remove unwarranted content.5 Public discourse is always subject to limits, to protect private interests (e.g. defamation law), fundamental rights (e.g. laws prohibiting hate speech), or national security (e.g. espionage laws protecting classified materials). The laws of different countries will often strike a different balance between free speech and these other interests, reflecting each society’s social choices. Content moderation performed by online intermediaries raises several challenges to the rule of law: one arising from the dual role of platforms as private companies and facilitators of speech, the next arising from the algorithmic implementation of content policies, and the third arising from informal collaboration with the state apparatus.
First, online intermediaries are private, profit-maximizing entities whose financial interests derive from the very content they are expected to moderate. Content-sharing platforms thrived on democratic notions of freedom and openness, enabling diverse users to create, upload, and share their ideas with the rest of the world. As facilitators of users’ speech, online intermediaries were given immunity from liability for harm caused by their users.6 Immunity was intended to facilitate the free flow of information and avoid any intervention that might unduly censor or chill speech.
At the same time, however, as numerous scholars have shown, online intermediaries also exercise editorial power, by curating online content, and by ranking, or giving priority to, some content while diverting attention from other types of information.7 As commercial speakers, online intermediaries might be entitled to constitutional protection of free speech, but they might also be held liable for illegal content.8 Online intermediaries thus play a dual role: they are commercial players, which compete in data-capitalist markets for users, business partners, and data-driven innovation, and they also act as governors of other people’s speech, and hence are expected to advance public welfare.9
This dual role of online intermediaries blurs the distinction between public fora and private companies. The fact that online intermediaries profit from facilitating the online speech of users conflates their commercial interests and their governing roles. Requiring them to guard against those who potentially tend their garden, and to adjudicate the content they distribute, may trap intermediaries in a conflict of interests.
4  See e.g. Ronen Perry and Tal Zarsky, ‘Who Should be Liable for Online Anonymous Defamation?’ (2015) 82 U. Chicago L. Rev. Dialogue 162, 172–5 (arguing that platforms should be held liable for online anonymous unlawful speech when the speaker is not reasonably reachable). 5  See Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable but not Liable? (CUP 2017). 6  See the Communications Decency Act, 47 USC § 230; Directive 2000/31/EC of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, Art. 12. 7  See Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale U. Press 2018); Lucas Introna and Helen Nissenbaum, ‘Shaping the Web: Why the Politics of Search Engines Matters’ (2006) 16(3) The Information Society 169. 8  See Jack Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 UC Davis L. Rev. 1149. 9  See Kate Klonick, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131 Harv. L. Rev. 1598.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

A second challenge to the rule of law arises from the use of technological measures to identify, filter, and block illegitimate content. Online content moderation involves normative choices regarding the boundaries between legitimate and illegitimate content. Content-filtering systems effectively blend norm-setting, law enforcement, and adjudication powers. Removal of copyright-infringing content, for instance, involves many discretionary questions of fact and law: determining the degree of ‘originality’ required to establish copyrightability;10 deciding what amounts to ‘substantial similarity’ to establish infringement;11 or considering what constitutes ‘permissible use’ under fair use.12 Before algorithms can implement these qualitative doctrines, they must be translated into ‘codish’ thresholds, a process that may itself result in unintentional alterations of settled doctrines.13 Algorithms could err in identifying the content accurately, and may also err in determining whether a particular use of content is illegal, since such determination depends on the context.14 Such errors in algorithmic content moderation may result in censoring legitimate content, and sometimes also in disproportionately censoring some groups.15 Privately designed systems of content moderation are often applied ex ante, as filtering occurs before the content ever becomes publicly available.16 Such ‘prior restraints’ on speech, embodied in online filters, may cause even greater harm to free speech, and strengthen the need to ensure that protected speech is not censored and that procedures are scrutinized.17
Thirdly, content moderation by online platforms effectively circumvents an important constitutional safeguard of the rule of law: the separation of powers. Normally, substantive decisions on unprotected speech are made by law, and are subject to judicial review.
Content-filtering systems, however, effectively blend law enforcement and adjudication powers, reflecting a profound transformation of our traditional system of governance by law. Such practices reflect an institutional shift in lawmaking power, from the state to private companies, and a fundamental transformation of the nature of law enforcement.

10 See Feist Publ’ns, Inc. v Rural Tel. Serv. Co., 499 US 340, 345 (1991) (US). 11 See Ideal Toy Corp. v Fab-Lu Ltd, 266 F.Supp. 755, 756 (SDNY 1965), aff’d, 360 F.2d 1021 (2d Cir. 1966) (US). 12 See Cambridge Univ. Press v Patton, 769 F.3d 1232, 1282 (11th Cir. 2014) (US). 13 See Harry Surden and others, ‘Representational Complexity in Law’ (2007) 11 Int’l Conf. on Artificial Intelligence & L. 193; Jay Kesan and Rajiv Shah, ‘Deconstructing Code’ (2004) 6 Yale J. of L. & Tech. 277, 283 (‘STS [Science & Technology Studies] examines how technology is shaped by societal factors such as politics, institutions, economics, and social structures’). 14  See Evan Engstrom and Nick Feamster, ‘The Limits of Filtering: A Look at the Functionality and Shortcomings of Content Detection Tools’ (Engine, March 2017). 15  See Daphne Keller, ‘Inception Impact Assessment: Measures to Further Improve the Effectiveness of the Fight Against Illegal Content Online’, SSRN Research Paper no. 3262950 (29 March 2018). 16  See Perel and Elkin-Koren (n. 2) 504. 17  See Dawn Carla Nunziato, ‘How (Not) to Censor: Procedural First Amendment Values and Internet Censorship Worldwide’ (2011) 42 Geo. J. of Int’l L. 1123.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Enforcement of speech restrictions by online intermediaries is potentially more invasive of civil rights than a warrant enforced by law enforcement agencies. That is because online platforms exercise control over the content shared by users and over access to a communication channel, and, most importantly, they hold vast amounts of personal data on their users. This unprecedented power allows intermediaries to monitor, remove, and prevent unwarranted content. The use of data analytics further enhances investigative and predictive capacities, enabling platforms to predict the illicit use of content and take precautions to prevent it before it even occurs. For instance, platforms may apply machine learning to users’ live chats and the metadata of videos to predict copyright infringement in live video streams.18 Moreover, governments increasingly rely on informal collaboration with online intermediaries in cybersecurity, surveillance, censorship, and general law enforcement tasks.19 This is becoming a powerful tool for governments seeking to block content under the radar of the judiciary. According to the Google Transparency Report, for instance, Russia has come out on top of a ranking of government requests to have online content removed or blocked.20 This is also becoming a popular technique for governments in liberal democracies, which issue removal requests based on platforms’ Community Guidelines.21 Such guidelines often define objectionable speech more broadly than the illegal speech defined by law.22 While law enforcement agencies are authorized to operate under the rule of law within constitutional restraints, private bodies are not subject to any constitutional limits on search or censorship and are under no duty to respect free speech or other fundamental rights. Therefore, relying

18  See D. Zhang and others, ‘Crowdsourcing-based copyright infringement detection in live video streams’, Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (2018). 19  See Niva Elkin-Koren and Eldar Haber, ‘Governance by Proxy: Cyber Challenges to Civil Liberties’ (2016) 82 Brooklyn L. Rev. 105; Danielle Keats Citron, ‘Extremist Speech, Compelled Conformity, and Censorship Creep’ (2018) 93 Notre Dame L. Rev. 1035. See also European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online, A contribution from the European Commission to the Leaders’ meeting in Salzburg on 19–20 September 2018, Brussels, 12 September 2018’ COM(2018) 640 final. 20  See Brian Snyder, ‘Most Requests to Remove Online Content Come from Russia, Google Says’ (Moscow Times, 22 July 2017). 21  See e.g. the ‘Code of Conduct on Countering Illegal Hate Speech Online’, where platforms agreed to prohibit ‘hateful conduct’, namely, speech inciting violence or hatred against protected groups; European Commission Press Release IP/16/1937, ‘European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech’ (31 May 2016). See also EDRi, ‘Europol: Non-transparent cooperation with IT companies’ (EDRi, 2016) (critically describing the use of terms of service by law enforcement agencies as a basis for removal requests). 22  e.g. the Facebook NetzDG Transparency Report, filed under the German NetzDG, demonstrates that during the first six months of the law, Facebook removed 1,704 items of content based on 886 NetzDG legal notices, while removing millions of items during the same period based on its Community Guidelines reporting system.


674   Niva Elkin-Koren and Maayan Perel on content mod­er­ation by online intermediaries enables governments to bypass ­constitutional constraints.23 Overall, the governing function of online intermediaries in content moderation ­cannot be restrained by market forces alone. In their governing functions as law enforcers, online intermediaries act like judges who determine the legitimacy of content and like administrative agencies who act upon these adjudications to block illegitimate content.24 Just as judicial review facilitates public scrutiny to strengthen public trust and promote the rule of law, so too should online intermediaries who perform public functions be subject to oversight.25 Online intermediaries must therefore be held ac­count­able for content moderation and must comply with the rule of law.

2.  Barriers to Accountability

Transparency is often conceived as the principal safeguard of accountability. It is generally assumed that public knowledge of the details of the exercise of governmental powers should counter abuse of power and dysfunctional governance.26 Indeed, some legal reforms which require removal by online intermediaries mandate reports on removal practices.27 Still, transparency alone might not prove as useful in ensuring accountability in content moderation by private online intermediaries using algorithmic tools.
The shortcomings of transparency in this context are caused by several factors. First, algorithmic content moderation relies on dynamic machine learning. Providing a neural network with thousands of posts labelled ‘adult content’ or ‘safe content’, for instance, will enable the network to tune the weights of its neurons to be able to classify future content into those two categories (a process known as ‘supervised learning’).28 The learning process
23  See Michael Birnhack and Niva Elkin-Koren, ‘The Invisible Handshake: The Reemergence of the State in the Digital Environment’ (2003) 8 Va. J. of L. & Tech. 6. 24  See Perel and Elkin-Koren (n. 2) 482–3. 25  See e.g. Edward Lee, ‘Recognizing Rights in Real Time: The Role of Google in the EU Right to Be Forgotten’ (2016) 49 UC Davis L. Rev. 1017, 1055–73 (explaining that in relation to the implementation of the right to be forgotten, Google functions similarly to a government agency or administrative body). 26  See e.g. Mark Fenster, ‘Seeing the State: Transparency as Metaphor’ (2010) 62 Admin. L. Rev. 617, 619 (‘[t]o be held accountable and to perform well, [the government] must be visible to the public’); Mark Fenster, ‘The Opacity of Transparency’ (2006) 91 Iowa L. Rev. 885, 900 (‘[Transparency] enables the free flow of information among public agencies and private individuals, allowing input, review, and criticism of government action, and thereby increases the quality of governance’); Adam Samaha, ‘Government Secrets, Constitutional Law, and Platforms for Judicial Intervention’ (2006) 53 UCLA L. Rev. 909, 917 (‘[p]opular accountability need[s] a system for disclosing information about government’); Frederick Schauer, ‘Transparency in Three Dimensions’ (2011) U. of Ill. L. Rev. 1339, 1346 (‘[f]oremost among [the aims of transparency], at least in much of contemporary discourse, is what is commonly described as accountability’). 27  See NetzDG (n. 1). 28  See Ben Dickson, ‘The challenges of moderating online content with deep learning’ (TechTalks, 10 December 2018).


allows the system to identify trends, relationships, and unexpected patterns in disparate groups of data, and to shape performance accordingly. Therefore, disclosing the code which underlies algorithmic systems of content moderation does not tell us much about the actual performance of such systems. The underlying values embedded in the code will only be relevant at the time of disclosure, implying nothing about actual performance, which could be tweaked by learning.
The second reason why transparency may not ensure accountability in content moderation by online intermediaries relates to the underlying data. To begin with, as private actors, online intermediaries are generally not subject to mandatory disclosure obligations: they are free to determine which specific data to share with the public in accordance with their private business interests.29 Giving determinative weight to their voluntary disclosure, which might be incomplete, misleading, or even biased, is like counting on a guard to objectively review its own guardianship. At the same time, voluntary disclosures could still be overwhelming, creating a problem of magnitude.30 Making sense of the volume of data requires proper tools to analyse voluminous data.31 But then we are left with a vicious cycle, where more transparency only strengthens users’ dependence on algorithms, which further increases the need to ensure adequate accountability of the algorithms themselves. Moreover, voluntary disclosure often fails to report content that was filtered before it ever became publicly available, and this further bolsters the shortcomings of transparency in generating accountability.32
Thirdly, trade secrecy may create another weighty barrier to transparency as a measure of accountability in content moderation.33 Indeed, embedding discretionary choices about content legitimacy in dynamic algorithms serves the platforms’ business interests and is therefore treated as their own intellectual property. Moreover, if online intermediaries disclose how exactly they implement their content-moderation system,
Moreover, voluntary disclosure often fails to report content that was filtered before it ever became publicly available, and this further bolsters the shortcomings of transparency in generating accountability.32 Thirdly, trade secrecy may create another weighty barrier to transparency as a measure of accountability in content moderation.33 Indeed, embedding discretional choices about content legitimacy in dynamic algorithms serves the platforms’ business interests and is therefore treated as their own intellectual property. Moreover, if online inter­ medi­ar­ies disclose how exactly they implement their content-moderation system, 29  e.g. obtaining comprehensive information about government requests to Google to remove content is very challenging, given that in its Transparency Report, Google merely ‘provides a rough outline of the purpose of the content removal request and the type of governmental entity that requests it’, without providing information about which requests in particular it did or did not comply with. See Jacquelyn Fradette, ‘Online Terms of Service: A Shield for First Amendment Scrutiny of Government Action’ (2014) 89 Notre Dame L. Rev. 947, 967. 30  See Niva Elkin-Koren and Eli M Salzberger, Law, Economics and Cyberspace (Edward Elgar 2004) 70, 94–6 (arguing that while the costs of retrieving information in cyberspace may go lower, the cognitive barriers to individual choice are likely to become stronger); Omri Ben-Shahar and Carl Schneider, ‘The Failure of Mandated Disclosure’ (2011) 159 U. Pa. L. Rev. 647, 686 (explaining the ‘quantity problem’ of mandated disclosure). 31 e.g. Joel Reidenberg, Jaspreet Bhatia, and Travis Breauk recently addressed the technology of Natural Language Processing (NLP) that is capable of identifying and measuring ambiguity in website policies, while providing companies with a useful mechanism to improve the drafting of their policies. 
See Joel Reidenberg, Jaspreet Bhatia, and Travis Breauk, Automated Measurement of Privacy Policy Ambiguity (work-in-progress, on file with authors). 32  See Perel and Elkin-Koren (n. 2). 33  See Maayan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Fla L. Rev. 181, 193.


interested parties could easily game the system and bypass their restrictions. Therefore, keeping some aspects of the process secret may ensure its efficiency.

3.  Enhancing Intermediaries’ Oversight

How can we hold online intermediaries accountable and ensure they comply with the rule of law? The limits of transparency as a guard against illegitimate suppressions of speech suggest that new innovative measures must be introduced to complement it. We therefore advocate moving beyond transparency in the pursuit of proper oversight: instead of relying on a static check, based on publicly available (but incomplete) data or on platforms self-reporting their content moderation, we propose a dynamic approach. This approach takes the undisclosed (‘black box’) nature of content moderation as a given and advocates a proactive and ongoing strategy of monitoring its performance. Our approach is based on ‘tinkering’, which allows people ‘to understand, discuss, repair, and modify the technological devices [they] own’.34 It is important to facilitate ‘freedom of thought, study, inquiry, self-expression, [and] diffusion of knowledge’ and to ‘foster . . . privacy, autonomy, human flourishing, and skills building interests of tinkerers’.35 In addition, tinkering may further facilitate social activism, while creating a policy lever for checks and balances on the hidden practices of algorithmic decision-making.36 Thus, it could be exploited to check the credibility, fairness, and trustworthiness of content moderation by online intermediaries, especially if these cannot be adequately pursued by traditional means of transparency. How can we tinker with online systems of content moderation? We can systematically test and record how online intermediaries respond to representative, like-real content that we prepare and submit to the platforms.
For instance, we have studied the compliance of the takedown policy of hosting platforms with copyright law, by uploading different types of infringing, non-infringing, and fair use materials, and systematically recording the platforms’ response.37 Similarly, a recent study conducted at the Princeton University Center for Information Technology Policy sought to check the error rate of ads moderation by Google and Facebook during the 2018 US elections. To test whether common ads, which were not related to the election, were prohibited, the team used 34  Ed Felten, ‘The New Freedom to Tinker Movement’ (Freedom to Tinker, 21 March 2013) . 35  See Pamela Samuelson, ‘Freedom to Tinker’, UC Berkeley Pub. Law & Legal Theory Research Paper no. 2605195 (2015), 1–2 . 36  See Perel and Elkin-Koren (n. 33) 199–200. 37 We previously tested this methodology in a pilot conducted at the Haifa Center for Law & Technology, in Israel. In that pilot, we tested systematically how hosting websites implement the notice-and-takedown policy in copyright law by examining popular local image-sharing and video-sharing platforms. Accordingly, different types of infringing, non-infringing, and fair use materials were uploaded to the hosting facilities, each intended to trace choices made by the black box system throughout its enforcement process. The findings are presented in Perel and Elkin-Koren (n. 33).


software that auto-generated non-election advertisements which several US citizens had attempted to upload. They found systematic evidence that Facebook’s policy enforcers were more likely to prohibit some kinds of non-election ads than others (specifically, Google did not prohibit any of the ads posted, whereas Facebook prohibited 4.2 per cent of the submitted ads).38 Tinkering as an active tool for generating oversight could address the three barriers discussed in Section 2.39 Firstly, tinkering overcomes the dynamics of machine learning used by online intermediaries for content-moderation purposes because it relies on ongoing testing and recording of the system’s operation. Secondly, tinkering can easily address the problems associated with the data disclosed because it does not depend on direct access to the data generated by the platform, or on disclosures by the intermediaries who moderate content. Rather, with tinkering, the investigators or activists extract their own data set in real time.40 Thirdly, tinkering also addresses the problem of trade secrecy because it does not seek to reveal the underlying algorithm which determines content legitimacy, but purports to uncover concrete misapplications of it. If tinkering could enhance public scrutiny of content moderation by online intermediaries, why has it not become prevalent? Unfortunately, several legal challenges must be addressed. First, tinkering may be discouraged by different ‘anti-hacking’ and ‘computer intrusion’ laws, such as the US Computer Fraud and Abuse Act (CFAA),41 which may consider it an unlawful intrusion into intermediaries’ computer networks.
For instance, in a complaint filed with the district court of California in 2018, a group of researchers challenged the access section of the CFAA, arguing that it would criminalize their research activities, which were meant to find out if automated transactions were discriminatory. Their research methodology relies on ‘outcomes-based audit testing’, which involves accessing a website or other network service repeatedly, generally by creating false or artificial user profiles, to see how websites respond to users who display characteristics attributed to certain classes. Insofar as this methodology’s application violates certain websites’ terms of service, the complaining researchers worry that their actions would be deemed unauthorized access to a computer, hence in violation of the CFAA. Whether this is actually true remains unknown for now, as this case is still pending resolution.42 Second, and related, the exploitation of tinkering is discouraged by different contractual barriers imposed by online intermediaries or their software developers. As any act of tinkering with platforms’ online systems inevitably involves agreeing to their terms of use (ToU), any violation of these terms may also be enforceable directly under contract law.43 Nevertheless, to the extent that the potential harm of violating the 38  See Austin Hounsel and others, ‘Estimating Publication Rates of Non-Election Ads by Facebook and Google’ (GitHub, 1 November 2018) . 39  Perel and Elkin-Koren (n. 33) 201–2. 40 ibid. 41  Pub L. No. 98-473, 98 Stat. 2192 (1986) (codified as amended at 18 USC § 1030 (2012)) (US). 42 See Christian W. Sandvig et al. v Attorney General, no. 1613-68 (DDC 2018) (US). 43 See Fteja v Facebook Inc., 841 F.Supp.2d 829, 837, 841 (SDNY 2012) (US) (noting that ‘a number of courts . . . have enforced forum selection clauses in clickwrap agreements’).


platform’s ToU is negligible, we expect that researchers engaging in such experimentation will not be exposed to any meaningful contractual liability. Since the exploitation of tinkering to check the operation of content moderation may affect third parties—not only users who are exposed to the content, but also the platform itself—we believe it is necessary to think about a safe harbour regime for research purposes that would allow testing conducted to ensure compliance with the rule of law. Such a regime should be designed to ensure that researchers, journalists, and activists acting in good faith to test compliance by platforms are immune from liability for their intentional, yet de minimis, legal violations in the course of tinkering. Enacting such a statutory immunity would promote active engagement in revealing the hidden practices of governance by platforms (not just content moderation). This would help raise awareness of the role of platforms in shaping human behaviour, and encourage the public to review and eventually affect the way these algorithms function, while ultimately guarding those who apply them effectively.
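The tinkering methodology described in this section, submitting representative, like-real test content and systematically recording each platform's response, can be sketched as a small audit harness. The sketch below is illustrative only: `submit_to_platform` is a hypothetical placeholder for the real upload-and-observe step of an audit (the keyword check inside it merely simulates a platform's decision), and the content categories mirror those used in our pilot.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TestItem:
    item_id: str
    category: str        # e.g. 'infringing', 'non-infringing', 'fair-use'
    payload: str         # the like-real content submitted to the platform

def submit_to_platform(item: TestItem) -> str:
    """Hypothetical placeholder for the platform under audit: upload the
    item and return the observed moderation outcome ('removed' or 'kept').
    A real audit would replace this with actual uploads and later checks
    on whether the content is still publicly available."""
    return "removed" if "copyrighted" in item.payload else "kept"

def run_audit(corpus, submit=submit_to_platform):
    """Submit every test item, record the outcome with a timestamp, and
    summarize outcomes per content category."""
    records, summary = [], {}
    for item in corpus:
        outcome = submit(item)
        records.append({
            "item_id": item.item_id,
            "category": item.category,
            "outcome": outcome,
            "observed_at": datetime.now(timezone.utc).isoformat(),
        })
        by_cat = summary.setdefault(item.category, {"removed": 0, "kept": 0})
        by_cat[outcome] += 1
    return records, summary

corpus = [
    TestItem("t1", "infringing", "full copyrighted film upload"),
    TestItem("t2", "non-infringing", "original home video"),
    TestItem("t3", "fair-use", "short quotation with commentary"),
]
records, summary = run_audit(corpus)
```

The per-category summary is the point of the exercise: removals in the 'non-infringing' or 'fair-use' rows are candidate misapplications of the platform's policy, recovered without any access to the underlying algorithm.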

4.  Future Challenges

We have reached a point in time where it has become impossible to expect intermediary liability to protect the public interest in a safe internet without stifling free speech and the free flow of information. Numerous studies have shown that platforms’ potential liability for content that they facilitate leads to over-removal of legitimate content.44 Intentionally or mistakenly, protected speech is being eliminated from the public discourse daily. As floods of information are expected only to grow, and with them attempts by interested parties to control or manipulate information, things will only become worse. An appropriate balance between free speech and clashing public values can no longer be assumed; it must be ceaselessly assured. Maintaining this balance in an era controlled by data giants is a huge challenge, yet we believe that the key to resolving this puzzle is in our hands, and not—as others presume—in the hands of online intermediaries. Rather than pressing the latter to block more (e.g. by subjecting them to direct liability), we should press them to block better. Better content moderation will be achieved if online intermediaries are offered precise definitions regarding the content to be removed, but are also held accountable for their choices, even where they are not liable for them. Accountability will promote self-review by online intermediaries and will force them to abide by the law and not merely count on it as a shield against liability.

44  See Urban, Karaganis, and Schofield (n. 3); Sharon Bar-Ziv and Niva Elkin-Koren, ‘Behind the Scenes of Online Copyright Enforcement: Empirical Evidence on Notice & Takedown’ (2018) 50 Conn. L. Rev. 339.


chapter 35

Algorithmic Accountability: Towards Accountable Systems

Ben Wagner

Algorithmic accountability refers to the process in which information systems themselves, their developers, and the organizations behind them are held accountable for the decisions made by those information systems.1 While the information system is sometimes reduced to a specific technical subset of that system2—the algorithm itself—many of the debates about algorithmic accountability tend to focus on the role of information systems in society.3 As such, it would be unreasonable to reduce algorithmic accountability to a specific technical subsystem. Why should algorithms be accountable at all? The main argument typically used in this context is that algorithms constitute a form of power. While some authors believe that this power is overstated,4 most invoke the power embedded within algorithms to justify the need for accountability.5 For example, Just and 1  See Richard Mason and Ian Mitroff, ‘A Program for Research on Management Information Systems’ (1973) 19 Management Science 475. 2  See Rob Kitchin, ‘Thinking Critically about and Researching Algorithms’ (2017) 20 Information, Communication & Society 14. 3  See Felicitas Kraemer, Kees Overveld, and Martin Peterson, ‘Is There an Ethics of Algorithms?’ (2010) 13 Ethics and Information Tech. 251; Z. Tufekci, ‘Algorithms in Our Midst: Information, Power and Choice When Software Is Everywhere’, Proceedings of the 18th ACM Conference on Computer . . . (2015) ; Zeynep Tufekci and others, ‘The Ethics of Algorithms: From Radical Content to Self-Driving Cars’, European University Viadrina (2015) . 4  See Malte Ziewitz, ‘Governing Algorithms: Myth, Mess, and Methods’ (2016) 41 Science, Tech., & Human Values 3. 5  See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens

© Ben Wagner 2020.


Latzer6 discuss how algorithms express power in a form of algorithmic governance, while Karen Yeung argues they constitute a form of ‘design-based regulation’.7 The argument about algorithms’ power8 is frequently coupled with a claim of lack of accountability, in which algorithms are an inscrutable black box,9 where important decisions are made by automated algorithmic systems without sufficient accountability.10 Beyond this general definition, much is unclear about the specific nature of accountability within the context of algorithmic accountability.11 In particular, the precise nature of the governance mechanism is unclear, as a result of which the debate around algorithmic accountability typically leaves open two key questions regarding how algorithms are actually held to account:

1.  Accountable to whom?

Who should algorithms be accountable to? There are a variety of positions and perspectives on to whom an algorithm should be accountable. From a user-centric perspective, algorithms should be accountable to their users,12 and there are many scholars working on algorithmic accountability who share this view.13 Another stream argues that algorithms should be accountable to those affected by them, suggesting that there should be an accountability towards ‘decision subjects’.14 Beyond these two, there are scholars who call for greater accountability of algorithms towards public regulators15 or simply that their source code should be disclosed to the general public.16 Democracy (Crown 2016); Taina Bucher, ‘Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook’ (2012) 14(7) New Media & Society 1164. 6 See Natascha Just and Michael Latzer, ‘Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet’ (2017) 39 Media, Culture & Society 238. 7  Karen Yeung, ‘ “Hypernudge”: Big Data as a Mode of Regulation by Design’ (2017) 20 Information, Communication & Society 118, 118. 8 Ben Wagner, ‘Algorithmic Regulation and the Global Default: Shifting Norms in Internet Technology’ (2016) 10 Etikk Praksis Etikk i Praksis 5. 9  Pasquale (n. 5). 10  See Maayan Perel and Niva Elkin-Koren, ‘Accountability in Algorithmic Copyright Enforcement’ (2016) 19 Stan. Tech. L. Rev. 473. 11  See Ziewitz (n. 4); Kitchin (n. 2). 12  See Don Norman, The Design of Everyday Things: Revised and Expanded Edition (revd, expanded edn, Basic Books 2013). 13  See Nicholas Diakopoulos, ‘Algorithmic Accountability’ (2015) 3 Digital Journalism 398; O’Neil (n. 5). 14  See Michael Veale, Max Van Kleek, and Reuben Binns, ‘Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making’ (2018) arXiv:1802.01029 [cs] 1, 6. 15  See Pasquale (n.
5); Ben Wagner, ‘Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications’, Council of Europe (2018) DGI(2017)12 . 16  See O’Neil (n. 5).


In this context, it is imperative that information systems are accountable at minimum to their users. Without this very basic level of accountability, it is impossible for users to build any kind of trust in the systems they are using. If software is not accountable to its users and it exists outside their control, this typically means that the information system is operating without any meaningful oversight at all. It is very difficult to build any accountability into the system without, at a minimum, accountability to the user. Of course, there are other areas where accountability will also be necessary; however, there is—to the best of our knowledge—none that constitutes a substitute for user accountability. As such, it is important to remember that while other forms of accountability may be necessary as well, taking the user seriously is equally important. Moreover, user accountability is difficult to achieve through blind trust of a closed information system or its vendor. While the free software movement has attempted to remedy part of this by providing access to the source code of the system,17 it is questionable whether this is a sufficient accountability mechanism.18 While providing access to the source code can provide some degree of accountability, it does not enable a user to understand what the overall system is actually doing.

2.  Accountability for what?

As noted by Veale and others, the main current public debate in algorithmic accountability is related to bias: ‘Discrimination has taken centre-stage as the algorithmic issue that perhaps most concerns the media and the public.’19 While bias is currently taking centre stage in public debates, it is also a comparatively easy issue to communicate and understand. However, it also suggests to a considerable degree that these biases are errors within the information system and that, if these biases are simply fixed, the problem itself is resolved. A far greater challenge would be for algorithmic systems to ensure the non-manipulation of users,20 prevent negative societal impacts,21 adhere to relevant ethical frameworks,22 or, even more broadly, enable and safeguard human 17  See Richard Stallman, The GNU Manifesto (1985); Julio Cesar Sampaio do Prado Leite and Claudia Cappelli, ‘Software Transparency’ (2010) 2 Business & Information Systems Engineering 127. 18 See Christian Payne, ‘On the Security of Open Source Software’ (2002) 12 Information Systems J. 61; Jaap-Henk Hoepman and Bart Jacobs, ‘Increased Security through Open Source’ (2007) 50 Communications of the ACM 79; Henrik Plate, Serena Elisa Ponta, and Antonino Sabetta, ‘Impact Assessment for Vulnerabilities in Open-Source Software Libraries’ (2015) arXiv:1504.04971 [cs] . 19  Veale, Van Kleek, and Binns (n. 14). 20  See Sarah Spiekermann, Ethical IT Innovation: A Value-Based System Design Approach (CRC Press 2015). 21  See Wagner (n. 15). 22  See Aral Balkan, ‘Ethical Design Manifesto’ (Ind.ie, 2017) ; Gry Hasselbalch and Pernille Tranberg, Data Ethics: The New Competitive Advantage (Publishare 2016).


rights frameworks and the values embedded within them.23 These types of challenges require a different approach than simply correcting or improving the existing information system. Instead, they involve redesigning the relationship between the information system and the user to ensure that the system is systematically accountable to the user. Veale and others note in their field study of algorithmic systems in the UK public sector24 that those developing algorithms should not be assumed to be unaware or naive in their development of algorithms, and that rather than focusing on ‘gotcha moments’ a more collaborative approach may be necessary to improve algorithms over time. While this may be true in the public sector in some cases, it is questionable whether this insight extends beyond the public sector. This is because the public sector in many countries like the UK has numerous existing organizational accountability mechanisms, such as Freedom of Information requests, that contribute to algorithmic accountability but do not exist within the private sector. Without such information-provision mechanisms required by law, it seems more difficult to argue for a collaborative relationship. Given the ongoing rise of manipulative and coercive practices in modern information systems,25 it is important to acknowledge that the relationship between users and software developers may in many cases be adversarial. Users are entitled not just to be given a simple ‘take it or leave it’ understanding of the software they use—in which only blind trust is possible—but to systematically be able to actively hold the software they use to account.

3.  Challenges with Algorithmic Accountability

However, in order for users to be able to do so systematically, there are numerous stumbling blocks along the way. None of these are necessarily insurmountable challenges, but together they provide an overview of the technical, organizational, and regulatory challenges in ensuring access to data.

3.1  Access to the Algorithmic System

The first set of issues relates to getting access to the data through which the information system might more effectively be held to account. While this may sound like a relatively 23  See Wagner (n. 15); Balkan (n. 22); Pasquale (n. 5); Niels ten Oever and Davide Beraldo, ‘Routes to Rights: Internet Architecture and Values in Times of Ossification and Commercialization’ (2018) 24 XRDS: Crossroads, The ACM Magazine for Students 28; Corinne Cath and Luciano Floridi, ‘The Design of the Internet’s Architecture by the Internet Engineering Task Force (IETF) and Human Rights’ (2017) 23 Science and Engineering Ethics 449. 24  Veale, Van Kleek, and Binns (n. 14). 25  See Colin Gray and others, ‘The Dark (Patterns) Side of UX Design’ (ACM Press 2018) .


simple task, it is often one of the most difficult, particularly in order to ensure that the data is accessible in a machine-readable format. This is particularly the case with recommender systems, which typically provide transitory, quick information to users that is not typically documented or catalogued meaningfully. Some initiatives that attempt to ensure algorithmic accountability have turned towards users entering this data themselves,26 while by far the most common technique is scraping this information. Another more recent method has been to use data subject access requests under the EU General Data Protection Regulation (GDPR), which came into force on 25 May 2018, or previously existing data protection regulation. A wide variety of initiatives have taken this path both before and after the GDPR came into force,27 albeit with mixed success. As such, access to reliable data from information systems remains a challenge for all types of information systems.

3.2 Verification

The second set of issues relates to knowing to what extent the statements made by a system are true. This is even harder than access, as it typically requires access to ‘ground truth’ data at scale which can then be compared to the data in the information system. Aside from the considerable privacy issues related to collecting all of this data, this poses the wider challenge of accurately assessing what an unbiased, fair, impartial, or ethical response of an information system would look like.

3.3 Aggregation

The third set of issues relates to aggregating sufficient amounts of ground truth data. This ground truth data then needs to be compared to the actual data presented to the user by the system in order to ascertain accuracy. In many cases, this comparison is only possible if a large sample of decisions is known, and any potential bias frequently manifests as different forms of personalization across large datasets.
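The verification and aggregation steps described in Sections 3.2 and 3.3 amount to comparing a system's decisions against aggregated ground-truth labels, broken down by relevant groups. A minimal sketch, assuming an auditor has already assembled labelled records (the record format and group names here are illustrative and not drawn from any specific study):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, system_decision, ground_truth) tuples,
    where decisions are 'remove' or 'keep'. Returns, per group, the share
    of system decisions that disagree with the ground-truth label."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, decision, truth in records:
        totals[group] += 1
        if decision != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy labelled sample: two ad categories, one wrongful removal.
records = [
    ("election-ad", "remove", "keep"),
    ("election-ad", "keep", "keep"),
    ("product-ad", "keep", "keep"),
    ("product-ad", "keep", "keep"),
]
rates = error_rates_by_group(records)
# rates: {'election-ad': 0.5, 'product-ad': 0.0}
```

A large gap between groups is exactly the kind of aggregate disparity the text describes; the hard part in practice is not this comparison but obtaining the labelled ground truth at scale.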

3.4  Measuring the Effect of a System on User Behaviour

The fourth set of issues is measuring the effect that the biases have on user behaviour. While statements about bias typically focus on the information system28 there is an 26  See Resa Mohabbat Kar and Peter Parycek, Berechnen, Ermöglichen, Verhindern: Algorithmen Als Ordnungs-Und Steuerungsinstrumente in Der Digitalen Gesellschaft (DEU 2018). 27  See Hadi Asghari and others, ‘The Right of Access as a Tool for Privacy Governance’, HotPETs during The 17th Privacy Enhancing Technologies Symposium (2017). 28 See Engin Bozdag, ‘Bias in Algorithmic Filtering and Personalization’ (2013) 15 Ethics and Information Tech. 209; Benjamin Edelman, ‘Bias In Search Results?: Diagnosis And Response’ (2011) 7 Ind. J. of L. & Tech. 16.


implicit assumption behind this that bias is not just presented to the user, but also has an effect on user behaviour. However, the effects of systemic bias on user behaviour are rarely studied and require considerable additional analysis.

3.5  Users’ Interpretation of the Capabilities of the Systems They are Using

Fifthly, another associated challenge is how users learn to adapt to the biases, quirks, and other input they receive from the information system. There is an extensive literature on how users adapt to failures of information systems,29 and more broadly in computer–human interaction, as a result of which it is too simplistic to claim that users take computer input at face value.

3.6  Improving the Quality of Technical Systems

Another associated challenge is whether algorithmic accountability contributes to improving information systems systematically. As argued earlier, meaningful accountability comes from independent external oversight. Importantly, we believe that while collaborative models of information system improvement as suggested by Veale and others30 are certainly possible, they should not be considered the default. Instead, an adversarial model needs to be considered the default if meaningful accountability is to be developed.31

3.7  Accountability of the Socio-Technical System

Finally, one last key challenge is ensuring users have access to organizational factors related to the whole system in order to ensure meaningful accountability. For example, factors such as a for-profit/non-profit developer, public/private sector organization, closed source/open source, accessibility of bug reporting, etc. could provide relevant contextual information about algorithmic accountability. This information extends algorithmic accountability beyond a specific information system to the overall organizational system that develops the software itself, as well as the institutional context within which it is embedded. 29  See Norman (n. 12); Donald A. Norman, ‘Human-Centered Design Considered Harmful’ (2005) 12 interactions 14; Henry Petroski, Success through Failure: The Paradox of Design (Princeton U. Press 2018). 30  Veale, Van Kleek, and Binns (n. 14). 31 See Mireille Hildebrandt, ‘Privacy As Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’, SSRN Scholarly Paper no. 3081776 (2017) .



4.  What does Algorithmic Accountability Mean in the Context of Intermediary Liability Online?

Within the context of intermediary liability online, there are considerable challenges related to algorithmic accountability. These stem from the ongoing regulatory and legislative focus on increasing the burden of liability for intermediaries, coupled with the perceived potential for cost-saving by using automated techniques. The result of this challenging combination is an increasing shift towards automated ‘algorithmic’ content regulation across almost all large digital platforms. This is part of an ongoing ‘spiral of privatized regulation’,32 where private rather than state actors are increasingly responsible for mediating the flow of content online. At the same time, it can equally be argued that there are good reasons to increase the burden of liability on online intermediaries. There are numerous widespread and well-documented problems with the way many of them design and operate their platforms, from issues with hate speech on platforms33 to the frequent lack of transparent and accountable governance mechanisms,34 to wider issues around human rights in large online platforms.35 Algorithmic accountability can be helpful in this context, because it serves to provide a more granular specification of what intermediary liability is actually trying to achieve: using the threat of liability to ensure that platforms act more accountably and responsibly, considering not just their own interests but also those of others as well. While this may be a very broad specification of what intermediary liability attempts to achieve, it is not entirely off the mark. The imposition of liability essentially serves as a check on private power36 in a similar manner to algorithmic accountability, which attempts to do the same.
Thus, algorithmic accountability could serve as a mechanism to improve existing intermediary liability mechanisms, both in ensuring a higher quality of implementation and ensuring that the manner of implementation is more likely to be consistent with fundamental rights. Regulators have thus far shied away from using more specific 32  Ben Wagner, ‘Free Expression?—Dominant Information Intermediaries as Arbiters of Internet Speech’ in Martin Moore and Damian Tambini (eds), Digital Dominance: Implications and Risks (OUP 2018) 225. 33  See Julia Angwin and Hannes Grassegger, ‘Facebook’s Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children’ (ProPublica, 28 June 2017) . 34  See David Kaye, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression’ (2018) A/HRC/38/35 . 35  See Wagner (n. 32). 36 Ben Wagner, ‘Algorithmic Regulation and the Global Default: Shifting Norms in Internet Technology’ (2016) 10 Etikk Praksis Etikk i Praksis 5.


stipulations on the specific technical implementation of intermediary liability regulations, preferring instead to simply mandate that content must be removed to avoid legal liability, sometimes even within twenty-four hours for manifestly illegal content (German NetzDG)37 or even within one hour in the proposed EU terrorist content Regulation.38 This intense pressure to remove content has not significantly contributed to ensuring that intermediaries meaningfully create human rights-enabling environments online. If anything, it creates incentives for limiting freedom of expression39 in an invasive and opaque manner. Thus, a proposal in which adherence to algorithmic accountability would lower liability of intermediaries could contribute to more effectively ensuring compliance with human rights. This would strive for a more balanced approach, which takes into account the interests of all relevant stakeholders. Such an approach would go beyond simply requiring intermediaries to take down content, and would also specify how they should take down content. For example, specific provisions of algorithmic accountability in this context could include the following.

(1) Decisions of automated content takedown systems need to be transparent to the users of the system, those affected by those decisions (typically content owners but also others referenced within the content), and society at large.
(2) The development of external algorithmic accountability mechanisms in which outside organizations audit and report on the functioning, effectiveness, and errors of a specific content-takedown algorithm.
(3) An independent external body40 to which complaints can be addressed and are independently adjudicated. In places where a functioning judicial system exists, this should be based on the existing legal system.
(4) ‘Radical transparency’41 on:
(a) the exact system design and variables used in automated content-moderation systems, and their functionality;
(b) the outputs of these systems when measured against a standardized set of content which serves to compare different systems;
(c) the actual outputs of these systems, including an explicit list of the content they take down and the content they allow to stay up;

37 See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.), s. 3(2)(2). 38  See also European Parliament, ‘Legislative resolution of 17 April 2019 on the proposal for a regulation on preventing the dissemination of terrorist content online’ [2019] P8_TA-PROV(2019)0421, Arts 4(2) and 4b(2). 39 Ben Wagner, ‘Freedom of Expression on the Internet: Implications for Foreign Policy’ (GISWatch, 2011) . 40  See Ben Wagner, ‘Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping?’ in Mireille Hildebrandt (ed.), Being Profiling. Cogitas ergo sum (Amsterdam University Press 2018). 41  Kaye (n. 34).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

Algorithmic Accountability: Towards Accountable Systems   687



(d) the business processes around those systems and how those business processes interact with existing automated systems.

(5) Storing automated decisions in an irrevocable manner, to ensure that they cannot later be tampered with and can be fully reconstructed and assessed after the fact.

(6) Meaningfully explaining the automated content regulatory decisions that were made, in a manner that allows users to understand the reasoning and process behind them.

(7) Ensuring that human review of these decisions is possible, by human beings who are adequately trained, employed, and have sufficient time to make a meaningful legal determination.

(8) Ensuring that all involved parties are always aware of whether decisions are made by automated technical systems or by human beings, as well as of their specific interrelationship in this context.

(9) Ensuring that these automated measures do not replace regulatory measures or make them impossible for users to access. Instead, they serve to pre-empt legislative decisions in a human rights-friendly manner.
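Point (5) in the list above calls for storing automated decisions so that they cannot later be tampered with and can be fully reconstructed after the fact. One common engineering pattern for this is a hash-chained, append-only log. The sketch below is purely illustrative and is not drawn from the chapter; the class, method, and field names are hypothetical, and a production system would add signing, replication, and access controls.

```python
# Illustrative sketch only: a tamper-evident log of automated takedown
# decisions. Each record embeds a hash of its predecessor, so altering
# any earlier entry breaks the chain and is detectable on audit.
import hashlib
import json


class DecisionLog:
    """Append-only log of content-moderation decisions (hypothetical)."""

    def __init__(self):
        self._records = []

    def append(self, decision: dict) -> str:
        # Canonical serialization so the same decision always hashes identically.
        prev_hash = self._records[-1]["hash"] if self._records else "genesis"
        payload = json.dumps(decision, sort_keys=True)
        record_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self._records.append(
            {"decision": decision, "prev": prev_hash, "hash": record_hash}
        )
        return record_hash

    def verify(self) -> bool:
        # Recompute the whole chain; any mismatch means tampering.
        prev_hash = "genesis"
        for record in self._records:
            payload = json.dumps(record["decision"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if record["prev"] != prev_hash or record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True


log = DecisionLog()
log.append({"content_id": "abc123", "action": "removed", "automated": True})
log.append({"content_id": "def456", "action": "kept", "automated": True})
assert log.verify()

# Retroactively changing an earlier decision is detected on verification.
log._records[0]["decision"]["action"] = "kept"
assert not log.verify()
```

Because each record's hash covers both the decision and the previous record's hash, an external auditor of the kind envisaged in points (2) and (4) could detect after-the-fact edits without having to trust the intermediary's own account of what was decided.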

5. Conclusions

Ensuring algorithmic accountability could thus contribute to a better implementation of intermediary liability regimes, in a manner which provides greater support for human rights. Many of the current problems in existing intermediary liability regimes stem from an unwillingness to shift attention from the outcomes of the takedown process to the quality of the takedown process, as well as from a general disregard of the wider implications of intermediary liability regimes, not just for freedom of expression but for human rights in general. More broadly, there is an urgent need to ensure the development and implementation of more accountable systems. While ensuring algorithmic accountability can contribute to this, it constitutes just a first step in a wider development towards systems which promote greater accountability. Such systems do not just improve the problematic implementation of intermediary liability regimes but ensure accountability more broadly. Given the massive accountability deficit in technical systems that has been identified by numerous UN special rapporteurs and academics alike,42 there is clearly a significant deficit here. Human rights-enabling systems cannot just be left at the protocol layer;43 they need to be integrated across socio-technical systems and within technical systems, as well as business and institutional processes.

When resources of significant concern are on the line, we are perfectly capable of designing systems which promote accountability. Thus, most companies around the world implement a two-man rule in technical systems for important corporate decisions, such as transferring funds above a certain limit from one entity to another, or to ensure meaningful auditing and compliance procedures. Of course, it is not enough for the technical system to require two distinct ‘users’ to sign in, unless business processes actually mandate that a two-man rule has to be implemented systematically and failure to implement it will be sanctioned accordingly. However, similar technical checks and limitations do not meaningfully exist for human rights. These concerns about accountable systems deserve scrutiny and importance similar to internal company audits or transfers of funds. Unless accountable systems are seen as a core aspect of the development of technical systems, they are unlikely to be implemented more broadly across a wide set of diverse organizations. Ensuring algorithmic accountability is a core component of this approach and can thus meaningfully contribute not just to ensuring that intermediary liability is implemented in an effective and human rights-friendly manner, but also to ensuring, more broadly, that human rights can be more effectively safeguarded by technical systems.

42  See David Kaye, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression to the Thirty-Second Session of the Human Rights Council’, United Nations General Assembly (2016); Frank La Rue, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression’ (2013) A/HRC/23/40; Ann Cavoukian, ‘Privacy by Design: The 7 Foundational Principles. Implementation and Mapping of Fair Information Practices’ (2009); Frank Pasquale, ‘Odd Numbers’ (Real Life, 2018).
43  ten Oever and Beraldo (n. 23).


Part VIII: INTERNET JURISDICTION, EXTRATERRITORIALITY, AND LIABILITY


Chapter 36

Internet Jurisdiction and Intermediary Liability

Dan Jerker B. Svantesson*

As the preceding chapters have made clear, internet intermediaries are crucial to how most people use the internet. We use them, for example, to search for content, to keep in contact with friends and work colleagues, to buy and sell products, and to find partners. More broadly, they are now in the position of gatekeepers of the content we access. Thus, given that the liability imposed on internet intermediaries steers how they act, internet intermediary liability is a topic that affects us all, directly or indirectly. As the preceding chapters have also made clear, the rules governing internet intermediary liability are complex and vary from country to country. This takes us to what may be described as the most important, and perhaps most urgent, underlying issue facing the internet—there is a fundamental clash between the global, largely borderless,

*  This chapter is based on research supported by ERDF CyberSecurity, CyberCrime, and the Critical Information Infrastructures Center of Excellence (No. CZ.02.1.01/0.0/0.0/16_019/0000822) and draws, and expands, on research findings previously presented in: Dan Svantesson, ‘Electronic Commerce’ in Jürgen Basedow and others (eds), Encyclopedia of Private International Law (Edward Elgar 2017) 592–9; Dan Svantesson and Samson Esayas, ‘Digital Platforms under Fire: What Australia can Learn from Recent Developments in Europe’ (2018) 43(4) Alternative L.J. 1–8; Dan Svantesson, ‘Jurisdictional Issues and the Internet—a Brief Overview 2.0’ (2018) 34(4) Computer L. & Security Rev. 715–22; Radim Polcak and Dan Svantesson, Information Sovereignty—Data Privacy, Sovereign Powers and the Rule of Law (Edward Elgar 2017); Dan Svantesson and Lodewijk van Zwieten, ‘Law Enforcement Access to Evidence Via Direct Contact with Cloud Providers—Identifying the Contours of a Solution’ (2016) 32 Computer L. & Security Rev. 671–82; Dan Svantesson, ‘The future of the internet looks brighter thanks to an EU court opinion’ (The Conversation, 15 January 2019) ; Dan Svantesson, ‘ “Lagom jurisdiction”—What Viking drinking etiquette can teach us about Internet jurisdiction and Google France’ (2018) 12(1) Masaryk U. J. of L. and Tech. 29–47; Dan Svantesson, Solving the Internet Jurisdiction Puzzle (OUP 2017).

© Dan Jerker B. Svantesson 2020.


internet, on the one hand, and the practice of lawmaking and jurisdiction anchored in territorial thinking, on the other. In Chapter 37, Professor Geist—a true pioneer on the topic of internet jurisdiction1—addresses extraterritoriality and online conflicts with particular emphasis on the Google Canada case (Equustek). And in Chapter 38, concluding this Part, on ‘Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation’, Bertrand de La Chapelle and Paul Fehlinger—who lead the Internet & Jurisdiction Policy Network—look at the bigger picture of how we can move from the current ‘legal arms race’ to transnational cooperation when it comes to jurisdiction on the internet. In this chapter, I will seek to set the scene for those discussions and make some proposals for how we may make progress in this field. For this purpose, I will focus on three examples of where the matter of internet jurisdiction is a major concern for internet intermediaries. The first relates to the validity of the terms of service that internet intermediaries typically impose on their users, and which typically contain important provisions regarding jurisdiction and applicable law. As will be illustrated, this is a matter that, due to the strong public policy considerations that may be involved, goes well beyond traditional contract law. The second example relates to situations in which law enforcement agencies seek access to user data held by internet intermediaries. Such situations give rise to important matters of jurisdiction, not only where the requesting law enforcement agency and the internet intermediary are based in different countries, but may also—as was illustrated in the well-known Microsoft Warrant case—give rise to such issues where the requested data is stored outside the country in which both the law enforcement agency and the internet intermediary are based.
The third example relates to the matter of the geographical scope of orders under which an internet intermediary is required to remove, block, take down, delist, de-index, or de-reference content. In this last context, I will, to a great extent, draw upon a case that, at the time of writing, is before the Court of Justice of the European Union (CJEU). First, however, it is appropriate to make some background observations about what is at stake, and how we now find ourselves in the situation we are in.

1. Internet Intermediaries and Jurisdiction

Debates about the role of, and possible protection for, internet intermediaries are often characterized by clashing extremist points of view. On the one side we find the free

1  See in particular Michael Geist, ‘Is There a There There? Towards Greater Certainty for Internet Jurisdiction’ (2001) 16 Berkeley Tech. L.J. 1345.


speech advocates (typically from the United States) seeking to impose an uncompromising and all-embracing free speech regime, where any restrictions imposed by an internet intermediary on what is uploaded by internet users are seen as a gross violation. Such extreme views must be seen in the light of how US notions of free speech were the clearly dominant values during the early days of the internet. Today, however, the internet is no longer a US resource shared with the rest of the world; rather, the internet is a common resource shared by the world, and other values—which may in some circumstances clash with US notions of free speech—are being advocated more vigorously. Further, an absolute, or near absolute, right to free speech online will clearly clash with the reasonable, indeed necessary, goals of addressing, for example, online bullying and child abuse materials. Yet at the same time, the problem is obvious: as soon as we abandon the ideal of absolute free speech, internet intermediaries are faced with the difficult, and unwelcome, task of being the judges and enforcers of ‘good taste’—internet intermediaries become the censors and gatekeepers of speech. This is a role for which internet intermediaries are typically ill-suited. Indeed, we may legitimately question whether society should assign such a crucial role to private entities with the types of agenda normally held by private entities. Adding to the concern, the legal rules internet intermediaries are asked to apply in making such decisions are not always clear. Turning to the other side of the coin, it is clear that just as the views of the free speech extremist may be hard to accept, so are the extremist views in the legal compliance camp.
The problem is this: while most people would expect internet intermediaries to abide by the law of their respective countries, they would probably not wish for internet intermediaries to abide by all laws of all other countries in the world. In the end, such compliance would lead to internet intermediaries being forced to take account only of the most restrictive laws from all the countries in the world. Such a race to the bottom is doubtless an unhealthy direction for the internet. And if this is not what we want, we need to consider whether a globally active internet intermediary can ever be excused for not complying with all the laws around the world that claim to apply to its conduct. Elsewhere I have raised the question of whether it is time to construct an ‘international law doctrine of selective legal compliance’.2 However, after a long period during which discussions of the regulation of digital platforms were predominantly concerned with ensuring that such actors were provided with sufficient protection to achieve their potential and blossom, there are now clear signs of a hardening attitude towards internet platforms rather than any exploration of how to accommodate them in situations where they are exposed to multiple, and potentially conflicting, laws. One example of this hardening attitude in the context of marketing restrictions and consumer protection can be seen in the Australian Competition & Consumer Commission (ACCC) inquiry specifically into digital platforms.3

2  See Svantesson, Solving the Internet Jurisdiction Puzzle (n. *) 215–23.
3  Letter from Scott Morrison to ACCC Chairman Rod Sims requiring ACCC inquiry into digital platforms (4 December 2017).


Another example of a stricter approach is found in the December 2018 announcement that ‘India will ban e-commerce companies . . . from selling products from companies in which they have an equity interest.’4 Yet another example is the 2019 EU Copyright Directive, which imposes greater responsibilities on certain digital platforms to stop users from posting copyright-protected content.5 Additionally, the UN Internet Governance Forum’s Dynamic Coalition on Platform Responsibility is working to produce model contractual provisions for internet platforms and ultimately aims to protect users’ human rights and enhance platform responsibility.6 A further example can be seen in the lawsuit filed by the US Attorney General in December 2018 against Facebook for failure to protect its customers’ personal data and allowing political data firm Cambridge Analytica to access users’ personal data.7 There can be no doubt that consumers need to be given appropriate safeguards in their dealings with the internet giants, which are often based overseas. Equally, it is obvious that we must ensure fair competition in the media and advertising services markets, and we must clearly protect our democracies from manipulation. At the same time, we would do well to remember that digital platforms exist because we see reasons to use them. The popularity of social media is such that governments have also opted to use them as a primary means of communication and, realistically, who would want to go back to a pre-search engine internet? Thus, great care must be taken to ensure that any reform helps to create a digital platform environment that—as a whole—serves consumers better, not worse. At any rate, in determining how to approach the liability of internet intermediaries, we must take care to avoid stepping into the quagmire of analogies.
While it is true that newspapers, as well as radio and TV broadcasters, have for a long time acted in the role of censors in deciding what content to make available, the role of the internet intermediary is so fundamentally different that we cannot, and should not, draw a comparison with such media outlets. No previous intermediaries in the history of mankind have been faced with the scale of global user-generated content with which internet intermediaries are faced. This difference in quantity amounts to a difference in character, and the role of internet intermediaries must be approached with fresh eyes, free from the contamination of preconceived notions based on comparisons with the roles of other intermediaries.

4  See Aftab Ahmed and Sankalp Phartiyal, ‘India tightens e-commerce rules, likely to hit Amazon, Flipkart’ (Reuters, 26 December 2018) . 5  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17. 6  See IGF Internet Governance Forum, ‘Dynamic Coalition on Platform Responsibility: A Structural Element of the United Nations Internet Governance Forum’ . 7 .



2. Terms of Service, Jurisdiction, and Choice of Law

Important, and controversial, legal questions of relevance for the interface between private international law and internet intermediaries arise in the context of so-called choice-of-forum and choice-of-law clauses. A choice of forum clause determines—in an either exclusive or non-exclusive manner—in which court(s) the parties will litigate in the event of a dispute. Choice of law clauses specify which country’s (or, in federal states, which state’s) law should be applied in the event of litigation. As is well known, such clauses are commonly incorporated into the terms of service that internet intermediaries impose on their users. However, studies have repeatedly highlighted that consumers very rarely read the terms and conditions to which they arguably ‘agree’, for example, by clicking ‘I agree’ (so-called click-wrap agreements) or by merely using a website (so-called browse-wrap agreements). This has been seen to call into question the validity of the internet intermediaries’ nominated choice of forum and choice of law included in such agreements. US law, which has provided important guidance on this topic for courts outside the United States,8 has traditionally accepted the choice of forum and choice of law clauses included in these ‘contracts of adhesion’. Although it has been held that simply including a disclaimer on a website does not necessarily bind visitors to the terms and conditions stated in the disclaimer,9 the US courts’ reasoning in relation to this type of contract is exemplified in Caspi v Microsoft Network LLC.10 There, the plaintiffs had brought a class action against Microsoft in relation to a unilateral change to the membership fees of the defendant’s service ‘MSN’. The click-wrap contract stated that the agreement was governed by the laws of the State of Washington and that Washington courts had exclusive jurisdiction.
In evaluating the validity of the contract, the court noted that the plaintiffs’ consent did not appear to be the result of fraud. Furthermore, it observed that the online computer service industry is not one without competition and that consumers therefore have choices as to which service they select. It also noted that:

In order to invalidate a forum selection clause, something more than merely size difference must be shown. A court’s focus must be on whether such an imbalance in size resulted in an inequality of bargaining power that was unfairly exploited by the more powerful party.11

8  See further Joel Reidenberg and others, ‘Internet Jurisdiction: A Survey of Legal Scholarship Published in English and United States Case Law’, Fordham Law Legal Studies Research Paper no. 2309526 (2013).
9  See Ticketmaster Corp v Tickets.com, 2000 US Dist. Lexis 4553 (CD Ca. 2000) (US).
10  Caspi v Microsoft Network LLC, 323 N.J. Super. 118 (NJ Super. Add. Div. 1999) (US).
11  ibid. (internal reference omitted).


After this, the court went on to examine whether the choice of forum clause contravened public policy and whether the enforcement of the choice of forum clause would inconvenience a trial. Neither of these questions was seen to prevent the choice of forum clause being upheld. Finally, the court examined the plaintiffs’ claim that they had not received adequate notice of the forum selection clause. In doing so, it noted that there was nothing extraordinary about the size or placement of the forum selection clause text; thus, to conclude that the plaintiffs were not bound by the choice of forum clause would be equivalent to holding that they were bound by no other clause either, since all provisions were identically presented. However, the legality of such terms has now been called into question via a judgment of the Supreme Court of Canada and a judgment of the CJEU reflecting the thinking of the Opinion issued on 2 June 2016 by Advocate General Saugmandsgaard Øe in the same matter. In the June 2017 decision of the Supreme Court of Canada in Douez v Facebook Inc.,12 the majority (4–3) of the court held Facebook’s forum-selection clause, nominating a California court, unenforceable. In 2016, the CJEU was invited to consider whether Amazon EU’s choice of law clause was unfair under EU consumer law.13 Advocate General Saugmandsgaard Øe essentially broke this question down into two parts. First, he concluded that Amazon EU’s choice of law clause could not override the option of litigating under the consumer’s home state law as catered for under the Rome I Regulation.14 Thus, the clause could not be seen to unfairly exclude the consumer from exercising that option. However, the clause Amazon EU used could mislead consumers into believing that they did not have the right they did in fact have under the Rome I Regulation, and this potential to mislead made the term unfair under relevant EU consumer law.
This reasoning was also adopted by the Court.15 It remains to be seen whether these developments are indicative of a trend against upholding choice of forum, and choice of law, clauses in online agreements, or whether adherence to so-called ‘party autonomy’—which ultimately presents users with unilaterally predetermined contractual terms on a take-it-or-leave-it basis—will be reaffirmed. As far as the EU is concerned, some clarity will be gained from a pending case as to the status of agreements by way of a pre-ticked checkbox which the user must unselect to refuse his consent.16

12  2017 SCC 33 (Can.).
13  See Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L95/29.
14  Regulation 593/2008/EC of the European Parliament and of the Council of 17 June 2008 on the law applicable to contractual obligations (Rome I) [2008] OJ L177/6.
15  See C-191/15 Verein für Konsumenteninformation v Amazon EU Sàrl [2016] ECLI:EU:C:2016:612.
16  See C-673/17 Planet49 GmbH v Bundesverband der Verbraucherzentralen und Verbraucherverbände—Verbraucherzentrale Bundesverband eV [2019] ECLI:EU:C:2019:246.



3. Access to Evidence and Jurisdiction

Effective law enforcement carried out in accordance with fundamental rights is a state obligation originating both in public international law and in the relationship between states and their citizens. In other words, it is an indisputable obligation that goes to the core of statehood, and failure to meet this obligation should be combated via a pincer movement involving both public international law and domestic law. The ubiquitous use by criminal actors of electronic communications and storage services offered by internet intermediaries poses various challenges for criminal investigations. Electronic evidence of criminal activity may no longer be found with criminals or their associates themselves. Rather, the evidence resides with cloud providers, oftentimes on servers outside the territory of the investigating law enforcement authorities (LEAs). The providers holding the evidence may not be incorporated in that territory or have a subsidiary there that acts as a ‘data controller’ and is capable of fully complying with the domestic legal process. Obtaining the evidence in those situations in principle requires mutual legal assistance (MLA), although providers incorporated in some states may voluntarily disclose non-content data to foreign LEAs on a direct request, without the intervention of authorities. Internet intermediaries with storage facilities in multiple countries may themselves not be able to establish the geographical location of the requested data at any given time, creating uncertainty about the applicable jurisdiction (and with it the lawful application of investigative powers) and possible conflicts of, for example, data privacy legislation. Providers currently each have their own procedures in place for this type of direct cooperation and make their own assessment of (the legality of) requests in view of fundamental rights and business considerations.
This leads to a practice where providers may provide different (subsets of) data in seemingly similar situations, making the process, as a whole, at times diffuse and unpredictable for requesting LEAs. All this often leads to internet intermediaries being positioned as the ‘meat in the sandwich’, squeezed from multiple sides. An illustration of this can be found in the well-known Microsoft Warrant case in 2018.17 In December 2013, the US government served a search warrant on Microsoft under the Electronic Communications Privacy Act of 1986. The warrant authorized the search and seizure of information associated with a specified web-based email account stored at Microsoft’s premises. Microsoft opposed the warrant since the relevant emails were located exclusively on servers in Dublin, Ireland. A key question in the matter was whether the United States would be engaging in extraterritorial law enforcement in Ireland where that data sat,

17  United States v. Microsoft Corp., no. 17-2, 584 US __ (2018) (US).


even though all actions taken to retrieve that data would have been taken from the United States. Thus, while Microsoft was a party, the real dispute was, in a sense, between the United States, on the one hand, and Ireland/the European Union, on the other. And, as Ryngaert discusses in some detail, such trans-Atlantic disputes have some history in the setting of orders for discovery abroad.18 The case reached the Supreme Court of the United States and was heard on 27 February 2018. However, it was overtaken by legislative development in the form of the CLOUD Act. The Microsoft Warrant case, and the discussions involved in that matter, are nevertheless illustrative, not least of the challenges facing internet intermediaries exposed to law enforcement requests for user data. Most obviously, the case highlights that there is a real risk of internet intermediaries being forced to make a choice to breach one country’s law so as to be able to comply with another country’s law. Such situations benefit no one and must be avoided wherever possible. Further, this dispute between Microsoft and the US government may also be used to illustrate the definitional difficulties associated with drawing a line between what is territorial and what is extraterritorial—often portrayed as a key question in jurisdictional debates. Microsoft argued that the issue of extraterritoriality obviously did arise, and the US government claimed that it equally obviously did not. The difference in perspective is apparent throughout but is particularly well illustrated in this quote from the government’s brief of 9 June 2014:

Relying on Section 432(2) of the Restatement (Third) of Foreign Relations, Microsoft argues that ‘[a] state’s law enforcement officers may exercise their functions in the territory of another state only with the consent of the other state.’ . . . But requiring the disclosure of records by a U.S.
company does not involve any enforcement activity by government personnel on foreign territory, which is the concern of that section.19

Given issues such as this in anchoring the law in territoriality-thinking, it is not surprising that modern regimes for law enforcement access to evidence held by intermediaries do not focus on the location of the data;20 after all, the fact that the territoriality focus typical of international law is a bad fit with the fluidity of the online environment—characterized as it is by constant and substantial cross-border interaction—is well established and is now well beyond intelligent dispute.

18  See Cedric Ryngaert, Jurisdiction in International Law (OUP 2015) 89–94.
19  Government’s Brief in Support of the Magistrate Judge’s Decision to Uphold a Warrant Ordering Microsoft to Disclose Records Within its Custody and Control, In the Matter of a Warrant to Search a Certain E-mail Account Controlled and Maintained by Microsoft Corporation (1:13-mj-02814) 21.
20  See in particular the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) (HR 4943) (US); European Commission, ‘Proposal for a Directive of the European Parliament and of the Council laying down harmonized rules on the appointment of legal representatives for the purpose of gathering evidence in criminal proceedings’ COM(2018) 226 final; European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on European Production and Preservation Orders for electronic evidence in criminal matters’ COM(2018) 225 final.


As the role of strict territoriality declines in the context of jurisdiction, it is important that something else takes its place as the jurisprudential core of jurisdictional claims. At least in the context of law enforcement access to digital evidence, there are signs of an emerging consensus to focus on whether the state claiming jurisdiction has a legitimate interest and a substantial connection to the matter at hand, combined with an assessment of the consideration of other interests. Discussions regarding the cross-border legal issues associated with law enforcement access to digital evidence are comparatively advanced. Reliance on this three-factor framework may spread as it can also usefully be applied in respect of other settings in which standards need to be imposed on claims of jurisdiction.21 The focus on whether the state claiming jurisdiction has a legitimate interest and a substantial connection to the matter at hand, combined with an assessment of the consideration of other interests, has the advantage of incorporating a wide range of complex international law concepts and at the same time being easily understandable.
Put as a normative expression, I have since 2015 argued for the following principles:

In the absence of an obligation under international law to exercise jurisdiction, a state may only exercise jurisdiction where:

(1) there is a substantial connection between the matter and the state seeking to exercise jurisdiction;
(2) the state seeking to exercise jurisdiction has a legitimate interest in the matter; and
(3) the exercise of jurisdiction is reasonable given the balance between the state’s legitimate interests and other interests.22

This ‘user-friendly’ restatement of applicable legal principles benefits from being relevant both for matters traditionally classed as falling within public international law and matters traditionally classed as falling within private international law (or conflict of laws).

4. Scope of Jurisdiction of Content Blocking

When private international law lawyers speak of jurisdiction, focus is typically placed on so-called personal jurisdiction (jurisdiction in personam) and subject matter jurisdiction. The former relates to the court’s power to adjudicate matters directed against a

21  See further Svantesson, Solving the Internet Jurisdiction Puzzle (n. *) 57–90.
22  Dan Svantesson, ‘A New Jurisprudential Framework for Jurisdiction: Beyond the Harvard Draft’ (2015) 109 Am. J. of Int’l L. Unbound 69.

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi

700   Dan Jerker B. Svantesson particular party while the latter relates to, for example, the substantive areas of law in respect of which the courts may adjudicate matters. Viewing personal jurisdiction and subject matter jurisdiction as the first two dimensions of jurisdiction, there is a third dimension—what we can call the ‘scope of jurisdiction’. Scope of jurisdiction relates to the appropriate geographical scope of orders rendered by a court that has personal jurisdiction and subject matter jurisdiction.23 This question has gained surprisingly little attention until recently. However, it is a question that is increasingly important and therefore deserving of detailed attention. The most famous decided case dealing with the scope of jurisdiction is the long-running dispute between Google Inc. and Equustek Solutions Inc. The case has gained a considerable degree of international attention and the background to the dispute is generally well known. The issue in the appeal was whether Google could be ordered, via an interlocutory injunction, to de-index—with global effect—the websites of a company (Datalink Technology Gateways Inc., and Datalink Technologies Gateways LLC) which, in breach of several court orders, sell the intellectual property of another company (Equustek Solutions Inc.) via those websites. On 28 June 2017, the Supreme Court of Canada handed down its judgment with the majority concluding that: [S]ince the interlocutory injunction is the only effective way to mitigate the harm to Equustek pending the resolution of the underlying litigation, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non- existent, the interlocutory injunction should be upheld.24

This conclusion was reached via a belaboured journey through the quagmire of both legal and technical misunderstandings and half-truths. Most of those misunderstandings and half-truths are highlighted with commendable clarity in the dissenting judgment by Côté and Rowe JJ. However, Canada’s Supreme Court is far from the only court that has opted for the deceptively attractive, but highly dangerous, path of global orders. US courts do so routinely without displaying any awareness of the international implications of their decisions. One explicit example of this can be found in the factually complex Garcia case, in which an actress cast in a minor role in a film sought to prevent the publication, on YouTube, of another film in which her scenes had been incorporated. Having failed to secure removal of the content on other grounds, the actress sought and was initially granted takedown based on her alleged intellectual property rights in her performance.25 The decision was later overturned on copyright-related grounds.

23  See further Dan Svantesson, ‘Jurisdiction in 3D—“Scope of (Remedial) Jurisdiction” as a Third Dimension of Jurisdiction’ (2016) 12(1) J. of Priv. Int’l L. 60–76.
24  Google Inc. v Equustek Solutions Inc. 2017 SCC 34, para. 53 (Can.).
25  See Cindy Lee Garcia v Google Inc. and others, 786 F.3d 733 (9th Cir. 2015) (US).


An even more recent example is found in Hassell v Bird, a decision of July 2018 in which the Supreme Court of California reversed an order by the Court of Appeal, thereby ensuring that platforms can continue to rely on the protection afforded under section 230 of the Communications Decency Act.26 Tellingly, neither the Supreme Court of California nor the Court of Appeal saw reason to confront the international implications, even though the injunctive relief sought was for the removal of every defamatory review published by the defendant about the plaintiffs from Yelp.com and from anywhere else they appeared on the internet.27

On the other side of the planet, Australia had initially adopted a sensibly cautious attitude towards scope of jurisdiction issues relating to the internet. In a case from 1999, Simpson J observed that:

[a]n injunction to restrain defamation in NSW [New South Wales] is designed to ensure compliance with the laws of NSW, and to protect the rights of plaintiffs, as those rights are defined by the law of NSW. Such an injunction is not designed to superimpose the law of NSW relating to defamation on every other state, territory and country of the world. Yet that would be the effect of an order restraining publication on the Internet.28

This display of judicial self-restraint was departed from in 2017 via Pembroke J’s unfortunate judgment in X v Twitter Inc.29 There, the Supreme Court of New South Wales granted an order requiring Twitter to remove content anywhere in the world posted by one of its users. In fact, the order went as far as to require Twitter (a foreign defendant) to block or remove—with worldwide effect—future postings (regardless of subject matter) made by the unidentified (potentially foreign) person responsible for the postings at issue in the dispute.

What makes Pembroke J’s decision even more alarming is the limited attention he directed at the matter’s nexus to Australia. Pembroke J had no difficulty finding jurisdiction over the dispute even though the primary relief sought against the foreign defendants included injunctions intended to restrain their conduct outside Australia, and the only contact with Australia that Pembroke J emphasized was that: ‘Among other things, the injunction sought to compel or restrain the performance of certain conduct by the defendants everywhere in the world. That necessarily includes Australia.’30 Given that the action was initiated in Australia, it is perhaps likely that the plaintiff was based, or at least active, in Australia. However, based only on how Pembroke J expressed the judgment, it seems possible for any party anywhere in the

26  234 Cal. Rptr 3d 867 (2018). Section 230(c)(1) of the Communications Decency Act states that: ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’.
27  Hassell (n. 26) 4.
28  Macquarie Bank Ltd & Anor v Berg [1999] NSWSC 526, para. 14 (Aus.).
29  [2017] NSWSC 1300 (Aus.).
30  ibid. para. 20.


world to bring an action in Australia, against any company in the world, seeking global blocking of internet content.

For our purposes, it is also interesting to note that Pembroke J used his judgment to send a message to the tech industry more generally, gratuitously noting that the applicable legal (equitable) principles are ‘equally applicable to Facebook, Instagram and any other online service or social networking web site that could be used to facilitate the posting of confidential information or private images belonging to another person’.31 And given that Twitter—a company with its main seats in the United States and in Ireland—did not show up to defend the action, it is significant that Pembroke J emphasized that Twitter was still within the court’s reach without assistance from the legal systems in the United States and Ireland: ‘there are assets in the jurisdiction [namely through Twitter Australia Holdings Pty Limited], whether or not, given the non-appearance of the defendants, a monetary judgment for costs in favour of the plaintiff is enforceable in California or the Republic of Ireland’.32 This too may be seen as evidence of a hardening attitude towards internet intermediaries.

It may be worth noting in passing that while there is an extensive international academic and policy debate about the merits of the Supreme Court of Canada’s decision in Equustek, and the CJEU cases (see later) discussing the issue of scope of jurisdiction, decisions from other parts of the world, such as Hassell v Bird and X v Twitter, which also involved claims of global scope of jurisdiction—for example, through content removal with global effect—are virtually completely ignored in the debates.

In an Opinion of 10 January 2019, Advocate General Szpunar expressed great scepticism regarding global de-referencing orders:

The idea of worldwide de-referencing may seem appealing on the ground that it is radical, clear, simple and effective.
Nonetheless, I do not find that solution convincing, because it takes into account only one side of the coin, namely the protection of a private person’s data.33

The Advocate General’s concerns are well founded. In fact, as I noted in a journal article in mid-2018 in relation to this case,34 arguably the biggest challenge facing internet jurisdiction is the current trend of the ends justifying the means; that is, there is an increasingly widespread view that the pursuit of the policies behind the substantive law (the goals) justifies the stretching of jurisdictional rules (the means), in the context of both private and public international law.

This is an extremely dangerous development, and I use both this and other publications to raise a warning flag, cautioning us not to forget that the goal of jurisdictional rules is to contribute towards securing a peaceful coexistence. If all we wanted was to secure the greatest possible reach of our substantive laws, we would hardly need any sophisticated

31  ibid. para. 19.
32  ibid. para. 54.
33  C-507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) ECLI:EU:C:2019:15, Opinion of AG Szpunar, para. 36.
34  See Svantesson, ‘Jurisdictional issues and the internet’ (n. *) 717.


private international law rules. We would merely proclaim that our laws always apply, that our courts can always claim jurisdiction, and that our court orders must be enforced all over the world. Simple and predictable, but also destructive and utterly misguided—this is clearly not what we want.

The background to the case in relation to which Advocate General Szpunar reached his conclusions can be traced to the 2014 decision by the CJEU in the Google Spain case, in which the Court articulated what has been called the ‘right to be forgotten’.35 In essence, that case arose from a complaint made by a Spanish man about the search results that were displayed if anyone searched his name on the Google search engine. The CJEU upheld a right for people to have certain search results delisted from search engines. However, the Court was never asked to deal with the scope of jurisdiction question. As a direct, and unfortunate, consequence of this, there is considerable controversy about how widely—geographically speaking—search engines need to delist search results based on the so-called ‘right to be forgotten’. Different data protection authorities in the EU Member States have taken radically different approaches to this matter. In a media release of 12 June 2015, the French data protection authority—the Commission Nationale de l’Informatique et des Libertés (CNIL)—adopted an extremist view and, amongst other things, stated that:

CNIL considers that in order to be effective, delisting must be carried out on all extensions of the search engine and that the service provided by Google search constitutes a single processing. In this context, the President of the CNIL has put Google on notice to proceed, within a period of fifteen (15) days, to the requested delisting on the whole data processing and thus on all extensions of the search engine.36

This sparked a legal battle that reached the French courts, and a referral was made to the CJEU. The questions referred may, somewhat simplified, be summarized as follows: must a search engine operator deploy the de-referencing to all of the domain names used by its search engine? If not, must a search engine operator only remove the links on the domain name corresponding to the state in which the request is deemed to have been made, or on the national extensions used by that search engine for all of the Member States of the European Union? Must a search engine operator use ‘geoblocking’? If so, only from an IP address deemed to be located in the state of residence of the person benefiting from the ‘right to de-referencing’, or even, more generally, from an IP address deemed to be located in one of the Member States?

The binary nature of the questions advanced by the Conseil d’État is both crude and inadequate, and I would rather be inclined to a different moulding of the relevant issues. In my view, we can get out of the quagmire and regain firm ground only if we realize that this is not an area that lends itself to such simplistic binary questions. Rather, what we

35  See C-131/12 Google Spain SL v Agencia Española de Protección de Datos [2014] ECLI:EU:C:2014:317.
36  ‘CNIL orders Google to apply delisting on all domain names of the search engine’ (CNIL, 12 June 2015).


are dealing with—the appropriate protection of personality rights—will always be a matter of degree. Importantly, this is reflected in Advocate General Szpunar’s Opinion. The Advocate General concluded that:

the Court’s answer to the first question should be that the provisions of Article 12(b) and point (a) of the first paragraph of Article 14 of Directive 95/46 must be interpreted as meaning that the operator of a search engine is not required, when granting a request for de-referencing, to operate that de-referencing on all the domain names of its search engine in such a way that the links at issue no longer appear, regardless of the place from which the search on the basis of the requester’s name is carried out.37

To this he added that:

Once a right to de-referencing is established, it is thus for the operator of a search engine to take all steps available to him to ensure effective and complete de-referencing. That operator must take all the steps which are technically possible. So far as the case before the referring court is concerned, that includes, inter alia, the technique known as ‘geo-blocking’, irrespective of the domain name used by the internet user making the search.38

Interestingly, and sensibly, the Advocate General does not rule out the possibility that, in certain situations, a search engine operator may be required to take de-referencing actions at the worldwide level.39 This is similar to the nuanced approach advocated by the Swedish data protection authority in a parallel case that was before the Swedish courts,40 and in line with a proposed framework for scope of jurisdiction in de-referencing cases I first advanced in 2015.41

On 24 September 2019, the CJEU ruled that:

where a search engine operator grants a request for de-referencing pursuant to [the relevant] provisions, that operator is not required to carry out that de-referencing on all versions of its search engine, but on the versions of that search engine corresponding to all the Member States, using, where necessary, measures which, while meeting the legal requirements, effectively prevent or, at the very least, seriously discourage an internet user conducting a search from one of the Member States on

37  C-507/17, Opinion of AG Szpunar (n. 33) para. 63.
38  ibid. para. 74 (internal reference omitted).
39  ibid. para. 62.
40  See Datainspektionen överklagar Google-dom [The Swedish Data Protection Authority appeals the Google judgment] (30 May 2018).
41  See Dan Svantesson, ‘Limitless Borderless Forgetfulness? Limiting the Geographical Reach of the “Right to be Forgotten”’ (2015) 2(2) Oslo L. Rev. 116–38. Amended version published in Polcak and Svantesson, Information Sovereignty (n. *) 227–8.

37  C-507/17, Opinion of AG Szpunar (n. 33) para. 63. 38  ibid. para. 74 (internal reference omitted). 39  ibid. para. 62. 40  See Datainspektionen överklagar Google-dom (30 May 2018) . 41  See Dan Svantesson, ‘Limitless Borderless Forgetfulness? Limiting the Geographical Reach of the ‘Right to be Forgotten’ (2015) 2(2) Oslo  L.  Rev. 116–38. Amended version published in Polcak and Svantesson, Information Sovereignty (n. *) 227–8.


the basis of a data subject’s name from gaining access, via the list of results displayed following that search, to the links which are the subject of that request.42

Importantly, the CJEU emphasized that:

•  ‘numerous third States do not recognise the right to de-referencing or have a different approach to that right’;43
•  ‘the right to the protection of personal data is not an absolute right, but must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality’;44
•  ‘the balance between the right to privacy and the protection of personal data, on the one hand, and the freedom of information of internet users, on the other, is likely to vary significantly around the world’;45
•  ‘While the EU legislature has . . . struck a balance between that right and that freedom so far as the Union is concerned . . . , it must be found that, by contrast, it has not, to date, struck such a balance as regards the scope of a de-referencing outside the Union’;46
•  ‘it is in no way apparent . . . that the EU legislature would . . . have chosen to confer a scope on the [relevant] rights . . . which would go beyond the territory of the Member States and that it would have intended to impose on an operator which, like Google, falls within the scope of that directive or that regulation a de-referencing obligation which also concerns the national versions of its search engine that do not correspond to the Member States’.47

Finally, it must be noted that the CJEU did not close the door to the nuanced approach envisaged by Advocate General Szpunar and the Swedish DPA (as referred to earlier):

while, as noted . . . EU law does not currently require that the de-referencing granted concern all versions of the search engine in question, it also does not prohibit such a practice. Accordingly, a supervisory or judicial authority of a Member State remains competent to weigh up, in the light of national standards of protection of fundamental rights . . .
, a data subject’s right to privacy and the protection of personal data concerning him or her, on the one hand, and the right to freedom of information,

42  C-507/17 Google LLC, successor in law to Google Inc. v Commission nationale de l'informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:772, para. 74. See also Geert Van Calster, ‘Court of Justice in Google sees no objection in principle to EU “Right to be forgotten” leading to worldwide delisting orders. Holds that as EU law stands, however, it is limited to EU-wide application, leaves the door open to national authorities holding otherwise’ (GAVC Law, 2019); and Dan Svantesson, ‘The Court of Justice of the European Union steers away from global removal orders’ (LinkedIn, 24 September 2019).
43  C-507/17 (n. 42), para. 59.
44  ibid. para. 60.
45  ibid. para. 60.
46  ibid. para. 61.
47  ibid. para. 62.


on the other, and, after weighing those rights against each other, to order, where appropriate, the operator of that search engine to carry out a de-referencing concerning all versions of that search engine.48

The implications of the outcome, as well as the reasoning that led to it, are highly significant, as it can be expected that the EU’s approach will be influential or even standard setting. Yet the clarity of the CJEU’s approach to scope of jurisdiction issues was arguably muddled by its decision in Case C-18/18. There, an Austrian politician sought to have content, argued to be defamatory, removed, and future content blocked, by Facebook Ireland Ltd with worldwide effect. Put simply, the CJEU ruled that the EU’s e-Commerce Directive does not preclude a court of a Member State from ordering a host provider (such as a social media site) to remove information it stores and block information uploaded in the future, in a range of circumstances such as where the content of the information is ‘equivalent’ (a controversial concept in the setting of pre-emptive content blocking) to the content of information which was previously declared to be unlawful.

The freedom of expression implications are far-reaching and scholars such as Daphne Keller have discussed them in detail.49 Here, I will limit my focus to the scope of jurisdiction aspect of the case, a topic on which the CJEU had surprisingly little to say. Indeed, all the CJEU did in this regard was to: (1) conclude that the EU’s e-Commerce Directive50 does not preclude a court of a Member State from ‘ordering a host provider to remove information covered by the injunction or to block access to that information worldwide within the framework of the relevant international law’;51 and (2) point out that it is ‘up to Member States to ensure that the measures which they adopt and which produce effects worldwide take due account of those rules’.52

This has led some commentators to conclude that ‘this was no more than a decision about the dividing line between EU law and national law’ and that the CJEU ‘determined that in this case the question of territoriality was outside the scope of EU law’.53 However,

48  ibid. para. 72. This nuanced approach was initially canvassed in detail in Dan Svantesson, ‘The Google Spain case: Part of a harmful trend of jurisdictional overreach’, EUI Working Paper RSCAS 2015/45 (2015).
49  Daphne Keller, ‘Dolphins in the Net: Internet Content Filters and the Advocate General’s Glawischnig-Piesczek v Facebook Ireland Opinion’ (Stanford Center for Internet and Society, 4 September 2019).
50  Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Aspects of Information Society Services, in particular electronic commerce, in the Internal Market [2000] OJ L178.
51  C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd [2019] ECLI:EU:C:2019:821, para. 55.
52  ibid. para. 52.
53  Graham Smith, ‘Notice and Stay-down Orders and Impact on Online Platforms’ (Web Page, October 2019).


it may be somewhat rash to exclude the need for a deeper examination of the judgment based on such assertions. In fact, the more interesting aspect here is what the CJEU did not say.

Around the globe, Case C-18/18 has generated news headlines such as: ‘Facebook can be forced to police and remove illegal content worldwide, Europe’s top court says’,54 and ‘Facebook can be ordered to remove content worldwide, E.U. says’.55 These, and similar, headlines will not merely shape public opinion about the decision around the world; they stand to influence the behaviour of courts around the world and shape the law on scope of jurisdiction. Indeed, just weeks after the CJEU’s decision, the High Court of Delhi referred to Case C-18/18 in granting an order requiring Facebook, Twitter, and Google to remove certain content globally based on that content being defamatory under local law in India.56 This surely disposes of any notions suggesting that the case is of intra-EU interest only.

5.  Concluding Remarks

In the previous sections, I have sought to highlight a selection of internet jurisdiction issues of direct relevance for the topic of internet intermediary liability. I have noted how the ‘honeymoon’ period during which governments were primarily interested in ensuring that internet intermediaries blossomed as a driving force of digitalization seems to be over, and that there is a hardening regulatory attitude towards internet intermediaries. As far as jurisdictional issues are concerned, this trend takes the form of more aggressive jurisdictional claims over internet intermediaries. However, as I emphasized, while increased regulatory oversight is justified, we would do well to remember that digital platforms exist because we see reasons to use them, and we must take care not to undermine the values they bring. This is clearly a matter of striking the right balance.

I have also shown that there are developments indicative of a trend against upholding the choice-of-forum and choice-of-law clauses internet intermediaries impose on their users, and I have pointed to the complications that arise where law enforcement agencies request internet intermediaries to hand over user data. In this latter context, the point was made that considerable progress has been made, not least due to the current move away from a strict territoriality focus.

54  ‘Facebook can be Forced to Police and Remove Illegal Content Worldwide, Europe’s Top Court Says’ (ABC News, 4 October 2019).
55  Maria C. Baca, ‘Facebook can be Ordered to Remove Content Worldwide, EU Says’, Washington Post (4 October 2019).
56  High Court of Delhi at New Delhi, Swami Ramdev & Anr vs Facebook, Inc. & Ors [23 October 2019] CS (OS) 27/2019.


The ‘lion’s share’ of this chapter was, however, devoted to the pressing issue of scope of jurisdiction. To see why that is so, we need only consider what is at stake in the debates on that topic. Imagine an internet where you cannot access any content unless it complies with every law of all the countries in the world. In this scenario, you would be prevented from expressing views that were critical of many of the world’s dictatorships. You would not be able to question aspects of some religions, such as Islam, due to blasphemy laws. And some of the photos you post of your children would be illegal.

A development like this is not as far-fetched as it may currently seem. Every country wants its laws respected online. The foregoing scenario may be an unavoidable outcome if countries are successful in seeking to impose their laws globally. Even where they happen to be unable to prosecute the person who posted the content, they can try to force the internet intermediaries that host the content to remove or block it. As highlighted previously, there have been numerous examples of courts seeking to impose their content restrictions globally by ordering the major internet platforms to remove or block access to specific content. As also highlighted, this is troubling. After all, what is illegal in one country may be perfectly legal in all other countries. Why should the harshest laws determine what can be posted online? Why should duties imposed by one country trump rights afforded to us by the laws in many other countries or under international human rights laws? The stakes are high and the future of the internet, as we know it, hangs in the balance.


Chapter 37

The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet

Michael Geist*

In the days before widespread broadband, social networks, and online video, a French anti-racism group launched the internet lawsuit heard round the world. In late 2000, the International League against Racism and Anti-Semitism—or Ligue Internationale Contre le Racisme et l’Antisémitisme (LICRA) in French—filed suit against then-internet giant Yahoo, seeking a court order to compel the company to block French residents’ access to postings displaying Nazi memorabilia.1 While Yahoo already blocked access to content on its local French site (yahoo.fr), the lawsuit targeted the company’s primary site based in the United States (yahoo.com). The case attracted immediate interest since it struck at the heart of one of the internet’s most challenging issues—how to bring the seemingly borderless internet to a bordered world.2 Given that the internet has little regard for conventional borders, the question of

* My thanks to Tamara Maschich-Cohen and Philip Abraham for their exceptional research assistance and to the Canada Research Chair programme and Social Sciences and Humanities Research Council of Canada for their financial assistance. Any errors or omissions are the sole responsibility of the author.
1  See Laura Noulhat, ‘Yahoo! between impudence and bad faith’ (Libération, 24 July 2000).
2  See e.g. Michael Geist, ‘Is There a There There: Towards Greater Certainty for Internet Jurisdiction’ (2001) 16 Berkeley Tech. L.J. 1345–406.

© Michael Geist 2020.


whose law applies and which court gets to apply it was a seminal question as the internet mushroomed into the dominant communications network of our time.

Years after the Yahoo France case, the Supreme Court of Canada grappled with many of the same issues in 2017 in an internet jurisdiction case heard around the world.3 Equustek Solutions v Google was a classic David vs Goliath legal battle, pitting a small Canadian company concerned with misuse of its intellectual property against the world’s leading internet search engine.4 While Google was prepared to remove offending search results from its Canadian site, Equustek sought a far broader court order, asking Canadian courts to require the removal of the search results on a global basis. In doing so, Canada’s highest court grappled with the question of whether a single national court can dictate the content of search results for internet users worldwide. At stake was the prospect of a new internet takedown order—an ‘Equustek order’—which could be used to remove global content from a single jurisdiction.5

This chapter examines the Canadian Equustek case, tracing the development of internet jurisdiction cases from the late 1990s to the current legal battles over the appropriate scope of court orders that wield far greater effect than conventional, domestic-based orders.6 The chapter begins by recounting the Yahoo France case, the internet jurisdiction case that placed the conflict challenges squarely on the legal radar screen. It continues with a detailed examination of the Equustek decision and its aftermath, including efforts by Google to curtail the effect of the Canadian court order by obtaining a countervailing order from a US court, and subsequent moves by Canadian courts to extend the ruling to other internet platforms and online issues.
It also identifies an additional risk posed by overbroad national court orders related to online activity, namely the prospect of further empowering large internet intermediaries, which may selectively choose which laws and orders to follow, thereby overriding conventional enforcement of court orders and national regulation.

1.  Where it all Began: The Yahoo France Case

Few internet law cases attracted as much attention as the Yahoo France case, in which a French judge ordered what was then the world’s most popular and widely visited website to implement technical or access control measures blocking auctions featuring


Nazi memorabilia from French residents.7 Yahoo reacted with alarm, maintaining that the French court could not properly assert jurisdiction over the matter. Yahoo noted that the company maintained dozens of country-specific websites, including a Yahoo.fr site customized for France, that were free of Nazi-related content. These country-specific sites targeted the local population in their local language, and endeavoured to comply with all local laws and regulations. The company argued that its flagship site, Yahoo.com, primarily targeted a US audience. Since US free speech laws protect the sale of Nazi memorabilia, the auctions were entirely lawful. Moreover, the Yahoo.com site featured a terms of use agreement, which stipulated that the site was governed by US law. Since the Yahoo.com site was not intended for a French audience, and users implicitly agreed that US law would be binding, the company felt confident that a French judge could not credibly assert jurisdiction over the site.

Judge Jean-Jacques Gomez of the County Court of Paris disagreed, ruling that the court could assert jurisdiction over the dispute since the content found on the Yahoo.com site was available to French residents and was unlawful under French law.8 Before issuing his final order, the judge commissioned an international panel to determine whether the technological means were available to allow Yahoo to comply with an order to keep the prohibited content away from French residents. The panel reported that though such technologies were imperfect, they could accurately identify French internet users at least 70 per cent of the time.9 Based on this report, Judge Gomez ordered Yahoo to ensure that French residents could not access content that violated French law on the site. Failure to comply with the order would result in fines of 100,000 francs per day after a three-month grace period.
Yahoo was unsurprisingly critical of the decision, but rather than appealing the French ruling, it chose to let it stand and to launch a lawsuit of its own in the US courts, seeking an order that the French decision could not be enforced on its home turf. The 9th Circuit Court of Appeals, a US appellate court, ultimately issued a ninety-nine-page split decision that asserted jurisdiction over the dispute but declined to provide Yahoo with its much-desired order.10 The US decision turned on the fact that Yahoo had independently removed much of the offending content, suggesting that the company was not being forced to block legal materials. On the question of jurisdiction, the majority of the court determined that it could assert jurisdiction over the case despite minimal connections to the United States. Indeed, in this case the contacts were limited to a cease-and-desist letter demanding that Yahoo comply with French law, the formal delivery of the lawsuit, and the mere existence of the French court order.

7  Noulhat (n. 1).
8  See Tribunal de grande instance [High Court] (TGI) Paris, LICRA & UEJF v Yahoo! Inc. [22 May 2000] (Fr.).
9  See TGI Paris, LICRA & UEJF v Yahoo! Inc. [20 November 2000] Ordonnance Référé (Fr.); Expert Report.
10  Yahoo! Inc., a Delaware corporation v La Ligue Contre Le Racisme et L'Antisemitisme; L'Union Des Etudiants Juifs De France, 433 F.3d 1199 (9th Cir. 2006) (US).


712   Michael Geist

The French and US courts both demonstrated that the default in most internet jurisdiction cases is to assert jurisdiction, even if doing so is likely to lead to conflicting decisions, thorny conflict of law issues, and regulatory uncertainty. The case succeeded in raising awareness of the risks of legal conflicts online, pointing to the likelihood that attempts to extend national court orders beyond national borders would lead to protracted litigation and a gradual expansion of domestic laws as 'global standards' for the online environment.

2.  Equustek Solutions v Google: Internet Jurisdiction Hits Canada's Highest Court

Canadian courts faced several notable internet jurisdiction cases in the years following the Yahoo France case,11 but it was Equustek Solutions v Google Inc., a case that originated in the Province of British Columbia (BC) in 2014,12 that captured international attention as one of the first such cases to be considered by a nation's highest court.13 The case stemmed from claims by Equustek, a Canadian company, that another company had used its trade secrets to create a competing product and engaged in misleading tactics to trick users into purchasing it. After struggling to get the offending company's website taken offline, Equustek obtained a BC court order requiring Google to remove the site from its search index. Google voluntarily removed search results for the site from Google.ca, a micro-site aimed at the Canadian market, but was unwilling to block the sites from its worldwide index. The BC court nevertheless affirmed that the order applied on an international basis, issuing what amounted to a global takedown order.

Google argued against a global order by pointing to the Yahoo France case, a position that was rejected by the lower court:

Google argues that the Court should not make an order that could affect searches worldwide because it would put Google in the impossible situation of being ordered to do something that could require it to contravene a law in another jurisdiction. This raises the concern addressed by the Baltic proviso in Mareva injunctions. Google gives as an example of such jurisdictional difficulties the case of Yahoo! Inc. v La Ligue Contre Le Racism et L'Antisemitisme [Yahoo] . . .

Yahoo provides a cautionary note. As with Mareva injunctions, courts must be cognizant of potentially compelling a non-party to take action in a foreign jurisdiction that would breach the law in that jurisdiction. That concern can be addressed in

11  See e.g. Bangoura v Washington Post [2005] OJ No. 3849 (Can.).
12  Equustek Solutions Inc. v Google (n. 4).
13  See e.g. Gutnick v Dow Jones & Co. [2002] HCA 56, [2002] ALJR 255 (Austl.).


appropriate cases, as it is for Mareva injunctions, by inserting a Baltic-type proviso, which would excuse the non-party from compliance with the order if to do so would breach local laws. In the present case, Google is before this Court and does not suggest that an order requiring it to block the defendants' websites would offend California law, or indeed the law of any state or country from which a search could be conducted. Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong.14

In assessing the impact of the internet, the court concluded that the global reach of the harm was a reason to issue a broad-based injunction, not to shy away from one:

The Court must adapt to the reality of e-commerce with its potential for abuse by those who would take the property of others and sell it through the borderless electronic web of the internet. I conclude that an interim injunction should be granted compelling Google to block the defendants' websites from Google's search results worldwide. That order is necessary to preserve the Court's process and to ensure that the defendants cannot continue to flout the Court's orders.15

On appeal, the BC Court of Appeal noted that orders with extraterritorial effect are not unusual:

British Columbia courts are called upon to adjudicate disputes involving foreign residents on a daily basis, and the fact that their decisions may affect the activities of those people outside the borders of British Columbia is not determinative of whether an order may be granted. In each case, the court must determine whether it has territorial competence under the CJPTA [Court Jurisdiction and Proceedings Transfer Act]. If it does, it must also determine whether it should make the orders that are sought. Issues of comity and enforceability are concerns that must be taken into account, but they do not result in a simple rule that the activities of non-residents in foreign jurisdictions cannot be affected by orders of Canadian courts.16

While acknowledging the controversial nature of such orders in an internet context—'I do not suggest that these rulings have been without controversy or problems'—the court maintained that 'extensive case law does indicate, however, that international courts do not see these sorts of orders as being unnecessarily intrusive or contrary to the interests of comity'.17

The court ultimately upheld the initial global takedown order, but emphasized the ability to vary it should circumstances warrant:

With respect to extraterritorial effects, Google has, in this Court, suggested that a more limited order ought to have been made, affecting only searches that take place

14  Equustek Solutions Inc. v Google [2014] BCSC 1063, [144] (Can.).
15  ibid. [159].
16  Equustek 2015 (n. 4) [88].
17  ibid. [96].


on the google.ca site. I accept that an order with international scope should not be made lightly, and that where an order with only domestic consequences will accomplish all that is necessary, a more expansive order should not be made. In this respect, the jurisprudence dealing with freeze orders is helpful—where a domestic Mareva injunction will freeze sufficient assets, the court should refrain from granting a more expansive world-wide injunction. The plaintiffs have established, in my view, that an order limited to the google.ca search site would not be effective. I am satisfied that there was a basis, here, for giving the injunction worldwide effect. I have already noted that applications can be made to vary the order should unexpected issues arise concerning comity.18

3.  Supreme Court of Canada Hearing

The Supreme Court of Canada unsurprisingly granted leave to appeal from the BC Court of Appeal decision, since the broader implications of the ruling struck a chord with those concerned with legal overreach on the internet. Indeed, experts noted that if a Canadian court has the power to limit access to information for the globe, presumably other courts do as well.19 While the Canadian courts did not grapple with this possibility, what happens if a Russian court orders Google to remove gay and lesbian sites from its database? Or if a Saudi Arabian court orders it to remove Israeli sites from the index? The possibilities for legal conflict are significant given that local rules of freedom of expression often differ from country to country.

The Canadian Supreme Court hearing, which attracted intervenors such as the Wikimedia Foundation,20 the Electronic Frontier Foundation,21 and the music and movie industry associations,22 focused on issues such as the effectiveness of a Google-targeted order, where the responsibility for identifying conflicting laws should lie, and the fairness of bringing an innocent third party such as Google into the legal fray. Yet largely missing from the discussion was an attempt to grapple with perhaps the biggest question raised by the case: in a seemingly borderless internet, how do courts foster respect for legal rules while avoiding vesting enormous power in the hands of internet intermediaries, who may ultimately find themselves picking and choosing among competing laws?

18  ibid. [107] (emphasis added).
19  See Daphne Keller, 'Ominous: Canadian Court Orders Google to Remove Search Results Globally' (Stanford CIS Blog, 28 June 2017).
20  Wikimedia Foundation, Factum of the Intervener in Equustek Solutions Inc. v Google [2017] SCC 34.
21  Electronic Frontier Foundation, Factum of the Intervener in Equustek Solutions Inc. v Google [2017] SCC 34.
22  International Federation of Film Producers Association & OpenMedia Engagement Network, Factums of the Interveners in Equustek Solutions Inc. v Google [2017] SCC 34.


The effectiveness of a Google-targeted order and the burden of identifying potential global legal conflicts generated spirited debate before the court but no obvious answers.23 While Google noted that focusing on a single search engine ignored numerous alternatives and failed to remove the offending content from the internet, Equustek emphasized Google's unparalleled online influence and its ability to limit public awareness of any website.

The question of responsibility presented a similarly difficult choice. Google and some intervenors argued that it should fall to claimants to assure a court that an extraterritorial order would not violate the laws of other countries. Equustek responded by pointing to Google's economic power, arguing that it was fairer for a company with billions in revenue that does business around the world to bear the burden of identifying legal conflicts. While those issues seemed to leave the court divided, it barely addressed the elephant in the room, namely the dangers of ceding decision-making on whether to abide by the law to global internet giants such as Google and Facebook. That issue was ultimately left unresolved, with the Supreme Court largely avoiding it in its written decision.

4.  The Supreme Court of Canada Decision

The Supreme Court of Canada released its much-anticipated decision in June 2017, upholding the validity of an injunction requiring Google to remove search results on an international basis.24 The 7–2 decision did not address the broader implications, content to limit its reasoning to the need to address the harm being sustained by a Canadian company, the limited harm or burden to Google, and the ease with which potential conflicts could be addressed by adjusting the global takedown order. In doing so, it arguably invited more global takedowns without requiring those seeking takedowns to identify potential conflicts or assess the implications in other countries.

The Supreme Court's majority decision was written by Justice Rosalie Abella, who framed the case as follows:

The issue in this appeal is whether Google can be ordered, pending a trial, to globally de-index the websites of a company which, in breach of several court orders, is using those websites to unlawfully sell the intellectual property of another company.25

Characterized that way, the outcome to uphold the order was no surprise. As the dissent noted, this was likely a permanent order, not a temporary one. Further,

23  See Supreme Court of Canada, 'Webcast of the Hearing on 2016-12-06', Equustek Solutions Inc. v Google [2017] SCC 34.
24  Equustek Solutions Inc. v Google [2017] SCC 34 (Can.) (hereafter Equustek 2017).
25  ibid. [1].


'selling the IP of another company' is an odd way of referencing the sale of competing products that used trade secrets.

Justice Abella proceeded to analyse the law of injunctions but, for internet watchers, the key aspects of the ruling came with the discussion of the internet implications. The decision acknowledged the challenge of a global internet order, but concluded that an international takedown was necessary to provide the Canadian company with an effective remedy:

The problem in this case is occurring online and globally. The Internet has no borders—its natural habitat is global. The only way to ensure that the interlocutory injunction attained its objective was to have it apply where Google operates—globally. As Fenlon J. found, the majority of Datalink's sales take place outside Canada. If the injunction were restricted to Canada alone or to google.ca, as Google suggests it should have been, the remedy would be deprived of its intended ability to prevent irreparable harm. Purchasers outside Canada could easily continue purchasing from Datalink's websites, and Canadian purchasers could easily find Datalink's websites even if those websites were de-indexed on google.ca. Google would still be facilitating Datalink's breach of the court's order which had prohibited it from carrying on business on the Internet. There is no equity in ordering an interlocutory injunction which has no realistic prospect of preventing irreparable harm.26
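The gap between what Google offered (removing results from google.ca) and what the court ordered (removal from the worldwide index) can be pictured as the difference between a per-country overlay of suppressed URLs and deletion from the master index itself. The toy sketch below is invented for illustration; it is not how any real search engine stores its index, and the URLs are placeholders.

```python
class SearchIndex:
    """Toy index contrasting country-scoped de-indexing with global removal."""

    def __init__(self, urls):
        self.master = set(urls)        # the worldwide index
        self.hidden_by_country = {}    # per-country overlays of suppressed URLs

    def deindex(self, url, country=None):
        if country is None:
            # Global takedown: the URL disappears for every user, everywhere.
            self.master.discard(url)
        else:
            # Country-scoped takedown: only that country's results change.
            self.hidden_by_country.setdefault(country, set()).add(url)

    def results(self, country):
        return self.master - self.hidden_by_country.get(country, set())

idx = SearchIndex({"datalink.example", "equustek.example"})

idx.deindex("datalink.example", country="ca")   # what Google offered
print(sorted(idx.results("ca")))                # ['equustek.example']
print(sorted(idx.results("us")))                # datalink.example still listed

idx.deindex("datalink.example")                 # what the court ordered
print(sorted(idx.results("us")))                # ['equustek.example']
```

The sketch makes the court's effectiveness point concrete: a country-scoped removal leaves the contested site reachable from every other country's results, whereas removal from the master index is necessarily worldwide.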

The majority was not persuaded by concerns about potential legal conflicts of a global takedown order, characterizing them as 'theoretical' and indicating that it would be unfair to place the onus on Equustek to determine whether the order would be legally permissible in other countries:

Google's argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, 'Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong'. In the absence of an evidentiary foundation, and given Google's right to seek a rectifying order, it hardly seems equitable to deny Equustek the extraterritorial scope it needs to make the remedy effective, or even to put the onus on it to demonstrate, country by country, where such an order is legally permissible. We are dealing with the Internet after all, and the balance of convenience test has to take full account of its inevitable extraterritorial reach when injunctive relief is being sought against an entity like Google.27

This is a key aspect of the decision, as the court effectively concluded that those seeking global takedown orders do not need to canvass the laws in other countries to

This is a key aspect of the decision as the court effectively concluded that those ­seeking global takedown orders do not need to canvass the laws in other countries to 26  ibid. [41].

27  ibid. [47].


consider the potential for conflicts with their request. The majority also concluded that responding to a global takedown would not interfere with Google's neutral character in providing search results, nor would it involve significant inconvenience:

I have trouble seeing how this interferes with what Google refers to as its content neutral character. The injunction does not require Google to monitor content on the Internet, nor is it a finding of any sort of liability against Google for facilitating access to the impugned websites. As for the balance of convenience, the only obligation the interlocutory injunction creates is for Google to de-index the Datalink websites. The order is, as Fenlon J. observed, 'only a slight expansion on the removal of individual URLs, which Google agreed to do voluntarily'. Even if it could be said that the injunction engages freedom of expression issues, this is far outweighed by the need to prevent the irreparable harm that would result from Google's facilitating Datalink's breach of court orders.

Google did not suggest that it would be inconvenienced in any material way, or would incur any significant expense, in de-indexing the Datalink websites. It acknowledges, fairly, that it can, and often does, exactly what is being asked of it in this case, that is, alter search results. It does so to avoid generating links to child pornography and websites containing 'hate speech'. It also complies with notices it receives under the US Digital Millennium Copyright Act, Pub. L. No. 105-304, 112 Stat. 2680 (1998) to de-index content from its search results that allegedly infringes copyright, and removes websites that are subject to court orders.28

Yet the reality is that the inconvenience does not come from the technical side of removing search results, which is indeed trivial. The real inconvenience comes from conflicts of laws and the potential for global takedown orders arriving from across the planet, thereby opening the door to other countries choosing what Canadians might be able to find in search results. Those issues—along with the need to identify the laws in other countries in order to avoid conflicts—do involve significant inconvenience and expense.

The last quoted paragraph noted that Google already removes links to certain content such as hate speech, child pornography, and material subject to copyright takedowns, highlighting the cumulative effect of court decisions and regulations that individually may seem reasonable but which quickly move towards takedowns of all kinds. In fact, the majority cited international support for internet injunctions with global effect as a justification for its own order. The net result is the expectation of all countries and courts that they may issue global takedown orders regardless of the impact on internet users outside the jurisdiction or on internet intermediaries.

The dissent rested largely on three issues: the notion that the injunction was effectively permanent, its limited effectiveness, and the availability of alternatives. On the term of the injunction:

In our view, granting of the Google Order further erodes any remaining incentive for Equustek to proceed with the underlying action. The effects of the Google Order

28  ibid. [50].


are final in nature. Respectfully, the pending litigation assumed by our colleague Abella J. is a fiction. The Google Order, while interlocutory in form, is final in effect. Thus, it gives Equustek more relief than it sought.29

On effectiveness, the dissent stated:

The most that can be said is that the Google Order might reduce the harm to Equustek which Fenlon J. found 'Google is inadvertently facilitating' (para. 152). But it has not been shown that the Google Order is effective in doing so. As Google points out, Datalink's websites can be found using other search engines, links from other sites, bookmarks, email, social media, printed material, word-of-mouth, or other indirect means. Datalink's websites are open for business on the Internet whether Google searches list them or not. In our view, this lack of effectiveness suggests restraint in granting the Google Order.30

Finally, on alternatives:

In our view, Equustek has an alternative remedy in law. Datalink has assets in France. Equustek sought a world-wide Mareva injunction to freeze those assets, but the Court of Appeal for British Columbia urged Equustek to pursue a remedy in French courts: 'At present, it appears that the proposed defendants reside in France . . . The information before the Court is that French courts will assume jurisdiction and entertain an application to freeze the assets in that country' (2016 BCCA 190, 88 B.C.L.R. (5th) 168, at para. 24). We see no reason why Equustek cannot do what the Court of Appeal urged it to do. Equustek could also pursue injunctive relief against the ISPs, as was done in Cartier, in order to enforce the December 2012 Order. In addition, Equustek could initiate contempt proceedings in France or in any other jurisdiction with a link to the illegal websites.31

5.  After Equustek: The Risks of Global Takedown Orders From National Courts

The Equustek decision was greeted with elation by rightholders such as the music and movie industries, which envisioned the possibility of using Canadian court orders to mandate the removal of search results on a global basis.32 Yet despite claims that the

29  ibid. [63].
30  ibid. [79].
31  ibid. [81].
32  Corey Poole, 'Music Canada applauds Supreme Court of Canada decision confirming that Internet intermediaries can be ordered to deindex illegal sites worldwide' (Music Canada, 28 June 2017).


case would effectively require internet intermediaries to adopt 'an affirmative duty to take steps to prevent the Internet from becoming a black market', the reality is that the Supreme Court expressly rejected any monitoring requirement or attribution of liability based merely on facilitating access to unlawful or infringing content.

The decision may have stopped short of creating a new liability framework, but it did open the door to three internet-related legal risks. First, much like the Yahoo France case years before, it invited protracted litigation with the possibility of competing court orders from different jurisdictions. Secondly, it facilitated an expanded national approach to global issues, with the likelihood of Canadian courts relying on the Equustek decision to expand the applicability of domestic law outside Canada's national borders. Thirdly, it may have inadvertently vested increased power in the hands of internet intermediaries, who could use the legal uncertainty and conflict to self-select which laws would govern their activities.

5.1  Conflicting Court Orders

In the aftermath of the Canadian Supreme Court decision, Google filed suit in a US court seeking to block the order's application there. The Supreme Court decision had noted that it was open to Google to raise potential conflicts of laws with the BC court in the hopes of varying the order:

If Google has evidence that complying with such an injunction would require it to violate the laws of another jurisdiction, including interfering with freedom of expression, it is always free to apply to the British Columbia courts to vary the interlocutory order accordingly.33

Yet despite the invitation to Google to adduce evidence of legal conflict, the cross-border legal challenges were precisely what critics of the Supreme Court ruling feared: conflicting rulings, protracted litigation, and legal uncertainty. Indeed, by upholding global takedowns without fully grappling with the implications, the Supreme Court effectively invited other courts to issue conflicting decisions without guidance on how best to resolve the issue.

Google's action in the United States started down that path by arguing before a court in California that 'the Canadian order is "unenforceable in the United States because it directly conflicts with the First Amendment, disregards the Communication Decency Act's immunity for interactive service providers, and violates principles of international comity" '.34 Equustek did not participate in the hearing, arguing in a side letter that it was 'unnecessary and unfair'.35 With no party contesting Google's arguments, the US court

33  Equustek 2017 (n. 24) [46].
34  Google LLC v Equustek Solutions Inc. et al., WL 5000834 (ND Cal. 2017), 12 (US).
35  ibid. 2.


concluded that the Canadian order 'threatens free speech on the global Internet'. Of particular concern to the court was the effect on statutory immunities granted to internet intermediaries through the Communications Decency Act (CDA). The court ruled that the CDA protections, which largely immunize internet intermediaries for the content or postings of third parties, would be lost as a result of the Canadian court order.36

Armed with the US court order, Google returned to the BC courts, seeking a ruling that would vary the scope of the initial order upheld by the Supreme Court of Canada. A BC court nevertheless denied Google's request to vary the injunction requiring it to remove search results from its global index, concluding that the US ruling did not demonstrate that the removal would result in a violation of US law.37 The court distinguished between an order inconsistent with the safe harbour protections and a violation of the First Amendment, concluding that 'the US decision does not establish that the injunction requires Google to violate American law' (emphasis added). Rather:

The effect of the U.S. order is that no action can be taken against Google to enforce the injunction in U.S. courts. That does not restrict the ability of this Court to protect the integrity of its own process through orders directed to parties over whom it has personal jurisdiction.38

The court addressed several other Google arguments that may resurface at the full trial, since the current temporary injunction is set to expire when that trial concludes. Indeed, as of February 2019, it remained possible that Google could appeal the latest ruling or await the trial for a full airing of its arguments as part of the ongoing litigation that was supposedly addressed with finality by the Supreme Court of Canada in June 2017.

5.2  Expanding Equustek

As the Equustek decision worked its way through the Canadian courts, the case began to influence other decisions. For example, AT v Globe24h.com involved the application of 'right to be forgotten'-style remedies under Canadian privacy law.39 Globe24h.com was a Romanian website that republished Canadian court and tribunal records and made personal, financial, and medical information about parties who appeared in decisions easily accessible via popular search engines. The same court and tribunal records were also available on Canadian legal websites such as the Canadian Legal Information Institute (CanLII).40 In fact, it is believed that Globe24h.com downloaded the records from CanLII. However, unlike CanLII, Globe24h.com permitted the records to be indexed by third party search engines such as Google. Since records on Globe24h.com were indexed by search engines, those

36  ibid. 13.
37  Equustek Solutions Inc. v Jack [2018] BCSC 610 (Can.).
38  ibid. [22].
39  AT v Globe24h.com [2017] FC 114 (Can.).
40  Canadian Legal Information Institute.


containing personal information such as names would generally appear in relevant search results. When affected individuals discovered the records, many asked Globe24h.com to remove personal information from its website. For example, The Globe and Mail reported that 'Dawn Fishman's 18-year-old son typed his name into a search engine and found a decade-old ruling from his parents' divorce proceeding. Ms Fishman, who lives in Toronto, was embarrassed to learn details of what she calls a "dark chapter" in their lives were online and easy to find.'41

Globe24h.com's practices were challenged by an applicant who discovered in June 2014 that an Alberta Labour Board decision concerning his case had been republished through Globe24h.com.42 The Federal Court ordered Globe24h.com to remove Canadian decisions containing personal information from its website, to take steps to remove the decisions from search engine caches, to refrain from further republishing such decisions, and to pay the applicant $5,000 in damages and $300 in costs.43

The court was aware that the ruling would have the effect of applying Canadian privacy law to a foreign-based website, yet relied on Equustek for the principle that it was still entitled to issue the order:

As noted by the British Columbia Court of Appeal in Equustek, above, at paragraph 85, '[o]nce it is accepted that a court has in personam jurisdiction over a person, the fact that its order may affect activities in other jurisdictions is not a bar to it making an order.' Further, in the context of Internet abuses, courts of many other jurisdictions have found orders that have international effects to be necessary.44

The importance of effective orders was similarly echoed in College of Optometrists of Ontario et al. v Essilor Group Canada Inc., a 2018 Ontario lower court decision.45 The case involved the Canadian branch of a French company operating out of BC that sold prescription eyeglasses and contact lenses online. At issue was whether Ontario's regulatory scheme should be applied to a company without a physical presence within the province but which sold products to consumers. The court cited Equustek, noting 'the intention was that the order be effective in Canada. Given the nature of the internet this could only be accomplished by extending the injunction around the world.'46 The Ontario Court of Appeal subsequently overturned the lower court decision in 2019, distinguishing the Equustek decision on the grounds that it addressed in personam jurisdiction, not a regulatory scheme.47

The expansion of the Equustek doctrine extends to administrative and criminal law matters. For example, the Quebec Financial Markets Administrative Tribunal was faced with the question of whether several companies were operating without the necessary authorizations in the province by virtue of the existence of corporate websites and a

The importance of effective orders was similarly echoed in College of Optometrists of Ontario et al. v Essilor Group Canada Inc., a 2018 Ontario lower court decision.45 The case involved the Canadian branch of a French company operating out of BC that sold prescription eyeglasses and contact lenses online. At issue was whether Ontario’s regulatory scheme should be applied to a company without a physical presence within the province but which sold products to consumers. The court cited Equustek, noting ‘the intention was that the order be effective in Canada. Given the nature of the internet this could only be accomplished by extending the injunction around the world.’46 The Ontario Court of Appeal subsequently overturned the lower court decision in 2019, distinguishing the Equustek decision on the grounds that it addressed in personam jurisdiction, not a regulatory scheme.47 The expansion of the Equustek doctrine extends to administrative and criminal law matters. For example, the Quebec Financial Markets Administrative Tribunal was faced with the question of whether several companies were operating without the necessary authorizations in the province by virtue of the existence of corporate websites and a 41  Christine Dobby, ‘Canadians upset with Romanian website that exposes court case details’ (The Globe and Mail, 4 January 2015) . 42 See AT (n. 39) [19]. 43 ibid. 44  ibid. [84]. 45  [2018] ONSC 206 (Can.). 46  ibid. [87]. 47  [2019] ONCA 265 (Can.).


presence on the social media giant Facebook.48 The tribunal acknowledged the difference between the Equustek case and its administrative hearing, but concluded:

The action in Equustek Solutions Inc. differs from this action in that it emanated from a civil action while we are dealing with administrative law, which applies a law of public order. Nevertheless, the Tribunal is of the view that it may be appropriate to draw on the principles developed in that judgment when an order against an added party that is a third party to the action, such as Facebook, is required to stop harm and conduct contrary to the Act. In this case concerning the Facebook accounts of PlexCoin and PlexCorps, the order with respect to Facebook is, in this instance, the only way to stop this solicitation of investors in Quebec. In a context where there is an illegal solicitation of investors over the Internet, such an order constitutes an effective remedy to stop a contravention of legislation and prevent many investors from being solicited for a distribution carried out in contravention of the Act. In view of the above, the Tribunal is of the opinion that the power set out in section 94 of the Act respecting the Autorité des marchés financiers to take any action to ensure compliance with the Act enables it to order Facebook Canada to shut down the accounts of PlexCoin and PlexCorps.49

British Columbia (Attorney General) v Brecknell50 provides an illustration of the Equustek analysis permeating into criminal law. A BC court was faced with the question of whether it could compel a non-resident internet company with only a virtual presence to produce documents to law enforcement in a criminal matter. Craigslist, a popular online classified ads site, was asked to provide a user’s name, address, IP address, phone number, and all relevant information associated with a post. The company was willing to respond to production requests sent by email, but without a physical presence, no valid service could be established. The Attorney General of BC cited the Equustek case for the proposition that Canadian courts had ‘in personam jurisdiction over Craigslist because, by conducting business in BC, it has a real and substantial connection to the province’. A lower court rejected the request, distinguishing Equustek on the grounds that unlike Craigslist, Google had a physical presence in Canada, and:

the remedy sought in Equustek, an injunction, was to prohibit Google from delivering search results pointing to the defendant’s websites. The underlying action between the plaintiff and the defendant relates to the alleged violation of trade secrets and intellectual property rights while the defendant’s operations were based in Vancouver, B.C. The Court of Appeal found that because the underlying civil suit was within the territorial competence of the Supreme Court, the Court had territorial competence over the injunction application: Equustek at paras. 40 and 41. In the case of Craigslist, however, what the AGBC [Attorney General of British Columbia] seeks to do is

48  See Autorité des marchés financiers v PlexCorps [2017] QCTMF 88 (Can.).
49  ibid. [173], [175], [176], [177].
50  [2018] BCCA 5 (Can.).


The Equustek Effect: A Canadian Perspective   723

obtain documents from a third party record holder where the custodian of the records is outside the country. While it is true that production orders may well gather records from outside the country, the legislative history of the provision indicates there needs to be a custodial or record-keeping presence within our own borders. While Craigslist does business in BC and, by virtue of the decision in Equustek, has a real and substantial connection to the province through its business, in my view, that does not give the Court jurisdiction to issue an order for evidence gathering to be served and implemented in a foreign country. In my view, a Criminal Code production order cannot be issued from a Canadian court against Craigslist, an American company with only a virtual presence in B.C.51

The Court of Appeal overturned the lower court’s ruling and issued the order, drawing on Equustek’s willingness to adapt the rules to reflect the jurisdictional challenges of the internet.52 Indeed, the court warned:

in the Internet era it is formalistic and artificial to draw a distinction between physical and virtual presence. Corporate persons, as I have noted, can exist in more than one place at the same time. With respect, I do not think anything turns on whether the corporate person in the jurisdiction has a physical or only a virtual presence. To draw on and rely on such a distinction would defeat the purpose of the legislation and ignore the realities of modern day electronic commerce. Moreover, the current facts illustrate the doubtful relevance of the distinction. Craigslist’s virtual presence is closely connected to the circumstances of the alleged offence, because at least some elements of the alleged offence were facilitated by relying on the services Craigslist provides virtually.53

With respect to the challenges of enforcing a global order, the court took comfort from Equustek, noting ‘problems of enforceability may often need to be considered when courts make discretionary decisions, since that issue is relevant to the exercise of its discretion. Those difficulties do not, however, deprive the court of jurisdiction to make the order.’54 In other words, Canadian courts are comfortable issuing orders involving internet platforms that may be difficult to enforce given the global dimensions of the network, adopting an approach where challenges associated with enforceability are not treated as a legal barrier. The need to consider the implications of a global order was addressed by the BC Court of Appeal in Nazerali v Mitchell.55 Mitchell published a book in which sixteen of the twenty-one chapters referenced Nazerali, who claimed the material was defamatory. Both Google and domain name registrar GoDaddy.com were also named as defendants in the case. A trial court awarded over $1.2 million in damages and special costs. Mitchell appealed the trial decision on numerous grounds, including the granting of a

51  ibid. [42].
52  ibid.
55  [2018] BCCA 104 (Can.).

53  ibid. [40].

54  ibid. [52].


permanent worldwide injunction ‘permanently enjoin[ing the Respondents] from publishing on the Internet or elsewhere any defamatory words concerning the Plaintiff’. While the initial decision predated the Equustek case, the appellate court used the decision to establish new limits on the injunction:

The decision in the present case was issued prior to Google Inc. v Equustek Solutions Inc. which upheld an interlocutory injunction requiring Google to de-index all of the defendant’s websites. The majority of the Supreme Court of Canada agreed that the Supreme Court of British Columbia had in personam jurisdiction over Google and could make an order with extraterritorial effect. Google had argued that a global injunction violated international comity because the injunction could require it to violate the laws of another jurisdiction, including interfering with freedom of expression. The majority dealt with this argument by pointing out that Google was at liberty to apply to vary the interlocutory order. In the present case, the trial judge had in personam jurisdiction over the appellants because they attorned to the jurisdiction of the Supreme Court of British Columbia. However, in order to respect international comity, the injunction should have given the appellants liberty to apply to vary it as circumstances may require. As the injunction is permanent in nature, it may not be possible for the court to vary it unless liberty to apply is expressly given.56

The case points to the notable limitation in the Equustek ruling, namely the necessity to consider potential changes to a global order based on circumstances that include conflicting laws.

5.3  Expanding Intermediary Power

The Supreme Court was influenced by the perceived imbalance of power in the Equustek case pitting a global internet giant against a small Canadian company. While that imbalance was persuasive in shifting the onus onto Google to identify potential legal conflicts, it also entrenched the company (and others like it such as Facebook or Twitter) as the party responsible for sorting through legal and compliance conflicts and determining how to effectively address the potential for conflict of laws. The growing comfort with laying responsibility for addressing conflicting laws and regulations at the feet of internet intermediaries carries considerable risk. In the context of competing court orders, it raises the possibility of ceding decision-making on whether to abide by the law to private interests, effectively leaving it to Google to decide whether to comply with Canadian law. While Google was content to abide by the Canadian court order while simultaneously working to marshal evidence that it conflicted with US law, it has been less open to complying with other foreign laws or

56  ibid. [106], [107], [108].


rulings that are more obviously at odds with US constitutional free speech norms.57 Since local content laws differ from country to country, there is a great likelihood of conflicts. That leaves two possible problematic outcomes: local courts deciding what others can access online or companies such as Google selectively deciding which rules they wish to follow. The issue of empowering intermediaries can quickly expand to other areas where courts or regulators are content to vest responsibility for addressing conflicts or enforcement in private companies. For example, there has been mounting pressure on internet intermediaries to moderate content on their sites more proactively. While the Equustek decision does not require such moderation, mandating broader content moderation and takedowns virtually ensures that the big players will only get bigger given the technology, research, and personnel costs that will be out of the reach of smaller companies. At a time when some of the internet companies already seem too big, content moderation of billions of posts or videos would reaffirm their power, rendering it virtually impossible for upstart players to compete. Supporters of shifting more responsibility to internet companies argue that our court systems or other administrative mechanisms were never designed to adjudicate content-related issues on the internet’s massive scale. Yet internet companies were never designed for it either, and we should at least recognize the cost associated with turning public adjudication over to private entities. Leaving it to search engines, rather than the courts, to determine what is harmful and should therefore be removed from search indexes ultimately empowers Google and weakens our system of due process.
Similarly, requiring hosting providers to identify instances of copyright infringement removes much of the nuance in copyright analysis, creating real risks to freedom of expression. The Equustek case fell short of this requirement, yet by embracing global takedowns without fully grappling with the implications of conflicts and comity, it invariably left internet intermediaries to fill the emerging legal and policy vacuum.

6. Conclusions

The internet is often characterized as a ‘wild west’ where laws cannot be easily applied. Yet the danger of extraterritorial application of court decisions such as those involving Google is that it encourages disregard for the rule of law online, placing internet companies in the unenviable position of choosing the laws and court orders they wish to follow. Moreover, if courts or companies openly disregard foreign court orders, legal

57  Amy Sawitta Lefevre, ‘Thai junta asks Google and YouTube to remove royal “insults”’ (The Guardian, 22 October 2016) .


certainty in the online environment is undermined, fostering cross-border litigation and an expansive approach to applying domestic laws on a global basis. Years of litigation starting with the Yahoo France case suggest that there are no easy answers. The Equustek case is a landmark ruling that held the potential to establish the foundation for global standards on internet jurisdiction and the responsibility of internet intermediaries. By opening the door to global takedowns, however, the ruling invites protracted global litigation, expansive assertions of jurisdiction, and the further empowerment of large internet intermediaries. A preferable approach lies in developing standards that encourage comity and mutual respect for the applicability of the law on the internet by ensuring that national sovereignty is respected. Courts should only issue orders with substantial extraterritorial effect where it is clear that the underlying right and remedy are also available in the affected foreign countries. Global takedown orders or decisions with substantial impact in other jurisdictions are likely to reinforce the perception of the internet as a wild west where disregard for the law is common. For that reason, where there is uncertainty about the legal rights in other jurisdictions, courts should exercise restraint, recognizing that less may be more. Indeed, respect for the law online may depend as much on knowing when not to apply it as on efforts to extend the reach of courts and court orders to a global internet community.


Chapter 38

Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation

Bertrand de La Chapelle and Paul Fehlinger*

In managing, promoting, and protecting [the internet’s] presence in our lives, we need to be no less creative than those who invented it. Clearly, there is a need for governance, but that does not necessarily mean that it has to be done in the traditional way for something that is so very different.1

The topic of jurisdiction has become a core issue for debate on the future of the internet. The internet’s cross-border nature has produced unprecedented benefits for mankind. But it also generates tensions between national legal systems based on the territoriality of jurisdiction. Rooted in the seventeenth-century treaties of the Peace of Westphalia, our international system is based on the separation of sovereignties, and these traditional modes of interstate cooperation struggle to cope with the digital realities of internet intermediaries and cross-border data flows in the twenty-first century.

*  An earlier version of this chapter was published by the authors for the Global Commission on Internet Governance (2016). 1  Kofi Annan, then UN Secretary-General, remarks at the opening session of the Global Forum on Internet Governance (24 March 2004). .

© Bertrand de la Chapelle and Paul Fehlinger 2020.


We are therefore confronted with two major challenges: how to preserve the global nature of cyberspace, with its cross-border data flows and services, while respecting national laws; and how to fight misuses and abuses of the internet while ensuring the protection of human rights. Both challenges require cooperation, as well as policy standards and clear procedures across borders, to ensure efficiency and due process. Since 2012, the Internet & Jurisdiction Policy Network has facilitated a global policy process that now engages over 200 key entities from more than fifty countries. The multistakeholder organization enables coordination and cooperation between governments, major internet companies, technical operators, civil society, and international organizations. Its Secretariat helps these stakeholders develop policy standards and operational solutions for transnational cooperation on jurisdictional issues. This chapter directly draws upon the insights emerging from this pioneering multistakeholder organization. It addresses successively:

• why these issues represent a growing concern for all stakeholders, who are under pressure to find rapid solutions as the uses and misuses of the internet increase;
• the legal arms race produced by uncoordinated actions and unrestrained application of territoriality;
• the struggle of traditional modes of international cooperation to deal with this situation of legal uncertainty, especially with regard to access to data, content restrictions, and domain seizures;
• the resulting dangerous path that threatens to destroy the nature and benefits of the global network and the risks related to the economy, human rights, infrastructure, and security;
• the need to fill the institutional gap in internet governance through innovative processes involving all stakeholder groups; and
• how to move toward transnational cooperation frameworks.

1.  National Jurisdictions and Cross-Border Data Flows and Services

1.1  Conflicting Territorialities

The technical architecture of the internet itself was conceived as cross-border and non-territorial from the outset. The World Wide Web technically allows, by default, access to any link regardless of physical location, and intermediaries serve hundreds of millions of users in shared cross-border online spaces. This transnational nature of the internet has generated unprecedented benefits for humankind, be they political, economic, or social. In particular, it uniquely fulfils the promises of Article 19 of the Universal Declaration of Human Rights regarding access to information ‘irrespective of frontiers’.


Yet, stored or processed data, as well as globally accessible content or services, can be legal in one country while infringing laws in other jurisdictions. From a historical perspective, cross-border interactions were rare, and international legal cooperation tools were designed to handle them as exceptions. However, due to the architecture of the open internet, interactions across borders have become the new normal. As a consequence, cross-border conflicts arise between users, the intermediaries they use, public authorities, and any combination thereof. How to determine the applicable laws when interactions are transnational is becoming increasingly difficult, as the current international system is based on a patchwork of separate and territorially defined national jurisdictions. Teresa Scassa and Robert Currie argue that, ‘put simply, because the Internet is borderless, states are faced with the need to regulate conduct or subject matter in contexts where the territorial nexus is only partial and, in some cases, uncertain. This immediately represents a challenge to the Westphalian model of exclusive territorial state sovereignty under international law.’2 At least four territorial factors can play a role in determining applicable law: the location(s) of internet end-user(s) or connected devices; the location(s) of the servers or devices that store or process the actual data; the locus of incorporation of the internet companies that run the service(s) in question; and, in the case of the World Wide Web, the registrars or registries through which a domain name was registered. These overlapping and often conflicting territorial criteria make both the application of national laws in cyberspace and the resolution of internet-related disputes difficult and inefficient.
The principles of separation of sovereignties and non-interference between states that underpin the international system not only render court decisions difficult to enforce but also prevent the cooperation across borders necessary to efficiently deal with crimes and abuses online. Tensions arise and will only grow as internet penetration reaches five billion users and soon twenty-five billion connected devices from more than 190 different countries with diverse and potentially conflicting national laws, as well as social, cultural, or political sensitivities.3

1.2  A Challenge for All Stakeholders

The present situation of legal uncertainty is a concern for each category of actors. Governments have a responsibility to ensure respect of the rule of law online, protect their citizens, and combat crime. A sense of frustration prevails in the absence of clear standards on how to enforce national laws on the cross-border internet, ranging from

2  Teresa Scassa and Robert Currie, ‘New First Principles: Assessing the Internet’s Challenges to Jurisdiction’ (2010) 42(4) Georgetown J. of Int’l L. 1018.
3  See GSMA Intelligence, ‘The mobile economy 2018. London: GSM Association’ (GSMA, 2018) .


privacy and speech to taxation. Law enforcement agencies in particular feel unable to conduct the investigations involving data flows and intermediaries that are necessary to stop transnational crime. In a system based on strict Westphalian territoriality, the principle of separation of jurisdictions becomes an obstacle to international cooperation. Internet companies, which relied on terms of service early on to establish the jurisdiction of their country of incorporation, now have to handle—and interpret—the 190-plus different national legal frameworks of the countries in which their services are accessible or their connected devices are used. This is a particular challenge for start-ups and medium-sized companies. Faced with more and more direct requests for access to data or takedown of content, they also fear losing the protection of the limited-liability regime they have enjoyed so far and becoming responsible for thousands of microdecisions of a quasi-judicial nature4 with significant human rights dimensions and reputational risks. Technical operators worry that the fundamental separation of layers that forms the basis of the internet architecture will become blurred. Registries and registrars in particular see increasing efforts to leverage the domain name system (DNS) as a content control tool with global reach. Data centre operators, hosting providers, and internet service providers (ISPs), as well as the ‘Internet of Things’ industry, are equally concerned by a growing application of strict territorial sovereignty and increasing liabilities. Civil society groups around the world worry about a potential race to the bottom in terms of protection of freedom of expression and privacy and a perceived privatization of dispute resolution.
Average users are confused by the legal uncertainty about what rules apply to their online activities and feel powerless to obtain predictable and affordable redress when harmed, as multinational litigation is beyond their reach. International organizations struggle because of overlapping thematic scopes or a geographical remit that is not universal. Although some, such as the Council of Europe, the Organisation for Economic Co-operation and Development (OECD), and the United Nations Educational, Scientific and Cultural Organization (UNESCO), have made significant efforts to include civil society, the private sector, and the technical community in their processes, they remain intergovernmental organizations by nature. As such, their capacity to put sensitive but necessary issues on their agenda in a timely manner is limited by the lack of consensus, or worse, dissent, among their members.

1.3  A Core Issue of Internet Governance

The jurisdictional challenge is at the nexus of internet governance and touches upon multiple traditional policy areas: the development of the global digital economy, ensuring

4  Jacques de Werra labelled this new phenomenon ‘massive online micro justice’. Jacques De Werra, ‘Alternative Dispute Resolution in Cyberspace: Can ADR Address the Challenges of Massive Online Micro Justice?’, Presentation at the University of Geneva (27 November 2015) .


a clear and predictable legal environment through cooperation, guaranteeing the exercise of fundamental human rights, and ensuring security and public order. Since 2012, the Internet & Jurisdiction Policy Network’s I&J Retrospect Database has documented more than 1,600 high-level cases around the world that show the growing tension between national jurisdictions due to the cross-border nature of the internet.5 According to the Internet & Jurisdiction Global Status Report 2019, an overwhelming 79 per cent of surveyed stakeholders in the Internet & Jurisdiction Policy Network say that there is insufficient international coordination and coherence to address cross-border legal challenges on the internet.6 Unfortunately, unilateral actions by actors trying to solve the complex transnational jurisdictional conundrum on their own create a legal competition that makes the problem harder rather than easier to solve. Contrary to what they may perceive, however, the different categories of stakeholders have less of a problem with each other than a problem in common—that is: how to manage the coexistence of different norms and rules in shared online spaces. Realizing this is the necessary first step towards a common solution. As the World Economic Forum’s 2016 report on internet fragmentation shows, trends toward the renationalization of cyberspaces are observable,7 and stakeholders at the 1st Global Conference of the Internet & Jurisdiction Policy Network emphasized that ‘If nothing is done the open internet could, in a decade or two, be a thing of the past’.8 Maintaining a global internet by default, which fulfils the ambitions of the Universal Declaration of Human Rights, notably Article 19, and boosts innovation and growth through cross-border data flows and cloud-based services, requires transnational legal cooperation.
At the 2nd Global Conference of the Internet & Jurisdiction Policy Network, organized in partnership with the government of Canada on 26–8 February 2018, stakeholders adopted the Ottawa Roadmap with Work Plans to develop policy standards and operational solutions in three key areas.

• Data & Jurisdiction: how can transnational data flows and the protection of privacy be reconciled with lawful access requirements to address crime?
• Content & Jurisdiction: how can we manage globally available content in the light of the diversity of local laws and norms applicable on the internet?
• Domains & Jurisdiction: how can the neutrality of the internet’s technical layer be preserved when national laws are applied to the DNS?

5  See the Internet & Jurisdiction Policy Network’s Retrospect Database .
6  See Dan Jerker Svantesson, ‘Internet & Jurisdiction Global Status Report 2019’, Internet & Jurisdiction Policy Network (2019) .
7  See William Drake, Vinton Cerf, and Wolfgang Kleinwächter, ‘Internet Fragmentation: An Overview’, World Economic Forum Future of the Internet Initiative White Paper (2016) .
8  ‘Lost in the Splinternet’ (The Economist, 5 November 2016) 50–1.


In all cases, the procedural and substantive elements that jointly need to be addressed to develop balanced regimes were outlined in the Ottawa Roadmap’s ‘Structuring Questions’. On this basis, more than 100 members of the three programmes of the multistakeholder organization from five continents developed so-called Operational Approaches Documents with concrete proposals for operational norms, criteria, and mechanisms.9 These were discussed by almost 300 stakeholders from over fifty countries at the 3rd Global Conference of the Internet & Jurisdiction Policy Network on 3–5 June 2019, organized in partnership with the government of Germany. This resulted in the Berlin Roadmap of the Internet & Jurisdiction Policy Network that guides the development of operational solutions and policy standards.

2.  A Legal Arms Race in Cyberspace?

Solving the internet and jurisdiction challenge is intrinsically linked to the general debate about modalities of global governance. Christoph Knill and Dirk Lehmkuhl had already observed in 2002 that ‘[e]conomic and technological interdependencies have created a range of problems that exceed the scope of national sovereignty and can therefore no longer be sufficiently resolved by the unilateral action of national governments.’10 Marie-Laure Djelic and Sigrid Quack describe these new challenges resulting from the proliferation of a diversity of often overlapping public and private normative orders, which is exemplified by the global digital economy and its cross-border data flows and services, as a transformation from the ‘rule of law to the law of rules’.11 After a historic period of inaction to address issues in cyberspace, during which private normative orders served as the prime regulatory instrument to handle abuses or disputes, states are now confronted with increasing domestic pressure to address cross-border challenges arising from the digitalization of societies and economies and to ensure the efficient enforcement of national laws. As a result, a proliferation of often uncoordinated national legislative or enforcement actions can be observed around the world. A looming risk is a legal arms race, in which states resort to an extensive interpretation of territoriality criteria over cross-border data flows and services. Such ‘hyperterritoriality’ manifests itself by either extending sovereignty beyond national frontiers or strictly reimposing national borders.

9  See ‘Internet & Jurisdiction Policy Network—Operational Approaches Documents’ .
10  Christoph Knill and Dirk Lehmkuhl, ‘Private Actors and the State: Internationalization and Changing Patterns of Governance’ (2002) 15(1) Governance 41, 41.
11  Marie-Laure Djelic and Sigrid Quack, ‘Globalization and Business Regulation’ (2018) 44 Annual Rev. of Sociology 123–43.



2.1 Extraterritoriality Extraterritorial extension of national jurisdiction is becoming the realpolitik of internet regulation. First of all, governments with internet companies, servers, and infrastructure, or technical operators incorporated on their soil can impose their national laws and regulations on these private actors, with direct transboundary impacts on all foreign users of these services. An often-cited example regarding the United States are the surveillance capacities described in the Snowden revelations. Regarding the reach of law enforcement, the US Cloud Act establishes that warrants for access to electronic evidence can cover data stored by US companies irrespective of where such data are stored. Previous cases involved a Department of US Homeland Security agency seizing domain names belonging to foreign registrants on the sole basis of their registration through a US-based registrar (the RojaDirecta case12) or registry (the Bodog case13). Furthermore, legislations increasingly include clauses establishing extraterritorial reach, such as the General Data Protection Regulation in the EU.14 Finally, litigation also plays a prominent role in setting new global standards, with impacts far beyond the respective jurisdictions. Facebook, for instance, changed its global terms of service after a US court decision on its ‘sponsored stories’ feature.15 Courts increasingly affirm competence regarding services incorporated in other countries merely because they are accessible in their territory, as illustrated by the Yahoo case in Belgium.16 Some difficulties naturally exist in enforcing the resulting judgments, as the national blockade of WhatsApp in Brazil showed.17 Yet local cases can have global impacts. For instance, after the Court of Justice of the European Union Costeja decision (the right to be de-indexed), the French data protection

12  See ‘US authorities give back seized domains of Spanish Rojadirecta site’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, 2012) . 13  See ‘US authorities seize foreign .com gambling site registered in Canada via VeriSign’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, 2012) . 14  See ‘New privacy standards: EU agrees on final draft of its data protection reform’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, 2015) . 15  See ‘US Sponsored Stories Facebook settlement triggers Terms of Service changes for global users’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, August 2013) . 16  See ‘Belgium asserts jurisdiction over Yahoo, refuses MLAT procedure’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, December 2015) . 17  See ‘Brazil blocks WhatsApp for 12 hours with accidental impacts in Venezuela and Chile’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, December 2015) .


authority demanded that Google extend its de-indexing to all versions of its search engine, arguing that the service is based on a single processing of data worldwide.18 Local court decisions can also trigger new international norms for the interaction between states and internet companies. For instance, the right to be de-indexed, initially established by Europe for Google, is now implemented by other search engines such as Microsoft Bing or Yahoo Search19 and has produced ripple effects in Asia and Latin America.20

2.2 Digital Sovereignty

Not all countries are able—or trying—to extend their sovereignty beyond their borders. As a consequence, renationalization is a complementary trend to the extraterritorial extension of sovereignty. The theme of ‘digital sovereignty’ is gaining traction in many jurisdictions, in a context of rising tensions and a sense of powerlessness among public authorities to impose respect for their national laws on foreign-based internet platforms and technical operators. This can mean efforts to literally re-erect borders on the internet through blocking of uniform resource locators or Internet Protocol (IP) addresses via national ISPs—something that has become much easier to implement today than in the early 2000s—or the creation of a limited number of national gateways. So-called ‘mandatory data localization’ laws are also part of this trend. They range from indirect requirements that would impose data localization only as a last resort if companies fail to honour legitimate national requests (see Brazil’s Marco Civil21) to strict requirements stipulating that the data of national citizens processed by foreign companies need to be stored within the national jurisdiction. Other digital sovereignty measures range from strong national intermediary liability regimes,22 requirements to open local offices, and demands for backdoors to encryption technologies, to the imposition of fully-fledged licensing regimes or, most extremely, national internet shutdowns.

18. See ‘French DPA rejects Google’s appeal on global application of “right to be de-indexed”’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, September 2015).
19. See ‘EU DPAs meet with Google, Microsoft, Yahoo to discuss “right to be de-indexed”’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, July 2014).
20. See ‘Constitutional Court of Colombia rules on “right to be de-indexed” case’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, July 2015).
21. See ‘Marco Civil puts Brazilian data stored abroad under Brazilian jurisdiction’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, April 2014).
22. For an overview of national intermediary liability regimes, see World Intermediary Liability Map (WILMap) (a project designed and developed by Giancarlo Frosio and hosted at Stanford CIS).


From Legal Arms Race to Transnational Cooperation   735

2.3 Paradoxes of Sovereignty

Extreme and unrestrained leveraging of traditional territorial criteria introduces two paradoxes. First, as described earlier, national actions upon operators with global reach have impacts on other jurisdictions. Such actions appear contrary to the very principle of non-interference, which is a direct corollary of sovereignty itself. This increases interstate tensions and potential conflicts between jurisdictions. While rewarding the most powerful digital countries, it encourages others to react and adopt measures based on mistrust and the reimposition of national borders. Secondly, strict digital sovereignty measures such as data localization are not scalable globally, including technically: it is highly unlikely that the necessary data centres could be established in, for example, all developing or small countries. Furthermore, although often presented as a tool to prevent surveillance, data localization might in fact increase the likelihood of surveillance: creating local copies stored within the reach of national authorities requires the replication of data, while global processing and cross-border interactions continue. Sovereignty remains relevant in the digital age, but it behoves governments to take into account the potential transborder impact of their national decisions. This is why the recommendation adopted in 2011 by the Committee of Ministers of the Council of Europe established the responsibility of states to avoid ‘adverse transboundary impact on access to and use of the Internet’ when they enforce national jurisdiction.23 Exercised without restraint, both ‘extraterritorial extension of sovereignty’ and ‘digital sovereignty’ measures run contrary to the Kantian categorical imperative that should underpin international internet regulation: any national policy measure that would be detrimental if generalized around the world should not be adopted in the first place.

International norms of cooperation are needed to prevent this legal arms race in cyberspace.

3. Limits to International Cooperation

Managing cross-border commons poses systemic difficulties for the existing international system.24 The Westphalian principles of separation of sovereignties and non-interference actually represent more of an obstacle than a solution for cooperation on cyber issues.

23. Council of Europe, ‘Recommendation of the Committee of Ministers to Member States on the Protection and Promotion of the Universality, Integrity and Openness of the Internet’ CM/Rec (2011)8.
24. See e.g. Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (CUP 1990).


John Palfrey and Urs Gasser25 and Rolf Weber26 rightfully argue that we need more legal interoperability to preserve the global nature of the internet, but substantive harmonization of laws related to the use of the internet seems unattainable and often undesirable. Multilateral efforts have so far proved inconclusive; bilateral arrangements such as mutual legal assistance treaties (MLATs) are in dire need of reform; and the increasing number of informal interactions between public and private actors across borders lacks procedural guarantees.

3.1 Obstacles to Multilateral Efforts

The internet is by nature disruptive, including with respect to the international regulatory system. As Claire Cutler put it, ‘traditional Westphalian-inspired assumptions about power and authority are incapable of providing contemporary understanding, producing a growing disjunction between the theory and the practice of the global system’.27 The idea of a global, all-encompassing internet treaty that would harmonize relevant laws and solve the full range of cyber-cooperation issues is advocated only by some rare actors, who have tried to draw an analogy with the decades-long international negotiations that resulted in the Law of the Sea Convention or the Outer Space Treaty. But the internet is not a natural commons and, as Wolfgang Kleinwächter has argued, ‘while all these international conventions can be seen as great achievements of contemporary international law, it is hard to believe that this is a usable model for policy and law-making for the global Internet’28 due to the newness, volatility, and rapid pace of innovation in the digital realm.29 Since the end of the World Summit on the Information Society (WSIS), intergovernmental discussions in various UN fora have made little progress beyond the wording of the Declaration adopted in Tunis in 2005. The split of the international community in 2012 during the World Conference on International Telecommunications serves as an often-cited example of the absence of global consensus, not only on substance but even on the proper institutional framework for such discussions. In any case, treaty negotiations are notoriously long. Even the most extensive agreement to date tackling cybercrime, the Budapest Convention, was a lengthy process. If formal

25. John Palfrey and Urs Gasser, Interop: The Promise and Perils of Highly Interconnected Systems (Basic Books 2012).
26. Rolf Weber, ‘Legal Interoperability as a Tool for Combating Fragmentation’, Global Commission on Internet Governance Paper Series no. 4 (2014).
27. Claire Cutler, ‘Critical Reflections on the Westphalian Assumptions of International Law and Organization: A Crisis of Legitimacy’ (2001) 27(2) Rev. of Int’l Studies 133–50.
28. Wolfgang Kleinwächter, ‘Global Governance in the Information Age’, Centre for Internet Research, Research Papers (2001).
29. See Joseph Nye, ‘The Regime Complex for Managing Global Cyber Activities’, Global Commission on Internet Governance Paper Series no. 1 (2014).


negotiations took only four years, more than a decade was necessary to actually put the topic on the agenda. Although now signed by more than sixty states around the world (not, however, including several large countries such as Brazil and India), some countries still invoke the fact that it was initially elaborated within the Council of Europe as an argument to refuse to join a regime they did not participate in drafting. Like all international agreements, the Budapest Convention is also difficult to modify in response to rapidly changing technology. It nonetheless remains the only ongoing process in a multilateral setting. In the past few years, many useful declarations have been developed within multilateral organizations at the level of general principles, showing some form of convergence. Still, none of them has been able to move towards developing an operationally implementable regime.

3.2 MLATs: The Switched Network of International Cooperation

Historically, MLATs enabling government-to-government legal cooperation were negotiated to handle rare and rather exceptional cross-border criminal cases. These intergovernmental tools allow public authorities in country A to request assistance in, for instance, accessing user data stored by an operator in country B. Upon receipt of the request, country B examines whether the request is also valid according to its national laws. If so, the data holder in country B is lawfully compelled to submit the data to the authorities in country B, which then share it with the requesting authorities of country A. However, now that cross-border is the new normal on the internet, this system is generally described as ‘broken’. MLATs have at least four structural limitations.

• Speed: MLATs are ill adapted to the speed of the internet and the viral spread of information. In the best cases, an MLAT request from one government to another takes months to be processed; between certain countries it can take up to two years. The very elaborate circuit of validations is legitimately intended to provide procedural guarantees but makes the whole system impracticable.
• Scope: MLATs are often limited to ‘dual incrimination’ cases, that is, they cover only conduct qualified as a crime in the jurisdictions of both the requesting and the receiving country. Given the disparity of national legislations, their relevance is limited, particularly on speech issues (e.g. hate speech and defamation). They are also ineffective when the location of the data is unknown.
• Asymmetry: regardless of the actual physical location of events or involved parties, the MLAT system de facto imposes the law of the recipient country over the law of the requesting one, even if there is no territorial connection to the recipient country other than the incorporation of the targeted platform or operator. An increasing number of countries find this unbalanced, given the dominant role of US-based companies.


• Scalability: the traditional MLAT system can hardly encompass the scale of the internet. A large number of countries around the world do not have MLATs with each other, and establishing such bilateral relations among 190 countries would require more than 15,000 arrangements.30

The MLAT system is the switched network of international cooperation.31 It is in dire need of reform to adapt to the internet age, and reforming it will not be easy. It will require more than simply streamlining existing procedures: creative solutions are needed to address its structural limitations and ensure both transnational due process and efficiency. Recent initiatives to create frameworks for direct requests for cross-border access to electronic evidence include the Cloud Act in the US, a proposal from the European Commission, and the negotiation of an additional protocol to the Budapest Convention. But ensuring the interoperability of these regimes represents a new challenge.
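The scalability figure can be checked with simple combinatorics: full bilateral coverage among n jurisdictions requires one treaty per pair of countries, i.e. n(n−1)/2 treaties.

```latex
% Number of bilateral MLATs needed for full pairwise coverage among 190 countries
\binom{190}{2} = \frac{190 \times 189}{2} = 17\,955
```

which is indeed ‘more than 15,000 arrangements’.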

4. A Dangerous Path

The lack of coordination and the inability of the Westphalian international system to provide the necessary cooperation solutions produce a typical ‘prisoner’s dilemma’ situation. That is, every single actor, forced to use the only tools available to it, is incentivized to make short-term decisions that appear in its immediate interest, though their cumulative effect is at best suboptimal and most likely detrimental to all in the longer term. If we continue to lack appropriate cooperation mechanisms and ‘fall back into managing the national, rather than managing shared cross-border online spaces in a collaborative way’,32 the sum of uncoordinated unilateral actions by governments and private actors can have unintended consequences, with strong negative impacts in economic, human rights, infrastructure, and security areas (see Table 38.1).

4.1 Economic Impacts

In 2014, the Boston Consulting Group estimated the value of the entire digital economy of the Group of Twenty countries alone at US$4.2 trillion, representing 5 to 9 per cent of

30. For an overview of existing MLAT treaties, consult the MLAT Map by the non-governmental organization Access Now.
31. For a comparison between the public switched telephone network and the distributed architecture of internet routing, see Internet Society, ‘The Internet and the Public Switched Telephone Network: Disparities, Differences, and Distinctions’.
32. Paul Fehlinger, ‘Cyberspace Fragmentation: An Internet Governance Debate beyond Infrastructure’ (Internet Policy Review, 17 April 2014).

OUP CORRECTED PROOF – FINAL, 03/27/2020, SPi


Table 38.1 Results of inaction in economic, human rights, infrastructure, and security areas

Economy:
• Demise of globally accessible services and data flows
• Market entry and trade barriers
• Reduced investment in start-ups
• Legal uncertainty stifles innovation
• Disadvantages for developing countries

Human rights:
• Lack of access to justice and redress
• Reduced freedom of expression across borders
• Limits to access to information
• Limits to freedom of assembly in cross-border online spaces

Infrastructure:
• Blurred separation of internet layers
• Reduced network resilience
• Shutdowns
• Facilitation of surveillance
• Encryption wars
• Restrictions on the use of technical tools such as VPNs

Security:
• Eroding of global cybersecurity and trust
• Diplomatic tensions
• Increase of cybercrimes and online terrorism
• Threats to human security and peace

Source: Authors.

total GDP in developed countries,33 while the McKinsey Global Institute estimated that by 2014 cross-border data flows alone had already added around US$2.8 trillion to world GDP. The cross-border nature of the internet and its cloud-based services are at the heart of innovation and growth. This is why the OECD addressed the challenges to internet openness at its June 2016 Ministerial Conference in Mexico, and why the 2016 World Economic Forum meeting in Davos discussed the impact of cyberspace fragmentation. A legal arms race and lack of cooperation would jeopardize growth and stifle innovation and competition. Most established internet companies were able to scale up internationally before the current move towards re-territorialization; the future development of global services, data flows, and the cloud economy is at stake. Investment in start-ups and medium-sized companies (especially those dealing with user-generated content) would decrease because of higher intermediary liability risks and legal uncertainty. Compulsory data localization could constitute a market entry barrier: such requirements could be met only by large, already established operators, limiting innovation and market access for small companies wanting to serve a global market, particularly from developing countries.

4.2 Human Rights Impacts

Work in international organizations such as UNESCO (‘Internet universality’) or the Council of Europe (‘cross-border flow of Internet traffic and Internet freedom’) has

33. See Paul Zwillenberg, Dominic Field, and David Dean, The Connected World: Greasing the Wheels of the Internet Economy (Boston Consulting Group 2014).


established the connection between human rights and the cross-border internet.34 The internet has uniquely fulfilled the promises of Article 19 of the Universal Declaration of Human Rights, allowing everyone to ‘seek, receive and impart information and ideas through any media and regardless of frontiers’,35 enriched the social fabric across borders, and improved our quality of life. Personal communication capacities are augmented, allowing frictionless expression, deliberation, and the holding of opinions across borders. The cross-border internet facilitates the sharing and pooling of resources and provides diasporas with irreplaceable communication tools. It has enabled the creation of critical-mass communities with common interests in social, political, or economic issues regardless of spatial distance, and facilitated collaborative not-for-profit activities that have created tremendous global social value, such as Wikipedia. The uncontrolled re-territorialization of the internet in order to address its misuses could destroy the unprecedented human rights benefits the internet has generated. Ironically, measures such as data localization and decryption could in fact increase opportunities for surveillance rather than reduce them, as well as harm the right to privacy.36 Increased pressure on internet companies to accept direct requests could produce a ‘race to the bottom’, limiting freedom of expression and lowering due process protections. Conversely, the continued absence of affordable cross-border appeal and redress mechanisms for harmed internet users has a serious negative impact on global justice. At the same time, legal uncertainty and the absence of proper cooperation frameworks prevent the necessary fight against abuses such as incitement to violence, harassment, or disinformation.

4.3 Technical Infrastructure Impacts

In 2013, the leaders of the ten organizations responsible for coordination of the internet’s technical infrastructure met in Montevideo, Uruguay, to stress in their joint statement ‘the importance of globally coherent Internet operations, and warn against Internet fragmentation at a national level’.37 In enforcing national laws online in the absence of

34. See UNESCO, ‘Internet Universality: A Means Towards Building Knowledge Societies and the Post-2015 Sustainable Development Agenda’, UNESCO Discussion Paper (2013); Council of Europe, ‘Recommendation of the Committee of Ministers to Member States on the Free, Transboundary Flow of Information on the Internet’ (2015) CM/Rec(2015)6.
35. UN Human Rights Office of the High Commissioner, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue’ (2011) A/HRC/17/27.
36. See UN Human Rights Office of the High Commissioner, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye’ (2015) A/HRC/29/32.
37. Internet Corporation for Assigned Names and Numbers (ICANN), ‘Montevideo statement on the future of Internet cooperation’ (2013).


international cooperation frameworks, there is a temptation to use the technical infrastructure of the internet to address content issues. This, however, blurs a fundamental architectural principle of the internet: the separation of the neutral logical layer (DNS, IP addresses, etc.) from the application layer (online platforms and services). Leveraging the location of registries and registrars to impose the national laws of their country of incorporation on the global content under the country-code top-level domains (ccTLDs) or generic top-level domains (gTLDs) they manage would be a clear extraterritorial extension of sovereignty, given the global impact of a domain seizure. In parallel, generalizing geo-IP filtering to withhold content in specific territories may ultimately lead to forcing Regional Internet Registries to systematically allocate IP addresses on a territorial basis. Such a scenario could complicate routing. With the transition from IP version 4 (IPv4) to IP version 6 (IPv6), it could even facilitate surveillance, should IP addresses be permanently hardwired to specific devices and become identity identifiers. In an effort by internet companies to reduce their multi-jurisdictional liability, unbreakable encryption technologies might lead to a spiral of encryption/decryption conflicts between public and private actors. The imposition of a limited number of internet gateways to connect a territory, in order to facilitate blocking measures, potentially reduces the resilience of the overall technical network. Finally, the banning of technologies such as virtual private networks is not only contrary to Article 13(2) of the Universal Declaration of Human Rights;38 it also reduces the security of transactions and communications. More drastically, the technology for governments to completely shut down the internet in their jurisdiction in the case of non-enforceability of specific requests to intermediaries is becoming increasingly accessible.

4.4 Security Impacts

The absence of agreed-upon frameworks to handle requests across borders has already resulted in diplomatic tensions between a country seeking to enforce its national laws and the country in whose jurisdiction the internet platform or technical operator is actually located. Historic examples include Google’s China exit in 2010,39 the Indian Assam riots in 2012,40 the Innocence of Muslims YouTube video in 2012,41 or Turkey’s

38. See Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res. 217 A(III) (UDHR), Art. 13(2) (‘Everyone has the right to leave any country, including his own, and to return to his country’).
39. See Declan McCullagh, ‘State Dept. presses China ambassador on Google’ (CNET, 22 January 2010).
40. See ‘India cracks down on online content that stirs violence in its jurisdiction, needs US assistance to trace origins’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, August 2012).
41. See ‘Pakistani YouTube block remains intact due to absence of MLAT with US jurisdiction’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, November 2012).


blocking of Twitter in 2014.42 Likewise, debates about MLAT reform are fuelling interstate dissonance. Such international conflicts are likely to increase if nothing is done. It is the duty of states to protect their citizens and maintain public order within the provisions of Article 29 of the Universal Declaration of Human Rights. However, the rapid and viral propagation of incitement to violence (often called ‘digital wildfires’) could lead to disaster if we lack efficient transnational cooperation mechanisms that set standards and procedures for the interactions between states, internet platforms, and users across borders in situations of public order tensions. The international fight against terrorism online is emblematic of this challenge. Meanwhile, cybercrime is on the rise, and most online crimes have a multi-jurisdictional footprint, which makes cooperation across borders necessary to guarantee security online as well as offline. The absence of appropriate regimes to access data across borders further increases the incentives for direct surveillance. Failure to develop the needed frameworks might ultimately lead to a decrease in global cybersecurity and order.

5. Filling the Institutional Gap in Internet Governance

Traditional intergovernmental cooperation mechanisms have so far failed to provide appropriate solutions. Legal harmonization on substance is difficult to achieve, but the costs of inaction are daunting. There is an institutional gap in the internet governance ecosystem that must be filled to adequately address these new challenges. In doing so, following the words of former UN Secretary-General Kofi Annan, we need to be ‘as creative as the inventors of the internet’. Preserving the global nature of the internet while addressing its misuses demands the development of innovative cooperation mechanisms that are as transnational, inclusive, and distributed as the network itself.

5.1 Lessons from the Technical Governance ‘of’ the Internet

Internet governance was famously defined in the United Nations’ WSIS Tunis Agenda in 2005 as ‘the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet’.43

42. See ‘Twitter blocked in Turkish jurisdiction at IP level’ (Internet & Jurisdiction Policy Network—I&J Retrospect Database, March 2014).
43. World Summit of the Information Society (WSIS), ‘Tunis Agenda for the Information Society’ (2005) para. 34.


In this definition, we see a distinction between governance ‘of’ the internet and governance ‘on’ the internet.44 Governance ‘of’ the internet designates the governance of protocols, standards, addresses, and the evolution of the technical architecture. Governance ‘on’ the internet relates to the use of the internet, that is, the applications and services that run on top of the physical and logical layers, as well as internet users’ behaviour. The jurisdictional challenges discussed in this chapter are primarily related to governance ‘on’ the internet. A complex and robust network of institutions has emerged over time to handle governance ‘of’ the internet. It comprises, inter alia, the Internet Engineering Task Force and the World Wide Web Consortium (W3C) for the development of internet and web standards; the five Regional Internet Registries allocating IP addresses; the thirteen root servers and their multiple mirrors; ICANN; and the numerous registries and registrars distributing second-level domain names. In dealing with the internet’s logical layer, each of these institutions covers the five stages necessary for the ‘development and application’ of governance regimes: issue-framing, drafting, validation, implementation, and review. Policies developed through their bottom-up participatory processes can have wide-ranging transnational implications, such as when ICANN regulates the allocation of the semantic spectrum of gTLD extensions or the accreditation of market operators (registrars and registries). Together, these institutions form the necessary ecosystem of governance that has enabled the internet to grow from the limited ambit of its research background to serve several billion people and permeate almost all human activities. This ecosystem of transnational institutions is fundamentally distributed: each entity deals with a specific issue, with loosely coupled coordination. It was developed progressively over time as policy needs arose. Each entity has its own specific institutional structure and internal procedures. Most importantly, they operate on a fundamental principle: the open participation of all relevant stakeholders in the processes dealing with issues they impact or are impacted by.

5.2 Evolution of the Ecosystem: Governance ‘on’ the Internet

By contrast, the institutional ecosystem addressing issues related to governance ‘on’ the internet is embryonic at best or, as Mark Raymond and Laura DeNardis elegantly expressed it in 2015, ‘inchoate’.45 The Internet Governance Forum (IGF) is the main outcome of the WSIS process. In its ten years of existence, it has demonstrated its capacity to act every year as a ‘watering

44. See Bertrand de La Chapelle, ‘The Internet Governance Forum: How a United Nations Summit Produced a New Governance Paradigm for the Internet Age’ in Christian Möller and Arnaud Amouroux (eds), Governing the Internet: Freedom and Regulation in the OSCE Region (OSCE 2007) 27.
45. Mark Raymond and Laura DeNardis, ‘Multistakeholderism: Anatomy of an Inchoate Global Institution’ (2015) 7(3) Int’l Theory 572–616.


hole’, where all actors identify challenges, share experiences, and present their work. However, despite its undeniable success and essential role, not to mention the emergence of numerous national and regional spin-offs, it still covers at best only the first stages of the policymaking cycle: agenda setting and issue framing. Beyond some noteworthy efforts to document best practices, no efficient mechanisms yet exist to enable ongoing intersessional work on specific issues to produce, let alone implement and enforce, the needed transnational arrangements for governance ‘on’ the internet. The NETmundial Roadmap, an outcome of the major 2014 multistakeholder conference, highlighted the jurisdiction issue as an important topic for the global community.46 To preserve the cross-border nature of the internet by default for generations to come, we need to collectively fill the institutional gap in governance ‘on’ the internet. This is in line with the ambitions of the global internet governance community to ‘further develop the Internet governance ecosystem to produce operational solutions for current and future Internet issues’, and to preserve the internet as a ‘unified and unfragmented space’ in a collaborative manner.47 In doing so, we need to keep in mind the lessons behind the success of the existing institutional ecosystem for governance ‘of’ the internet. The robustness of the policies and solutions it produces is directly related to its fundamental characteristic of being transnational, open, and organized in a distributed way. Given the diversity of the modes of organization of technical governance organizations, this does not mean the mere replication of a single model, but rather taking adequate inspiration from these principles to develop governance ‘on’ the internet. In the specific case of developing new policy standards and transnational cooperation mechanisms for access to data, content restrictions, and domain seizures, the institutional gap of governance ‘on’ the internet lies at the intersection of four policy areas: legal interoperability, economy, human rights, and cybersecurity.

5.3 Enabling Issue-Based Multistakeholder Cooperation

The multistakeholder approach was explicitly endorsed by more than 180 countries at the heads-of-state level in the Tunis Agenda in 2005, and reconfirmed at the UN General Assembly High-Level Meeting on WSIS+10 in December 2015. Filling the institutional gap requires neither the creation of new international organizations nor giving sole responsibility to any existing one, as internet issues are relevant to the mandates of a plurality of entities. A more creative approach is needed: the formation of issue-based governance networks.

46. See ‘NETmundial Multistakeholder Statement’, São Paulo, Brazil (24 April 2014) 2.IV.
47. NETmundial Initiative, ‘Terms of Reference’ (no date).


From Legal Arms Race to Transnational Cooperation   745

In line with the 2014 recommendations of the High-Level Panel on Global Internet Cooperation and Governance Mechanisms,48 chaired by the President of Estonia, Toomas Ilves, developing transnational mechanisms for policy cooperation requires ongoing, multistakeholder, and issue-based processes.

• Ongoing, because the current proliferation of one-shot conferences, fora, panels, and workshops, however useful to foster mutual understanding, is not sufficient to move towards operational solutions. Developing networks, trust, and a common approach to issues and objectives cannot be achieved in disconnected series of two-hour sessions.
• Multistakeholder, because no single stakeholder group working alone can grasp all the technical, political, legal, security, social, and economic dimensions of an issue, although grasping them all is a condition for the development of balanced regimes. Furthermore, the likelihood of rapid implementation and scalability is increased if the diverse actors that will have to contribute to the implementation of a regime have also participated in its elaboration.
• Issue-based, because each topic involves different sets of concerned stakeholders, or even different individuals and units within each entity. Efficient policy innovation therefore requires focus on a specific issue to ensure inclusion of all relevant actors.
Based on the lessons of the Internet & Jurisdiction Policy Network, some key factors for the success of such issue-based policy networks are:

• framing the problem as an issue of common concern for all actors;
• ensuring the neutrality of the convener and facilitation team/secretariat;
• involving all six stakeholder groups: states, internet platforms, technical operators, academia, civil society, and international organizations;
• engaging a critical mass of actors with sufficient diversity to be representative of the various perspectives and able to implement potential solutions;
• enabling coordination between individual initiatives and major processes to enhance policy coherence;
• constructing and expanding a global network of key actors;
• creating trust among heterogeneous actors and adopting a shared vernacular;
• combining smaller working groups and reporting on progress to make the process manageable and transparent;
• informing stakeholders about relevant trends around the world to foster evidence-based policy innovation; and
• providing sufficient geographic diversity from the outset to allow the scalability of adoption of any emerging policy solution.

48  ICANN, ‘Towards a Collaborative, Decentralized Internet Governance Ecosystem: Report by the Panel on Global Internet Cooperation and Governance Mechanisms’ (2014).


Addressing jurisdictional issues on the internet and pre-empting the current legal arms race requires enhanced efforts to catalyse multistakeholder cooperation on specific topics, such as cross-border requests for domain seizures, content restrictions, and access to data.

6.  Towards Transnational Frameworks

Such innovative multistakeholder networks can produce scalable and adaptive policy standards that guarantee procedural interoperability and transnational due process in relations between public and private actors.

6.1  Procedural Interoperability

International human rights frameworks already represent an overarching substantive reference at the global level. Human Rights Council resolutions have reaffirmed that they apply online as well as offline.49 However, rapid substantive legal harmonization at a more detailed level regarding use of the internet is unrealistic, given the diversity of national laws, which are often considered strong elements of national identity. Meanwhile, cross-border requests for domain seizures, content takedowns, and access to data pose everyday problems that require urgent action, as the stakes involved are high.

In contrast to traditional interstate cooperation, these increasingly frequent cross-border interactions engage heterogeneous public and private actors. They are conducted in all shapes and formats, through broadly diverse communication channels, and often without clear and standardized procedures or sufficient transparency. In that context, prioritizing the development of shared procedural standards has several benefits:

• it provides a field of cooperation that helps build trust among stakeholders and paves the way for constructive discussions on contentious substantive norms;
• it establishes interoperability among heterogeneous actors by providing a shared vernacular and mechanisms for their interactions, not unlike the Transmission Control Protocol/IP-enabled interoperability between heterogeneous networks;
• it prepares a future digitization of the request treatment workflow, in order to reduce the delays that plague current mechanisms, such as MLATs;
• most importantly, it is an opportunity to incorporate due process requirements in operational frameworks by design, in order to improve transnational interactions and safeguard users’ rights across borders.

49  See UN Human Rights Council, ‘Resolution on the Promotion, Protection and Enjoyment of Human Rights on the Internet’ (2014) A/HRC/26/L24.



6.2  Governance through Policy Standards

Norms and procedures developed through such multistakeholder processes or organizations such as the Internet & Jurisdiction Policy Network can be considered ‘policy standards’. As innovative transnational cooperation frameworks, they can establish mutual commitments between the different stakeholders, with:

• clear distribution of responsibilities;
• specific norms, procedural mechanisms, or guarantees; and
• clear decision-making criteria.

As new forms of transnational soft law, such operational governance frameworks can, in the context of addressing jurisdiction on the internet, guarantee procedural interoperability and due process. In doing so, they can either help to reform existing modes of interstate cooperation (e.g. the MLAT system) or fill current governance voids that require new sets of norms and standards.

Implementation and enforcement of such policy standards can leverage a combination of existing tools and cover the range from simple best practices to strict normative obligations. Public and private actors have different options to operationalize these shared norms, such as states referencing policy standards in their administrative procedures, or internet platforms and technical operators doing so in their terms of service. Multistakeholder policy standards can even be institutionally embedded in national laws, endorsed by international organizations, or enshrined in new international treaties. Recognition by the existing international governance ecosystem is a prerequisite for the development of policy standards by a critical mass of stakeholders. Here, the Internet & Jurisdiction Policy Network, as an organization enabling the development of such policy standards, can again serve as an example.
The regular Global Conferences of the Internet & Jurisdiction Policy Network are institutionally supported by a unique coalition of international organizations: the Council of Europe, the European Commission, ICANN, the OECD, United Nations ECLAC, and UNESCO. First achievements of stakeholders in the Internet & Jurisdiction Policy Network have been reported to and referenced by key international processes: the 2018 G7 Cyber Group Report endorsed its work and the Ottawa Roadmap; the 2018 Paris Peace Forum, a gathering of over eighty heads of state and international organizations, showcased the organization; and the organization was invited to report to the G20 Multi-Stakeholder Conference on Digitalisation and the United Nations Internet Governance Forum, as well as the UN Secretary-General’s High-Level Panel on Digital Cooperation.

Drawing lessons from the governance ‘of’ the internet, a major advantage of standards is their potential to scale. Multistakeholder policy standards are based on consensus among different stakeholder groups, which increases the likelihood of successful and efficient adoption. They can more easily be implemented across heterogeneous public and private governance systems, which is the key to creating interoperability. Moreover,


such policy standards can be improved and adapted more quickly than conventional treaties, which allows them to develop further as the internet ecosystem evolves.

7. Conclusions

In his Structure of Scientific Revolutions,50 Thomas Kuhn describes paradigm shifts that modify the model underpinning a particular scientific field when it no longer reflects or correctly explains observations. The Copernican revolution in astronomy is the most familiar example, triggered by the observations of Galileo’s telescope. Similarly, political paradigm shifts occur when a particular model of societal organization struggles to adequately address the problems of the time.

Rooted in the treaties of the Peace of Westphalia of the seventeenth century, our international system, based on the territoriality of jurisdictions, the separation of sovereignties, and non-interference, struggles to handle the transborder digital realities of the twenty-first century. The internet acts like Galileo’s telescope, showing that traditional principles and approaches can become as much an obstacle as a solution to the jurisdiction challenges regarding cross-border data flows and intermediaries.

Addressing issues related to governance ‘on’ the internet requires a paradigm shift: from international cooperation only between states, to transnational cooperation among all stakeholders; from pure intergovernmental treaties to policy standards; and from intergovernmental institutions to issue-based governance networks. Far from a rejection of traditional international cooperation, however, this is proposed as a constructive extension—a way to look at current practices in a new, generalized light. In physics, two theories coexist at the same time: relativity theory applies at high velocities in space; but in normal conditions, classical Newtonian equations still allow us to build bridges and predict trajectories. Both have their respective zones of validity. Likewise, the type of transnational cooperation envisioned here in no way suppresses or reduces the relevance and authority of existing governance frameworks, in particular national governments.
On the contrary, multistakeholder processes can produce policy standards that help reform existing interstate cooperation mechanisms, and policy standards can even later be enshrined by traditional multilateral organizations. The global community needs to step up efforts to avoid the negative consequences of a legal arms race, preserve the global nature of the internet, and address its misuse. We need innovative cooperation mechanisms that are as transnational as the internet itself, and the necessary institutional ecosystem and cooperation processes to produce them.

50  Thomas Kuhn, The Structure of Scientific Revolutions (U. Chicago Press 1962).


Index

A

A&M Records Inc. v Napster Inc [2001]  49, 100, 351–2, 357
Access to information  see Freedom to access and impart information
Accessory liability  see Secondary liability
Accountability
  algorithmic decision-making
    access to data  682–3
    aggregation  683
    improvement to information systems  684
    intermediary liability  685–7
    measuring effect on user behaviour  683–4
    need for more accountable systems  687–8
    problem of bias  681–2
    underlying rationale  679–80
    user access to organizational factors  684
    to users  680–1
    verification  683
  challenges to rule of law by content moderation
    algorithmic content moderation  674–5
    disclosure of data  675
    trade secrets  675–6
  DMCA system  174
  enhanced ‘responsibilities’  613–15
    algorithmic decision-making  26
    China  547–55
    different mechanisms and legal challenges  625–9
    DSM strategy  296–9, 305, 309
    enforcement tools  615–25
    EU legislative developments  481
    for intermediaries  613–14
    monitoring and filtering  544
    powerful slogan for policymakers  630

  importance  116
  lack of legal legitimacy of private actors  484
  liability distinguished  93–6
  maturing of the regulatory environment  445
  normative justifications  71–2
  obligations of UGC Platform Provider  267
  orders  40, 49, 52, 54
  path towards increased accountability  348
  promotion of self-review by online intermediaries  676
  recent decisions of ECtHR  346
  Russia  29
Accountability orders  see Injunctions
Actual knowledge  see Knowledge-based liability
Administrative enforcement
  advantages  591
  AGCOM (Italy) in practice
    proportionality  608–10
    protected subject matter  604–5
    remedies  606–8
    scope of violations  605–6
    statistics  603
    transparency  603–4
  China  282
  common features  589–90
  domestic law  595–6
  essential features
    abbreviated procedure  599–600
    costs  600
    double track  601–2
    independence of enforcement bodies  596
    judicial review  602
    parties  597–8
    procedure  598–9


Administrative enforcement (cont.)
    protected subject matter and violations  596–7
    remedies  600–1
    safeguards against abuse  602
    transparency  601
  expansion in several countries  586–7
  France
    establishment of HADOPI  587–8
    fundamental rights  594–5
  Greece
    domestic law  595–6
    impact of US pressure  587
    introduction of new mechanism  589
  impact of US pressure  587
  Italy
    adoption of EU law  589
    AGCOM in practice  603–10
    domestic law  595
    impact of US pressure  587
  legal provisions
    EU law  593–5
    TRIPs Agreement  592
  overview  24
  remedies  590–1
  Russia  24
  Spain
    first European jurisdiction  588–9
    impact of US pressure  587
    underlying concerns  588
  underlying issues  591–2
Africa
  first-generation liability limitations
    Ghana  220
    South Africa  216–19
    Uganda  221–2
    Zambia  220–1
  interstate cooperation
    adoption of Convention on Cyber Security and Personal Data Protection  224
    role of AU  223–4
    role of OAU  222–3
  overview  11
  recent legislative initiatives
    Ethiopia  226–7
    Kenya  227–8

    Malawi  225–6
    South Africa  228–33
  role of African Union  234–5
  slow rise of discourse  214–15
Akdeniz v Turkey [2014]  139, 142, 150, 567–75
Algorithmic decision-making
  accountability
    access to data  682–3
    aggregation  683
    improvement to information systems  684
    intermediary liability  685–7
    measuring effect on user behaviour  683–4
    need for more accountable systems  687–8
    problem of bias  681–2
    underlying rationale  679–80
    user access to organizational factors  684
    to users  680–1
    verification  683
  allocation of liability  104
  challenges to rule of law by content moderation  674–5
  data gathering and transparency  8
  issue of transparency and accountability  29
  monitoring and filtering  16
  notice and takedown  113–14, 116
  reflection of policy’s assumptions and interests  139
  shift to ‘enhanced responsibility’  26
Applicable law
  difficulty with cross-border conflicts  729
  ICANN-Registry Agreement  637, 645
  Russia
    ‘Bloggers’ Law’  654
    content categories  658–9
  terms of service imposed on users  692
Argentina
  blocking orders against innocent third parties  22
  monitoring and OSPs as ‘gate-keepers’  549
  notice and takedown  261
  right to be forgotten - Belén Rodriguez Case
    blocking of harmful content  507


    explicitly illegal content  506–7
    liability of search engines  505–6
    no general obligation to monitor  506
Auction sites
  as ‘internet intermediaries’  43–4
  need for constructive cooperation  378
  trade mark infringements  49, 399
Australia
  Competition & Consumer Commission (ACCC) inquiry  693
  confusing and incoherent approach  236–7
  consumer protection law  239–40
  copyright  244–5
  defamation  240–3
  fundamental questions about responsibility  249–50
  general rule on fault  237–8
  geographical scope of blocking  701–2
  ‘Good Samaritan’ problem  237
  hate speech  243
  overview  11–12
  remedies  238–9
  website-blocking orders  375

B

Bias
  algorithmic decision-making
    aggregation  683
    how users learn to adapt  684
    measuring effect on user behaviour  683–4
    problem of accountability  681–2
    verification  683
  DNA’s copyright ADRP  644
  responsibilities of OSPs  122
  search engines  125–6
  trusted notifier programme  641–2
  voluntary disclosures  675
Blocking orders  see also Notice and takedown (NTD)
  administrative enforcement
    AGCOM (Italy) in practice  606–8
    essential features  600–1
    overview  590–1
  applicable principles  417–18

  challenges to rule of law by content moderation  672
  Chinese approach to IP infringements  290
  Cicarelli case in Brazil  192–3
  difficulties of identifying EU standards  584–5
  enhanced ‘responsibilities’  624–5
  EU law  566
  ‘fair balance’ doctrine  139, 563
  freedom to conduct business
    availability of reasonable alternatives  579–80
    costs and complexity  576–9
  fundamental rights
    effect of recent EU reform  583–4
    underlying problems  567–8
  human rights
    availability of alternative means of accessing information  575–6
    enforceability of user rights  569–72
    ‘fair balance’ doctrine  574–5
    freedom of expression  568–9
    risk of overblocking  572–3
  injunctions
    applicable principles  417–18
    threshold conditions  416
    trade mark infringements  418–19
  ‘internet intermediary’ defined  48–9
  issue of effectiveness  581–3
  jurisdictional scope
    Australian approach  701–2
    Canadian approach  700
    EU law  702–7
    increasingly important question  699–700
    key issues at stake  708
    US approach  700–1
    Yahoo France case  32
  most efficient option for rightholders  566
  right to be forgotten - Belén Rodriguez Case  507
  threshold conditions  416
  trade marks  374–5
  US regulation of domain names
    new gTLDs in the DNS  636–7
    technically easy exercise  634
    troubling developments  646


Brazil
  criminal liability  201
  dual judicial system  202–3
  Marco Civil da Internet
    47 USC § 230 immunity compared  169–70
    broad principles and expansive guidelines  191
    cases pending before Federal Supreme Court  205–6
    digital constitutionalism  206–11
    general provisions  198–9
    immunities  199–201
    landmark statute  190–1
    legislative process  196–7
    relevant High Court decisions  203–5
    shift in balance of power  191
  monitoring and OSPs as ‘gate-keepers’  548
  pre-MCI measures
    Cicarelli case  192–3
    class action against Rondônia  193
    common rationale  195
    defamation  194
    filtering  193
    important pending cases  195–6
    intellectual property rights  194–5
    search engines  193–4
Business  see Freedom to conduct business

C

Canada
  case studies
    hate speech  462–5
    terrorist content  459–62
  common law approaches
    defamation  446–8
    reform proposals  452
  Equustek case
    2015 BC Court of Appeal ruling  713–14
    creation of three internet-related legal risks  718–19
    danger of extraterritorial application of court decisions  725–6
    impact of shifting more responsibility to internet companies  724–5
    influence on other decisions  720–4
    original BC court order  712–13

    Supreme Court of Canada decision  715–18
    Supreme Court of Canada hearing  714–15
    US application by Google  719–20
  geographical scope of blocking  700
  no general statutory model for intermediary liability  446
  notice and notice (NN) mechanisms  533–4
  notice-and-notice-plus (NN+)
    defamation  452–5
    fake news  458–9
    hate speech  455–6
    NCDII  458
    other harmful speech  456–8
  reform proposals
    common law approaches  452
    notice-and-notice-plus (NN+)  452–5
    series of principles on the role of intermediaries  451
  statutory approaches
    broad immunity model  449–50
    complicated regulatory frameworks  448–9
    defamation  450
  terms of service imposed on users  697
  US–Mexico–Canada Agreement (USMCA)
    adoption of s. 230-like protections  165
    promotion of DMCA model  185–7
    provisions on effective notice  177–80
Cartier International AG v British Telecommunications plc [2018]  17, 23, 82, 87, 102, 375, 415–19, 454, 578, 582, 587, 593, 600, 718
Causative secondary liability  66–7
Child pornography
  AU Convention  224
  Brazil  201
  Canadian approach  457
  copyright infringements  87
  DSM strategy  306
  Equustek effect  717
  EU law  478
  explicit illegality of content  506
  Healthy Domains Initiative  642
  Indonesia  266


  Malawi  225
  online search manipulation  617
  possible new s 230 exceptions  170
  recent legislative initiatives in South Africa  231–2
  Russian regulatory developments  653
  US panic in 1995  157–8
Chile
  completed FTA with US  180–2
  notice and notice (NN) mechanisms  533–4
  right to be forgotten  507–10
China
  default pre-safe-harbour position  253
  DMCA safe harbour provisions compared
    basic laws and regulations  253–5
    neither strict liability nor true safe harbour  258
    notice and takedown  255–8
  human rights issues  128–9, 131–2
  liability for IP infringements
    copyright  286–92
    evolution of converged law  292–4
    overview  276–80
    trade marks  280–6
  monitoring and OSPs as ‘gate-keepers’  548–9
  ‘ratio’ principles  373–4
Classification of liability
  overview  63
  primary liability  63–4
  secondary liability
    causative secondary liability  66–7
    common design  68–9
    criminal liability  69
    general principles  65–6
    procurement  67–8
    relational liability  69–70
Codes of conduct
  countering illegal hate speech  27, 311, 482–3, 629, 673
  enhanced ‘responsibilities’  621–2
  Ghana  220
  South Africa  218, 235
  voluntary measures  376, 586
Colombia
  pending FTA with US  183–4
  right to be forgotten  510–12

Common design  68–9
Communications Decency Act (CDA)
  47 USC § 230 immunity
    Brazil’s Marco Civil da Internet compared  169–70
    e-Commerce Directive compared  167–8
    EU’s ‘right to be forgotten’ compared  165–7
    exceptionalist treatment of internet  162–5
    future in peril  170–1
    Germany’s NetzDG compared  169
    panic over pornography  157–8
    pre-§ 230 law  155–7
    protections for defendant  158–60
    statutory exclusions  160–2
    UK defamation law compared  168–9
Compensation  see Damages and compensation
Competition
  EU antitrust inquiries  105
  ‘fair balance’ approach  378–9
Constitutionalism  see Digital constitutionalism
Content  see also Users
  challenges to rule of law by content moderation
    barriers to accountability  674–6
    circumvention of separation of powers  672–4
    effect of dual role of platforms  670–1
    promotion of self-review by online intermediaries  676
    proposal for enhanced ‘responsibilities’  676–8
    role of intermediaries  669–70
    use of technological measures  672
  DSM strategy  298–300
  enhanced ‘responsibilities’  619–20
  Manila Principles on Intermediary Liability  5
  overremoval of content and under-use of counter-notification mechanism  115–16
  right to be forgotten - Belén Rodriguez Case
    blocking of harmful content  507
    explicitly illegal content  506–7


Content (cont.)
  rights to content  626
  Russian government reliance on intermediaries  657–61
  unfair commercial practices  307–9
  US regulation
    administration and operation of DNS  633–4
    assumption of private obligations  632–3
    effect of public disenchantment with Silicon Valley  632
    expanding IP enforcement in DNS  637–8
    history of IP enforcement in DNS  635–6
    little public law change  631–2
    new gTLD programme  636–7
    plans for copyright-specific UDRP  642–5
    troubling developments  645–6
    ‘trusted notifier’ agreements  638–42
Copyright  see also Trade marks
  administrative enforcement
    adoption in France  587–8
    advantages  591
    AGCOM (Italy) in practice  603–10
    common features  589–90
    domestic law  595–6
    essential features  596–602
    EU law  593–5
    expansion in several countries  586–7
    Greece  589
    impact of US pressure  587
    Italy  587
    remedies  590–1
    Spain  588–9
    TRIPs Agreement  592
    underlying concerns  588
    underlying issues  591–2
  Australian approach  244–5, 246
  blocking orders
    difficulties of identifying EU standards  584–5
    effect of recent EU reform  583–4
    freedom of expression  567–75
    freedom to conduct business  576–80
    issue of effectiveness  581–3
  burden of costs regarding copyright infringement  117–19

  Canadian approach
    broad immunity model  449–50
    complicated regulatory frameworks  448–9
    notice-and-notice model  450
  Chinese approach to infringements
    exclusion of storage receiving financial benefits  290–1
    General Principles of the Civil Law 1987  286–7
    inducement or contributory liability  291
    non-interference principle  287–9, 292
    notice and takedown  289–90
    provision of necessary measures  290
  comparative analysis of fault and causation  325–8
  dangers of knowledge-based liability  251–2
  DMCA
    convenience of DMCA approach in Latin America  187–8
    current promotion of DMCA model  185–7
    drawbacks of DMCA approach in Latin America  189
    general principles  173
    model for FTAs  172–3
    multimedia social networking services  174–5
    procedure  174
    safe harbour provisions  110, 174
  EU Copyright Directive
    general monitoring obligation  559–63
    impact on fundamental rights  563–4
    legislative culmination of global trend  564–5
    OCSSP defined  558–9
    transformation of OCSSPs  557–8
  EU law
    comparative analysis of fault and causation  326–8
    concluding remarks  333–4
    current incomplete framework  316–19
    proposal for harmonized approach  328–33
    vexed and multi-dimensional topic  315–16


  European Digital Single Market (DSM) strategy
    draft Directive  300–1
    final version of Directive  303–4
    form of direct liability  302
    special regime for hosting large amount of works  301–2
  exclusion from 47 USC § 230 immunity  161
  freedom to conduct business
    availability of reasonable alternatives  579–80
    costs and complexity  576–9
  India’s position and DMCA compared
    basic laws and regulations  258–60
    impact of Singhal decision  261–2
    interpretation of Intermediary Guidelines 2011  260–1
    notice and takedown  262–3
  injunction-based solutions  324–5
  ‘internet intermediary’ defined
    Directive on Copyright in the Digital Single Market 2019  54–6
    functional approach  49–50
    typological approach to classification  51
  Japan’s position and e-Commerce Directive compared
    basic laws and regulations  263–4
    narrower in scope  265–6
    notice and takedown  264–5
  main area of wrongdoing  78–80
  Malaysia’s position and DMCA compared  268–70
  ‘mere conduit’ liability exemption  246
  monitoring  see Monitoring
  national regimes
    France  320–1
    Germany  321
    United Kingdom  320
  notice and takedown
    accuracy of notices  110–11
    need for wider range of publicly accessible datasets  119–20
    over-enforcement of non-infringing material  112–15
    volume of takedown notices  108–10
  private ordering  555–6

  right of communication to the public  64, 385, 561
  secondary infringement in US
    common law approach  350–3
    continuing legal uncertainty  360–1
    Copyright Office Study  365–6
    Department of Commerce Internet Policy Task Force  364–5
    Digital Millennium Copyright Act (DMCA)  353–6
    government enforcement measures  362–3
    heavily litigated controversy  349–50
    House Judiciary Committee Copyright Review  365
    knowledge-based liability  356–7
    overview  14–15
    right and ability to control  358–60
    slowing of reform activity and litigation  367–8
    Stop Online Piracy Act and Companion Bills  363–4
    use of technological measures to reduce infringement  361–2
    wilful blindness  357–8
  secondary infringement outside US
    EU law  366–7
    international law  366
  tort-based solutions  322–5
  trade mark controls compared  385–7
  US plans for copyright-specific UDRP
    DNA’s Copyright ADRP  643–4
    Healthy Domains Initiative  642–3
    PIR’s SCDRP  644–5
Costa Rica
  completed FTA with US  182–3
  fundamental rights  141
Costs
  administrative enforcement  600
  blocking orders  576–9
  burden of costs regarding copyright infringement  117–19
  DNA’s copyright ADRP  643
  injunctions  101–2
  jurisdiction  578
  justifications for secondary liability  75–6


Criminal liability  see also Hate speech
  Brazil  201
  Ethiopia  226–7
  exclusion from 47 USC § 230 immunity  161
  illegal content  297–8
  illegal content within DSM strategy  297–300
  Kenya  227–8
  right to be forgotten  491–4
  secondary liability  69
  South Africa  228–9
  Uganda  221–2

D

Damages and compensation
  aggregation  98–9
  disclosure of trade secrets  437–8
  monetary liability
    negligence-based liability  60–2
    types of order  60
  scope  96–8
Data protection
  access to user data by enforcement agencies  697–9
  adoption of Convention on Cyber Security and Personal Data Protection by AU  224
  algorithmic accountability  682–3
  ‘fair balance’ approach  378–9
  MCI (Brazil)  198–9
  right to be forgotten
    ‘fair balance’ doctrine  491–4
    Google Spain  488–90
    removal of sensitive data  496–8
  role of OSPs in protection of fundamental rights  148
Defamation
  Australian approach  240–3
  Canadian approach
    common law  446–8
    notice-and-notice-plus (NN+)  452–5, 465–6
    reform proposals  452
    statutory provisions  450
  China’s position and DMCA compared
    basic laws and regulations  253–5

    neither strict liability nor true safe harbour  258
    notice and takedown  255–8
  dangers of knowledge-based liability  251–2
  ‘internet intermediary’ defined
    functional approach  50
    status as measure of legal applicability  53
    typological approach to classification  50
  main area of wrongdoing  82–5
  Malaysia  269–70
  pre-§ 230 US law  156–7
  pre-MCI measures in Brazil  194
  recent legislative initiatives in South Africa  231
  South Korea’s position and DMCA compared
    basic laws and regulations  270–1
    error in adoption of s 512  274–5
    expansionist approaches by courts  271–3
  UK law and 47 USC § 230 immunity compared  168–9
  US SPEECH Act 2010  170
Definitions  see Taxonomy of internet intermediaries
Delfi AS v Estonia [GC] [2015]  19, 22, 147, 346, 372, 472–3, 481, 484, 518–19, 550–1
Digital constitutionalism (DC)  206–11, 213
Digital Millennium Copyright Act (DMCA)
  China’s position compared
    basic laws and regulations  253–5
    neither strict liability nor true safe harbour  258
    notice and takedown  255–8
  convenience of DMCA approach in Latin America  187–8
  current promotion of DMCA model  185–7
  drawbacks of DMCA approach in Latin America  189
  general principles  173
  importance for commercial innovation  108
  India’s position compared
    basic laws and regulations  258–60
    impact of Singhal decision  261–2
    interpretation of Intermediary Guidelines 2011  260–1
    notice and takedown  262–3
  Malaysia’s position compared  268–70


  notice and takedown procedure  528
  overview  106
  safe harbour provisions  174
  secondary copyright infringement  353–6
  South Korea’s position compared
    basic laws and regulations  270–1
    error in adoption of s 512  274–5
    expansionist approaches by courts  271–3
  stable framework for managing liability  112
Digital sovereignty
  challenge to the Westphalian model  729
  China  279
  impact of legal uncertainty on stakeholders  730
  looming risk of legal arms race  730, 733–4
  need for comity and mutual respect  726
  ‘personal moral sovereignty’  59
  purposes of the OAU  222
  Russia  652
  ‘sovranism’  31
  underlying paradoxes  735
Direct liability  see Primary liability
Disclosure obligations
  challenges to rule of law by content moderation  675
  main area of wrongdoing  87–8
  trade secrets  436–8
Disgorgement of profits
  benefits of consequences-based remedies  103
  non-monetary liability  62–3
  secondary liability rules  73
Disinformation  see also Misleading content
  effect of legal uncertainty  740
  EU Code of Practice  483, 622
  Facebook’s increased focus  627
  main area of wrongdoing  85–6
Due process
  administrative enforcement  588
  administrative enforcement systems  24–5
  DNA’s copyright ADRP  643
  ECTA and its Guidelines  235
  empirical research
    false positives  117
    imposition of different contractual barriers  117

    notice and takedown  116–17
    overremoval of content and under-use of counter-notification mechanism  115–16
  Internet and Jurisdiction project  5
  Manila Principles  454
  MLAT system  738
  notice-and-notice-plus  452
  notice-and-notice (NN) mechanisms  535–6
  PIR’s SCDRP  644
  safeguards for freedom of expression
    appeal procedure  542–4
    clear and precise notifications  541–2
    general v specific monitoring  540–1
  towards transnational frameworks of cooperation
    development of ‘policy standards’  747–8
    procedural interoperability  746

E

E-Commerce Directive 47 USC § 230 immunity compared  167–8 administrative enforcement  593 centrepiece of copyright framework  316 definition of ‘internet intermediary’  41–3 injunctions 425–7 interplay with trade secrets  438–9 interplay with unfair commercial practices 431–3 Japan’s position compared basic laws and regulations  263–4 narrower in scope  265–6 notice and takedown  264–5 knowledge-based liability  41–4 ‘mere conduit’ liability exemption  566 monitoring 545 regulatory framework for freedom of expression 475–6 scope of safe harbour provisions  424–5 secondary copyright infringement outside US 366–7 taxonomy of internet intermediaries  41–3 trade mark infringements primary liability  410 proliferation of obligations  396 secondary liability  413

Empirical research burden of costs regarding copyright infringements 117–19 due process false positives  117 imposition of different contractual barriers 117 notice and takedown  116–17 overremoval of content and under-use of counter-notification mechanism  115–16 key sub-fields of inquiry  107 notice and takedown accuracy of notices  110–11 need for wider range of publicly accessible datasets  119–20 over-enforcement of non-infringing material 112–15 volume of notices  108–10 overview 104–8 Enforcement access to user data by enforcement agencies 697–9 administrative enforcement see Administrative enforcement blocking see Blocking orders filtering see Filtering mapping intermediary liability enforcement 21–5 notice-and-takedown see Notice-and-takedown (NTD) private ordering see Private ordering role of private business in Russia compliance dilemma for private companies 663–6 government reliance on intermediaries 657–63 overview 647–9 significant regulatory restrictions  668 secondary copyright infringement in US government enforcement measures  362–3 use of technological measures to reduce infringement 361–2 trade marks blocking injunctions  374–5 emergence of ius gentium of common principles 376–8 EU law  396–8 ‘fair balance’ approach  378–9

Germany 398–402 internet service providers defined  406 need for better measures  380 need for tailored liability regime  402–3 need to focus on legal measures  369–70 Netherlands 402 overview 16–17 peculiarities of common law  17 ‘ratio’ principles  373–4 remarkable climate change  381–3 similarities between approaches to intermediary and accessory liability 370–3 two key questions  404–5 trade secrets  439–40 Enhanced ‘responsibilities’ algorithmic decision-making  26 China 547–55 different mechanisms and legal challenges circulation of solutions  629 corporate social responsibility  626–8 involuntary cooperation in IP enforcement 628 markets 625–6 public deal-making with public authorities 628–9 DSM strategy  296–9, 305, 309 enforcement tools blocking orders  624–5 changes to search results  617 codes of conduct  621–2 filtering 622–4 follow-the-money strategies  618–19 ‘graduated response’  615–16 overview 615 payment blockades  618–19 private content regulation  619–20 standardization 620–1 EU legislative developments  481 impact of Equustek case 2015  724–5 for intermediaries  613–14 monitoring and filtering  544 powerful slogan for policymakers  630 proposals for content moderation  676–8 Equustek Solutions Inc. v Google [2017] 32, 380, 700 BC Court of Appeal ruling  713–14

creation of three internet-related legal risks 718–19 danger of extraterritorial application of court decisions  725–6 impact of shifting more responsibility to internet companies  724–5 influence on other decisions  720–4 original BC court order  712–13 Supreme Court of Canada decision  715–18 Supreme Court of Canada hearing  714–15 US application by Google  719–20 Ethical issues justifications for secondary liability accountability of causally significant conduct 71–2 fictional attribution  72–3 overview 70–1 protection of underlying primary right  73 voluntarily assumed duties  73–4 moral responsibilities of OSPs access to information  124–7 civic role in mature information societies 133–6 duty of ethical governance  136–7 human rights  127–33 overview 122–4 role of OSPs in protection of fundamental rights data protection  148 freedom to access and impart information 140–5 freedom to conduct business  148–9 importance 152 innovation 149 intellectual property rights  149–52 overview  8, 138–9 overview of users’ rights  139 privacy 147–8 Ethiopia  226–7, 233 EU law see also European Digital Single Market (DSM) strategy 47 USC § 230 immunity compared e-Commerce Directive compared  167–8 ‘right to be forgotten’  165–7 administrative enforcement fundamental rights  594–5 three separate sets of provisions  593–4

antitrust inquiries  105 AudioVisual Media Service Directive DSM Directive compared  306–7 new liability regime  305–6 public consultation on review  304–5 blocking injunctions applicable principles  417–18 threshold conditions  416 copyright comparative analysis of fault and causation 326–8 concluding remarks  333–4 current incomplete framework  316–19 proposal for harmonized approach 328–33 vexed and multi-dimensional topic 315–16 Copyright Directive general monitoring obligation  559–63 impact on fundamental rights  563–4 legislative culmination of global trend 564–5 OCSSP defined  558–9 transformation of OCSSPs  557–8 corporate social responsibility  627 definition of ‘internet intermediary’ alternative terms  45–7 Directive on Copyright in the Digital Single Market 2019  54–6 e-Commerce Directive  41–3 Enforcement Directive  39 Information Society Directive  39–41 key Court of Justice decisions  43–4 no synchronized usage  52 status as measure of legal applicability 53–4 direct liability in relation to user activities applicability to less egregious scenarios 343–4 communication by platform operators 340–3 far-reaching issues of Stichting Brein case 347–8 overview 335 right of communication to the public 335–40 safe harbour provisions  345–7

EU law (cont.) e-Commerce Directive see e-Commerce Directive enhanced ‘responsibilities’ for intermediaries 614 freedom to access and impart information 146–7 freedom to conduct business  148–9 fundamental status of IP  150 geographical scope of blocking  702–7 ‘mere conduit’ liability exemption  167, 476 monitoring OSPs as ‘gate-keepers’  555 OSPs as ‘mere conduits’  545–7 notice and takedown procedure  528 ‘ratio’ principles  374 regulatory framework for freedom of expression Council of Europe  468–74 primary and secondary EU law  474–9 right to be forgotten impact of Costeja case  503–5 primary publishers  498–501 problem of the widespread privacy-damaging information  486–8 search engines  488–98 scope of safe harbour provisions  424–5 secondary copyright infringement outside US 366–7 stricter approach to internet intermediaries 694 terrorist content  460, 478 trade mark infringements dealings in counterfeit and grey goods 409–10 e-Commerce Directive  410 injunctions  414, 420 primary and secondary liability distinguished 405 primary liability  406–7 proliferation of obligations  396–8 remarkable climate change  381–3 secondary liability  410–11 use of sign complained of  407–8 trade secrets enforcement against third parties  439–40 harmonization measures for remedies 437–8

interplay with e-Commerce Directive 438–9 overview 435–6 unlawful acquisition, use and disclosure 436–7 unfair commercial practices applicability to OIs  428–30 defined 427–8 effectiveness of consumer protection 433–4 interplay with e-Commerce Directive 431–3 Unfair Commercial Practices Directive application in practice  308–9 professional diligence and transparency duties 308 UCPD Guidance  307–8 video-sharing platforms  476–8 European Digital Single Market (DSM) strategy AudioVisual Media Service Directive DSM Directive compared  306–7 new liability regime  305–6 public consultation on review  304–5 copyright draft Directive  300–1 final version of Directive  303–4 form of direct liability  302 special regime for hosting large amount of works  301–2 Copyright Directive general monitoring obligation  559–63 impact on fundamental rights  563–4 legislative culmination of global trend 564–5 OCSSP defined  558–9 transformation of OCSSPs  557–8 emergence of new liability regime  309–11 issue of illegal content  298–300 overview  12–13, 295–7 special emphasis on three key pillars 297–8 Unfair Commercial Practices Directive application in practice  308–9 professional diligence and transparency duties 308 UCPD Guidance  307–8

Extraterritoriality  see also Jurisdiction Equustek case 2015 BC Court of Appeal ruling  713–14 creation of three internet-related legal risks 718–19 danger of extraterritorial application of court decisions  725–6 impact of shifting more responsibility to internet companies  724–5 influence on other decisions  720–4 original BC court order  712–13 Supreme Court of Canada decision 715–18 Supreme Court of Canada hearing 714–15 US application by Google  719–20 key threat to global internet  31 looming risk of legal arms race  733–4 right to be forgotten  20, 31, 494–6 shaping and applying of legal norms  57

F

‘Fair balance’ doctrine blocking orders freedom of expression  574–5 freedom to conduct business  579–80 protection of fundamental rights  417 EU copyright law  317–18, 328, 333 filtering, blocking, and other monitoring measures  139, 563 fundamental rights  378 right to be forgotten  491–4 Fault-based liability  see Negligence-based liability Filtering  see also Monitoring algorithmic decision-making  16 challenges to rule of law by content moderation 672 enhanced ‘responsibilities’  622–4 ‘fair balance’ doctrine  139, 563 freedom to access and impart information 146 moral responsibilities of OSPs  128 pre-MCI measures in Brazil  193 privacy violations  148 shift towards more active control  106

trade mark infringements and proliferation of obligations EU law  396–8 Germany 398–402 need for tailored liability regime  402–3 Netherlands 402 overview 16–17 remarkable climate change  381–3 Follow-the-money strategies change in perspective  630 defendants of first resort  50 enhanced ‘responsibilities’  618–19 wide application  27 France copyright regime comparative analysis of fault and causation 326 injunctions 324 ‘primary liability’ approach  320–1 tort-based solutions  322–3 ‘graduated response’  535 ‘internet intermediaries’ defined  44 monitoring and OSPs as ‘gate-keepers’ 551–2 notice and notice (NN) mechanisms  533–4 notice and takedown procedure  529 Yahoo France case  32, 710–12 Freedom of expression blocking orders availability of alternative means of accessing information  575–6 copyright infringements  568–9 enforceability of user rights  569–72 ‘fair balance’ doctrine  574–5 risk of overblocking  572–3 Brazil 201 continuing challenge and source of tension 484–5 EU regulatory framework Council of Europe  468–74 primary and secondary EU law  474–9 ‘fair balance’ approach  378–9 impact of EU Copyright Directive  563–4 impact of notice and action  527–8 increasing focus on online expression  467 Inter American System of Human Rights 517–20

Freedom of expression (cont.) Joint Declaration of three Special Rapporteurs (2011)  4–5 jurisdictional issues  693 limitation on trade mark rights commercial freedom  389–91 social and cultural freedom  391–4 moral responsibilities of OSPs  128 notice and notice (NN) - risks and safeguards foreseeability 534–5 procedural fairness  535–6 severity of the response  536–8 notice-and-stay-down (NSD) - risks and safeguards appeal procedure  542–3 clear and precise notifications  541–2 general v specific monitoring  540–1 notice and takedown (NTD) - risks and safeguards abusive requests  530–1 foreseeability 530 notification and counter-notification  531–2 role of African Union  234–5 Russia 666–7 shifts in the geometrical patterns of regulation growing number of new media actors 479 importance of internet intermediaries 479–80 need for strong and effective measures 481 ‘online platforms’  480 overview 467–8 shift of focus towards lawmaking and policymaking 480–1 video-sharing platforms  481 wider-sweeping policy line  481–4 Freedom to access and impart information blocking orders  567–8 ethical implications of the role of OSPs benefits of negligence-based system  145 EU law  146–7 impact of EU Copyright Directive  563–4

moral responsibilities of OSPs incidental exposure to diverse information 126 increased risks from AI  126–7 search engines  124–6 right of communication to the public applicability to less egregious scenarios 343–4 China  253–4, 257, 278, 287 communication by platform operators 340–3 copyright  64, 385, 561 DSM strategy  303 international law  336–7 meaning and scope of concept  337–40 purpose-driven interpretation  335–6 UK 320 right to be forgotten Argentina - Belén Rodriguez Case  505–7 Chile 507–10 Colombia 510–12 impact of Costeja case  503–5 Mexico 512–14 Peru 514–16 primary publishers  498–501 problem of the widespread privacy-damaging information  486–8 search engines  488–98 trends and future developments in Latin America 520–1 Uruguay 516–17 Freedom to conduct business blocking orders availability of reasonable alternatives 579–80 costs and complexity  576–9 ethical implications of the role of OSPs 148–9 Fundamental rights  see also Human rights administrative enforcement  594–5 blocking orders difficulties of identifying EU standards 584–5 effect of recent EU reform  583–4 freedom of information  567–8 underlying problems  567–8

ethical implications of the role of OSPs data protection  148 freedom to access and impart information 140–5 freedom to conduct business  148–9 importance 152 innovation 149 intellectual property rights  149–52 overview  8, 138–9 overview of users’ rights  139 privacy 147–8 EU copyright law  317–18 impact of EU Copyright Directive  563–4 regulatory framework for freedom of expression legal consistency within Europe  475 legal status of Charter  474–5 right to be forgotten  489 special emphasis  5

G

Gatekeepers central role in the management of resources 133 civic role in mature information societies 133–6 European Digital Single Market (DSM) strategy 295 free speech online  693 monitoring by OSPs Argentina 549 Brazil 548 China 548–9 EU law  555 France 551–2 Germany 552–3 global shift towards heightened standard of liability  547 human rights  549–51 Italy 553–4 Spain 554–5 reducing claimants’ enforcement costs  75 regulating communications policy  77 role of intermediaries  691 transformation of OCSSPs  557–8 trustees of the greater good  613

Germany copyright regime comparative analysis of fault and causation 325–6 difficulties of finding direct liability  321 injunctions 325 tort-based solutions  323–4 increased reporting requirements for platform operators  105 monitoring and OSPs as ‘gate-keepers’ 552–3 NetzDG and 47 USC § 230 immunity compared 169 notice and takedown procedure  529 profound repercussions from innocent-sounding rule  251 trade mark infringements and proliferation of obligations  398–402 Ghana 220 ‘Good Samaritan’ problem  172, 237 Google France SARL v Louis Vuitton Malletier SA [2010]  43, 81, 145, 371, 396, 407 Google LLC v CNIL [2019]  31, 495–6, 702, 704–5 Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González [2014]  19, 31, 50, 143–4, 165–6, 492–4, 501, 503–4, 518–20, 733 Governance moral responsibilities of OSPs  136–7 need for paradigm shift  748 need to fill institutional gap in governance benefits of multistakeholder approach 744–6 gap at intersection of four policy areas 743–4 nexus to jurisdiction  730–2 ‘of ’ and ‘on’ the internet distinguished 742–3 towards transnational frameworks of cooperation development of ‘policy standards’  747–8 procedural interoperability  746

‘Graduated response’ change in perspective  630 enhanced ‘responsibilities’  615–16 France 535 Italy 588 notice-and-notice 533 South Korea  533 Greece administrative enforcement domestic law  595–6 essential features  596–602 impact of US pressure  587 introduction of new mechanism  589 transparency 601 GS Media BV v Sanoma Media Netherlands BV [2016]  15, 64, 91, 316, 318–19, 338–43, 347, 410, 559

corporate social responsibility  626–7 direct liability in relation to user activities 346 enhanced ‘responsibilities’ for intermediaries 613–14 Inter American System of Human Rights 517–20 major jurisdictional challenge  728 monitoring and OSPs as ‘gate-keepers’ 549–51 moral responsibilities of OSPs  128–9 ‘all things considered’ approach  132–3 China 128–9 Chinese government’s requests  131–2 harmful uses of the web  127–8 need to consider context  129–30 striking a balance  128 procedural interoperability  746 recent legislative initiatives in South Africa 229–31 regulatory framework for freedom of expression Art 10 ECHR  468–9 corpus of ‘internet’ case law  471–4 expansive interpretation by CJEU 469–70 key principles  470–1 role of African Union  234–5 Russia 666–7 status of IP  150–1 uncoordinated approach to jurisdiction 739–40

H

Harassment effect of legal uncertainty  740 Kenya 227 main area of wrongdoing  85–6 Malawi 225 Hate speech Australian approach  243 challenges to rule of law by content moderation 672–4 continuing challenge and source of tension 484–5 Delfi judgment  472–3 DSM strategy  298–9, 305–6 EU code of conduct  27, 311, 482–3, 629, 673 main area of wrongdoing  85–6 need for strong and effective measures  481 notice-and-notice-plus (NN+)  455–6, 462–5 recent legislative initiatives in South Africa 229–31 Human rights  see also Fundamental rights blocking orders availability of alternative means of accessing information  575–6 enforceability of user rights  569–72 ‘fair balance’ doctrine  574–5 freedom of expression  568–9 risk of overblocking  572–3

I

Immunities  see also Safe harbour provisions 47 USC § 230 immunity Brazil’s Marco Civil da Internet compared 169–70 e-Commerce Directive compared  167–8 EU’s ‘right to be forgotten’ compared 165–7 exceptionalist treatment of internet 162–5 future in peril  170–1 Germany’s NetzDG compared  169 panic over pornography  157–8

pre-§ 230 law  155–7 protections for defendant  158–60 statutory exclusions  160–2 UK defamation law compared  168–9 Australian approach degree of participation  246–7 extension of safe harbour provisions 245–6 knowledge-based liability  247–9 Canada 449–50 Kenya 227–8 Malawi 225–6 MCI (Brazil)  199–201 monetary liability  61–2 South Africa  232–3 SPEECH Act 2010  170 India DMCA safe harbour provisions compared basic laws and regulations  258–60 impact of Singhal decision  261–2 interpretation of Intermediary Guidelines 2011  260–1 notice and takedown  262–3 stricter approach to internet intermediaries 694 Individual responsibility challenge of strict liability  60 impact of secondary liability  74 intermediary liability as an exception  83 policy justification for imposing liability 88 principle of liability  59–60 Indonesia safe harbour policy  266 User Generated Content  267–8 Information  see Freedom to access and impart information Injunctions  see also Blocking orders administrative enforcement  590–1 costs 101–2 disclosure of trade secrets  437–8 e-Commerce Directive  425–7 EU copyright law  317 against innocent parties  93 ‘internet intermediary’ defined  39–40, 49, 52, 54 national copyright regimes  324–5

non-monetary liability  62–3 scope and goals  99–101 trade mark infringements EU law  414 other kinds of injunction  420 UK law  415–16 website-blocking orders  418–19 trade marks  374–5 Innovation EU protection of trade secrets  435 foci of empirical research  107 importance of cross-border nature of internet and cloud-based services  739 importance of safe harbour provisions  108 justifications for secondary liability  76–7 role of OSPs in protection of fundamental rights 149 Intellectual property rights see also Trade marks 47 USC § 230 immunity  161 aims of Handbook  3 Australian approach to copyright  244–5, 246 Chinese approach to infringements copyright 286–92 evolution of converged law  292–4 overview 276–80 trade marks  280–6 dangers of knowledge-based liability  251–2 exclusion from 47 USC § 230 immunity 161 ‘internet intermediary’ defined Directive on Copyright in the Digital Single Market 2019  54–6 functional approach  49–50 typological approach to classification  51 involuntary cooperation in IP enforcement 628 main areas of wrongdoing copyright 78–80 trade marks  81–2 pre-MCI measures in Brazil  194–5 role of OSPs in protection of fundamental rights 149–52 US content regulation administration and operation of DNS 633–4

Intellectual property rights (cont.) assumption of private obligations  632–3 effect of public disenchantment with Silicon Valley  632 expanding enforcement in DNS  637–8 history of IP enforcement in DNS  635–6 little public law change  631–2 new gTLD programme  636–7 plans for copyright-specific UDRP  642–5 troubling developments  645–6 ‘trusted notifier’ agreements  638–42 WTO jurisdiction  175 Intent-based liability accessories 102 Argentina 461 Australia  239, 243, 246–7 Canadian approach defamation 452–3 terrorist content  461 China  285–7, 291 common design  68–9, 81 copyright infringements accessory liability  412 China 286 due cause defence  393 ‘multiple faults’ paradigm  329 parodies 113 use of signs  403 different policy challenges  103 double intent  93 least-cost avoiders  76 Ethiopia 226–7 future challenges  678 individual responsibility  59 injunction-based solutions  327 Kenya 227 ‘multiple faults’ paradigm  329–30 peer-to-peer technologies  352 primary liability and secondary liability distinguished 95–6 profit-making intentions  340–3, 347 ‘ratio’ principles  371 scope of damages  96–7 South Africa  228–31 South Korea  275 tort-based solutions  322–4 tort-based thresholds in English law  89

Intermediary liability challenges posed by algorithmic decision-making 685–7 classification of liability overview 63 primary liability  63–4 secondary liability  65–70 defining governance issue of our time  3 justifications for secondary liability normative considerations  70–4 practical functions  74–8 ‘liability’ defined monetary liability  60–2 moral agency and individual responsibility 59–60 non-monetary liability  62–3 main areas of wrongdoing breaches of regulatory obligations  86 copyright 78–80 defamation 82–5 disclosure obligations  87–8 disinformation 85–6 harassment 85–6 hate speech  85–6 trade marks  81–2 mapping of key issues enforcement 21–5 fundamental notions  6–8 international fragmentation  9–14 international private law issues  30–3 private ordering  25–30 subject-specific regulation  14–21 monetary liability immunity 61–2 negligence-based liability  60–2 strict liability  60 types of order  60 typology of liability benefits of conceptual architecture  57 classification of liability  63–70 importance of intermediaries’ conduct 88–9 justifications for secondary liability 70–8 ‘liability’ defined  58–63 main areas of wrongdoing  78–88 overview 6–7

Intermediate liability  see Secondary liability International law administrative enforcement  592 adoption of Convention on Cyber Security and Personal Data Protection by AU 224 enforcement cooperation requests  5 MLATs as broken system  737–8 right of communication to the public  336–7 secondary copyright infringement outside US 366 US free trade agreements completed agreements - Chile  180–2 completed agreements - Costa Rica  182–3 convenience of DMCA approach in Latin America  187–8 current promotion of DMCA model 185–7 drawbacks of DMCA approach in Latin America 189 modelled on DMCA  172–3 pending agreements  183–5 provisions on actual knowledge  177–80 underlying rationale  175–6 Internet intermediaries jurisdictional issues  692–4 meaning and scope alternative terms  45–7 broad view approach preferred  56 comparative law approaches  38 EU law  39–44 functional approach  47–50 little consensus or clarity  37–8 overview 6 typological approach to classification  38, 50–6 typology of liability benefits of conceptual architecture  57 classification of liability  63–70 importance of intermediaries’ conduct 88–9 justifications for secondary liability 70–8 ‘liability’ defined  58–63 main areas of wrongdoing  78–88 overview 6–7

Internet intermediary liability see Intermediary liability ‘Internet service providers’ (ISPs) see Online service providers (OSPs) Italy administrative enforcement adoption of EU law  589 AGCOM in practice  603–10 domestic law  595 essential features  596–602 impact of US pressure  587 ‘graduated response’  588 ‘internet intermediaries’ defined  44 monitoring and OSPs as ‘gate-keepers’ 553–4

J

Japan e-Commerce Directive safe harbour provisions compared basic laws and regulations  263–4 narrower in scope  265–6 notice and takedown  264–5 notice and takedown  274–5 Jurisdiction  see also Extraterritoriality access to user data by enforcement agencies 697–9 administrative enforcement  588–92 core issue  727 costs 578 developing trend  707 England and Wales  407, 415–16 Equustek case 2015 BC Court of Appeal ruling  713–14 creation of three internet-related legal risks 718–19 danger of extraterritorial application of court decisions  725–6 impact of shifting more responsibility to internet companies  724–5 influence on other decisions  720–4 original BC court order  712–13 Supreme Court of Canada decision 715–18 Supreme Court of Canada hearing 714–15 US application by Google  719–20

Jurisdiction (cont.) factors determining applicable law  729 geographical scope of blocking Australian approach  701–2 Canadian approach  700 EU law  702–7 increasingly important question 699–700 key issues at stake  708 US approach  700–1 impact of legal uncertainty on stakeholders 729–30 importance 691–2 Internet & Jurisdiction Policy Network  728 limits to international cooperation disruptive nature of internet  736–7 MLATs as broken system  737–8 need for more legal interoperability 735–6 looming risk of legal arms race digital sovereignty  733–4 extraterritoriality 733–4 increasing domestic pressures  732 underlying paradoxes of sovereignty  735 need for paradigm shift in governance  748 need to fill institutional gap in governance benefits of multistakeholder approach 744–6 gap at intersection of four policy areas 743–4 governance ‘of ’ and ‘on’ the internet distinguished 742–3 nexus of internet governance  730–2 role and protection of internet intermediaries 692–4 Russia 651 South Africa  229 terms of service imposed on users Canadian approach  697 US approach  695–6 towards transnational frameworks of cooperation development of ‘policy standards’  747–8 procedural interoperability  746 transnational nature of the internet  728–9 two major challenges  728 Uganda 221–2

unintended consequences of lack of coordination economic impacts  738–9 human rights  739–40 security 741–2 technical infrastructure  740–1 website blocking  567 WTO IP jurisdiction  175 Yahoo France case  710–12

K

Kenya  227–8, 233 Knowledge-based liability Argentina - Belén Rodriguez Case facts of case  505–6 previous and actual knowledge  506 Australia  230, 239, 241, 244–8 blocking orders  416, 425 Canadian approach to defamation  447–9, 452 changing geometry of EU law  473, 475–7 Chinese approach to IP infringements overview 276–80 trade marks  280–6 copyright infringements in US actual knowledge or ‘red flag’ knowledge 356–7 common law approach  351–2 Digital Millennium Copyright Act  353, 355 direct liability  338–9 DSM strategy  300, 302 e-Commerce Directive  41–4, 413 emerging approaches  375, 379 Ethiopia 226 free trade agreements with the United States Chile 181 Costa Rica  182–3 text of agreement  178–9 freedom of expression  530 Germany 398–9 India 258–61 Japan 263–6 joint tortfeasance  412 Kenya 227 liability in EU copyright 317–19 tort-based solutions  322–31

Malaysia 269–70 monetary liability  61 monitoring  545, 549, 559 notice-and-takedown mechanism  528 platform operators  340–5 primary and secondary liability distinguished 79 procurement 67 ‘ratio’ principles  370–1 safe harbour provisions  432 secondary copyright infringement in US actual and ‘red flag’ knowledge  356–7 wilful blindness  357–8 secondary infringers  345–7 secondary wrongdoers  67, 74 South Africa  216–18, 228–30 South Korea  270–6 trade secrets  438–9 underlying dangers  251–2 United States  252 US Copyright Act  42, 46

L

Liability  see Intermediary liability L’Oréal SA v eBay Int’l AG [2011]  39, 43–4, 49, 52, 101, 300, 317–18, 330, 342, 346–7, 382, 390, 396–7, 406–7, 411, 413, 416, 426–7, 431, 439, 539, 547, 555, 559, 561, 577, 590, 593, 600

M

Malawi  225–6, 233 Malaysia defamation 269–70 DMCA safe harbour provisions compared 268–70 Manila Principles on Intermediary Liability  5, 25, 452–4, 533 Marco Civil da Internet 47 USC § 230 immunity compared  169–70 broad principles and expansive guidelines 191 cases pending before Federal Supreme Court 205–6 digital constitutionalism  206–11 general provisions  198–9 immunities 199–201 landmark statute  190–1

legislative process  196–7 relevant High Court decisions  203–5 shift in balance of power  191 ‘Mere conduit’ liability exemption copyright 246 DSM strategy  296 EU law  167, 476, 566 global shift to gatekeepers  547–55 immunity against criminal sanctions  69 ‘intermediaries’ 53 New Zealand  42 ‘no-monitoring’ obligations  545–7 publishers 84 secondary liability  79 Metro-Goldwyn-Mayer Studios, Inc. v Grokster, Ltd [2005]  349, 352–3 Mexico right to be forgotten  512–14 US–Mexico–Canada Agreement (USMCA) adoption of s. 230-like protections  165 promotion of DMCA model  185–7 provisions on effective notice  177–80 website-blocking orders  22 Misleading content  297–8, 307–9 see also Disinformation Monetary liability immunity 61–2 negligence-based liability  60–2 strict liability  60 types of order  60 Monitoring  see also Filtering algorithmic decision-making  16 challenges to rule of law by content moderation 672 EU Copyright Directive general monitoring obligation  559–63 impact on fundamental rights  563–4 legislative culmination of global trend 564–5 ‘fair balance’ doctrine  139, 563 freedom to access and impart information 146 moral responsibilities of OSPs  128 ‘no monitoring principle’ Ghana 220 Malawi 225–6 notice-and-stay-down (NSD)  540–1

Monitoring (cont.) OSPs as ‘gate-keepers’ Argentina 549 Brazil 548 China 548–9 EU law  555 France 551–2 Germany 552–3 global shift towards heightened standard of liability  547 human rights  549–51 Italy 553–4 Spain 554–5 OSPs as ‘mere conduits’  545–7 overview  22, 544 privacy violations  148 private ordering  555–7 right to be forgotten - Belén Rodriguez Case 506 Russian government reliance on intermediaries 661–3 shift towards more active control  106 Moral agency basic principle  59 challenge of strict liability  60 intermediary liability as an exception  70, 83 liability flowing  59 policy justification for imposing liability 88 primary liability  64 principle of liability  59–60 Multimedia social networking services DMCA 174–5 Facebook’s increased focus on disinformation 627 as ‘internet intermediaries’  44 moral responsibilities  129 pre-MCI measures in Brazil  192–3 private ordering  555 widespread regulatory antipathy  171

N

Negligence-based liability
  Argentinian approach  549
  benefits of consequences-based approach  103

  copyright  287, 544
  cost-saving effects  75
  defamation
    Africa  231
    common law approach  217
  different policy challenges  103
  difficulty to converge standards  102
  EU and US legal frameworks  545
  freedom of expression  145
  harmonizing copyright liability in EU  323–33
  impact of new terminology  630
  Indonesia  268
  inherent built-in protections for fundamental rights  546–7
  Marco Civil da Internet  194
  monetary liability  60–2
  patent law in Germany  93
  perception of OSPs as passive players  546
  right to be forgotten  493
  secondary copyright infringement in US  350–3
  secondary infringers  98, 100
  supplementation in German law  92
  trade secrets  430
  US approach  97
Netherlands
  approach to filtering  401, 402
  rights to content  626
  trade mark infringements  377
  website-blocking orders  581
Network of Centers
  mapping and comparative analysis exercises  4
Normative issues  see Ethical issues
Notice-and-action (N & A)
  impact on freedom of expression  527–8
  overview  21, 525–6
  worldwide variations in approach  543
Notice-and-notice (NN)
  commonly encountered notice-and-action mechanism  526
  general principles  532–3
  risks and safeguards for freedom of expression
    foreseeability  534–5


    procedural fairness  535–6
    severity of the response  536–8
Notice-and-notice-plus (NN+)
  broader notion of liability  445
  case studies
    hate speech  462–5
    terrorist content  459–62
  codification of rules for defamation  465–6
  defamation  452–5
  fake news  458–9
  hate speech  455–6
  key themes  466
  NCDII  458
  other harmful speech  456–8
  overview  445–6
Notice-and-stay-down (NSD)
  general principles  556–7
  primarily a judicial construct  539–40
  risks and safeguards for freedom of expression
    appeal procedure  542–3
    clear and precise notifications  541–2
    general v specific monitoring  540–1
Notice-and-takedown (NTD)  see also Notice-and-action (N & A)
  accuracy of notices  110–11
  administrative enforcement  590–1
  China’s position and DMCA compared  255–8
  Chinese approach to IP infringements
    copyright  289–90
    trade marks  284–5
  DMCA
    general principles  173
    multimedia social networking services  174–5
    procedure  174
    safe harbour provisions  174
  DMCA procedure  528
  due process  116–17
  Equustek case
    2015 BC Court of Appeal ruling  713–14
    creation of three internet-related legal risks  718–19
    danger of extraterritorial application of court decisions  725–6

    impact of shifting more responsibility to internet companies  724–5
    influence on other decisions  720–4
    original BC court order  712–13
    Supreme Court of Canada decision  715–18
    Supreme Court of Canada hearing  714–15
    US application by Google  719–20
  EU law  528
  general principles  528
  Ghana  220
  India’s position and DMCA compared  262–3
  initiatives to propose solutions  5
  Japan’s position and e-Commerce Directive compared  264–5
  need for wider range of publicly accessible datasets  119–20
  over-enforcement of non-infringing material  112–15
  risks and safeguards for freedom of expression
    abusive requests  530–1
    foreseeability  530
    notification and counter-notification  531–2
  South Africa  217–18
  South Korea’s position and DMCA compared
    error in adoption of s 512  274–5
    expansionist approaches by courts  271–3
  US free trade agreements
    completed agreements - Chile  180–2
    completed agreements - Costa Rica  182–3
    convenience of DMCA approach in Latin America  187–8
    current promotion of DMCA model  185–7
    drawbacks of DMCA approach in Latin America  189
    pending agreements  183–5
    provisions on actual knowledge  177–80
    underlying rationale  175–6
  volume of notices  108–10
  Zambia  220–1



O

Online intermediaries  see Internet intermediaries
Online intermediary liability  see Secondary liability
Online service providers (OSPs)
  enhanced ‘responsibilities’  613–15
  inadequate framework for liability and responsibility  3
  moral responsibilities
    access to information  124–7
    civic role in mature information societies  133–6
    duty of ethical governance  136–7
    human rights  127–33
    overview  122–4
  protection of fundamental rights
    data protection  148
    freedom to access and impart information  140–5
    freedom to conduct business  148–9
    importance  152
    innovation  149
    intellectual property rights  149–52
    overview  8, 138–9
    overview of users’ rights  139
    privacy  147–8
  terminology  45–7
  trade mark infringements  406
Organisation for Economic Co-operation and Development (OECD)
  challenges from legal uncertainty  730
  Colombia’s 2018 entry into the Organization  184
  ‘internet intermediary’ not defined  47
  Ministerial Conference in Mexico 2016  739
  recommendations on Principles for Internet Policy Making  4
Organization for Security and Co-operation in Europe (OSCE)
  Communiqué on Open Journalism  4–5, 143
  monitoring and filtering  547
  notice and takedown  518
  special rapporteur for Africa  234

P

Payment blockades
  change in perspective  630
  enhanced ‘responsibilities’  618–19
  wide application  27
Peru
  pending FTA with US  185
  right to be forgotten  514–16
Platform operators
  increased reporting requirements  105
Platforms  see Communications platforms
Policy issues
  governance
    development of ‘policy standards’  747–8
    gap at intersection of four policy areas  743–4
  justification for imposing liability  88
  justifications for secondary liability  77–8
  negligence-based and intent-based liability  103
  OECD recommendations on Principles for Internet Policy Making  4
  remedies  102–3
  US Department of Commerce Internet Policy Task Force  364–5
Pornography  see Child pornography
Primary liability
  EU law on user activities
    applicability to less egregious scenarios  343–4
    communication by platform operators  340–3
    far-reaching issues of Stichting Brein case  347–8
    overview  335
    right of communication to the public  335–40
    safe harbour provisions  345–7
  remedies
    aggregation of damages  98–9
    consequences and reasons distinguished  94–6
    cost of injunctions  101–2
    interface between primary and secondary liability  91–2
    scope and goals of injunctions  99–101
    scope of damages  96–8


  right to be forgotten
    criteria for weighing interests at stake  500–1
    less intrusive legal measures  502
    newspaper’s digital archives  498–500
  trade marks
    dealings in counterfeit and grey goods  409–10
    e-Commerce Directive  410
    use of sign complained of  407–8
    use of sign in relevant territory  408–9
Privacy
  exclusion from 47 USC § 230 immunity  161
  ‘fair balance’ approach  378–9
  Inter American System of Human Rights  517–20
  MCI (Brazil)  199
  right to be forgotten
    Argentina - Belén Rodriguez Case  505–7
    Chile  507–10
    Colombia  510–12
    impact of Costeja case  503–5
    Mexico  512–14
    Peru  514–16
    primary publishers  498–501
    problem of the widespread privacy-damaging information  486–8
    search engines  488–98
    trends and future developments in Latin America  520–1
    Uruguay  516–17
  role of OSPs in protection of fundamental rights  148
Private ordering
  amorphous notion of responsibility  630
  automated or algorithmic filtering  622
  Canada  454
  emerging common international principle  16
  emerging industry practice  555–7
  enforcement role of private business in Russia
    compliance dilemma for private companies  663–6
    government reliance on intermediaries  657–63

    overview  647–9
    significant regulatory restrictions  668
  enhanced ‘responsibilities’
    circulation of solutions  629
    corporate social responsibility  626–8
    enforcement tools  615–25
    involuntary cooperation in IP enforcement  628
    markets  625–6
    public deal-making with public authorities  628–9
  filtering  see Filtering
  function of secondary liability  75
  increased responsibility achieved through markets  625–6
  ius gentium of voluntary measures
    emergence of international common approaches  376–8
    freedom of expression, competition and data-protection rights  378–9
  mapping intermediary responsibility  25–30
  monitoring  see Monitoring
  online search manipulation  617
  proliferation of new programmes  350
  US content regulation
    administration and operation of DNS  633–4
    assumption of private obligations  632–3
    expanding enforcement in DNS  637–8
    history of IP enforcement in DNS  635–6
    new gTLD programme  636–7
    troubling developments  645–6
    ‘trusted notifier’ agreements  638–42
  video-sharing platforms  479–80
Procedural justice  see Due process
Procurement  67–8

Q

Qui facit per alium facit per se  72

R

‘Ratio’ principles
  China  373–4
  similarities between approaches to intermediary and accessory liability  370–3
  UK approach  374


Regulation
  aims of Handbook  3
  ethical governance distinguished  136–7
  EU framework for freedom of expression
    Council of Europe  468–74
    primary and secondary EU law  474–9
  main area of wrongdoing  86
  search engines  171
  US content regulation
    administration and operation of DNS  633–4
    assumption of private obligations  632–3
    expanding enforcement in DNS  637–8
    history of IP enforcement in DNS  635–6
    new gTLD programme  636–7
    troubling developments  645–6
    ‘trusted notifier’ agreements  638–42
Remedies  see also Enforcement
  administrative enforcement
    AGCOM (Italy) in practice  606–8
    essential features  600–1
    overview  590–1
  Australia  238–9
  benefits of consequences-based approach  103
  blocking orders  see Blocking orders
  consequences and reasons distinguished  94–6
  damages
    aggregation  98–9
    scope  96–8
  difficulties with convergence  102–3
  disclosure of trade secrets  437–8
  injunctions
    costs  101–2
    scope and goals  99–101
  injunctions against innocent parties  93
  interface between primary and secondary liability  91–2
  lack of scholarly attention  90–1
  monetary liability
    types of order  60
  notice and takedown (NTD)  see Notice and takedown (NTD)
  overlap of three pillars  93–4
  policy issues  102–3

Responsibilities  see Enhanced ‘responsibilities’
Right of communication to the public
  applicability to less egregious scenarios  343–4
  China  253–4, 257, 278, 287
  communication by platform operators  340–3
  copyright  64, 385, 561
  DSM strategy  303
  international law  336–7
  meaning and scope of concept  337–40
  purpose-driven interpretation  335–6
  UK  320
Right to be forgotten
  47 USC § 230 immunity compared  165–7
  Argentina - Belén Rodriguez Case
    blocking of harmful content  507
    explicitly illegal content  506–7
    liability of search engines  505–6
    no general obligation to monitor  506
  Chile  507–10
  Colombia  510–12
  impact of Costeja case  503–5
  Inter American System of Human Rights  517–20
  Mexico  512–14
  Peru  514–16
  primary publishers
    criteria for weighing interests at stake  500–1
    less intrusive legal measures  502
    newspaper’s digital archives  498–500
  problem of the widespread privacy-damaging information  486–8
  search engines
    Argentina - Belén Rodriguez Case  505–7
    delisting numbers  490–1
    ‘fair balance’ doctrine  491–4
    Google Spain  488–90
    removal of sensitive data  496–8
    well-established and settled law  501–2
  trends and future developments in Latin America  520–1
  Uruguay  516–17
Rodríguez, María Belén v Google Inc. [2014]  20, 22, 505–7, 520, 534, 549


Rolex v eBay [2007]  17, 22, 400, 553
Rule of law
  challenges posed by content moderation
    barriers to accountability  674–6
    circumvention of separation of powers  672–4
    effect of dual role of platforms  670–1
    overview  29–30
    promotion of self-review by online intermediaries  676
    proposal for enhanced ‘responsibilities’  676–8
    role of intermediaries  669–70
    use of technological measures  672
  consequence of increased responsibility mechanisms  615
  legal basis for content removal  456
  need to protect the rights and freedoms of others  595
  relevance to internet  57
Russia
  accountability  29
  administrative enforcement  24
  changes in regulation and ownership structure  650–1
  enforcement role of private business
    compliance dilemma for private companies  663–6
    government reliance on intermediaries  657–63
    overview  647–9
    significant regulatory restrictions  668
  evolution of regulation and OSP liability
    difficulties of challenging government  656–7
    key principles  651
    new regulatory restrictions post 2011  652–6
  ‘government interest’ in internet companies  651
  government reliance on intermediaries
    content  657–61
    overview  657
    surveillance  661–3
  growth of internet users  649–50
  hate speech  473
  human rights compliance  666–7

  nexus of Russian jurisdiction  651
  processing of personal data  20
  protection of children  624

S

SABAM v Netlog NV [2012]  39, 139, 148, 318, 373, 540–1, 547, 561, 577
Safe harbour provisions  see also Immunities
  China’s position and DMCA compared
    basic laws and regulations  253–5
    neither strict liability nor true safe harbour  258
    notice and takedown  255–8
  copyright
    e-Commerce Directive  316
    negatively stated  317
    significant division exists between intermediaries  317
  definition of ‘internet intermediary’
    alternative terms  45–6
    e-Commerce Directive  42–3
    functional approach  48
    key CJEU decisions  43–4
    typological approach to classification  51
  direct liability in relation to user activities  345–7
  DMCA
    effectiveness  120
    importance for commercial innovation  108
    overview  106
    shield against secondary copyright liability  174
    stable framework for managing liability  112
  e-Commerce Directive
    47 USC § 230 immunity compared  167–8
    overview  106
  India’s position and DMCA compared
    basic laws and regulations  258–60
    impact of Singhal decision  261–2
    interpretation of Intermediary Guidelines 2011  260–1
    notice and takedown  262–3
  Indonesia  266
  international fragmentation  9–14


Safe harbour provisions (cont.)
  Japan’s position and e-Commerce Directive compared
    basic laws and regulations  263–4
    narrower in scope  265–6
    notice and takedown  264–5
  Malaysia’s position and DMCA compared  268–70
  scope of EU law  424–5
  South Africa  217–18
  South Korea’s position and DMCA compared
    basic laws and regulations  270–1
    error in adoption of s 512  274–5
    expansionist approaches by courts  271–3
  Uganda  221
Scarlet Extended SA v SABAM [2011]  139, 148, 317, 372, 540–1, 547, 561, 577
Search engines
  bias  125–6
  consumer protection law in Australia  239–40
  direct liability in relation to user activities  347
  enhanced ‘responsibilities’  617
  Equustek case
    2015 BC Court of Appeal ruling  713–14
    creation of three internet-related legal risks  718–19
    danger of extraterritorial application of court decisions  725–6
    impact of shifting more responsibility to internet companies  724–5
    influence on other decisions  720–4
    original BC court order  712–13
    Supreme Court of Canada hearing  714–15
    US application by Google  719–20
  as ‘internet intermediaries’
    key CJEU decisions  43–4
    status as measure of legal applicability  53–4
  moral responsibilities
    freedom to access and impart information  124–6
    human rights issues  128–9

  notice and takedown
    accuracy of notices  110–11
    volume of notices  108–9
  pre-MCI measures in Brazil  193–4
  private ordering  555
  right to be forgotten
    Argentina - Belén Rodriguez Case  505–7
    delisting numbers  490–1
    ‘fair balance’ doctrine  491–4
    Google Spain  488–90
    well-established and settled law  501–2
  widespread regulatory antipathy  171
Secondary liability
  classification
    causative secondary liability  66–7
    common design  68–9
    criminal liability  69
    general principles  65–6
    procurement  67–8
    relational liability  69–70
  copyright infringement in US
    common law approach  350–3
    continuing legal uncertainty  360–1
    Copyright Office Study  365–6
    Department of Commerce Internet Policy Task Force  364–5
    Digital Millennium Copyright Act (DMCA)  353–6
    government enforcement measures  362–3
    heavily litigated controversy  349–50
    House Judiciary Committee Copyright Review  365
    knowledge-based liability  356–7
    overview  14–15
    right and ability to control  358–60
    slowing of reform activity and litigation  367–8
    Stop Online Piracy Act and Companion Bills  363–4
    use of technological measures to reduce infringement  361–2
    wilful blindness  357–8
  copyright infringement outside US
    EU law  366–7
    international law  366
  justifications for secondary liability  70–4


  remedies
    aggregation of damages  98–9
    consequences and reasons distinguished  94–6
    cost of injunctions  101–2
    interface between primary and secondary liability  91–2
    scope and goals of injunctions  99–101
    scope of damages  96–8
  trade marks
    e-Commerce Directive  413
    EU law  410–11
    UK law  411–12
Secrets  see Trade secrets
Shreya Singhal [2013]  12, 259, 261–3
Significant cases
  A&M Records Inc. v Napster Inc [2001]  49, 100, 351–2, 357
  Akdeniz v Turkey [2014]  139, 142, 150, 567–75
  Cartier International AG v British Telecommunications plc [2018]  17, 23, 82, 87, 102, 375, 415–19, 454, 578, 582, 587, 593, 600, 718
  Delfi AS v Estonia [GC] [2015]  19, 22, 147, 346, 372, 472–3, 481, 484, 518–19, 550–1
  Equustek Solutions Inc. v Google [2017]  32, 380, 700, 710, 712–26
  Google France SARL v Louis Vuitton Malletier SA [2010]  43, 81, 145, 371, 396, 407
  Google LLC v CNIL [2019]  31, 495–6, 702, 704–5
  Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González [2014]  19, 31, 50, 143–4, 165–6, 492–4, 501, 503–4, 518–20, 733
  GS Media BV v Sanoma Media Netherlands BV [2016]  15, 64, 91, 316, 318–19, 338–43, 347, 410, 559
  L’Oréal SA v eBay Int’l AG [2011]  39, 43–4, 49, 52, 101, 300, 317–18, 330, 342, 346–7, 382, 390, 396–7, 406–7, 411, 413, 416, 426–7, 431, 439, 539, 547, 555, 559, 561, 577, 590, 593, 600

  Metro-Goldwyn-Mayer Studios, Inc. v Grokster, Ltd [2005]  349, 352–3
  Rodríguez, María Belén v Google Inc. [2014]  20, 22, 505–7, 520, 534, 549
  Rolex v eBay [2007]  17, 22, 400, 553
  SABAM v Netlog NV [2012]  39, 139, 148, 318, 373, 540–1, 547, 561, 577
  Scarlet Extended SA v SABAM [2011]  139, 148, 317, 372, 540–1, 547, 561, 577
  Shreya Singhal [2013]  12, 259, 261–3
  Sony Corp. of America v Universal City Studios, Inc. [1984]  49, 349, 351–3
  Stichting Brein v Ziggo BV and XS4All Internet BV [2017]  15, 64, 91, 302, 318, 335, 339–47, 401, 410, 590
  Stratton Oakmont Inc. v Prodigy Services Co [1995]  157–8, 450
  Tommy Hilfiger Licensing LLC v Delta Center [2016]  40–1, 101, 405–6
  UMG Recordings v Shelter Capital Partners LLC [2013]  79, 159, 356–60
  United States v. Microsoft Corp. [2018]  692, 697–8
  UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014]  22, 39, 48, 139, 147, 151, 318, 373, 397, 406, 567–84, 590
  Zhong Qin Wen v Baidu [2014]  12, 22, 167, 254–7, 288–9, 293, 548–51
Singapore
  free trade agreements  172, 185
  website-blocking orders  22, 375
Social networking  see Multimedia social networking services
Sony Corp. of America v Universal City Studios, Inc. [1984]  49, 349, 351–3
South Africa
  ECTA legislation  216–17
  Guidelines for representative bodies  218–19
  importance in setting standards  219
  notice and takedown  217–18
  peculiarity of the South African model  218
  recent legislative initiatives
    anti-discrimination legislation  229–31
    complexities of liability limitations  232–3
    Cybercrime Bill  228–9


South Africa (cont.)
  defamation  231
  pornography  231–2
South Korea
  defamation  276
  DMCA safe harbour provisions compared
    basic laws and regulations  270–1
    error in adoption of s 512  274–5
    expansionist approaches by courts  271–3
  ‘graduated response’  533
  notice and notice (NN) mechanisms  533–4
  notice and takedown procedure  529
Sovereignty  see Digital sovereignty
Spain
  administrative enforcement
    first European jurisdiction  588–9
    impact of US pressure  587
  ‘internet intermediaries’ defined  44
  monitoring and OSPs as ‘gate-keepers’  554–5
Stichting Brein v Ziggo BV and XS4All Internet BV [2017]  15, 64, 91, 302, 318, 335, 339–47, 401, 410, 590
Stratton Oakmont Inc. v Prodigy Services Co [1995]  157–8, 450
Strict liability
  African approach to defamation  231
  Argentinian approach  549
  blurring of distinction in EU law  346
  Canadian approaches  444
  Chinese regime  253, 258
  Consumer Protection Code  193, 195
  copyright infringements  212
  EU law  321
  Malaysia and Indonesia  275
  Marco Civil da Internet  200
  new strict liability of platforms  374
  impact of new EU rules  506, 520
  monetary liability  60
  primary infringement  370
  STJ decisions  203–5

T

Taxonomy of internet intermediaries
  alternative terms
    ‘internet service providers’ (ISPs)  45–7
    ‘online service providers’ (OSPs)  45–7

  broad view approach preferred  56
  comparative law approaches  38
  EU law
    e-Commerce Directive  41–3
    Enforcement Directive  39
    Information Society Directive  39–41
    key Court of Justice decisions  43–4
  functional approach  47–50
  little consensus or clarity  37–8
  overview  6
  typological approach to classification
    Directive on Copyright in the Digital Single Market 2019  54–6
    particular fields of law  50–2
    status as measure of legal applicability  53–4
Terrorist content
  DSM strategy  298–9, 305–6
  EU law  460, 478
  filtering  557
  notice-and-notice-plus (NN+)  459–62
TGI Paris LICRA & UEJF v Yahoo! Inc [2000]  32, 709–12
‘Three-strikes-and-you’re-out’
  change in perspective  630
  enhanced ‘responsibilities’  615–16
  France  535
  Italy  588
  notice-and-notice  533
  South Korea  533
Tommy Hilfiger Licensing LLC v Delta Center [2016]  40–1, 101, 405–6
Trade marks  see also Copyright
  blocking injunctions  374–5
  Chinese approach to infringements
    Beijing Guidelines  282–3
    criteria for enforcement obligations  284–5
    intent  286
    judicial interpretation by the Supreme People’s Court  283–4
    reasonable measures for repeat infringements  285
    underlying challenges  280–2
  copyright controls compared  385–7
  emerging international approaches
    ius gentium of voluntary measures  376–9


    need for better measures  380
    overview  369–70
    ‘ratio’ principles  370–5
  exclusion from 47 USC § 230 immunity  161
  infringement conditions to be satisfied  387–8
  injunctions
    EU law  414
    other kinds of injunction  420
    threshold conditions  416
    UK law  415–16
    website-blocking orders  418–19
  ‘internet intermediary’ defined
    functional approach  49–50
    typological approach to classification  51
  limitations on rights
    context-specific limitations  394–5
    freedom of expression  389–91
    overview  388–9
    social and cultural freedom  391–4
  main area of wrongdoing  81–2
  nature of rights  383–5
  need for better enforcement  380
  need to focus on legal measures  369–70
  primary liability
    dealings in counterfeit and grey goods  409–10
    e-Commerce Directive  410
    overview  406–7
    use of sign complained of  407–8
    use of sign in relevant territory  408–9
  proliferation of filter obligations
    EU law  396–8
    Germany  398–402
    need for tailored liability regime  402–3
    Netherlands  402
    overview  16–17
  remarkable climate change  381–3
  ‘ratio’ principles
    China  373–4
    EU law  374
    similarities between approaches to intermediary and accessory liability  370–3
    UK approach  374
  secondary liability
    e-Commerce Directive  413

    EU law  410–11
    UK law  411–12
  voluntary cooperation
    emergence of ius gentium of common principles  376–8
    ‘fair balance’ approach  378–9
Trade secrets
  47 USC § 230 immunity  161
  Canada
    Equustek Solutions case  712–14
    influence of case on other decisions  720–4
    Supreme Court decision  715–18
    Supreme Court hearing  714–15
  challenges to rule of law by content moderation  675–6
  complexity of European legal framework  440–3
  EU law
    enforcement against third parties  439–40
    harmonization measures for remedies  437–8
    interplay with e-Commerce Directive  438–9
    overview  435–6
    unlawful acquisition, use and disclosure  436–7
  key issues  422–4
  overview  17–18
  three different scenarios  435
Transparency
  administrative enforcement
    algorithmic decision-making  601
    essential features  603–4
  administrative enforcement systems  24
  algorithmic decision-making  8, 29
  barriers to accountability  674–6
  DMCA system  174
  procedural justice  115–17
  public deal-making with public authorities  628–9
  Russia  666–7
  thematic foci of empirical research on notice and takedown  107–8
  ‘trusted notifier’ agreements  638–41
  Unfair Commercial Practices Directive  308
Treaties  see International law


‘Trusted notifier’ agreements
  new gTLD registry operators  638–41
  UDRP compared  641–2

U

Uganda  221–2
UMG Recordings v Shelter Capital Partners LLC [2013]  79, 159, 356–60
Unfair commercial practices
  applicability to OIs  428–30
  complexity of European legal framework  440–3
  defined  427–8
  effectiveness of consumer protection  433–4
  interplay with e-Commerce Directive  431–3
  key issues  422–4
  legislative developments  13
  misleading content  307–9
  overview  17–18
United Kingdom
  blocking injunctions
    applicable principles  417–18
    threshold conditions  416
  copyright regime
    comparative analysis of fault and causation  325–6
    doctrine of authorization  320
    injunctions  324
    tort-based solutions  322
  defamation law and 47 USC § 230 immunity compared  168–9
  right of communication to the public  320
  trade marks
    injunctions  375, 415–16
    ‘ratio’ principles  374
    secondary liability  411–12
    use of sign in relevant territory  408–9
United Nations
  adoption of Convention on Cyber Security and Personal Data Protection by AU  224
  challenges from legal uncertainty  730
  corporate social responsibility  627
  ‘Protect, Respect and Remedy’ Framework  5
  right to ‘internet freedom’ as a human right  128

  stricter approach to internet intermediaries  694
  World Summit on the Information Society (WSIS)  736
United States
  47 USC § 230 immunity
    Brazil’s Marco Civil da Internet compared  169–70
    e-Commerce Directive compared  167–8
    EU’s ‘right to be forgotten’ compared  165–7
    exceptionalist treatment of internet  162–5
    future in peril  170–1
    Germany’s NetzDG compared  169
    panic over pornography  157–8
    pre-§ 230 law  155–7
    protections for defendant  158–60
    statutory exclusions  160–2
    UK defamation law compared  168–9
  access to user data by enforcement agencies  697–9
  content regulation
    administration and operation of DNS  633–4
    assumption of private obligations  632–3
    effect of public disenchantment with Silicon Valley  632
    expanding IP enforcement in DNS  637–8
    history of IP enforcement in DNS  635–6
    little public law change  631–2
    new gTLD programme  636–7
    plans for copyright-specific UDRP  642–5
    troubling developments  645–6
    ‘trusted notifier’ agreements  638–42
  Copyright Act  42, 46
  Digital Millennium Copyright Act (DMCA)  see Digital Millennium Copyright Act (DMCA)
  free trade agreements
    completed agreements - Chile  180–2
    completed agreements - Costa Rica  182–3
    convenience of DMCA approach in Latin America  187–8


    current promotion of DMCA model  185–7
    drawbacks of DMCA approach in Latin America  189
    pending agreements  183–5
    provisions on actual knowledge  177–80
    underlying rationale  175–6
  geographical scope of blocking  700–1
  Google challenge to Equustek decision  719–20
  notice and takedown
    accuracy of notices  110–11
    over-enforcement of non-infringing material  112–15
    volume of takedown notices  108–10
  secondary copyright infringement
    common law approach  350–3
    continuing legal uncertainty  360–1
    Copyright Office Study  365–6
    Department of Commerce Internet Policy Task Force  364–5
    Digital Millennium Copyright Act (DMCA)  353–6
    government enforcement measures  362–3
    heavily litigated controversy  349–50
    House Judiciary Committee Copyright Review  365
    knowledge-based liability  356–7
    overview  14–15
    right and ability to control  358–60
    slowing of reform activity and litigation  367–8
    Stop Online Piracy Act and Companion Bills  363–4
    use of technological measures to reduce infringement  361–2
    wilful blindness  357–8
  secondary copyright infringement outside US
    EU law  366–7
    international law  366
  SPEECH Act 2010  170
  stricter approach to internet intermediaries  694
  terms of service imposed on users  695–6

United States v. Microsoft Corp. [2018]  692, 697–8
UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014]  22, 39, 48, 139, 147, 151, 318, 373, 397, 406, 567–84, 590
Uruguay
  Montevideo statement on the future of Internet cooperation  740
  right to be forgotten  516–17
Users  see also Content; Secondary liability
  algorithmic accountability  682–5
  algorithmic decision-making
    accountability to users  680–1
    measuring effect on user behaviour  683–4
    user access to organizational factors  684
  Australia  245–6
  blocking injunctions
    applicable principles  417–18
    threshold conditions  416
  blocking orders  569–72
  China  285, 292
  copyright infringements  78
  direct liability in relation to user activities
    applicability to less egregious scenarios  343–4
    communication by platform operators  340–3
    far-reaching issues of Stichting Brein case  347–8
    overview  335
    right of communication to the public  335–40
    safe harbour provisions  345–7
  DSM strategy  309
  fictional attribution of liability  72
  filtering  550
  Germany’s NetzDG  169
  Indonesia  266–9
  Malawi  225
  Marco Civil da Internet  169, 190–1
  marketing and private ordering  625–6
  primary and secondary liability distinguished  63–4


Users (cont.)
  Russian approach
    main obligations imposed on intermediaries  654–7
    surveillance  661–2
  secondary copyright infringement in US
    common law approach  350–3
    continuing legal uncertainty  360–1
    Copyright Office Study  365–6
    Department of Commerce Internet Policy Task Force  364–5
    Digital Millennium Copyright Act (DMCA)  353–6
    government enforcement measures  362–3
    heavily litigated controversy  349–50
    House Judiciary Committee Copyright Review  365
    knowledge-based liability  356–7
    overview  14–15
    right and ability to control  358–60
    slowing of reform activity and litigation  367–8
    Stop Online Piracy Act and Companion Bills  363–4
    use of technological measures to reduce infringement  361–2
    wilful blindness  357–8
  secondary copyright infringement outside US
    EU law  366–7
    international law  366
  South African model  219
  strict liability  60
  terms of service imposed on users

    Canadian approach  697
    US approach  695–6
  trade mark infringements
    blocking injunctions  417–19
    EU law  414
    other kinds of injunction  420
    UK law  415–16
  UK defamation law  168–9
  US approach  157–8
  users’ fundamental rights
    freedom of expression  145–7
    freedom of information and access  140–5, 563–4
    privacy and data protection  147–8
  video-sharing platforms  477–8
  website blocking  569–72

V

Video-sharing platforms
  EU law  476–8
  self-regulation  479–80
  shifts in the geometrical patterns of regulation  481
Vilification  see Hate speech
Voluntary measures  see Private ordering

W

Website-blocking orders  see Blocking orders
World Intermediary Liability Map (WILMap)  3–4

Z

Zambia  220–1
Zhong Qin Wen v Baidu [2014]  12, 22, 167, 254–7, 288–9, 293, 548–51