The Oxford Handbook of Online Intermediary Liability (ISBN 9780198837138, 0198837135)

This book provides a comprehensive, authoritative, and state-of-the-art discussion of fundamental legal issues in intermediary liability online.


English, 801 pages, 2020




Table of contents:
Cover
The Oxford Handbook of ONLINE INTERMEDIARY LIABILITY
Copyright
Contents
Editor’s Note: A Dialogue on the Role of Online Intermediaries
Notes on Contributors
Part I: INTRODUCTION
Chapter 1: Mapping Online Intermediary Liability
1. Mapping Fundamental Notions
2. Mapping International Fragmentation: From Safe Harbours to Liability
3. Mapping Subject-Specific Regulation
4. Mapping Intermediary Liability Enforcement
5. Mapping Private Ordering and Intermediary Responsibility
6. Mapping Internet Jurisdiction, Extraterritoriality, and Intermediary Liability
Part II: MAPPING FUNDAMENTAL NOTIONS
Chapter 2: Who Are Internet Intermediaries?
1. Definitions of ‘Internet Intermediaries’
2. Alternative Terms
3. A Functional Taxonomy
4. Typological Considerations
4.1 Intermediaries in Distinct Fields
4.2 Status as a Measure of Legal Applicability
4.3 The Directive on Copyright in the Digital Single Market
5. Conclusions
Chapter 3: A Theoretical Taxonomy of Intermediary Liability
1. What is ‘Liability’?
1.1 Moral Agency and Individual Responsibility
1.2 Monetary Liability
1.2.1 Strict Liability
1.2.2 Negligence-Based Standards
1.2.3 Knowledge-Based Standards
1.2.4 Immunity
1.3 Non-Monetary Liability
2. Classifying Liability
2.1 Primary Liability
2.2 Secondary Liability
2.2.1 Causative Secondary Liability
2.2.1.1 Procurement
2.2.1.2 Common Design
2.2.1.3 Criminal Accessory Liability
2.2.2 Relational Secondary Liability
3. Justifying Intermediary Liability
3.1 Normative Justifications
3.1.1 Holding Causes of Harm Accountable
3.1.2 Fictional Attribution to Secondary Wrongdoers
3.1.3 Upholding Primary Duties
3.1.4 Upholding Duties Voluntarily Assumed
3.2 Practical Functions
3.2.1 Reducing Claimants’ Enforcement Costs
3.2.2 Encouraging Innovation
3.2.3 Regulating Communications Policy
4. Types of Wrongdoing
4.1 Copyright Infringement
4.2 Trade Mark Infringement
4.3 Defamation
4.4 Hate Speech, Disinformation, and Harassment
4.5 Breach of Regulatory Obligations
4.6 Disclosure Obligations
5. Conclusions
Chapter 4: Remedies First, Liability Second: Or Why We Fail to Agree on Optimal Design of Intermediary Liability
1. Three Legal Pillars
2. Distinguishing Reasons from Consequences
3. Typology of Consequences
3.1 Scope of Damages
3.2 Aggregation of Damages
3.3 Scope and Goals of Injunctions
3.4 Costs of Injunctions
4. Putting the Cart before the Horse
5. Conclusions
Chapter 5: Empirical Approaches to Intermediary Liability
1. Volume of Notices
2. Accuracy of Notices
3. Over-Enforcement and Abuse
4. Due Process and Transparency
5. Balancing of Responsibilities and Costs
6. Conclusion: Limitations, Gaps, and Future Research
Chapter 6: The Civic Role of OSPs in Mature Information Societies
1. Managing Access to Information
2. Human Rights: Harmful Content and Internet Censorship
3. The Civic Role of OSPs in Mature Information Societies
4. Conclusion: The Duty of Ethical Governance
Chapter 7: Intermediary Liability and Fundamental Rights
1. Users’ Rights
1.1 Freedom of Information and Internet Access
1.2 Freedom of Expression
1.3 Right to Privacy and Data Protection
2. OSPs, Freedom of Business, and Innovation
3. IP Owners and Property Rights
4. Conclusions
Part III: SAFE HARBOURS, LIABILITY, AND FRAGMENTATION
Chapter 8: An Overview of the United States’ Section 230 Internet Immunity
1. Pre-Section 230 Law
1.1 The Moderator’s Dilemma
2. Section 230’s Protections for Defendants
2.1 Section 230’s Statutory Exclusions
3. Section 230’s Implications
4. Comparative Analysis
4.1 EU’s ‘Right to Be Forgotten’
4.2 EU Electronic Commerce Directive
4.3 The UK Defamation Law
4.4 Germany’s Network Enforcement Law (NetzDG)
4.5 Brazil’s Internet Bill of Rights
4.6 Section 230 and Foreign Judgments
5. What’s Next for Section 230?
Chapter 9: The Impact of Free Trade Agreements on Internet Intermediary Liability in Latin America
1. Background: Notice and Takedown in the DMCA
2. Free Trade Agreements and Notice-and-Takedown Provisions
3. Implementation of FTA Intermediary Liability Provisions in Latin America
3.1 A Comparison of Provisions on Effective Notice in FTAs with Latin American Countries
3.2 Completed Implementation
3.2.1 Chile
3.2.2 Costa Rica
3.3 Pending Implementation
3.4 The Current Promotion of the DMCA Model in FTAs
4. The Convenience of the DMCA Approach for Notice and Takedown in Latin America
5. Conclusions
Chapter 10: The Marco Civil da Internet and Digital Constitutionalism
1. The Civil Rights Framework for the Internet
1.1 The Path Towards the MCI’s Enactment
1.2 The MCI Legislative Process
2. The ‘MCI on the Books’
2.1 General Provisions of the MCI
2.2 Intermediary Liability Rules
2.3 Intermediary Liability Beyond the MCI
3. The MCI in Practice
3.1 The Brazilian Justice System
3.2 Relevant Decisions in High Courts
3.3 Cases Pending Before the Federal Supreme Court
4. The MCI and Digital Constitutionalism
5. Concluding Remarks
Chapter 11: Intermediary Liability in Africa: Looking Back, Moving Forward?
1. The Slow Rise of the Intermediary Liability Discourse in Africa
2. First-generation Liability Limitations
2.1 South Africa
2.2 Ghana
2.3 Zambia
2.4 Uganda
3. Interstate Cooperation in the African Region: Implications for Intermediary Liability
4. Second-generation Liability Limitations: the Rise of Hybrid Instruments
4.1 Malawi
4.2 Ethiopia
4.3 Kenya
4.4 South Africa
4.4.1 Liability limitations when you are also ‘cyber-police’: a complex terrain
5. Conclusion: The Role of the African Union and the Promise of the South African Model
Chapter 12: The Liability of Australian Online Intermediaries
1. Liability: Active Intermediaries and Recalcitrant Wrongdoers
1.1 Consumer Protection Law
1.2 Defamation
1.3 Vilification
1.4 Copyright
2. Limiting Devices and their Flaws
3. Conclusions
Chapter 13: From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia
1. China
1.1 Basic Laws and Regulations
1.2 Liability-Imposing v Liability-Exempting
1.3 Conclusions
2. India
2.1 Basic Laws and Regulations
2.2 Liability-Imposing v Liability-Exempting
2.3 Dialectical Turn of Singhal
2.4 Conclusions
3. Japan
3.1 Basic Laws and Regulations
3.2 Comparison With Other Safe Harbours
3.3 Conclusions
4. Indonesia
5. Malaysia
6. South Korea
6.1 Introduction: Basic Laws and Regulations
6.2 Proof: Intermediary Behaviour and Courts’ Expansionist Interpretation
6.3 Origins: Syntactical Error in Adopting Section 512 of the DMCA?
6.4 Conclusions
7. Epilogue
Chapter 14: China’s IP Regulation and Omniscient Intermediaries: Oscillating from Safe Harbour to Liability
1. Intermediary Liability for Trade Mark Infringement
1.1 Establish Internal Procedures and Enforce Accordingly
1.2 Appropriate Reasonable Measures in the Case of Repeat Infringement
1.3 Inferences
2. Intermediary Liability in the Case of Copyright Liability
2.1 Non-Interference
2.2 Removal after a Notice-and-Takedown Request
2.3 Necessary Measures
2.4 No Financial Benefits
2.5 Inducement and Contributory Liability
2.6 Non-Interference
3. Conclusions
Chapter 15: A New Liability Regime for Illegal Content in the Digital Single Market Strategy
1. The Issue of Illegal Content within the DSM Strategy
2. Copyright-Infringing Content and the Copyright in the DSM Directive
3. Harmful Content within the Reformed Audiovisual Media Services Directive
4. Misleading Content and the Unfair Commercial Practices Directive
5. Final Remarks
Part IV: A SUBJECT MATTER-SPECIFIC OVERVIEW
Chapter 16: Harmonizing Intermediary Copyright Liability in the EU: A Summary
1. The Current Incomplete EU Framework for Intermediary Liability in Copyright
2. The National Regimes on Intermediary Liability in Copyright
2.1 Intra-Copyright Solutions
2.2 Tort-Based Solutions
2.3 Injunction-Based Solutions
3. Intermediary Liability and Tort Law
4. Building a Complete Framework for European Intermediary Liability in Copyright
5. Closing Remarks
Chapter 17: The Direct Liability of Intermediaries
1. The right of communication to the public as construed through case law
2. Liability of Platform Operators for the Making of Acts of Communication to the Public: The Pirate Bay Case
3. Applicability of C-610/15 Stichting Brein to Less Egregious Scenarios
4. Other Implications: Primary/Secondary Liability and Safe Harbours
5. Conclusion
Chapter 18: Secondary Copyright Infringement Liability and User-Generated Content in the United States
1. Secondary Copyright Infringement in Common Law
2. The Digital Millennium Copyright Act
2.1 Actual and ‘Red Flag’ Knowledge
2.2 Wilful Blindness
2.3 Right and Ability to Control
3. User-Generated Content in the Shadow of the DMCA and Case Law
3.1 Technological Measures
3.2 Government Enforcement Efforts
4. Policy Activity
4.1 Stop Online Piracy Act and Companion Bills
4.2 US Department of Commerce Internet Policy Task Force
4.3 US House Judiciary Committee Copyright Review
4.4 US Copyright Office Study
5. Secondary Copyright Infringement Liability for Online Intermediaries Outside the United States
5.1 International Treaties and Trade Agreements
5.2 European Union
6. Conclusions
Chapter 19: Intermediary Liability and Online Trade Mark Infringement: Emerging International Common Approaches
1. The ‘Ratio’ Principles of Intermediary Responsibility—International Common Approaches
1.1 Injunctions for Blocking Websites by ISPs
2. The Ius Gentium of Voluntary Measures—International Common Approaches
2.1 Freedom of Expression, Competition, and Data Protection
3. Conclusions
Chapter 20: Intermediary Liability and Trade Mark Infringement: Proliferation of Filter Obligations in Civil Law Jurisdictions?
1. Trade Mark Rights
1.1 Inherent Limits
1.2 Context-Specific Infringement Analysis
2. Limitations of Trade Mark Rights
2.1 Commercial Freedom of Expression
2.2 Artistic and Political Freedom of Expression
2.3 Context-Specific Limitations
3. Developments in Case Law
3.1 Guidelines at EU Level
3.2 Application in Civil Law Jurisdictions
3.3 Need for Balanced, Proportionality-Based Approach
4. Conclusions
Chapter 21: Intermediary Liability and Trade Mark Infringement: A Common Law Perspective
1. Who are Online Intermediaries?
2. Primary Liability of Intermediaries for Trade Mark Infringement
2.1 Use of the Sign Complained of by the Intermediary
2.2 Use of the Sign in the Relevant Territory
2.3 Counterfeit Goods and Grey Goods
2.4 Articles 12 to 14 of the e-Commerce Directive
3. Accessory Liability of Intermediaries for Trade Mark Infringement
3.1 Accessory Liability of Intermediaries for Trade Mark Infringement under English Law
3.2 Article 14 of the e-Commerce Directive
4. Injunctions Against Intermediaries Whose Services are Used to Infringe Trade Marks
4.1 Jurisdiction of the Courts of England and Wales to Grant Injunctions Against Intermediaries in Trade Mark Cases
4.2 Website-Blocking Injunctions: Threshold Conditions
4.3 Website-Blocking Injunctions: Applicable Principles
4.5 Other Kinds of Injunctions Against Intermediaries in Trade Mark Cases
Chapter 22: Digital Markets, Rules of Conduct, and Liability of Online Intermediaries—Analysis of Two Case Studies: Unfair Commercial Practices and Trade Secrets Infringement
1. Problem Definition
2. Online Intermediaries: Walking a Tightrope Between Immunity from Liability and Remedies
2.1 Safe Harbours
2.2 Injunctions Against Online Intermediaries
3. Unfair Commercial Practices via Online Intermediaries
3.1 Unfair Commercial Practices Definition
3.2 Is the Online Intermediary a ‘Trader’ that Performs ‘Commercial Practices’?
3.3 The Interplay Between the UCPs Directive and e-Commerce Directive
3.4 Protection of Consumers’ Interests vs IPRs’ Protection
4. Trade Secrets Infringement via OIs
4.1 Protection of Undisclosed Know-How and Business Information (Trade Secrets) Within the EU: Overview
4.2 Unlawful Acquisition, Use, and Disclosure of Trade Secrets by Third Parties
4.3 Remedies Against Third Parties
4.4 The Interplay Between Trade Secrets Directive and the e-Commerce Directive
4.5 Trade Secrets and IPRs’ Enforcement Against Third Parties
5. Assessment
Chapter 23: Notice-and-Notice-Plus: A Canadian Perspective Beyond the Liability and Immunity Divide
1. Legal Context
1.1 Common Law
1.2 Statutory Approaches
2. Proposal for Reform
2.1 Common Law
2.2 Notice-and-Notice-Plus
3. Notice-and-Notice-Plus Beyond Defamation Law
3.1 NN+ Requires the Speech to Be Unlawful but Other Forms of Speech Are Harmful Too
3.2 Speech Regulation for Which NN+ is Clearly Unsuitable
3.3 Case Study: Terrorist Content
3.4 Case Study: Hate Speech
4. Conclusions
Chapter 24: Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation
1. The European Regulatory Framework
1.1 The Council of Europe
1.2 The European Union
2. Geometrical Shifts
3. Conclusions
Chapter 25: The Right to Be Forgotten in the European Union
1. The Right to be Forgotten Vis-à-Vis Search Engines
1.1 Google Spain
1.2 Delisting in Numbers
1.3 Balancing Rights
1.4 Geographical Scope
1.5 Sensitive Data
2. The Right to be Forgotten Vis-à-Vis Primary Publishers
3. Conclusions
Chapter 26: Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules
1. The Belén Rodriguez Case: Something New Under the Sun?
1.1 Previous and Actual Knowledge
1.2 Explicit Illegality of the Content
1.3 Negligent Response
2. Trends in Latin America: What do they Follow?
2.1 Chile
2.2 Colombia
2.3 Mexico
2.4 Peru
2.5 Uruguay
3. The Right to Privacy and the Right to Freedom of Expression Under the Inter-American System of Human Rights and its Impact on the Right to be Forgotten
4. Conclusions
Part V: INTERMEDIARY LIABILITY AND ONLINE ENFORCEMENT
Chapter 27: From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression
1. Impact on Freedom of Expression
2. Notice and Takedown
2.1 General
2.2 Variations of the Mechanism
2.3 Risks and Safeguards for Freedom of Expression
2.3.1 Foreseeability
2.3.2 Abusive requests
2.3.3 Notification and counter-notification
3. Notice and Notice
3.1 General
3.2 Variations of the Mechanism
3.3 Risks and Safeguards for Freedom of Expression
3.3.1 Foreseeability
3.3.2 Decision-making bodies
3.3.3 Severity of the response
4. Notice and Stay Down
4.1 General
4.2 Judicial Construct
4.3 Risks and Safeguards for Freedom of Expression
4.3.1 General v specific monitoring
4.3.2 Clear and precise notifications
4.3.3 Appeal procedure
5. Conclusions
Chapter 28: Monitoring and Filtering: European Reform or Global Trend?
1. OSP as a ‘Mere Conduit’: ‘No Monitoring’ Obligations
2. From ‘Mere Conduits’ to ‘Gate-Keepers’? The Global Shift in Intermediary Liability
2.1 Case Law
2.1.1 The European experience
2.2 Private Ordering: Emerging Industry Practice
3. The EU Copyright Directive in the Digital Single Market: Legitimation through Legislation?
3.1 Definition of an OCSSP
3.2 A General Monitoring Obligation?
4. Effect on Fundamental Rights
5. Conclusions
Chapter 29: Blocking Orders: Assessing Tensions with Human Rights
1. A Freedom of Expression Perspective on Website Blocking: The Emergence of User Rights
1.1 User Rights
1.2 Collateral Effects of Blocking: The Risk of Overblocking
1.3 The ‘Value’ of Content
1.4 Alternative Means of Accessing Information
2. A Freedom to Conduct a Business Perspective on Website Blocking: The (Rising) Role of the ISPs in Digital Copyright Enforcement
2.1 Costs and Complexity of Blocking
2.2 Availability of Reasonable Alternatives (Subsidiarity)
3. A Right to Property Perspective on Website Blocking: Effectiveness of the Blocking
4. Recent EU Copyright Reform and its Effects on Website Blocking and Fundamental Rights
5. Conclusions
Chapter 30: Administrative Enforcement of Copyright Infringement in Europe
1. The European Landscape: Spain, Italy, and Greece
2. The Legal Context
2.1 International and EU Legislative Provisions
2.1.1 TRIPs
2.1.2 The EU
2.1.3 The EU Charter of Fundamental Rights
2.2 Domestic Legal Basis
3. The Essential Features
3.1 The Independence of the Public Bodies Entrusted with the Task
3.2 Protected Subject Matter and Violations
3.3 Parties
3.4 Procedure
3.5 Abbreviated Proceedings and Protective Orders
3.6 Costs
3.7 Remedies
3.8 Transparency
3.9 Double Track
3.10 Review
3.11 Safeguards against Abuse
4. The AGCOM Regulation in Practice: A Case Study
4.1 Transparency
4.2 Protected Subject Matter
4.3 Scope of Violations
4.4 Remedies
4.5 Relevance of the Principle of Proportionality
Part VI: INTERMEDIARY RESPONSIBILITY, ACCOUNTABILITY, AND PRIVATE ORDERING
Chapter 31: Accountability and Responsibility of Online Intermediaries
1. Tools for Increasing Responsibility
1.1 Graduated Response
1.2 Changes to Online Search Results
1.3 Payment Blockades and Follow the Money
1.4 Private DNS Content Regulation
1.5 Standardization
1.6 Codes of Conduct
1.7 Filtering
1.8 Website-Blocking
2. Mechanisms and Legal Challenges
2.1 Market and Private Ordering
2.2 Corporate Social Responsibility
2.3 Involuntary Cooperation in IP Rights Enforcement
2.4 Public Deal-Making
2.5 Circulation of Solutions
3. Conclusions
Chapter 32: Addressing Infringement: Developments in Content Regulation in the US and the DNS
1. ICANN, the DNS, and DNS Intermediaries
2. The History of Intellectual Property Enforcement in the DNS
2.1 The UDRP
3. ICANN’s New gTLD Programme and IP Stakeholder Demands
4. Expanding IP Enforcement in the DNS: Within and Without ICANN
4.1 Present Arrangements: ‘Trusted Notifier’ Agreements
4.1.1 The trusted notifier model and the UDRP compared
4.2 Future Plans: A Copyright-Specific UDRP?
4.2.1 The DNA’s copyright ADRP
4.2.2 PIR’s SCDRP
5. Conclusions
Chapter 33: Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet
1. Russian Internet
2. The Evolution of Internet Regulations and ISP Liability
3. How the Government Relies on Intermediaries
3.1 Content
3.2 Surveillance
4. Compliance Dilemma
5. Transparency and Compliance with Human Rights
6. Conclusions
Chapter 34: Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law
1. Content Moderation by Platforms and the Rule of Law
2. Barriers to Accountability
3. Enhancing Intermediaries’ Oversight
4. Future Challenges
Chapter 35: Algorithmic Accountability: Towards Accountable Systems
1. Accountable to whom?
2. Accountability for what?
3. Challenges with Algorithmic Accountability
3.1 Access to the Algorithmic System
3.2 Verification
3.3 Aggregation
3.4 Measuring the Effect of a System on User Behaviour
3.5 Users’ Interpretation of the Capabilities of the Systems They are Using
3.6 Improving the Quality of Technical Systems
3.7 Accountability of the Socio-Technical System
4. What does Algorithmic Accountability Mean in the Context of Intermediary Liability Online?
5. Conclusions
Part VII: INTERNET JURISDICTION, EXTRATERRITORIALITY, AND LIABILITY
Chapter 36: Internet Jurisdiction and Intermediary Liability
1. Internet Intermediaries and Jurisdiction
2. Terms of Service, Jurisdiction, and Choice of Law
3. Access to Evidence and Jurisdiction
4. Scope of Jurisdiction of Content Blocking
5. Concluding Remarks
Chapter 37: The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet
1. Where it all Began: The Yahoo France Case
2. Equustek Solutions v Google: Internet Jurisdiction Hits Canada’s Highest Court
3. Supreme Court of Canada Hearing
4. The Supreme Court of Canada Decision
5. After Equustek: The Risks of Global Takedown Orders From National Courts
5.1 Conflicting Court Orders
5.2 Expanding Equustek
5.3 Expanding Intermediary Power
6. Conclusions
Chapter 38: Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation
1. National Jurisdictions and Cross-Border Data Flows and Services
1.1 Conflicting Territorialities
1.2 A Challenge for All Stakeholders
1.3 A Core Issue of Internet Governance
2. A Legal Arms Race in Cyberspace?
2.1 Extraterritoriality
2.2 Digital Sovereignty
2.3 Paradoxes of Sovereignty
3. Limits to International Cooperation
3.1 Obstacles to Multilateral Efforts
3.2 MLATs: The Switched Network of International Cooperation
4. A Dangerous Path
4.1 Economic Impacts
4.2 Human Rights Impacts
4.3 Technical Infrastructure Impacts
4.4 Security Impacts
5. Filling the Institutional Gap in Internet Governance
5.1 Lessons from the Technical Governance ‘of’ the Internet
5.2 Evolution of the Ecosystem: Governance ‘on’ the Internet
5.3 Enabling Issue-Based Multistakeholder Cooperation
6. Towards Transnational Frameworks
6.1 Procedural Interoperability
6.2 Governance through Policy Standards
7. Conclusions
Index



The Oxford Handbook of

ONLINE INTERMEDIARY LIABILITY


The Oxford Handbook of

ONLINE INTERMEDIARY LIABILITY

Edited by

GIANCARLO FROSIO



Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries

© Giancarlo Frosio 2020

The moral rights of the authors have been asserted

First Edition published in 2020
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above

You must not circulate this work in any other form and you must impose this same condition on any acquirer

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2020931359

ISBN 978–0–19–883713–8

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


Contents

Editor’s Note: A Dialogue on the Role of Online Intermediaries  ix
Notes on Contributors  xv

PART I  INTRODUCTION

1. Mapping Online Intermediary Liability  3
   Giancarlo Frosio

PART II  MAPPING FUNDAMENTAL NOTIONS

2. Who Are Internet Intermediaries?  37
   Graeme Dinwoodie

3. A Theoretical Taxonomy of Intermediary Liability  57
   Jaani Riordan

4. Remedies First, Liability Second: Or Why We Fail to Agree on Optimal Design of Intermediary Liability  90
   Martin Husovec

5. Empirical Approaches to Intermediary Liability  104
   Kristofer Erickson and Martin Kretschmer

6. The Civic Role of OSPs in Mature Information Societies  122
   Mariarosaria Taddeo

7. Intermediary Liability and Fundamental Rights  138
   Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko

PART III  SAFE HARBOURS, LIABILITY, AND FRAGMENTATION

8. An Overview of the United States’ Section 230 Internet Immunity  155
   Eric Goldman

9. The Impact of Free Trade Agreements on Internet Intermediary Liability in Latin America  172
   Juan Carlos Lara Gálvez and Alan M. Sears

10. The Marco Civil da Internet and Digital Constitutionalism  190
    Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes

11. Intermediary Liability in Africa: Looking Back, Moving Forward?  214
    Nicolo Zingales

12. The Liability of Australian Online Intermediaries  236
    Kylie Pappalardo and Nicolas Suzor

13. From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia  251
    Kyung-Sin Park

14. China’s IP Regulation and Omniscient Intermediaries: Oscillating from Safe Harbour to Liability  277
    Danny Friedmann

15. A New Liability Regime for Illegal Content in the Digital Single Market Strategy  295
    Maria Lillà Montagnani

PART IV  A SUBJECT MATTER-SPECIFIC OVERVIEW

16. Harmonizing Intermediary Copyright Liability in the EU: A Summary  315
    Christina Angelopoulos

17. The Direct Liability of Intermediaries  335
    Eleonora Rosati

18. Secondary Copyright Infringement Liability and User-Generated Content in the United States  349
    Jack Lerner

19. Intermediary Liability and Online Trade Mark Infringement: Emerging International Common Approaches  369
    Frederick Mostert

20. Intermediary Liability and Trade Mark Infringement: Proliferation of Filter Obligations in Civil Law Jurisdictions?  381
    Martin Senftleben

21. Intermediary Liability and Trade Mark Infringement: A Common Law Perspective  404
    Richard Arnold

22. Digital Markets, Rules of Conduct, and Liability of Online Intermediaries—Analysis of Two Case Studies: Unfair Commercial Practices and Trade Secrets Infringement  421
    Reto M. Hilty and Valentina Moscon

23. Notice-and-Notice-Plus: A Canadian Perspective Beyond the Liability and Immunity Divide  444
    Emily Laidlaw

24. Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation  467
    Tarlach McGonagle

25. The Right to be Forgotten in the European Union  486
    Miquel Peguera

26. Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules  503
    Eduardo Bertoni

PART V  INTERMEDIARY LIABILITY AND ONLINE ENFORCEMENT

27. From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression  525
    Aleksandra Kuczerawy

28. Monitoring and Filtering: European Reform or Global Trend?  544
    Giancarlo Frosio and Sunimal Mendis

29. Blocking Orders: Assessing Tensions with Human Rights  566
    Christophe Geiger and Elena Izyumenko

30. Administrative Enforcement of Copyright Infringement in Europe  586
    Alessandro Cogo and Marco Ricolfi

PART VI  INTERMEDIARY RESPONSIBILITY, ACCOUNTABILITY, AND PRIVATE ORDERING

31. Accountability and Responsibility of Online Intermediaries  613
    Giancarlo Frosio and Martin Husovec

32. Addressing Infringement: Developments in Content Regulation in the US and the DNS  631
    Annemarie Bridy

33. Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet  647
    Sergei Hovyadinov

34. Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law  669
    Niva Elkin-Koren and Maayan Perel

35. Algorithmic Accountability: Towards Accountable Systems  679
    Ben Wagner

PART VII  INTERNET JURISDICTION, EXTRATERRITORIALITY, AND LIABILITY

36. Internet Jurisdiction and Intermediary Liability  691
    Dan Jerker B. Svantesson

37. The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet  709
    Michael Geist

38. Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation  727
    Bertrand de La Chapelle and Paul Fehlinger

Index  749


Editor’s Note: A Dialogue on the Role of Online Intermediaries

Giancarlo Frosio

In pursuit of an answer to the online intermediary liability conundrum, The Oxford Handbook of Online Intermediary Liability has gathered together multiple voices that have contributed greatly to this emerging debate in the last few years. I encountered or came to know about most of my co-authors a few years ago at the time of my residence at the Stanford Center for Internet and Society, where I served as the first Intermediary Liability Fellow. The idea for this Handbook was the result of other projects that we have run together, including the World Intermediary Liability Map, and the need to crystallize an emerging field of research. This field of research has grown exponentially since this Handbook was first envisioned. Online intermediary liability has now become a pervasive issue on the agenda of governments, courts, civil society, and academia, and in the last few years legislation, case law, and initiatives dealing with the liability of online intermediaries have followed at an astounding pace.

The role of online service providers (OSPs) is unprecedented in relation to their capacity to influence users’ interactions in the infosphere.1 Online intermediaries mediate human life in a virtual brave new world that reflects and augments our physical realm. ‘Internet intermediaries are crucial for how most people use the Internet,’ Dan Jerker Svantesson noted.2 Ubiquitous platforms dictate our daily routine: searching for information on Google, getting a taxi on Uber, shopping on Amazon Fresh, making payments via PayPal, collaborating on Google Docs, storing documents on Dropbox, taking up employment through Upwork, discussing trendy topics on Twitter, sharing videos on YouTube, or posting pictures on Instagram. In particular, most creative expression today takes place over communications networks owned by private companies. The decentralized, global nature of the internet means that almost anyone can present an

1  See Luciano Floridi, The Fourth Revolution—How the Infosphere is Reshaping Human Reality (OUP 2014) (arguing that after the Copernican, Darwinian, and Freudian revolutions, humans are once again forced to rethink their role as they must now interact with virtual entities and agents in a wholly new medium, the infosphere).
2  The following considerations are the result of reflections collected via email exchange with my co-authors in the last few months. The quoted passages are taken from this exchange, which is on file with the author.


idea, make an assertion, post a photograph, or push to the world numerous other types of content. Billions of people possess multiple connected portable devices. People communicate their experiences on the go through multimedia social networking services (e.g. Facebook, Instagram), online file repositories (e.g. Flickr, Dropbox, Google Photos), and various kinds of video-sharing platforms (e.g. YouTube, Vimeo, Dailymotion). According to Danny Friedmann, ‘in China, en route to become the biggest economy in the world, at the end of 2017, it was estimated that more than half of the population (772 million people) had access to the internet. Goods are traded between businesses and consumers, between consumers, and between businesses, at enormous online market platforms, such as those of the Alibaba Group’.

The ‘information society’ with its 2.5 quintillion bytes of data created each day3 has slowly morphed into the ‘platform society’. But there is more to it. Perhaps unnoticed, our society has transformed into an ‘intermediated society’. Platforms and OSPs filter our networked life, shape our experiences, define our memories—and even remind us of a (selected) few of them. ‘They contribute to inform the space of opportunities in which individuals and societies can flourish and evolve,’ Mariarosaria Taddeo stresses, ‘and eventually impact how we understand reality and how we interact with each other and with the environment’. The decisions made by these platforms increasingly shape contemporary life. ‘As leading designers of online environments, OSPs make decisions that impact private and public lives, social welfare and individual wellbeing.’ For Christina Angelopoulos, ‘the prevalence of intermediation in modern electronic communications makes [intermediary liability] an increasingly significant aspect of an increasing wide array of different areas of information law’. Eleonora Rosati adds that ‘in this sense Recital 59 of the Information Society Directive, which stresses how, in a context of this kind, intermediaries may be indeed best placed to bring infringements to an end, helps understand the need for intermediaries’ involvement in the enforcement process’. Also Richard Arnold develops this point by noting that ‘online exploitation makes infringement of IPRs easy, but makes enforcement of IPRs against the sources of such infringements difficult (eg because of jurisdictional issues)’, therefore ‘IPR owners increasingly find that it is more practical and effective to target online intermediaries both with claims for direct and accessory liability and with claims for intermediary liability’. Frederick Mostert recognizes that ‘intermediary liability for online service providers has become relevant because of an attempt . . . to deal with the volume and velocity of illegal online activities by Bad Actors’. But, Mostert adds, this strategy is suboptimal: ‘culpa is rarely present at the platform end-point and intermediaries are virtually in all cases not the source of the problem’. However, ‘dealing with the source of the problem is exacerbated by the anonymity issue in the online world, making it very difficult to track and trace the identity of the Bad Actors themselves’. Hence, ‘the last resort efforts to try and stop the illegal activities at the end-point—the platform level’.

3  See Domo, ‘Data Never Sleeps 5.0’ . Note that all hyperlinks in the Handbook were last accessed on 10 June 2019.


Therefore, whether and when access providers and communications platforms like Google, Twitter, and Facebook are liable for their users’ online activities is a key factor that affects innovation and fundamental rights. Transaction costs deriving from their liability shape platforms’ decisions and policies, algorithms’ development, and, finally, the architecture of the infosphere. Kylie Pappalardo and Nicolas Suzor clarify that ‘intermediary liability matters because the rules of intermediary liability structure the internet. What internet users can and cannot see and do is largely dictated by the laws that apply to the online service providers that mediate their experience’. According to Miquel Peguera, ‘the way business models are conceived and developed closely hinges on the legal framework that governs their duties and that determines the conditions under which they may or may not be sheltered from liability’, so that ‘intermediary liability ends up shaping the availability of new digital services and affects citizens’ daily lives in multiple ways’. Jaani Riordan reminds us of the tension that lies in setting up liability against online intermediaries:

If intermediary liability rules are too strict, they risk stifling new and useful services and restricting market participation to the largest platforms—those best able to afford lawyers, compliance costs, and political lobbying. At the other extreme, rules which immunise intermediaries against any need to act could make it impossible or impracticable for claimants to enforce their rights, encouraging the spread of disinformation and other harmful material.

On one side, as Marco Ricolfi suggests, ‘the adoption of intermediary liability exacerbates the difficulties in preserving freedom of innovation [and], in terms of competition policy, it risks creating barriers to newcomers’. On the other side, Eduardo Bertoni stresses that ‘depending on the regulation that could be enacted, the impact on fundamental rights might be positive or negative’. For Peguera, the way in which we arrange liability has ‘an immediate effect on the exercise of fundamental rights on the Internet, particularly in terms of freedom to access and impart information and in terms of privacy and data protection’. According to Kristofer Erickson and Martin Kretschmer, ‘the legal regime of notice-and-takedown which has been in place for 20 years was conceived as a means of balancing legitimate interests of media rightholders, platform innovators and online users’. However, Christophe Geiger and Elena Izyumenko add that:

once intermediaries are asked to do more in their new role of active . . . enforcers, impartial enforcement can be at risk, raising the questions with regards to compliance, among others, with the fundamental rights of both Internet users and intermediaries themselves. This situation entails a serious risk of intermediary being ‘overzealous’ in order to avoid liability and to block access to content that is made available in a perfectly legal manner.

OSPs are subjected to increasing pressure by governments and interest groups which are seeking to control online content by making use of their technical capacities. In this


regard, Niva Elkin-Koren and Maayan Perel highlight that ‘they offer a natural point of control for monitoring, filtering, blocking, and disabling access to content, which makes them ideal partners for performing civil and criminal enforcement’. As Emily Laidlaw develops:

Intermediaries occupy a critical regulatory role, because they have the capacity to control the flow of information online in a way that others, including states, cannot. Thus, significant energy is devoted by lawmakers to strategizing how to create laws that capitalize on intermediaries’ capacity to regulate to solve many problems posed by internet use.

However, ‘intermediary obligations should not be confused with those of states’. This is hardly the case nowadays. For example, as Nicolo Zingales notes, ‘intermediary liability of OSPs is particularly relevant in the African continent because of the increasing pressure on intermediaries to fulfil broad and open-ended public policy mandates, and potentially affecting both technological progress and the creation of local content’. Multiple jurisdictions follow the same path. As Bertrand de La Chapelle and Paul Fehlinger argue, the intermediary liability debates are exacerbated by a looming risk of ‘a legal arms race, in which states resort to an extensive interpretation of territoriality criteria over cross-border data flows and services’.

In an ‘invisible handshake’ between public and private parties, faced with semiotic regulation on an unprecedented scale, enforcement looks once again for an ‘answer to the machine in the machine’.4 Sophisticated algorithms and company policies enable and constrain our actions. These algorithms take decisions reflecting policy’s assumptions and interests that have very significant consequences for society at large, yet there is limited understanding of these processes. This is critical, Ben Wagner notes, as ‘understanding the mechanism of implementation is key to understanding the nature of speech that is enabled by it’. Sergei Hovyadinov notes how ‘the framework of intermediary liability established in the EU and the US 15–20 years ago is not sufficient anymore to fight illegal content, especially terrorist, which because of the volume and the speed of its dissemination calls for a more proactive approach, with elements of automation and proactive monitoring’. ‘Intermediary liability’, Hovyadinov continues, ‘is thus shifting from a reactive “notice-and-take down” regime to “intermediary accountability” [as Martin Husovec termed it] based on new expectations from governments and civil society as to the role online platforms should proactively perform’. Christophe Geiger and Elena Izyumenko highlight the ‘danger that intermediaries have recourse to automated systems, leading to a situation where machines and algorithms would become the decisionmakers of what is available or not on the Internet’. Of course, ‘this privatisation of justice is highly problematic in a democratic society, in particular with regard to the constitutional legal framework’. Therefore, Niva Elkin-Koren and Maayan Perel conclude that

4  Charles Clark, ‘The Answer to the Machine is in the Machine’ in Bernt Hugenholtz (ed.), The Future of Copyright in a Digital Environment (Kluwer Law Int’l 1999) 139 (discussing the application of digital right management systems to enforce copyright infringement online).


‘with the rise of algorithmic enforcement, data driven economy, centralized architecture and market concentration, the new regulatory powers of online platforms are challenging the rule of law’.

To continue this dialogue on the role of online intermediaries in modern society, this Handbook will present multiple scholarly perspectives on the major themes in intermediary liability and platform governance. The Handbook discusses fundamental legal issues in online intermediary liability, while also describing advances in intermediary liability theory and identifying recent policy trends. Part I features an introductory chapter that will serve as a blueprint for the consistent development of other chapters as it sets out in advance the most relevant trends according to which the structure of the book has been generated. Part II provides a taxonomy of internet platforms, a general discussion of a possible basis for liability, and a review of remedies. In addition, Part II introduces the discussion of the fundamental rights implications of intermediary liability and considers the ethical ramifications of the role of online intermediaries. Part III presents a jurisdictional overview of intermediary liability safe harbours and highlights systemic fragmentation. In this respect, Part III will also focus on enhanced responsibilities that multiple jurisdictions increasingly impose on online intermediaries. Part IV provides an overview of domain-specific solutions, including intermediate liability for copyright, trade mark, unfair competition, and privacy infringement, together with internet platforms’ speech-related obligations and liabilities. Part V reviews intermediary liability enforcement strategies by focusing on emerging trends, including proactive monitoring obligations, blocking orders, and the emergence of administrative enforcement of online infringement. Part VI discusses an additional core emerging trend in intermediary liability enforcement: voluntary measures and private enforcement of allegedly illegal content online are shifting the discourse from intermediary liability to intermediary responsibility or accountability. International private law issues are addressed in Part VII with special emphasis on extraterritorial enforcement of intermediaries’ obligations.

Finally, I would like to express my gratitude to my co-authors for completing this truly ‘monumental’ study that contributes fundamentally to research in the field. Special thanks also go to the team at Oxford University Press for giving us the opportunity to publish this volume, helping to define the trajectory that it has finally taken, and for all the editorial support. I would also like to thank my wife, Hong, and my parents, Nuccia and Clemente, for their continuous support. Nothing would be possible without the happiness that my family gives to me.

June 2019


Notes on Contributors

Christina Angelopoulos is a Lecturer in Intellectual Property Law at the University of Cambridge and a member of the Centre for Intellectual Property and Information Law (CIPIL). Email: [email protected].

Richard Arnold is a Judge of the Court of Appeal of England and Wales.

Eduardo Bertoni is the Director of the Access to Public Information Agency in Argentina, Director of the Post-graduated Program on Data Protection at Buenos Aires University School of Law, Argentina, and Global Clinical Professor at New York University School of Law, New York. Email: [email protected] and [email protected].

Annemarie Bridy is the Allan G. Shepard Professor of Law at the University of Idaho College of Law, an Affiliate Scholar at the Stanford Law School Center for Internet and Society (CIS), and an Affiliated Fellow at the Yale Law School Information Society Project (ISP). Email: [email protected].

Bertrand de La Chapelle is the Executive Director and Co-founder of the global multistakeholder organization Internet & Jurisdiction Policy Network. Email: [email protected].

Alessandro Cogo is Associate Professor at the University of Turin Law School and Director of the Master of Laws in Intellectual Property jointly organized by the World Intellectual Property Organization and the Turin University. Email: alessandroenrico. [email protected].

Graeme Dinwoodie is the Global Professor of Intellectual Property Law at Chicago-Kent College of Law. Email: [email protected].

Niva Elkin-Koren is a Professor of Law at the University of Haifa, Faculty of Law and a Faculty Associate at the Berkman Klein Center at Harvard University. She is the Founding Director of the Haifa Center for Law & Technology (HCLT), and a Co-director of the Center for Cyber, Law and Policy. Email: [email protected].

Kristofer Erickson is Associate Professor in Media and Communication at the University of Leeds. Email: [email protected].

Paul Fehlinger is the Deputy Executive Director and Co-founder of the multistakeholder organization Internet & Jurisdiction Policy Network. Email: [email protected].


Danny Friedmann is Visiting Assistant Professor of Law, Peking University School of Transnational Law in Shenzhen. Email: [email protected].

Giancarlo Frosio is an Associate Professor at the Center for International Intellectual Property Studies at Strasbourg University, a Fellow at Stanford Law School Center for Internet and Society, and Faculty Associate of the NEXA Center in Turin. Email: [email protected].

Christophe Geiger is Professor of Law and Director of the Research Department of the Centre for International Intellectual Property Studies (CEIPI) at Strasbourg University. Email: [email protected].

Michael Geist is a Professor of Law at the University of Ottawa where he holds the Canada Research Chair in Internet and E-commerce Law and is a member of the Centre for Law, Technology and Society. Email: [email protected].

Eric Goldman is a Professor of Law at Santa Clara University School of Law, where he is also Director of the school’s High Tech Law Institute. Email: [email protected].

Reto M. Hilty is Managing Director at the Max Planck Institute for Innovation and Competition in Munich and Full Professor (ad personam) at the University of Zurich. Email: [email protected].

Sergei Hovyadinov is a JSD candidate at Stanford Law School. Email: [email protected].

Martin Husovec is Assistant Professor at the University of Tilburg (Tilburg Institute for Law, Technology and Society & Tilburg Law and Economics Center) and Affiliate Scholar at Stanford Law School Center for Internet and Society. Email: [email protected].

Elena Izyumenko is a Researcher and a PhD Candidate at the Center for International Intellectual Property Studies (CEIPI), University of Strasbourg. Email: elena. [email protected].

Martin Kretschmer is Professor of Intellectual Property Law at the School of Law, University of Glasgow and Director of CREATe, the UK Copyright and Creative Economy Centre. Email: [email protected].

Aleksandra Kuczerawy is a Postdoctoral Researcher at the Katholieke Universiteit (KU) Leuven’s Centre for IT & IP Law. Email: [email protected].

Emily Laidlaw is Associate Professor, Faculty of Law, University of Calgary. Email: [email protected].

Juan Carlos Lara Gálvez is the Research and Public Policy Director at Derechos Digitales—América Latina, based in Santiago de Chile. Email: [email protected].


Jack Lerner is a Clinical Professor of Law at the University of California, Irvine School of Law and Director of the UCI Intellectual Property, Arts, and Technology Clinic. Email: [email protected].

Tarlach McGonagle is a Senior Lecturer/Researcher at IViR, University of Amsterdam and Professor of Media Law & Information Society at Leiden Law School. Email: [email protected].

Luiz Fernando Marrey Moncau is a Non-Residential Fellow at the Stanford Center for Internet and Society and a PhD from Pontifícia Universidade Católica of Rio de Janeiro. Email: [email protected].

Sunimal Mendis is a Postdoctoral Researcher at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Email: [email protected]

Maria Lillà Montagnani is an Associate Professor of Commercial Law at Bocconi University in Milan. Email: [email protected].

Valentina Moscon is Senior Research Fellow in Intellectual Property and Competition Law at the Max Planck Institute for Innovation and Competition. Email: valentina. [email protected].

Frederick Mostert is Professor of Intellectual Property at the Dickson Poon School of Law, King’s College and Research Fellow at the Oxford Intellectual Property Research Centre. Email: [email protected].

Kylie Pappalardo is a Lecturer in the Law School at the Queensland University of Technology (QUT) in Brisbane. Email: [email protected].

Kyung-Sin Park is a Professor at Korea University Law School. Email: [email protected].

Miquel Peguera is an Associate Professor of Law at the Universitat Oberta de Catalunya (UOC) in Barcelona and Affiliate Scholar, Stanford Center for Internet and Society. Email: [email protected].

Maayan Perel is an Assistant Professor in Intellectual Property Law at the Netanya Academic College in Israel and a Senior Research Fellow at the Cyber Center for Law & Policy, University of Haifa. Email: [email protected].

Marco Ricolfi is Professor of Intellectual Property at the Turin Law School, Partner at the law firm Tosetto, Weigmann e Associati, and Co-director of the Nexa Center on Internet and Society of the Turin Polytechnic. Email: [email protected].

Jaani Riordan is a barrister at 8 New Square, London. Email: [email protected].

Eleonora Rosati is an Associate Professor in Intellectual Property Law at Stockholm University and an Of Counsel at Bird & Bird. Email: [email protected].


Alan M. Sears is a Researcher and Lecturer at Leiden University’s eLaw Centre for Law and Digital Technologies. Email: [email protected].

Martin Senftleben is a Professor of Intellectual Property Law, Institute for Information Law, University of Amsterdam and a Visiting Professor, Intellectual Property Research Institute, University of Xiamen. Email: [email protected].

Nicolas Suzor is a Professor in the Law School at Queensland University of Technology in Brisbane. Email: [email protected].

Dan Jerker B. Svantesson is a Professor at the Faculty of Law at Bond University, a Visiting Professor at the Faculty of Law, Masaryk University, and a Researcher at the Swedish Law & Informatics Research Institute, Stockholm University. Email: [email protected].

Mariarosaria Taddeo is a Research Fellow at the Oxford Internet Institute and Deputy Director of the Digital Ethics Lab. Email: [email protected].

Ben Wagner is an Assistant Professor and Director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business and a Senior Researcher of the Centre of Internet & Human Rights (CIHR). Email: [email protected].

Diego Werneck Arguelhes is an Associate Professor of Law at Insper Institute for Education and Research, São Paulo, Brazil. E-mail: [email protected].

Nicolo Zingales is an Associate Professor of Law at the University of Leeds, an Affiliate Scholar at Stanford Center for Internet and Society, and a Research Associate at the Tilburg Institute for Law, Technology and Society and the Tilburg Law and Economics Centre. Email: [email protected].


PART I

INTRODUCTION


Chapter 1

Mapping Online Intermediary Liability

Giancarlo Frosio

Intermediary liability has emerged as a defining governance issue of our time. However, modern legal theory and policy struggle to define an adequate framework for the liability and responsibility of online service providers (OSPs). In addition, market conditions, against which the initial regulation was developed, have changed considerably since the first appearance of online intermediaries almost two decades ago. These changes started to be reflected in new policy approaches. Tinkering with this matter which is in constant flux brought The Oxford Handbook of Online Intermediary Liability into being. The Handbook will crystallize the present theoretical understanding of the intermediary liability conundrum, map emerging regulatory trends, and qualify political and economic factors that might explain them. In doing so, the Handbook will provide a comprehensive, authoritative, and ‘state-of-the-art’ discussion of intermediary liability by bringing together multiple scholarly perspectives and promoting a global discourse through cross-jurisdictional parallels. The Handbook thus serves as a privileged venue for observing emerging trends in internet jurisdiction and innovation regulation, with special emphasis on enforcement strategies dealing with intermediate liability for copyright, trade mark, and privacy infringement, and the role of online platforms in moderating the speech they carry for users, including obligations and liabilities for defamation, hate, and dangerous speech.

With globalized OSPs operating across the world in an interdependent digital environment, inconsistencies across different regimes generate legal uncertainties that undermine both users’ rights and business opportunities. To better understand the heterogeneity of the international online intermediary liability regime and contribute to this important policy debate, the Handbook enlisted leading authorities with the goal of mapping the field of online intermediary liability studies. This effort builds on the work of a predecessor, the World Intermediary Liability Map (WILMap), a repository for information

© Giancarlo Frosio 2020.


4   Giancarlo Frosio on international intermediary liability regimes hosted at the Stanford Center for Internet and Society (CIS), that I developed and launched with contributions from many of the co-authors of this Handbook.1 The Handbook’s attempt to study intermediary liability and come to terms with a fragmented legal framework builds on a vast array of other efforts. Besides the WILMap mentioned earlier, mapping and comparative analysis exercises have been undertaken by the Network of Centers,2 the World Intellectual Property Organization (WIPO),3 and other academic initiatives.4 Institutional efforts at the international level are on the rise. The Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial) worked towards the establishment of global provisions on intermediary liability within a charter of internet governance principles.5 The final text of the NETmundial Statement included the principle that ‘[i]ntermediary liability limitations should be implemented in a way that respects and promotes economic growth, in­nov­ation, creativity, and free flow of information’.6 The Organisation for Economic Co-operation and Development (OECD) issued recommendations on Principles for Internet Policy Making stating that, in developing or revising their policies for the internet economy, the state members should consider the limitation of intermediary liability as a high level principle.7 Also, the 2011 Joint Declaration of the three Special Rapporteurs for Freedom of Expression contains statements suggesting an ongoing search for a global regime for intermediary liability.8 The Representative on Freedom of the Media of the 1 World Intermediary Liability Map (WILMap) (a project designed and developed by Giancarlo Frosio and hosted at Stanford CIS) (WILMap). 2  See Berkman Klein Center for Internet and Society, ‘Liability of Online Intermediaries: New Study by the Global Network of Internet and Society Centers’ (18 February 2015) . 3  See Daniel Sang, ‘Comparative Analysis of National Approaches of the Liability of the Internet Intermediaries’ (WIPO Study); Ignacio Garrote Fernández-Díez, ‘Comparative Analysis on National Approaches to the Liability of Internet Intermediaries for Infringement of Copyright and Related Rights’ (WIPO study). 4  See e.g. for other mapping and comparative exercises, Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017); Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But not Liable? (CUP 2017); Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Wolters Kluwer 2016); Christopher Heath and Anselm Kamperman Sanders (eds), Intellectual Property Liability of Consumers, Facilitators, and Intermediaries (Wolters Kluwer 2012). 5  See NETmundial Multistakeholder Statement (São Paulo, Brazil, 24 April 2014) . 6  ibid. 5. 7  See OECD, ‘Recommendation of the Council on Principles for Internet Policy Making’ C(2011)154 . See also OECD, ‘The Economic and Social Role of Internet Intermediaries’ (April 2010) . 
8  See Organization for Security and Co-operation in Europe (OSCE), ‘International Mechanism for Promoting Freedom of Expression: Joint Declaration on Freedom of Expression and the Internet by the United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information’ (2011) .


The Representative on Freedom of the Media of the OSCE issued a Communiqué on Open Journalism recognizing that ‘intermediaries have become one of the main platforms facilitating access to media content as well as enhancing the interactive and participatory nature of Open Journalism’.9 Efforts to produce guidelines and general principles for intermediaries also emerged in civil society. In particular, the Manila Principles on Intermediary Liability set out safeguards for content restriction on the internet with the goal of protecting users’ rights, including ‘freedom of expression, freedom of association and the right to privacy’.10 Other projects developed best practices that can be implemented by intermediaries in their conditions of service with special emphasis on protecting fundamental rights.11 For example, under the aegis of the Internet Governance Forum, the Dynamic Coalition for Platform Responsibility aims to delineate a set of model contractual provisions.12 The provisions should be compliant with the UN ‘Protect, Respect and Remedy’ Framework as endorsed by the UN Human Rights Council together with the UN Guiding Principles on Business and Human Rights.13 Ranking Digital Rights is an additional initiative that promotes best practice and transparency among online intermediaries.14 The project ranks internet and telecommunications companies according to their moral behaviour in respecting users’ rights, including privacy and freedom of speech.

Several initiatives have looked into notice-and-takedown procedures in order to highlight possible chilling effects and propose solutions. Lumen—formerly Chilling Effects—archives takedown notices to promote transparency and facilitate research into the takedown ecology.15 The Takedown Project is a collaborative effort housed at the University of California, Berkeley School of Law and the American Assembly to study notice-and-takedown procedures.16 Again, the Internet and Jurisdiction project has been developing a due process framework to deal more efficiently with transnational notice-and-takedown requests, seizures, mutual legal assistance treaties (MLATs), and law enforcement cooperation requests.17

9  ibid.
10  See Manila Principles on Intermediary Liability, Intro .
11  See e.g. Jamila Venturini and others, Terms of Service and Human Rights: Analysing Contracts of Online Platforms (Editora Revan 2016).
12  See Dynamic Coalition on Platform Responsibility: a Structural Element of the United Nations Internet Governance Forum .
13  See United Nations, Human Rights, Office of the High Commissioner, ‘Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect, and Remedy” Framework’ (2011) A/HRC/RES/17/4.
14  See Ranking Digital Rights .
15  See Lumen .
16  See The Takedown Project .
17  See Bertrand de La Chapelle and Paul Fehlinger, ‘Towards a Multi-Stakeholder Framework for Transnational Due Process’, Internet & Jurisdiction White Paper (2014) .


1.  Mapping Fundamental Notions

Following this chapter, Part II of the Handbook maps out fundamental notions and issues in online intermediary liability and platforms’ regulation that will serve as a basis for further analysis in later Parts. It provides a taxonomy of internet platforms, an analysis of evidence-based research in the field, and a general discussion of a possible basis for liability and remedies. In addition, it puts intermediary liability regulation into context with fundamental rights—a theme that will resurface many times throughout the Handbook—and considers the ethical implications of the online intermediaries’ role.

Graeme Dinwoodie sets the stage by defining a taxonomy of online intermediaries in Chapter 2. Dinwoodie asks the question Who are Internet Intermediaries? or, as multiple alternatives go, ‘online service providers’ or ‘internet service providers’. The first findings, disappointing as they may be, are that there is little consensus, a condition that is common to all things related to online intermediary liability. Fragmentation and multiple approaches abound. Reconstruction of the notion comes from scattered references in statutes and case law, obviously a suboptimal approach for promoting consistency and legal certainty. However, Dinwoodie does a stellar job of finding patterns within the confusion and bringing together the systematization of other contributors to the Handbook, such as Jaani Riordan and Martin Husovec.18 Within the broad view of online intermediaries endorsed by Dinwoodie, we are left with the option of pursuing an essentialist approach that responds to technology or going beyond it in search of a more dynamic approach unbounded by ephemeral technological references. The present debate on the overhaul of the online intermediary liability regime exposes the limitations of the former approach, as it is striking how much the online market and its technological solutions have changed in just twenty years, since the first intermediary liability regulations were enacted. A well-known definition from the OECD bears out this point: ‘Internet intermediaries bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.’19 Traditional distinctions between access, hosting, and caching providers and search engines have increasingly become obsolete. More granularity is needed and account should be taken of actors performing multiple functions and services.

Jaani Riordan follows up on Dinwoodie’s taxonomy by describing the typology of liability that may attach to online intermediaries’ conduct. Liability rules give rise to two main classes of obligation: monetary and non-monetary. Monetary liability may be further divided along a spectrum comprising four main standards: strict liability; negligence or fault-based liability; knowledge-based liability; and partial or total immunity.

18  See Husovec (n. 4) and Riordan (n. 4).
19  See OECD, The Economic and Social Role of Internet Intermediaries (2010) 9.


Non-monetary liability may be further divided into prohibitory and mandatory obligations. As Riordan explains, these rules can then be situated within concepts of primary and secondary liability. The review of the types of liability and intermediary obligations also leads to identifying the main justifications given for imposing liability on intermediaries. On one side, liability rules may attribute blame to secondary actors who have assumed responsibility for primary wrongdoers, so that intermediary liability rules reflect the secondary actor’s own culpability. Alternatively, secondary liability rules can be justified as reducing enforcement costs by targeting secondary actors who are likely to be least-cost avoiders. Finally, Chapter 3 provides an overview of the types of wrongdoing which may be relevant to intermediaries’ liability, including copyright infringement, trade mark infringement, defamation, hate speech, breach of regulatory obligations, and disclosure obligations. Further chapters will pick up several of these wrongdoings for more detailed discussion.

In Chapter 4, Martin Husovec completes this preliminary mapping exercise with an overview of remedies for intermediary liability. Husovec describes damages and injunctions, their scope and goals, while also analysing the costs of those remedies. In Husovec’s view, there are three legal pillars of liability that lead to remedies: primary liability, secondary liability, and injunctions against innocent third parties or intermediaries, which are a more recent development. Applying any of these pillars to intermediaries has consequences by changing the scope of damages, their aggregation, the scope and goal of injunctions, and their associated costs. This, according to Husovec, launches forms of ‘remedial competition’, where plaintiffs may have an incentive to bring lawsuits against parties that never acted wrongfully themselves, rather than against known tortfeasors that might better redress the wrongdoing. In this respect, distinguishing the cost structure between intermediaries as infringers and innocent third parties—therefore exempting innocent third parties from the full or partial burden of compliance and procedural costs—would limit incentives to act against innocent third parties as well as negative externalities for technological innovation. As Husovec discusses, some jurisdictions address this issue by putting in place the necessary balancing, while others do not come up with similar distinctions. More generally, mapping remedies for intermediary liability leads to the conclusion that much still needs to be done to bring about consistency between different jurisdictions. In search of consistency, legal systems should look for a common vocabulary and a common set of consequences associated with regulatory modalities.

Chapter 5 wraps up the legal framework that has been in place for almost two decades by looking at it through the lens of empirical evidence. Kristofer Erickson and Martin Kretschmer provide a thoughtful study on the implications of empirical evidence for intermediary liability policy, which builds on previous empirical projects such as the Copyright Evidence Wiki, an open-access repository of findings relating to the effects of copyright.20

20  See ‘The Copyright Evidence Wiki: Empirical Evidence for Copyright Policy’, CREATe Centre, University of Glasgow .


As ‘we appear to be in the midst of a paradigm shift in intermediary liability, moving from an obligation to act once knowledge is obtained to an obligation to prevent harmful content appearing’—Erickson and Kretschmer note—changes in the legal framework should be based on hard empirical evidence that attaches those changes to positive externalities for society, while addressing negative externalities that emerged under the previous regimes. Empirical data—which Erickson and Kretschmer identify and discuss—such as the volume of takedown requests, the accuracy of notices, the potential for over-enforcement or abuse, transparency of the takedown process, and the costs of enforcement borne by different parties, should be assessed in advance of policymaking. Legislative and regulatory changes should then follow from an empirically-based policymaking process. All in all, according to Erickson and Kretschmer, evidence suggests that the notice-and-takedown regime works and its shortcomings should be ‘addressed through tweaking, rather than overhauling, the safe harbour regime’. In addition, data gathering and transparency of algorithmic decision-making become a critical call for the platform society, which increasingly becomes a ‘black box society’ where users’ lives are daily affected by unaccountable privately-run algorithms.21 Algorithmic accountability will be further discussed by Ben Wagner in Chapter 35.

After mapping fundamental categorizations in the field, Part II also highlights tensions within the system, especially concerning the ethical implications of intermediary liability regulation and frictions with fundamental rights. In Chapter 6, Mariarosaria Taddeo investigates the ethical implications of intermediary liability by describing the moral responsibilities of OSPs with respect to managing access to information and human rights. As designers of online environments, OSPs play a civic role in mature information societies. This role brings about a responsibility for designing OSPs’ services according to what is acceptable and socially preferable from a global perspective that can reconcile different ethical views and stakeholders’ interests. In applying Floridi’s soft ethics to consider what responsibilities the civic role of OSPs entails, Taddeo concludes that OSPs need to develop ethical foresight analyses to consider the impact of their practices and technologies step-by-step and, if necessary, identify alternatives and risk-mitigating strategies.

In Chapter 7, Christophe Geiger, Elena Izyumenko, and I consider the tension between intermediary liability and fundamental rights with special emphasis on the European legal framework. Competing fundamental rights, such as freedom of expression, privacy, freedom of business, and the right to property are entangled in the intermediary liability conundrum. Policymakers are still in search of a balanced and proportional fine-tuning of online intermediaries’ regulation that can address the miscellaneous interests of all stakeholders involved, with special emphasis on users’ rights. In this context, the increasing reliance on automated enforcement technologies, which will be the topic of further review in several chapters of the Handbook, might set in motion dystopian scenarios where users’ fundamental rights are heavily undermined.

21 See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015).


2.  Mapping International Fragmentation: From Safe Harbours to Liability

Part III sets out to establish the fragmented field of online intermediary liability by mapping international actions which have oscillated from safe harbours to liability. This Part presents a jurisdictional overview discussing intermediary liability safe harbour arrangements and highlighting systemic fragmentation and miscellaneous inconsistent approaches. Each chapter in this Part focuses on regional trends, then selects an important topic to be discussed in detail. In this respect, Part III exposes an increasing number of cracks that appear in safe harbour arrangements—where they are in place—and the enhanced responsibilities that multiple jurisdictions cast on online intermediaries and platforms.

Since the enactment of the first safe harbours and liability exemptions for online intermediaries, market conditions have radically changed. Originally, intermediary liability exemptions were introduced to promote an emerging internet market. Do safe harbours for online intermediaries still serve innovation? Should they be limited or expanded? These critical questions—often tainted by protectionist concerns—define the present intermediary liability conundrum.

In the mid-1990s, after an initial brief hesitation,22 legislators decided that online intermediaries, both access and hosting providers, should enjoy exemptions from liability for wrongful activities committed by users through their services. The United States first introduced these safe harbours. In 1996, the Communications Decency Act exempted intermediaries from liability for the speech they carry.23 In 1998, the Digital Millennium Copyright Act introduced specific intermediary liability safe harbours for copyright infringement under more stringent requirements.24 Shortly after, the e-Commerce Directive imposed on EU Member States the obligation to enact similar legal arrangements to protect a range of online intermediaries from liability.25 Other jurisdictions have more recently followed suit.26

22  See Bruce Lehman, Intellectual Property and the National Information Infrastructure: The Report of the Working Group on Intellectual Property Rights (DIANE Publishing 1995) 114–24 (noting ‘the best policy is to hold the service provider liable . . . Service providers reap rewards for infringing activity. It is difficult to argue that they should not bear the responsibilities’).
23  See Communications Decency Act of 1996, 47 USC § 230.
24  See the Digital Millennium Copyright Act of 1998, 17 USC § 512 (DMCA).
25  See Directive 2000/31/EC of the European Parliament and of the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.
26  See e.g. Copyright Legislation Amendment Act 2004 (Cth) no. 154, Sch. 1 (Aus.); Copyright Modernization Act, SC 2012, c.20, s. 31.1 (Can.); Judicial Interpretation no. 20 of 17 December 2012 of the Supreme People’s Court on Several Issues concerning the Application of Law in Hearing Civil Dispute Cases Involving Infringement of the Right of Dissemination on Information Networks (Ch.); Federal Law no. 149-FZ of 27 July 2006 on Information, Information Technologies and Protection of Information (Rus.) and Federal Law no. 187-FZ of 2 July 2013 amending Russian Civil Code, s. 1253.1. A repository including most of the safe harbour legislation enacted worldwide can be found at the WILMap (n. 1).


In most cases, safe harbour legislation provides mere conduit, caching, and hosting exemptions for intermediaries, together with the exclusion of a general obligation on online providers to monitor the information which they transmit or store or actively seek facts or circumstances indicating illegal activity.27

According to Eric Goldman, who provides an overview of the state of intermediaries’ immunities in the United States in Chapter 8, even the traditionally strong enforcement of online intermediaries’ safe harbours in section 230 of the Communications Decency Act (CDA) shows signs of decay, with critics claiming that the functional life of section 230 is nearing its end and more regulation is necessary.28 Similarly, safe harbours for copyright infringement provided for by the US DMCA have also been questioned.29 However, on the other hand, trade agreements like the United States–Mexico–Canada Agreement (USMCA) or NAFTA 2.0 have exported the section 230 arrangement to other countries—Canada and Mexico—both making it a North American standard and limiting Congress’s power to significantly undermine section 230.

In Chapter 9, Juan Carlos Lara Gálvez and Alan Sears follow up by discussing the impact of free trade agreements (FTAs) on internet intermediary liability in Latin America. They note that even where FTAs have been adopted between the United States and Latin American countries, the implementation of related intermediary liability provisions lags behind. So far, only Chile and Costa Rica have implemented these provisions in their national laws. Notably, after the withdrawal of the United States from the Trans-Pacific Partnership (TPP), the remaining parties reached an agreement on an amended version of the TPP, which actually suspended the provisions pertaining to online intermediaries’ safe harbours. In sum, Lara and Sears question whether adopting the DMCA model is ideal for Latin American countries. In any event, they note, there is no consensus in Latin America on whether this would be the best model for balancing the rights of copyright holders and the general public, which explains the resistance to implementing this regime nationally.

Similar resistance to providing immunities for intellectual property infringement is also shown in the Brazilian Marco Civil da Internet (MCI)—or Internet Bill of Rights—introducing a civil liability exemption for internet access providers and other internet providers.30 This broad civil—and not criminal—liability exemption, however, does not apply to copyright infringement.31 Luiz Moncau and Diego Arguelhes describe the process leading to the enactment of the MCI and its main achievements in Chapter 10.

27  See e.g. Directive 2000/31/EC (n. 25) Arts 12–15; DMCA (n. 24) s. 512(c)(1)(A)–(C).
28  See e.g. David Ardia, ‘Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act’ (2010) 43 Loyola L. Rev. 373.
29  The United States Copyright Office is undertaking a public study to evaluate the impact and effectiveness of the safe harbour provisions. In particular, notice-and-stay-down arrangements—rather than takedown—are under review in the United States as well as elsewhere.
See United States Copyright Office, Section 512 Study .
30  See Marco Civil da Internet, Federal Law no. 12.965 (23 April 2014) Art. 18 (Bra.) (‘the Internet connection [access] provider shall not be subject to civil liability for content generated by third party’).
31  ibid. Art. 19(2).


Although the MCI is simply an ordinary federal statute, it arguably stands out as a manifestation of digital constitutionalism, as Moncau and Arguelhes argue. Following in the footsteps of an emerging global move,32 the MCI updates and translates the traditional concerns of constitutionalism—protecting rights and limiting power—into a realm where the enjoyment of powers is very much threatened by private companies that hold the resources and tools to shape our online experience.

African countries have been discussing the introduction of a safe harbour regime for online intermediaries for quite some time. In Chapter 11, Nicolo Zingales reviews attempts in several African jurisdictions to adjust the legal framework to the challenges posed by the platform economy. In Intermediary Liability in Africa: Looking Back, Moving Forward?, Zingales highlights how the African Union (AU) has attempted to drive harmonization on the basis of the South African Electronic Communications Act—the first and most sophisticated intermediary liability legislation in the area.33 Ghana, Zambia, and Uganda have enacted legislation heavily inspired by the South African model. Meanwhile, AU-led interstate cooperation has raised awareness of human rights protection online, cybersecurity, and personal data protection, in particular with a dedicated AU Convention.34 However, very limited intermediary liability legislation has been adopted in the region. In this fragmented legal framework, cyber-policing obligations become entangled with immunities and the self- and co-regulation schemes that characterize the South African Electronic Transactions Act, posing challenges to due process and other fundamental rights, such as freedom of expression. According to Zingales, only the AU could promote a shared notion of intermediary liability exemptions in the region.

An inconsistent approach that brings about legal uncertainty also characterizes the Australian law governing the liability of online intermediaries, according to Kylie Pappalardo and Nicolas Suzor, who provide a comprehensive review of the current state of Australian online intermediary liability law across different doctrines, such as the laws of defamation, racial vilification, misleading and deceptive conduct, contempt of court, and copyright. In Chapter 12, Pappalardo and Suzor show that the basis on which third parties are liable for the actions of individuals online is confusing and, viewed as a whole, largely incoherent. Australian law lacks a clear articulation of the circumstances in which intermediaries will not be held liable, which results in a great deal of uncertainty. These conflicts within the Australian doctrines that have been applied to online intermediary liability have led to a push for greater online enforcement and intermediary regulation based on the capacity to do something to prevent harm rather than on responsibility. Pappalardo and Suzor posit that confusion between capacity and responsibility has a role in much of the uncertainty in Australian intermediary liability regulation.

32  See Dennis Redeker, Lex Gill, and Urs Gasser, ‘Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights’ (2018) 80 Int’l Communication Gazette 311.
33  See Electronic Transactions and Communications Act (ECTA) (2001) XI.
34  See African Union Convention on Cyber Security and Personal Data Protection (2014).


One solution would be for Australian courts to apply responsibility theory in tort law more strictly and ascribe liability only after examining the intermediaries’ causal role in committing the wrong, therefore establishing fault first and liability later.

In Chapter 13, Kyung-Sin Park compares the intermediary liability rules of six major Asian countries, including China, India, Japan, Korea, Indonesia, and Malaysia, to demonstrate that under the label of safe harbours lies, in fact, a liability trap.35 China and South Korea adopted a rule that an intermediary is required to remove known unlawful content on penalty of liability, and thereby set out to specify when intermediaries will be held liable, instead of when they will not be held liable, inadvertently creating a liability-imposing rule instead of a liability-exempting rule. India’s regulation, namely the 2011 Intermediary Guidelines, generated a raft of obligations on intermediaries that threatened to convert the whole system into one imposing, rather than exempting from, liability. However, such a threat may have had an impact on the jurisprudence that, by way of the 2013 Shreya Singhal decision,36 made the Indian system an extremely ‘safe’ harbour by requiring judicial review for taking down infringing content. Further, Indonesia’s draft safe harbour regulation, announced in December 2016, seems to move towards the model of China and South Korea, while Malaysia’s copyright notice and takedown seems to follow the US model closely but has a structure that allows the same misunderstanding made by the Korean regulators. All in all, Park shows how all over Asia online intermediary liability is on the rise, while safe harbours’ scope narrows proportionally.

In Chapter 14, Danny Friedmann expands on China and explains how the interplay of multiple laws, regulations, and judicial interpretations has produced a system where weak safe harbours for online intermediaries oscillate heavily towards enhanced liability, given a very broad notion of ‘knowledge’ and the fact that an OSP without the ability to control copyright or trade mark infringement can still be found liable. This liability-imposing rule ends up putting pressure on intermediaries to take down unlawful—and lawful—content. This seems to imply that filtering standards for OSPs in China will be continuously on the rise, as will their obligations to sanitize their networks against allegedly infringing content. This regulatory approach will be increasingly coupled with predictive artificial intelligence (AI) analytics and deep learning that will allow massive data processors, like Alibaba and Baidu, to become an omniscient tool against alleged infringement, both intellectual property (IP)- and speech-related. However, in a move that is surprisingly similar to that of other jurisdictions, especially the European Union, the Chinese regulatory framework seems to push forward self-regulation and pressure for OSPs to take on more responsibility, rather than a legislatively mandated duty of care.

In Europe, Asia, South America, Africa, and Australia, the recent international policy debate has focused on the recalibration of safe harbours towards more liability for online intermediaries.

35  The Hong Kong government introduced a Copyright Bill establishing a statutory safe harbour for OSPs for copyright infringement, provided that they meet certain prescribed conditions, including taking reasonable steps to limit or stop copyright infringement after being notified. See Copyright Amendment Bill 2014, C2957, cl. 50 (HK) .
36  See Shreya Singhal [2013] 12 SCC 73 (Ind.).


As part of its Digital Single Market (DSM) Strategy, the European Commission has narrowed the e-Commerce Directive’s horizontal liability limitations for internet intermediaries and put in place a ‘fit for purpose’—or vertical—regulatory environment for platforms and online intermediaries.37 In Chapter 15, Maria Lillà Montagnani discusses this development by looking into the emergence of A New Liability Regime for Illegal Content in the Digital Single Market Strategy. The DSM Strategy deploys enhanced obligations that websites and other internet intermediaries should have for dealing with unlawful third party content.38 Legislative developments, including the Copyright in the DSM Directive,39 the amendments to the Audiovisual Media Service Directive,40 and the Guidance on Unfair Commercial Practices,41 have vertically ‘enhanced responsibility’42 among online platforms in Europe. These developments aim both to achieve a fairer allocation of the value generated by the distribution of copyright-protected content by online platforms—to close the so-called value gap43—and to lower the transaction costs of online enforcement by shifting the burden of sanitization of allegedly illegal speech onto online intermediaries rather than law enforcement agencies. In this context, Montagnani highlights obvious inconsistencies between the new vertical regimes and the horizontal safeguards for intermediary liability provided by the e-Commerce Directive.44 Fragmentation, enhanced responsibility, and statutory liability then emerge in tight connection with the expansion of private ordering and voluntary measures in Europe as much as in other jurisdictions, as earlier noted.

37  See European Commission Communication, ‘A Digital Single Market Strategy for Europe’ (2015) COM(2015) 192 final, s. 3.3 (DSM Strategy).
38  ibid. s. 3.3.2 (noting that ‘[r]ecent events have added to the public debate on whether to enhance the overall level of protection from illegal material on the Internet’).
39  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92.
40  See European Commission, ‘Proposal for a Directive amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audio-visual media services in view of changing market realities’ (25 May 2016) COM(2016) 287 final.
41  See European Commission, ‘Commission Staff Working Document—Guidance on the implementation/application of Directive 2005/29/EC on Unfair Commercial Practices’ (25 May 2016) SWD(2016) 163 final.
42  European Commission Communication, ‘Tackling Illegal Content Online—Towards an enhanced responsibility of online platforms’ COM(2017) 555 final.
43  See European Commission Communication, ‘Online platforms and the Digital Single Market—Opportunities and Challenges for Europe’ COM(2016) 288/2, 8.
44  cf. Directive 2000/31/EC (n. 25) recital 48 (previously establishing that ‘[t]his Directive does not affect the possibility for Member States of requiring service providers, who host information provided by recipients of their service, to apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities’) (emphasis added).


Perhaps, within a fragmented international framework, a common design in recent developments of intermediary liability regulation can be identified as consistently emerging in multiple jurisdictions. Standards for liability are lowered, safe harbours constricted, and incentives for responsible behaviour, which should mainly be carried out through automated algorithmic monitoring and filtering, increased. Horizontally, online intermediaries have been increasingly involved with semiotic regulation online, which is perceived as a major threat to global stability due to the ubiquity of allegedly infringing behaviours and the enormous transaction costs that enforcement entails. However, standards differ greatly from jurisdiction to jurisdiction, with the strongest protection given to online intermediaries in the United States, while the rest of the world is slowly drifting away from the US approach. The legal uncertainty deriving from miscellaneous approaches in constant flux obviously affects online businesses. This is further exacerbated by a steady increase in vertical regulation, often launched without pondering potential inconsistencies with horizontal regulation that has now been in place for a few decades.

3.  Mapping Subject-Specific Regulation

Mapping online intermediary liability worldwide entails a review of a wide-ranging topic, stretching into many different areas of law and domain-specific solutions. The purpose of Part IV is to consider online intermediary liability for subject matter-specific infringements. This Part provides an overview of intermediary liability for copyright, trade mark, unfair competition, and privacy infringement, together with internet platforms’ obligations and liabilities for defamation, hate, and dangerous speech.

Secondary liability for copyright infringement has increasingly moved centre stage and is the driving issue on the agenda of recent reform proposals. In particular, recent EU copyright reform has struggled to find consensus on new obligations for OSPs.45 The European debate has revolved around the harmonization of national traditions of secondary liability and the alternative of construing online intermediaries as primarily liable for the violation of the right of communication to the public. The latter option has finally been endorsed by the European Parliament,46 setting Europe apart from the dominant approach in other jurisdictions. In Chapter 16, Christina Angelopoulos builds on her long-standing research in this domain and reviews the lessons of European tort law for intermediary liability in copyright in order to plot a path for Harmonizing Intermediary Copyright Liability in the EU: A Summary. Traditionally, Member States have relied on home-grown solutions in the absence of a complete EU framework for intermediary accessory copyright liability. Angelopoulos examines the approaches taken in three of the major tort law traditions of Europe: the UK, France, and Germany.

45  See Giancarlo Frosio, ‘To Filter or Not to Filter? That is the Question in EU Copyright Reform’ (2018) 36(2) Cardozo Arts & Entertainment L.J. 101–38.
46  See Directive 2019/790/EU (n. 39) Art. 17(1).


This examination shows the emergence of three cross-jurisdictional approaches to intermediary liability, including intra-copyright solutions, tort-based solutions, and injunction-based solutions. Existing projects on the harmonization of European tort law, such as the Principles of European Tort Law (PETL),47 may serve as a basis for building a framework for European intermediary liability in copyright. Angelopoulos proposes a negligence-based approach, which is informed by existing EU and national copyright law and tort law according to the emerging doctrine of the Court of Justice of the European Union (CJEU) of fair balance among fundamental rights.48

However, harmonization in this field is also taking place through an expansive notion of communication to the public. Both CJEU case law49 and EU legislation50 have construed some online intermediaries as directly liable for communicating infringing content to the public. In Chapter 17, Eleonora Rosati looks into the Direct Liability of Intermediaries for copyright infringement and disentangles the complexities of the recent CJEU case law concerning the matter. In the light of recent legislation,51 direct liability reaches beyond platforms that induce infringement by users—where the core business is piracy—as concluded by the CJEU in The Pirate Bay case.52 More broadly, it reaches user-generated content platforms that organize and promote user-uploaded content for profit.53 In addition, no safe harbours will be available to platforms that communicate to the public. With both these points settled, Rosati wonders whether a distinction between primary harmonized liability and secondary unharmonized liability still makes sense. In sum, this expansive construction of the notion of direct liability for communication to the public of online platforms reflects an ongoing move towards their enhanced accountability and liability, especially in the EU.

In a highly volatile policy environment like that of the United States, where market forces constantly lobby the legislative power in order to obtain better conditions, the recent European developments might lead to reconsideration of the traditional balance of interests and power embedded in section 512 of the DMCA in favour of stricter regulations for online intermediaries. Presently, although reform activity and litigation have slowed down in the United States, Jack Lerner notes that there are considerable challenges that new entrants to the user-generated content (UGC) market must face to comply with the requirements of the Digital Millennium Copyright Act.54 In discussing Secondary Copyright Infringement Liability and User-Generated Content in the United States, Lerner highlights that online intermediaries and UGC platforms are exposed to uncertainty by the construction of the notions of inducement and wilful blindness, which leaves room for litigation even if they respond to takedown notices expeditiously and actively seek to remove infringing content.

47  See European Group on Tort Law, Principles of European Tort Law .
48  See e.g. C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644, para. 31.
49  See e.g. C-466/12 Nils Svensson et al. v Retriever Sverige AB [2014] ECLI:EU:C:2014:76; C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300; C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.
50  See Directive 2019/790/EU (n. 39) Art. 17(1).
51  ibid.
52  C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456.
53  See Directive 2019/790/EU (n. 39) Art. 2(6) and recital 62.
54  See 17 USC § 512.


Next to rampant global copyright infringement, the internet—and the emergence of the platform economy within it—has seen the widespread availability of counterfeit goods that can be purchased and easily delivered anywhere in the world. More than copyright, trade mark infringement can become a sensitive public order concern, especially when it involves drugs or food the commercialization of which might put public health at risk. According to Frederick Mostert, even though ‘the lack of uniform international guidelines has made tackling counterfeits in a borderless digital environment even more challenging’, there are common approaches emerging towards intermediary liability at the international level for online trade mark infringement. In Chapter 19, Mostert outlines three common tenets that can be distilled into a transnational principle of intermediary liability: ‘[1] upon notice of a specific infringement, [2] an ISP is required to take all proportionate and reasonable measures which a reasonable ISP would take in the same circumstances [3] to address the specific instance of infringement brought to their attention’. This emerging common international principle is then coupled with a ius gentium of voluntary measures that results from voluntary cooperation between online intermediaries and rightholders to curb infringement. In this context, Mostert highlights the consistent deployment of voluntary removals, monitoring, algorithmic filtering, follow-the-money approaches, registry systems, advertising codes of practice preventing advertisements on counterfeit websites, and educational campaigns. As Friedmann also explained earlier in Chapter 14, with special regard to Chinese online conglomerates such as Alibaba, the development of this ius gentium of voluntary measures is tightly connected to advances in technological innovation such as AI and machine learning. However, current technology can barely cope with trade mark infringement, which is potentially even more challenging than copyright infringement. The issue here is twofold. On one side, technology can be easily circumvented by sophisticated infringers. On the other side, ‘fair balance’ between trade mark protections and other fundamental rights, such as freedom of competition, freedom of expression and information, and the right to privacy, can be hard to achieve through automated enforcement.

Chapter 20 magnifies the issue of balancing trade mark protection and social and cultural values from a civil law perspective. Martin Senftleben reviews the CJEU case law on point as well as European national jurisprudence, concluding that there is a growing recognition of necessary limitations in trade mark protection to provide breathing space for commercial, artistic, and political freedom of expression. However, Senftleben also warns against a Proliferation of Filter Obligations in Civil Law Jurisdictions, reinforcing the concerns already expressed in other chapters. Overblocking through filtering technologies might easily defy all safeguards of free expression that recent jurisprudential developments have carved into trade mark law. Although copyright filtering has recently been under the spotlight, over-enforcement of trade marks online should not be underplayed—Senftleben notes—especially in the light of the heavily context-specific nature of trade mark exclusive rights, which makes file-matching technologies inefficient in trade mark enforcement online.55

55  cf. Graeme Dinwoodie, ‘Secondary Liability for Online Trademark Infringement: The International Landscape’ (2014) 37 Columbia J. of L. & the Arts 463, 498–9.


In this regard, at least in European civil law jurisdictions, there are fragmented responses. Senftleben uses the example of Dutch courts imposing a far-reaching filtering obligation only if the intermediary systematically and structurally facilitates the infringing activities. In contrast, German jurisprudence has been less cautious in this domain, using the open-ended Störerhaftung doctrine to develop quite substantial specific monitoring and filtering duties for online intermediaries, such as in the eBay and Rapidshare cases.56 This jurisprudence will also be reassessed in Chapter 28 by Sunimal Mendis and me when considering the global emergence of judicially-imposed filtering and monitoring obligations.

The peculiarities of the common law perspective on intermediary liability and trade mark infringement are discussed by Richard Arnold in Chapter 21. Arnold crystallizes the teachings of UK case law in this domain, while situating this common law perspective within EU trade mark law,57 the e-Commerce Directive,58 and the Enforcement Directive.59 Arnold, as well as other contributors to this Handbook,60 makes a fundamental distinction between liability stemming from legal principles which are not particular to intermediaries, including primary and accessory liability, and liability depending on the application of principles which are specific to intermediaries, intermediary liability proper. This second type of liability includes injunctions against intermediaries whose services are used to infringe trade marks, made available in national jurisdictions by the implementation of Article 11 of the Enforcement Directive. Although other types of injunctions against intermediaries are available, Arnold focuses on the increasing popularity of website-blocking injunctions, which have recently been ported from the copyright domain, where they have more traditionally been deployed, to the trade mark domain. The Cartier case was the first—and so far the only—European case applying a website-blocking injunction to trade mark infringement.61

Online intermediary liability can also arise as a consequence of infringements of rights and legal interests other than IP rights. In Chapter 22, Valentina Moscon and Reto Hilty explore intermediary liability for unfair commercial practices (UCPs) and trade secret infringement under European law.62

56  See e.g. Bundesgerichtshof [Supreme Court] (BGH) Rolex v eBay (aka Internetversteigerung II) [19 April 2007] I ZR 35/04 (Ger.); BGH Rolex v Ricardo (aka Internetversteigerung III) [30 April 2008] I ZR 73/05 (Ger.); BGH GEMA v RapidShare [15 August 2013] I ZR 80/12 (Ger.).
57  See Directive 2015/2436/EU of the European Parliament and of the Council of 16 December 2015 to approximate the laws of the Member States relating to trade marks (recast) [2015] OJ L336/1; Regulation 2017/1001/EU of 14 June 2017 on the European Union trade mark [2017] OJ L154/1.
58  See Directive 2000/31/EC (n. 25) Arts 12–15.
59  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/16, Art. 11.
60  See, for an identical or close categorization, Riordan, Husovec, Angelopoulos, and Mostert.
61  See Cartier International AG v British Telecommunications plc [2018] UKSC 28 (UK).
62  See Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No. 2006/2004 of the European Parliament and of the Council (Unfair Commercial Practices Directive) [2005] OJ L149/22; Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure [2016] OJ L157/1.


Moscon and Hilty investigate whether and under which conditions online intermediaries are liable under rules of conduct governing the functioning of the market, and the upcoming DSM in particular. Platforms like TripAdvisor might be considered traders as well as hosting providers, with the potential cumulative applicability of the e-Commerce Directive safe harbours and the obligations governing traders under the UCP Directive. Most likely, Moscon and Hilty conclude, the e-Commerce Directive, as a lex specialis, will prevail over competing legal instruments and provide exemption from liability. However, there is uncertainty regarding the standard of knowledge required for triggering takedown obligations or the obligation of preventing future infringements or, again, whether injunctive relief similar to that of Article 11 of the Enforcement Directive can apply to UCPs. Obviously, then, liability, if exemptions do not apply, is a matter of fragmented and unharmonized national tort laws that implement very differently the general prohibition against UCPs contained in EU law. Given the globalized nature of digital markets and their steady growth, multiplying potential violations by online intermediaries, Moscon and Hilty conclude that the status quo is unsatisfactory and that specific alternatives for UCPs and trade secret violations in digital markets must be developed.

Sanitization of allegedly infringing speech is yet another area where online intermediaries have a primary role. The ‘datafication’ and ‘platformization’ of society has reached every sphere of life, with platforms controlling the flow of a more abundant but fragmented information offer than the traditional mass media.63 Therefore, their long-lasting liability exemptions have been challenged almost everywhere, even in the United States, where section 230 of the CDA, however, still holds as a comprehensive protection for online intermediaries for speech-related infringements. Elsewhere, the messenger can now be more freely shot unless it acts responsibly enough and supports wronged parties and law enforcement agencies in fighting illegal speech online.

In Chapter 23, Emily Laidlaw tackles this side of the intermediary liability conundrum, providing a common law perspective on intermediary liability for defamation and dangerous and hate speech. Laidlaw focuses on the Canadian system and other common law jurisdictions. After an introduction to the common law and statutory legal context, Laidlaw puts forward a reform proposal for online defamation that goes under the name of Notice-and-Notice-Plus (NN+). The discussion of this rather subject-specific proposal becomes an opportunity for exploring optimal intermediary liability models for the regulation of other kinds of harmful speech, including fake news, terrorist content, and hate speech.

In Free Expression and Internet Intermediaries: The Changing Geometry of European Regulation, Tarlach McGonagle stresses how the geometry of European regulation has moved towards online intermediaries’ increased liability and enhanced responsibility for illegal third party speech. This change seems to sideline the well-established understanding that freedom of the media must be safeguarded and regulation should not curb the development of information and communication technologies.

63  See José van Dijck, Thomas Poell, and Martijn de Waal, The Platform Society: Public Values in a Connective World (OUP 2018) 46 as cited in Chapter 24.


McGonagle reviews the case law of the European Court of Human Rights (ECtHR), such as Delfi,64 as well as EU Communications, Recommendations, and Codes of Conduct, and concludes that the greater the seriousness of the perceived harms of certain categories of expression, such as hate speech, the more responsibly online intermediaries are supposed to act. In particular, as noted elsewhere, there is an emergent preference for self-regulatory codes of conduct as a regulatory technique in Europe. As McGonagle notes, however, all the relevant European codes of conduct are less voluntary than they seem, as rather than trusting the market with appropriate self-regulatory choices, they put forward a rather coercive approach.

In the information society, the role of private sector entities in gathering information for and about users has long been a most critical issue. Therefore, intermediaries have become a main focus of privacy regulation, especially in jurisdictions with a strong tradition of privacy protection such as Europe.65 In Chapter 25, Miquel Peguera discusses The Right to be Forgotten in the European Union. As Peguera recounts, in a landmark case the CJEU ruled that an internet search engine operator is responsible for processing personal data which appear on web pages published by third parties.66 The Google Spain ruling once again expands the obligations of online intermediaries, of which search engines are a subset. It brings about enhanced responsibility—and transaction costs—for online intermediaries, while entrusting them with an adjudication role that entails a delicate balance between fundamental rights. That balance has been addressed quite satisfactorily by European institutions and national courts, setting precise guidance that preserves freedom of expression and the public interest.67 However, also in this field, there still remain concerns about whether any adjudication role should be entrusted to online intermediaries, although the scale of semiotic governance online leaves room for very few sustainable alternatives. At the same time, additional obligations of uncertain applicability, such as the prohibition of processing of sensitive data that should theoretically apply to all data controllers, including those online intermediaries that qualify as such, might be so invasive as to disrupt the business of online intermediaries. Peguera also discusses the hotly debated issue of the geographical scope of the right to be forgotten; that is, its possible extraterritorial global application to .com domains rather than European domains only. Extraterritorial application of intermediary liability obligations is a critical issue that goes beyond enforcement of the right to be forgotten and will be further discussed in Part VI.

64  Delfi AS v Estonia [GC] App. no. 64569/09 (ECtHR, 16 June 2015). 65  See Bart van der Sloot, ‘Welcome to the Jungle: the Liability of Internet Intermediaries for Privacy Violations in Europe’ (2015) 6 JIPITEC 211. 66  See C-131/12 Google Spain SL v Agencia Española de Protección de Datos [2014] ECLI:EU:C:2014:317. 67  See e.g. Art. 29 Working Party (WP29), ‘Guidelines on the implementation of the Court of Justice of the European Union judgment on “Google Spain and Inc v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González” C-131/12’ (2014) 14/EN WP 225. See also Giancarlo Frosio, ‘Right to be Forgotten: Much Ado about Nothing’ (2017) 15(2) Colorado Tech. L.J. 307.


Multiple jurisdictions are trying to cope with ‘right to be forgotten’ demands, following the Google Spain ruling.68 The emergence of the right to be forgotten—and its extraterritorial application—follows in the footsteps of a global move towards data protectionism against the de facto market dominance of US internet conglomerates.69 There are plenty of recent examples, including the CJEU’s Schrems decision and Russian Federal Law No. 242-FZ. In Schrems, the CJEU ruled that the transatlantic Safe Harbour agreement—which lets US companies use a single standard for consumer privacy and data storage in both the United States and Europe—is invalid;70 whereas Russia introduced legislation requiring that the processing of the personal data of Russian citizens be conducted with the use of servers located in Russia.71

Eduardo Bertoni tackles the global impact of enhanced privacy obligations for online intermediaries in Chapter 26, where he discusses the Right to be . . . Forgotten? Trends in Latin America after the Belén Rodriguez Case and the Impact of the New European Rules. Several Latin American countries, including Argentina, Chile, Colombia, Peru, and Uruguay, have been heavily debating the obligations of online intermediaries in connection with the protection of users’ data and personal information. The debate has further involved delisting obligations and proactive filtering generally. In the aftermath of the CJEU Google Spain decision, freedom of expression and privacy advocates have been in confrontation in Latin America, with proposals to introduce the right to be forgotten or other delisting obligations often met with strong civil society opposition. The Belén Rodriguez case in Argentina, for example, endorsed the quite extreme view that no delisting obligation should be imposed on OSPs unless ordered by a court or in a few specific cases of obviously infringing content.72 Bertoni, in particular, warns about de-indexing obligations against search engines that would amount to prior restraint on speech and would not be compliant with the American Convention on Human Rights.73 Again, as Bertoni reports, the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights has been quite straightforward in rejecting delisting obligations à la Google Spain.74

Tensions between fundamental rights and intermediary liability obligations define the essence of the intermediary liability conundrum online. In this context, fragmentation and inconsistencies abound. They are actually steadily growing rather than receding.

68  ibid. 307–12.
69  See e.g. Maria Farrel, ‘How the Rest of the World Feels About U.S. Dominance of the Internet’ (Slate, 18 November 2016) .
70  See C-362/14.
71  See Federal Law No. 242-FZ of 21 July 2014 ‘on amending certain legislative acts of the Russian Federation as to the clarification of the processing of personal data in information and telecommunications networks’ .
72  See Corte Suprema de Justicia de la Nación [National Supreme Court] Rodríguez, María Belén v Google Inc. /daños y perjuicios [2014] CSJN Case no. 337:1174 (Arg.).
73  See Organization of American States (OAS), American Convention on Human Rights (‘Pact of San Jose’), Costa Rica, 22 November 1969 (entered into force 18 July 1978) Art. 13.
74  See Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, ‘Annual Report’ (15 March 2017) OEA/Ser.L/V/I Doc. 22/17 .


is multiplied vertically across different subject matter and, if common solutions are not rapidly agreed, inconsistencies may soon become so irreconcilable that they will drive a process of Balkanization that will break down the internet, as discussed in more detail in Part VI of the Handbook.

4.  Mapping Intermediary Liability Enforcement As should now be quite obvious, the intense debate over OSPs’ intermediary liability primarily concerns the involvement of OSPs in online enforcement to aid law enforcement agencies and wronged parties in curbing widespread illegal behaviours online. Given the scale of the phenomenon, intermediaries have been increasingly identified as the most suitable option for minimizing transaction costs and enhancing the efficacy of enforcement. Part V reviews intermediary liability enforcement strategies by focusing on emerging trends, including notice and action, proactive monitoring obligations across the entire spectrum of intermediary liability subject matter, blocking orders against innocent third parties, and the emergence of administrative enforcement of online infringement. Later, Part VI discusses private ordering and voluntary measures, an additional emerging trend in intermediary liability enforcement. The focus of the review in Part V inevitably magnifies the tensions between enforcement strategies and miscellaneous fundamental rights, including freedom of expression, privacy, and freedom of business. In Chapter 27, Aleksandra Kuczerawy discusses From ‘Notice and Takedown’ to ‘Notice and Stay Down’: Risks and Safeguards for Freedom of Expression. Kuczerawy describes enforcement mechanisms that are provided to allegedly wronged parties in multiple jurisdictions to seek a remedy directly from an OSP for an infringement that may have occurred through its networks. These are generally known as ‘notice-and-action’ (N&A) mechanisms and—Kuczerawy explains—take many nuanced forms, including most commonly ‘notice-and-take-down’, ‘notice-and-notice’, and ‘notice-and-stay-down’. By providing for the removal or blocking of content, all these mechanisms can interfere with the right to freedom of expression. Therefore, Kuczerawy examines ‘how different types of N&A mechanisms amplify the risks to free expression and what safeguards they include to prevent such risks from manifesting themselves’. From notice-and-take-down to notice-and-notice and notice-and-stay-down, fundamental rights find themselves increasingly under pressure. The recent move away from the e-Commerce notice-and-take-down model to endorse notice-and-stay-down and other filtering obligations in the new Directive on Copyright in the DSM attests to this intensification of the tension between online intermediary regulation and fundamental rights.75

75  See Directive 2019/790/EU (n. 39) Art. 17(4)(b)–(c).


In Monitoring and Filtering: European Reform or Global Trend?, Sunimal Mendis and I develop this very point and take up from Kuczerawy’s more general discussion by focusing on the widespread deployment of monitoring and filtering obligations through voluntary, judicial, and legislative means. The recent EU copyright reform would de facto force hosting providers to develop and deploy filtering systems, thereby monitoring their networks.76 As we argue, the solution adopted by the new Directive follows in the footsteps of a well-established path in recent intermediary liability policy: the demise of the principle of ‘no monitoring obligations’. In the same vein, recent case law has imposed proactive monitoring obligations on intermediaries for copyright infringement—such as Allostreaming in France, Dafra in Brazil, RapidShare in Germany, or Baidu in China.77 Actually, the emerging enforcement of proactive filtering and monitoring obligations has spanned the entire spectrum of intermediary liability subject matter, including other IP rights,78 privacy,79 defamation, and hate/dangerous speech.80 In that context, notable exceptions—such as the landmark Belén Rodriguez case that is discussed in detail in Chapter 26—highlight again the fragmented international response to intermediary liability.81 Next, blocking orders against innocent third parties are an additional relevant trend in intermediary liability. Blocking orders have become increasingly popular in Europe, especially to counter online copyright—and recently also trade mark—infringement.82 Their validity under EU law was recently confirmed by the CJEU in the Telekabel decision.83 Outside the EU, website blocking of copyright-infringing sites has been authorized in countries including Argentina, India, Indonesia, Malaysia, Mexico, South Korea, and Turkey.84 In December 2014, Singapore effected an amendment to its Copyright Act to enable rightholders to obtain website-blocking orders,85 and in 2015 Australia introduced ‘website blocking’ provisions to the Copyright Act.86 These

76 ibid. 77  See Cour de cassation [French Supreme Court] SFR, Orange, Free, Bouygues télécom et al. v Union des producteurs de cinéma et al. [6 July 2017] no. 909 (Fra.) (Allostreaming); Superior Court of Justice Google Brazil v Dafra (24 March 2014) Special Appeal 1306157/SP (Bra.); BGH GEMA v RapidShare (n. 56); Beijing Higher People’s Court Zhong Qin Wen v Baidu [2014] Gao Min Zhong Zi 2045 (Ch.). 78  See BGH Rolex v eBay (n. 56); BGH Rolex v Ricardo (n. 56). 79  See Tribunal de grande instance [High Court] TGI Paris Google v Mosley [6 November 2013] (Fra.); Landgericht [District Court] (LG) Hamburg Max Mosley v Google Inc. [24 January 2014] 324 O 264/11 (Ger.); Mosley v Google [2015] EWHC 59 (QB) (UK). 80 See Delfi (n. 64). 81 See Belén (n. 72). 82  See Directive 2004/48/EC (n. 59) Art. 11; Directive 2001/29/EC of the European Parliament and the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10, Art. 8(3). 83  See C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192. 84  See Swiss Institute of Comparative Law, Comparative Study on Filtering, Blocking and Take-down of Illegal Content of the Internet (a study commissioned by the Council of Europe, 20 December 2015) . 85  See Copyright (Amendment) Act 2014, An Act to Amend the Copyright Act (Ch. 63 of the 2006 revised edn).
86  See Copyright Amendment (Online Infringement) Act 2015 (Cth).


measures have been enacted with the aim of curbing IP infringement online, although negative effects on human rights have been widely highlighted. In Chapter 29, Christophe Geiger and Elena Izyumenko discuss Blocking Orders: Assessing Tensions with Human Rights and consider the standards developed by the jurisprudence of the CJEU and the ECtHR to sustain the difficult coexistence between blocking orders and fundamental rights, including freedom of expression and freedom to conduct a business. Although CJEU case law has recognized users’ rights as enforceable against injunctions that might curb users’ freedom of expression online,87 according to Geiger and Izyumenko, it also ‘shifted a considerable part of the human-rights-sensitive enforcement choices on the intermediaries’. This is a suboptimal solution from a human rights perspective. It is forced on international courts, such as the CJEU, by the need to find a proportional equilibrium between competing rights—given the scale of the transaction costs of online enforcement—according to the doctrine of ‘fair balance’ among fundamental rights, which is also discussed in Chapters 16 and 17. In reviewing the proper balance between freedom to conduct business and blocking orders, Geiger and Izyumenko discuss the allocation of the costs of enforcement; that is, whether the rightholders or the online intermediaries should bear the costs of blocking. The issue is also discussed in Chapter 20. The UK Supreme Court in Cartier,88 a trade mark infringement case, allocated the costs of blocking and delisting to rightholders, taking the opposite view to the French Cour de cassation in the Allostreaming case,89 a copyright infringement case. Both the UK and French Courts of Appeal had instead decided that the costs of enforcement had to be divided equally between the two parties.90 Other courts in Europe, such as the Irish Court of Appeal in the Sony Music case,91 although deciding on the costs of setting up a graduated response scheme rather than website blocking, arrived at a different allocation, imposing 80 per cent of the costs on the online intermediaries and 20 per cent on the rightholders. EU law and CJEU jurisprudence say little in this regard and leave the decision to the national courts on the basis of their

87  C-314/12 (n. 83) para. 57. 88  Cartier (n. 61) para. 31 (‘the ordinary principle is that unless there are good reasons for a different order an innocent intermediary is entitled to be indemnified by the rights-holder against the costs of complying with a website-blocking order’). 89 See Allostreaming (n. 77) (noting that EU provisions ‘do not prevent the costs of the measures strictly necessary for the safeguarding of the rights in question . . . from being borne by the technical intermediaries, even when such measures may present significant costs for the intermediaries. The aforementioned Directives 2000/31 and 2001/29, . . . foresee that notwithstanding the principle of nonresponsibility of the intermediaries, the ISPs and hosting providers are required to contribute to the fight against the illegal content and, in particular, against the infringement of copyright and related rights, when they are best positioned to put an end to such violations’, as translated in Oleksandr Bulayenko, Cour de cassation, Urt. v. 6.7.2017 (SFR, Orange, Free et al. / Union des producteurs de cinéma et al.) (translation into English) (2018) 1 GRUR Int 51–53.). 90  See e.g.
Cartier Int’l AG and others v British Telecommunications Plc and another [2017] Bus L.R. 1 [100]–[128] (CA) (UK). 91 See Sony Music Entertainment Ireland Ltd v UPC Communications Ireland Ltd [2016] IECA 231 (Ire.) (‘[b]ecause the defendant is the company which profits—albeit indirectly—because it derives revenue from its subscribers who are engaged in this practice, it is the defendant who should, in my view, be primarily liable for the costs’).


24   Giancarlo Frosio national law.92 Of course, this is telling of considerable fragmentation in approaches to intermediary liability. This is especially relevant as it occurs in matters as sensitive as the allocation of costs of enforcement. Fragmentation in this context brings about legal uncertainty and higher transaction costs that reflect on the sustainability of the business models of online intermediaries in Europe. However, blocking orders have also been widely used in many jurisdictions—in particular by administrative authorities—in connection with amorphous notions of public order, defamation, and morality. In this respect, the emergence of administrative enforcement of online intermediary liability appears to be another well-marked trend in recent internet governance. Multiple administrative bodies have been put in charge of enforcing a miscellaneous array of online infringements—primarily against inter­medi­ar­ies and often absent any judicial supervision. Some administrative bodies—such as the Italian Communication Authority (AGCOM), the Second Section of the Copyright Commission (CPI), and the Greek Committee on Internet Violations of Intellectual Property (CIPIV)—have been given powers to police copyright infringement online and issue blocking orders and other decisions to selectively remove infringing digital works.93 In Chapter 30, Alessandro Cogo and Marco Ricolfi dig deep into the legal and regulatory framework empowering these administrative bodies by studying the Administrative Enforcement of Copyright Infringement Online in Europe. Although administrative procedures are available both under international and EU law for the protection and enforcement of IP rights,94 they must conform to the same principles and safeguards as those for judicial review. Cogo and Ricolfi find—especially by analysing data resulting from the practical implementation of the Italian administrative enforcement system—that transparency and due process rank very low in these administrative enforcement systems. Worldwide many other administrative agencies enjoy broader powers of sanitization of the internet. The Russian Roskomnadzor is an administrative body competent to request telecoms operators to block access to websites featuring content that violates miscellaneous pieces of legislation and competent to keep a special registry or ‘blacklist’ of websites that violate the law.95 Chapter 33 provides more insight on the functioning of this Russian agency. In South Korea, the Korea Communications Commission 92  See C-314/12 (n. 83) para. 57 (noting only that ‘[an injunction] constrains its addressee in a manner which restricts the free use of the resources at his disposal because it obliges him to take measures which may represent a significant cost for him’); Directive 2001/29/EC (n. 82) recital 59. 93  See AGCOM Regulations regarding Online Copyright Enforcement, 12 December 2013 680/13/ CONS (It.); Royal Legislative Decree No. 1/1996, enacting the consolidated text of the Copyright Act, 12 April 1996 (as amended by Law No. 21/2014, 4 November 2014) (Sp.). 94  See Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) (15 April 1994) Marrakesh Agreement Establishing the World Trade Organization, Annex 1C, 1869 UNTS 299, 33 ILM 1197, Art. 41(2); Directive 2000/31/EC (n. 25) Arts 12–14 (all stating that these provision do ‘not affect the possibility for a court or administrative authority . . . of requiring the service provider to terminate or prevent an infringement’). 
95  See Federal Law no. 139-FZ of 28 July 2012 on the Protection of Children from Information Harmful to Their Health and Development and Other Legislative Acts of the Russian Federation (aka ‘Blacklist law’) (Rus.).


implements deletion or blocking orders according to the request and standards of the Korea Communications Standards Commission ‘as necessary for nurturing sound communications ethics’.96 In Turkey, the law empowers the Presidency of Telecommunications (TIB) to block a website or web page within four hours, without any judicial decision, for the violation of a new category of crimes labelled ‘violation of private life’ or privacy.97 In India, section 69(A)(1) of the IT Act provides the government with the ‘power to issue directions for blocking for public access of any information through any computer resource’,98 which is dealt with by a special committee that examines, within seven days, all requests received for blocking access to online information.99 Many other national administrative authorities—such as the Supreme Council of Cyberspace in Iran or CONATEL in Venezuela—issue orders against ISPs regarding the legality, blocking, and removal of online content, with no—or very limited—judicial review.100 Concerns have been voiced about administratively issued blocking orders, which could undermine basic due process guarantees.101

96 See Act on the Establishment and Operation of Korea Communications Commission, last amended by Act no. 11711 of 23 March 2013 (Kor.). 97  See Omnibus Bill no. 524 of 26 June 2013, amending provisions in various laws and decrees including Law no. 5651 on regulation of publications on the internet and suppression of crimes committed by means of such publications, Law no. 809 ‘Electronic Communications Law’ and others (Tur.). 98  See Information Technology Act 2000, as amended by the Information Technology (Amendment) Act 2008, Art. 69(A)(1). 99  See Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules 2009 (to be read with s. 69A of the IT Act), rule 7 (Ind.). 100  See Executive Order of the Supreme Leader Establishing the Supreme Council of Cyberspace, March 2012 (Ire.); Ley de Responsabilidad Social en Radio Televisión y Medios Electrónicos (ResorteME) [Law of Social Responsibility in Radio-Television and Electronic Media], Official Gazette no. 39.579 of 22 December 2012 (Ven.). 101  See e.g. Manila Principles (n. 10) Principle no. 2 (stating that content must not be required to be restricted without an order by a judicial authority).

5.  Mapping Private Ordering and Intermediary Responsibility As anticipated, private ordering has been emerging powerfully as a privileged tool for online enforcement. Part VI considers whether policymakers—and interested third parties such as IP rightholders—try to coerce online intermediaries into implementing enforcement strategies through voluntary measures and self-regulation, in addition to legally mandated obligations. As Martin Husovec argued, EU law, for example, increasingly forces internet intermediaries to work for the rightholders by making them accountable even if they are not tortiously liable for the actions of their users.102 Bringing pressure on innocent third parties that may enable or encourage violations by others is a 96 See Act on the Establishment and Operation of Korea Communications Commission, last amended by Act no. 11711 of 23 March 2013 (Kor.). 97  See Omnibus Bill no. 524 of 26 June 2013, amending provisions in various laws and decrees including Law no. 5651 on regulation of publications on the internet and suppression of crimes committed by means of such publications, Law no. 809 ‘Electronic Communications Law’ and others (Tur.). 98  See Information Technology Act 2000, as amended by the Information Technology (Amendment) Act 2008, Art. 69(A)(1). 99  See Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules 2009 (to be read with s. 69A of the IT Act), rule 7 (Ind.). 100  See Executive Order of the Supreme Leader Establishing the Supreme Council of Cyberspace, March 2012 (Ire.); Ley de Responsabilidad Social en Radio Televisión y Medios Electrónicos (ResorteME) [Law of Social Responsibility in Radio-Television and Electronic Media], Official Gazette no. 39.579 of 22 December 2012 (Ven.). 101  See e.g. Manila Principles (n. 10) Principle no. 2 (stating that content must not be required to be restricted without an order by a judicial authority). 102  See Husovec (n. 4).


well-established strategy to curb infringement. As also discussed by Riordan in Chapter 3, intermediaries’ secondary liability has been based on different theories ranging from moral to utilitarian approaches. A moral approach would argue that encouraging infringement is widely seen as immoral.103 The second approach is associated with utilitarian and welfare theories.104 Welfare theory approaches have been dominant in intermediary liability policy until recently. They have been based on the notion that liability should be imposed only as a result of a cost–benefit analysis. However, there is an ongoing revival of moral approaches to intermediary liability, with justifications for policy intervention based on responsibility for the actions of users, as opposed to efficiency or the balance between innovation and harm. The discourse is increasingly shifting from liability to the ‘enhanced responsibility’105 of online intermediaries, under the assumption that OSPs’ role is unprecedented given their capacity to influence the informational environment and users’ interactions within it. The European Commission stressed that ‘the responsibility of online platforms is a key and cross-cutting issue’.106 This policy development puts special focus on intermediaries’ corporate social responsibilities and their role in implementing and fostering human rights.107 However, emphasis on a responsible role for intermediaries in fostering human rights has a flip side when multiple competing rights are at stake. Online intermediaries are unequipped—and lack constitutional standing—to make decisions involving a proportional balancing of rights. As Calabresi–Coase’s ‘least-cost avoiders’,108 online intermediaries will inherently try to lower the transaction costs of adjudication and liability and, in order to do so, might functionally err on the side of overblocking, in particular by deploying automated algorithmic enforcement tools. Again, this policy trend leads to incremental fragmentation as enforcement is handled directly by miscellaneous private entities from multiple jurisdictions through proprietary automated means applying corporate visions and disparate terms of service. Of course, there are also

103  See Richard Spinello, ‘Intellectual Property: Legal and Moral Challenges of Online File Sharing’ in Ronald Sandler (ed.), Ethics and Emerging Technologies (Palgrave Macmillan 2013) 300; Mohsen Manesh, ‘Immorality of Theft, the Amorality of Infringement’ (2006) 2006 Stan. Tech. L. Rev. 5; Richard A. Spinello, ‘Secondary Liability in the Post Napster Era: Ethical Observations on MGM v. Grokster’ (2005) 3(3) J. of Information, Communication and Ethics in Society 121; Geraldine Szott Moohr, ‘Crime of Copyright Infringement: An Inquiry Based on Morality, Harm, and Criminal Theory’ (2003) 83 BU L. Rev. 731. 104  See Reinier Kraakman, ‘Gatekeepers: the Anatomy of a Third-Party Enforcement Strategy’ (1986) 2(1) J. of Law, Economics and Organization 53. 105  European Commission Communication, ‘Tackling Illegal Content Online. Towards an enhanced responsibility of online platforms’ COM(2017) 555 final. 106 European Commission, ‘Communication on online platforms and the digital single market: opportunities and challenges for Europe’ COM(2016) 288 final, 9. 107  See e.g.
United Nations Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (20 June 2014) A/HRC/RES/26/13 (addressing inter alia a legally binding instrument on corporations’ responsibility to ensure human rights). See also Emily Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (CUP 2015); Mariarosaria Taddeo and Luciano Floridi, ‘The Debate on the Moral Responsibility of Online Service Providers’ (2015) 22(6) Sci. Eng. Ethics 1575, 1575–603. 108 See Guido Calabresi, ‘Some Thoughts on Risk Distribution and the Law of Torts’ (1961) 70 Yale L.J. 499; Ronald Coase, ‘The Problem of Social Cost’ (1960) 3 J. L. & Econ. 1.


counter-posing forces at work in the present internet governance struggle. As seen in Chapter 10, a centripetal move towards digital constitutionalism for internet governance alleviates the effects of the centrifugal platform responsibility discourse.109 This move from intermediary liability to platform responsibility has been occurring on several levels—being apparent from the deployment of miscellaneous enforcement strategies that will be detailed in Part VI. Governments everywhere—and the European Commission in particular—push coordinated self-regulatory efforts by major online hosting providers—including Facebook, Twitter, YouTube, Microsoft, Instagram, Snapchat, and Dailymotion—to promote codes of conduct endorsing commitments to combat the spread of illegal hate speech online,110 fight incitement to terrorism,111 prevent cyberbullying,112 or curb IP infringement online.113 Martin Husovec and I introduce this discussion in Chapter 31, where we describe several emerging legal trends reflecting this change in perspectives, such as, most obviously, voluntary agreements and codes of conduct, but also other legal arrangements, including three-strikes schemes, online search manipulation, follow-the-money strategies, voluntary filtering and website blocking, and private Domain Name System (DNS) content regulation. Under these agreements, schemes, and enforcement strategies, access and hosting providers would be called on to remove illegal materials actively and swiftly, instead of merely reacting to complaints. Of course, some of these enforcement tools are also discussed in many other chapters of the Handbook as stand-alone items and from a legal liability rule perspective. However, as Husovec and I note, ‘legal rules are often only basic expectations which are further developed through market transactions, business decisions, and political pressure’; the practical responsibility landscape is therefore wide-spanning, ‘ranging from legal entitlements to request assistance in enforcement to entirely voluntary private-ordering schemes’. Equally, the typology of OSPs enlisted in voluntary online enforcement strategies broadens steadily beyond the traditional access and hosting providers. On the IP enforcement side, payment blockades—notice-and-termination agreements between major rightholders and online payment processors—and ‘voluntary best practices agreements’ have been applied widely.114 Both the European Commission and the US government have endorsed a ‘follow-the-money’ approach seeking to ‘deprive those engaging in commercial infringements of the revenue streams (for example from consumer payments and advertising) emanating from their illegal activities, and therefore act as a

109 See Lex Gill, Dennis Redeker, and Urs Gasser, ‘Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights’ (2018) 80(4) Int’l Communication Gazette 302, 302–19. 110  See European Commission, ‘The EU Code of conduct on countering illegal hate speech online’ . 111  See European Commission, ‘Recommendation on measures to effectively tackle illegal content online’ C(2018) 1177 final. 112  See European Commission, ‘Communication on online platforms’ (n. 106) 10. 113  See Directive 2019/790/EU (n. 39) Art. 17(10). 114  See Annemarie Bridy, ‘Internet Payment Blockades’ (2015) 67 Florida L. Rev. 1523. See also Derek Bambauer, ‘Against Jawboning’ (2015) 100 Minnesota L. Rev. 51.


deterrent’.115 Payment processors like MasterCard and Visa have been pressured to act as IP enforcers as well as enforcers of speech-related infringements. Law enforcement agencies have tried to coerce payment providers to stop providing service to websites like Backpage, for the adult section it runs,116 or Wikileaks. Annemarie Bridy focuses on one additional emerging enforcement strategy involving non-conventional intermediaries in Addressing Infringement: Developments in Content Regulation in the United States and the DNS. In Chapter 32, Bridy describes how the reach of privately ordered online content regulation is deepening by migrating downwards from the application layer into the network’s technical infrastructure, specifically the Domain Name System (DNS). Private agreements between DNS intermediaries and IP rightholders are based on a ‘pass-along’ provision in the ICANN-Registry Agreement stating that a domain name can be suspended if the registrant is found to have engaged in copyright or trade mark infringement. On the basis of this clause in the ICANN-Registry Agreement, registry operators and rightholders have set up ‘trusted notifier’ agreements to fast-track domain name suspensions—and hence site blocking—when a ‘trusted notifier’ sends a complaint to the registry operator. Bridy highlights the lack of transparency and due process in this privately-ordered form of enforcement, which is also heavily biased in favour of complainants. Bridy also notes that the notice-and-action procedures institutionalized by these IP-focused agreements are readily adaptable for use in censoring all types of content. Increased intermediary accountability has become a globalized trend that has been emerging in numerous jurisdictions. In this regard, online intermediaries are not only held liable for IP, privacy, or defamation infringements, but are also held responsible for state security. Several countries enlist private business in the enforcement of state controls over the internet. In Chapter 33, Sergei Hovyadinov looks precisely at this expanded scope of online intermediaries’ responsibility by presenting Intermediary Liability in Russia and the Role of Private Business in the Enforcement of State Controls over the Internet. According to Hovyadinov, since 2011–12 the Russian government has drastically changed its stance on internet regulation. The rapid expansion of the internet in Russia—and its potential for triggering social unrest—has led to the adoption of significant regulatory restrictions on online content and anonymity. Regulation increasingly restricted the type of information available online and allowed the state to collect user data and monitor online activity. As part of this development, telecom operators, web-hosting providers, and social media platforms have become an integral part of the state’s internet control apparatus. Hovyadinov reports—also thanks to interviews with sector operators in Russia—that, ‘[f]aced with new technical challenges, the Kremlin has enlisted competent and technically capable internet actors . . . to help implement these restrictions and control the flow of information.’ Actually, at approximately the same time, similar developments

115 See European Commission Communication, ‘Towards a Modern More European Copyright Framework’ COM(2015) 260 final, 11.
See also Office of the Intellectual Property Enforcement Coordinator, Supporting Innovation, Creativity & Enterprise: Charting a Path Ahead (US Joint Strategic Plan for Intellectual Property Enforcement FY 2017–2019) (2017) 61ff. 116 See Backpage v Dart, no. 15–3047 (7th Cir. 2015) (US) (upholding an injunction against Sheriff Dart for his informal efforts to coerce credit card companies into closing their accounts with Backpage).


occurred in many jurisdictions besides Russia. Chapters 13 and 14 describe similar dynamics occurring in China and other Asian countries. Chapter 11 stresses the enlisting of OSPs as cyber-police in African countries. Several chapters also report similar trends in the European regulatory framework. Although Hovyadinov notes that the transparency and public accountability of private involvement in internet regulation in Russia are especially low, numerous governments, including western governments, rely on OSPs as their ‘online proxy agents’ to assist with the enforcement of IP rights, data protection, speech-related infringements, and state security. Privately-ordered content moderation defines the contours of the infosphere where social interaction occurs for billions of people daily.117 The impact of online content moderation on modern society is tremendous. Governments, along the lines of what Hovyadinov has described in Chapter 33, rightholders, and miscellaneous user groups would like to shape the gatekeeping functions of OSPs according to their agendas. In Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law, Niva Elkin-Koren and Maayan Perel continue the discourse undertaken in this Part by highlighting the challenges posed by content moderation to the rule of law. Elkin-Koren and Perel reinforce the point that private ordering ‘circumvents the constitutional safeguard of the separation of powers’ and blurs the distinction between private interest and public responsibility. Chapter 34, then, introduces a critical point in the current debate on intermediary liability online: socially relevant choices are delegated to automated enforcement run through opaque algorithms. The transparency and accountability of algorithms remain an issue challenging semiotic regulation online.118 In addition, Elkin-Koren and Perel stress that machine learning and data analytics allow OSPs to proactively predict and prevent illicit use of content, bringing to life the omniscient platforms that Friedmann recalled in Chapter 14. Omniscient platforms that give a not-so-invisible handshake119 to government for cybersecurity, surveillance, censorship, and general law-enforcement tasks through opaque algorithms evoke threatening dystopian scenarios. Elkin-Koren and Perel suggest that the solution to black box content moderation can be found in grassroots oversight through ‘tinkering’,120 which would allow people to ‘systematically test and record how online intermediaries respond to representative, like-real content’ submitted to the platforms. Apparently, there is an emerging strategy for regulation of online platforms leaning towards a globalized, ongoing move in the direction of privatization of law enforcement

117  See Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale U. Press 2018). 118  See Joshua Kroll, Joanna Huey, Solon Barocas, Edward Felten, Joel Reidenberg, David Robinson, and Harlan Yu, ‘Accountable Algorithms’ (2017) 165 U. Pa. L. Rev. 633; Nicholas Diakopoulos, ‘Accountability in Algorithmic Decision Making’ (2016) 59(2) Communications of the ACM 56, 56–62; Nicholas Diakopoulos and Michael Koliska, ‘Algorithmic Transparency in the News Media’ (2016) Digital Journalism. 119  See Michael Birnhack and Niva Elkin-Koren, ‘The Invisible Handshake: The Reemergence of the State in the Digital Environment’ (2003) 8 Va. J. L. & Tech. 6.
120 See also Maayan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Fla. L. Rev. 181, 193.


30   Giancarlo Frosio online through algorithmic tools. Algorithmic enforcement makes this shift even more unpredictable in terms of fair balancing between private and public interests and human rights. In Chapter 35, Ben Wagner tries to shed some light on this murky issue by discussing Algorithmic Accountability: Towards Accountable Systems. Given the early stage of human engagement with AI, the essential basics and the precise nature of the notion itself of algorithmic accountability are still under review. Basically, according to Wagner, there is still a high level of uncertainty regarding ‘to whom’ and ‘for what’ algorithms should be accountable. An initial basic finding, which fits within Elkin-Koren and Perel’s conclusions, is that algorithms should be at least accountable to users. In this respect, access to the source code might provide some accountability but users should be enabled to understand what the algorithm is actually doing. In order to do so, Wagner lists a number of technical, organizational, and regulatory challenges to ensuring access to data. Considering intermediary liability and algorithmic accountability more closely, Wagner believes that ‘a proposal in which adherence to algorithmic accountability would lower liability of intermediaries could contribute to more effectively ensuring compliance with human rights’. Specific provisions for ensuring algorithmic ac­count­abil­ity might include transparency of automated decisions, external auditing of the algorithmic accountability mechanism, an independent external body addressing complaints, ‘radical transparency’ of the system, irrevocable storage of the decisions, user-friendly explanation of the automated decisions, and availability of human review of the decisions.

6.  Mapping Internet Jurisdiction, Extraterritoriality, and Intermediary Liability Finally, international private law issues are addressed in Part VII. The purpose of this Part is to examine the interface between intermediary liability online and internet jurisdiction, with special emphasis on extraterritorial enforcement of intermediaries’ obligations, which has been emerging as a consistent trend in intermediary liability policy. The term ‘fragmentation’ has surfaced several times in the last few pages. Rules governing online intermediaries are complex and diverging in numerous jurisdictions. As is apparent from the preliminary mapping in this chapter, divergence has recently been expanding rather than becoming normalized. This phenomenon is perhaps tightly attached to the protectionist impulses that characterize present international relationships and internet governance. In Chapter 36, Dan Jerker Svantesson rates this ‘as the most important, and perhaps most urgent, underlying issue facing the internet—there is a fundamental clash between the global, largely borderless, internet on the one hand, and the practice of lawmaking and jurisdiction anchored in a territorial thinking’. In discussing Internet Jurisdiction and Intermediary Liability, Svantesson points at several issues, including the validity of OSPs’ terms of service and law enforcement requests to


access data stored abroad, but the key development that threatens the global internet relates to the geographical scope of online intermediaries’ liability and obligations. In particular, recent case law and policymaking are faced with answering the riddle of the extraterritorial application of such obligations. Extraterritorial enforcement recently made the headlines with the worldwide enforcement of the ‘right to be forgotten’. Some European institutions endorse the view that delisting should have an extraterritorial reach. On the territorial effect of delisting decisions, the WP29 Guidelines noted that limiting delisting to EU domains cannot be considered a sufficient means to satisfactorily guarantee the rights of data subjects according to the ruling. In practice, ‘this means that in any case de-listing should also be effective on all relevant .com domains’.121 Recently—in accordance with the WP29 Guidelines—the Commission Nationale de l’informatique et des Libertés (CNiL), the French data protection authority, ordered Google to apply the right to be forgotten on all domain names of Google’s search engine, including the .com domain.122 The question raised by CNiL was finally decided by two recent CJEU cases—with a second case dealing with global delisting of a defamatory post on Facebook. The CJEU concluded that EU law does not impose or preclude worldwide measures.123 Instead, it is up to national courts to decide whether extraterritorial delisting should be imposed according to their own balancing of fundamental rights and application of international norms.124 The CJEU’s final stance partially departed from Advocate General Szpunar’s Opinion, which warned against the unqualified application of worldwide delisting and blocking orders.125 Svantesson shares similarly critical views of worldwide measures. However, in a time of ‘sovranism’, a stronger stance on digital sovereignty is to be expected. Of course, this approach is set to create conflicts between the law of the affected party and the law of the speaker. Yet national courts can hardly endorse a different approach—or EU law impose it on national courts. If international law allows it, a national court finding that the fundamental rights of its nationals have been infringed must impose measures that provide global redress. Meanwhile, decisions imposing extraterritorial effects on intermediaries have also appeared elsewhere. Svantesson discusses additional ones from Australia and the United States, such as X v Twitter126 and Garcia v Google respectively.127 Notably, the

121 Article 29 Data Protection Working Party, ‘Guidelines on the Implementation of the CJEU Judgment on Google Spain v. Costeja’ (26 November 2014) 14/EN WP 225, 3. 122  See CNiL Restricted Committee Decision no. 2016–054 of 10 March 2016 . See also ‘CNiL Orders Google to Apply Delisting on All Domain Names of the Search Engine’ (CNiL, 12 June 2015) . 123  See C-507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:772, para. 72; C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd [2019] ECLI:EU:C:2019:821, paras 50–2. 124 ibid. 125 C-507/17 Google LLC v Commission nationale de l’informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:15, Opinion of AG Szpunar, para. 36. 126  [2017] NSWSC 1300. 127 See Cindy Lee Garcia v Google Inc. and others, 786 F.3d 733 (9th Cir. 2015) (US).


Supreme Court of Canada issued an order requiring Google to remove websites from its worldwide index. The court order is unprecedented for Canada as it forces Google to remove links anywhere in the world, rather than only from the search results available through Google.ca.128 In Chapter 37, Michael Geist discusses The Equustek Effect: A Canadian Perspective on Global Takedown Orders in the Age of the Internet. Equustek stands as a quintessential example of the disruptive effect of extraterritorial enforcement and the stalemate that it might bring about. After the Canadian Supreme Court issued its final order for global delisting, a district court in San Jose, California issued a diametrically opposed decision stating that Google would be infringing US law if it enforced the Canadian order. Twenty years later, the catch-22 scenario of Licra v Yahoo! resurfaces in a far more distributed fashion and might potentially break the internet.129 The 2000 hate speech and Nazi memorabilia case showed the world that internet jurisdiction could become an unsolvable puzzle. For a couple of decades, though, courts shied away from the controversy so as not to wake the dormant kraken. The kraken that can destroy the internet—or at least Balkanize it—has been reawakened. According to Geist, only two outcomes seem plausible as well as precarious: ‘local courts deciding what others can access online or companies such as Google selectively deciding which rules they wish to follow’. Bertrand de La Chapelle and Paul Fehlinger further develop this precarious state of internet affairs by noting that ‘[e]xtraterritorial extension of national jurisdiction is becoming the realpolitik of internet regulation.’ In Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation, de La Chapelle and Fehlinger argue from a global internet governance perspective that conflicts between jurisdictions online increase steadily, challenging the Westphalian international system. Possibly, as the case law reviewed by Svantesson and Geist has already shown, a legal arms race will escalate, with countries exerting digital sovereignty through an extensive interpretation of territoriality criteria over cross-border data flows and services. For de La Chapelle and Fehlinger, traditional legal cooperation cannot cope with internet jurisdictional tensions, opening an uncertain path for the future of the global digital economy, human rights, cybersecurity, and the technical internet infrastructure. The route to pursue would be that of an international agreement on the matter. Unfortunately, there is no consensus in sight and this is not going to change any time soon. This institutional gap in internet governance—de La Chapelle and Fehlinger stress—may be addressed by launching innovative cooperation mechanisms as transnational as the internet itself, through the development of issue-based multistakeholder policy networks that devise ‘scalable solutions for cross-border legal challenges with regard to data flows, online content or domains’. Given the role of online intermediaries in the digital interconnected society, their liability for the speech and content they carry has become a primary policy concern. Much has changed since the inception of the first online intermediaries—and their early regulation. New challenges have renewed discussion of the scope of intermediaries’

128  Equustek Solutions Inc. v Google [2017] SCC 34 9 (Can.). 129  TGI Paris LICRA & UEJF v Yahoo!
Inc [20 November 2000] (Fra.).


Mapping Online Intermediary Liability   33 duties and obligations. The Oxford Handbook of Online Intermediary Liability seeks to understand a confused international legal framework. The uncertainty that this confusion brings about can hurt users by potentially scaring companies away from providing innovative new services in certain markets. Additionally, companies may unnecessarily limit what users can do online, or engage in censorship by proxy to avoid uncertain retribution under unfamiliar laws. National courts and authorities, on the other hand, may seek extraterritorial enforcement to prevent any access to infringing materials in their jurisdiction. This is telling of a disconnection between physical and digital governance of information and content that will hardly go away, at least for some time. As a result, in an apparently confused legal and theoretical landscape, there is a growing tendency towards internet fragmentation, which is made even more obvious by unconcealed national tendencies towards data protectionism.



PART II

MAPPING FUNDAMENTAL NOTIONS



Chapter 2

Who Are Internet Intermediaries?
Graeme Dinwoodie

As this Handbook plainly attests, much has been written in recent years on the liability of ‘internet intermediaries’.1 Yet, it is not altogether clear who is an ‘internet intermediary’ (or ‘online intermediary’), the object of this intense scholarly attention. Tarleton Gillespie’s 2018 article ‘Platforms Are Not Intermediaries’ clearly invoked an understanding of the term (probably meaning they are not passive neutral conduits)2 quite different than most legal scholars.3 But even among legal scholars (and courts and legislatures) there is little consensus or clarity. As Jaani Riordan has explained: The term ‘internet intermediary’ is [an] unhappy abstraction. These words must be used to describe many entities which seem to share little in common, other than activity that uses electronic computer networks . . . They are a genus whose many members’ common feature have never been systematically identified. These difficulties partly explain their tendency to elude precise definition.4

This chapter considers several different ways in which we can better understand the category of online intermediaries. It does so by acknowledging important scholarly efforts 1 See e.g. Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But not Liable? (CUP 2017). 2  See Tarleton Gillespie, ‘Platforms are Not Intermediaries’ (2018) 2 Geo. L. Tech. Rev. 198; see also Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale U. Press 2018). 3  Cf. David Llewelyn, ‘Intellectual Property Liability of Consumers, Facilitators and Intermediaries: Concepts Under Common Law’ in Christopher Heath and Anselm Kamperman Sanders (eds), Intellectual Property Liability of Consumers, Facilitators and Intermediaries (Wolters Kluwer 2012) s. 2.02, at 18 (‘This paper regards “intermediaries” as those who have played an active role in the transaction, providing an essential element for the act of infringement; “facilitators”, on the other hand, do not play this role at all, they are mere conduits and their role is a passive one’). 4  See Riordan (n. 1) s. 2.12, at 29.

© Graeme Dinwoodie 2020.


38   Graeme Dinwoodie to produce a detailed taxonomy of online intermediaries. Riordan himself has produced a remarkably useful taxonomy of internet services—developed by reference to the way in which a party’s services fits within the internet’s architecture—on which it would be nigh impossible to improve.5 An analysis of positive law in a number of countries reveals some lessons about the meaning of the term, as well as alternative legal categories of online actors that might be seen as subsets of, or overlapping with, ‘online intermediaries’. Most notably, the concepts of ‘online service provider’ or ‘internet service provider’ serve as rough, but not exact, proxies for internet intermediaries.6 And more recent legislative innovations in Europe have carved out further subsets, such as the ‘online content-sharing service pro­viders’ which will be subject to additional obligations under the 2019 Directive on Copyright in the Digital Single Market.7 By and large, these are technical statutory def­in­itions adopted for very particular legal purposes, but they are considered for they introduce a range of distinctions among internet intermediaries. This chapter also pursues a more typological approach to the classification question, while expounding on the important role that Riordan’s functional taxonomy (or those of others) might play.8 Clearly, law must have regard to empirical reality of how actors behave in constructing categories. But those legal categories are often driven by other considerations, as will be our policy debates. By adopting a somewhat more conceptual approach, I hope to develop understandings that will allow us to debate the liability of online intermediaries across borders and across time, given that the technological features and social role of online intermediaries are constantly evolving.

1.  Definitions of ‘Internet Intermediaries’ The term ‘internet intermediary’ has almost no formal definition in statutes or international treaties.9 This is not to say that the term ‘intermediary’ is not found in such

9  But see Comprehensive Economic and Trade Agreement Between Canada of the One Part, and the European Union and its Member States, of the Other Part, Can-EU, 30 October 2016 [CETA], OJ L/11 (14 January 2017) 23, Art. 20.11 (mandating safe harbours for ‘intermediaries’ including conduit, hosting, and caching, but also permitting safe harbours for other functions, ‘including providing an information location tool, by making reproductions of copyright material in an automated manner, and communicating the reproductions’). Art. 18.82 of the Trans-Pacific Partnership Agreement noted that ‘the Parties recognise the importance of facilitating the continued development of legitimate online services operating as intermediaries and, in a manner consistent with Article 41 of the TRIPS Agreement, providing enforcement procedures that permit effective action by right holders against copyright infringement covered under this Chapter that occurs in the online environment’. However, the safe harbours expressed immunity in terms of ‘internet service providers’. Art. 18.82 was one of those provisions suspended when the parties renegotiated the agreement upon US withdrawal. See Comprehensive and Progressive Agreement for Trans-Pacific Partnership, 8 March 2018 .


instruments. For example, Article 11 of the EU Enforcement Directive and Article 8(3) of the EU Information Society Directive each require that Member States ensure that ‘right holders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an intellectual property right’.10 For the purpose of such injunctions—which might be called ‘accountability orders’11—neither Directive defines the term ‘intermediary’. But the Court of Justice of the European Union has been faced with cases in which the defendant against whom such an order was sought has contested its status as an ‘intermediary’. These challenges to intermediary status have largely failed. Thus, an internet access provider is an intermediary.12 So, too, is an online marketplace, a social network, and the operator of an open wireless network.13 Drawing on recital 59 of the Information Society Directive, the Court of Justice suggested in Telekabel that an intermediary must ‘carry a third party’s infringement in a network’.14 But this is implicit in the ordinary concept of ‘intermediation’, as reflected in Jaani Riordan’s observation that ‘internet intermediaries are united by the attribute that

10  See Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights [2004] OJ L195/22, Art. 11; Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society [2001] OJ L167/10, Art. 8(3). 11  In using this phrase, I am adapting Martin Husovec’s insightful characterization of this type of remedy. See Husovec (n. 1). Sir Richard Arnold has labelled this type of liability as ‘intermediary liability’, see Richard Arnold in Chapter 21, which makes some sense given the precondition to its availability. Cf. Riordan (n. 1) ss. 1.49–1.60, at 12–14 (using terminology of ‘injunctive’ liability, and describing these mechanisms as ‘injunctions without wrongdoing’). But as this chapter is intended to interrogate the concept of internet ‘intermediary’ I will avoid using that description of the orders here. 12  See C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192; C-557/07 LSG-Gesellschaft zur Wahrnehmung von Leistungsschutzrechten GmbH v Tele2 Telecommunication GmbH [2009] ECLI:EU:C:2009:107, para. 43 (‘Access providers who merely enable clients to access the Internet, even without offering other services or exercising any control, whether de iure or de facto, over the services which users make use of, provide a service capable of being used by a third party to infringe a copyright or related right, inasmuch as those access providers supply the user with the connection enabling him to infringe such rights’). 13  See C-324/09 L’Oréal SA v eBay Int’l AG [2011] ECLI:EU:C:2011:474, para.
113; C-360/10 SABAM v Netlog NV [2012] ECLI:EU:C:2012:85; C-484/14 Tobias McFadden v Sony Music [2016] ECLI:EU:C:2016:689 (discussing free public wi-fi offered by small business). 14  See C-314/12 (n. 12) para. 30 (emphasis added). The Court further elaborated that the access provider in that case was an intermediary because it was ‘an inevitable actor in any transmission of an infringement over the internet between one of its customers and a third party, since, in granting access to the network, it makes that transmission possible’, ibid. para. 32. This seems to suggest (no doubt correctly) that there will only be an order against such an intermediary when the order relates to the provision of services with which its customer would potentially access an infringing site. Cf. Jaani Riordan, ‘Website Blocking Injunctions Under United Kingdom and European Law’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017) 275, 278 (noting that this ‘appears to reflect causation-based justification for the grant of blocking remedies’ rather than serving as any definitional limit).


40   Graeme Dinwoodie they are necessary but insufficient causes when online wrongdoers utilize their services or status to cause harm to others’.15 And the language of the Court’s quite conclusory reasoning appears to impose few limits. The suggestion in Tommy Hilfiger Licensing LLC v Delta Center that ‘[f]or an economic operator to fall within the classification of “intermediary” within the meaning of those provisions, it must be established that it provides a service capable of being used by one or more other persons in order to infringe one or more intellectual property rights’ brings within the concept a wide array of actors.16 In Hilfiger, the Court did not decide the hypothetical question whether suppliers of electricity were intermediaries, but the language admits of this possibility.17 Scholars have suggested that ‘[f]urther intermediaries likely to be covered [by Art. 8(3) of the Information Society Directive and Art. 11 of the Enforcement Directive] include pro­viders of payment or anonymizing services, domain name authorities, search engines, VPN providers, or providers of other technical infrastructure.’18 In short, the Court of Justice has embraced a capacious understanding of the term ‘intermediary’. As Martin Husovec notes in his foundational work on accountability orders, ‘although the Directive uses the term “intermediary” [as the potential subject of an accountability order] it is becoming increasingly clear that the underlying concept has very little to do with any intermediary role. The generous protective hand of injunctions basically extends to everyone who engages in an economic activity, in course of which he or she is in a position to prevent third party wrongdoing. Any definitional exercises seem to be by now a mere windowdressing for the purpose of satisfying black letter law.’19

15  See Riordan (n. 1) s. 2.03, at 27. 16  See C-494/15 Tommy Hilfiger Licensing LLC v Delta Center [2016] ECLI:EU:C:2016:528, para. 23. If Art. 11 of the Enforcement Directive is read as encompassing a number of conditions before an order is issued, one of which being that the defendant is an ‘intermediary’, see Cartier Int’l v British Sky Broadcasting [2014] EWHC 3354 (Ch) (UK), then that suggests that one could be an ‘intermediary’ in circumstances where the other conditions do not pertain. The UK courts have listed the other conditions as including that (1) the users/operators of the website are infringing the plaintiff ’s rights, and (2) the users/operators of the website use the intermediary’s service to do so. See ibid. So, logically, these cannot be definitional elements of the concept of an ‘intermediary’; ‘intermediary’ must be a concept that stands apart from these considerations. 17  See C-494/15 (n. 16) para. 28. Sir Richard Arnold has suggested that certain types of search engine might be an intermediary within the meaning of Art. 11 but others would not. See Arnold in Chapter 21. If this were true, it would highlight the important point that for the purposes of particular provisions where the concept of ‘intermediary’ is a threshold for application, ‘intermediary’ might be defined by particular conduct in a particular context rather than status in the online ecosystem. 18  See Husovec (n. 1) 90. This understanding of the legal term ‘internet intermediary’ for the purposes of accountability orders is not all that different than the functionally-derived taxonomy created by Riordan. See Riordan (n. 1) ss. 2.40–2.86 at 36–46. But, of course, Riordan’s taxonomic exercise is designed not only to show the range of actors whom we might think of as intermediaries but also to highlight the important differences among them, which should help in determining whether accountability (or any other) orders should be issued against them, and what the nature of any relief should be. That is, the scope of the overall concept is a different question from the question of sub-classification. 19  See Husovec (n. 1) 90. The EU is not unique. See Graeme Austin, ‘Common Law Pragmatism: New Zealand’s Approach to Secondary Liability of Internet Service Providers’ in Dinwoodie (n. 14) 213 (commenting that the definition of a ‘hosting’ service provider for the purposes of the copyright safe


Indeed, although the provision in the Information Society Directive was clearly motivated in large part by the context of online exploitation,20 the parallel provision in the Enforcement Directive has not been limited to the online context.21 Thus, in Tommy Hilfiger Licensing, the tenant of market halls who sublet the various sales points situated in those halls to market traders, some of whom used the locations in order to sell counterfeit branded products, was held to fall within the concept of 'an intermediary whose services are being used by a third party to infringe an intellectual property right' within the meaning of Article 11 of the Enforcement Directive.22 The Court of Justice declined to draw any distinction between an online and a physical marketplace.

Likewise, the e-Commerce Directive references the term 'intermediary' several times in its recitals.23 But in the corresponding Articles that implement those recitals (Arts 12–15), safe harbours are created for providers of 'information society services' (rather than for 'internet intermediaries').24 In the context of the EU e-Commerce Directive, a 'service provider' is a person providing an information society service (as that term is defined in an earlier Directive). This basically means 'any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services'.25 This is a broad definition but it does exclude some commercial actors such as internet cafés (because the service is not provided remotely) and broadcasters (who, rather than the user, determine when and what transmissions occur). And although the safe harbours to which such providers can have recourse apply only to economic operators rather than non-commercial services, English courts have held unequivocally that personal websites, such as blogs and discussion fora, which have no profit motive or revenue model, may qualify for protection.26 Insofar as the defined term 'information society service' provider is serving as a proxy for the intermediaries that the e-Commerce Directive has in its sights, this too is a broad conception of 'internet intermediary'.27 And in the context of

harbour 'would extend the concept of "Internet service providers" to "Web 2.0" platforms, bulletin boards, blogs, or even websites operated by firms, public entities, and private parties'). 20  See Information Society Directive, recital 59 ('In the digital environment, in particular, the services of intermediaries may increasingly be used by third parties for infringing activities. In many cases such intermediaries are best placed to bring such infringing activities to an end . . .'). 21  See C-494/15 (n. 16) para. 29 ('The fact that the provision of sales points concerns an online marketplace or a physical marketplace such as market halls is irrelevant in that connection. It is not apparent from Directive 2004/48 that the scope of the directive is limited to electronic commerce'). 22  See ibid. 23  See Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Legal Aspects of Information Society Services, in Particular Electronic Commerce, in the Internal Market [2000] OJ L178/1, recitals 14, 40, and 50. 24  In implementing Art. 8(3) of the Information Society Directive, the UK government also used the term 'service provider', which it then defined in terms consistent with regulations issued under the e-Commerce Directive. Cf. Directive 2001/29/EC (n. 10) Art. 8(3); Copyright, Designs and Patents Act 1988, s. 97A; Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013), reg. 2. 25  See Directive 2000/31/EC (n. 23) recital 17. 26  See Riordan (n. 1) s. 12.63, at 390. 27  Cf. Riordan (n. 1) s. 2.01, 26 ('Not all providers of internet services are properly described as "intermediaries" as such, and are not necessarily to be treated comparably').


the e-Commerce Directive, unlike the Enforcement Directive, we are talking about internet intermediaries.

The e-Commerce Directive not only encompasses a wide variety of internet intermediaries, but also classifies them in a way that might suggest a taxonomy; at the very least, these classifications have legal consequences. Thus, the e-Commerce Directive creates immunity for service providers: (1) who are mere neutral and transient conduits for tortious or unlawful material authored and initiated by others (Art. 12); (2) for caching local copies of third parties' tortious or unlawful data (Art. 13);28 and (3) who store third parties' tortious or unlawful material while having neither actual knowledge of the 'unlawful activity or information' nor an awareness of facts or circumstances from which that 'would have been apparent' (Art. 14).29 As we shall see later, EU legislation picks up three of the four safe harbours found in US copyright law, as enacted by the Digital Millennium Copyright Act (DMCA).30 More generally, the types of intermediary activity protected by the safe harbours vary slightly among jurisdictions.31 But there is a core commonality that identifies, inter alia, access providers that serve as conduits and host providers that host content as distinct classes of intermediary.

Article 14 of the e-Commerce Directive, which provides a hosting safe harbour not unlike that found in section 512(c) of the US Copyright Act, has received the greatest attention in litigation and is similar to immunity provisions in many jurisdictions. Article 14 provides that:



1. Where an information society service is provided that consists of the storage of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the information stored at the request of a recipient of the service, on condition that:
(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or

28  To qualify for protection, caching must be ‘automatic, intermediate and temporary’, and for the sole purpose of making onward transmission more efficient. Service providers will lose protection if they do not act expeditiously to remove cached information upon obtaining actual knowledge that the original copy has been removed or its removal ordered by a competent authority. See Directive 2000/31/EC (n. 23) Art. 13. 29  ibid. Arts 12–14. See also C-484/14 (n. 13). 30  Unlike the DMCA safe harbours, however, those required by the e-Commerce Directive apply horizontally to tort claims under national law as well as trade mark or unfair competition law claims. Indeed, they apply horizontally across all forms of civil and criminal wrongdoing, subject to a number of exclusions set out in Art. 3. 31  For example in New Zealand there are storage and caching safe harbours, along with a provision (which arguably serves as immunity for a mere conduit) affirming that service provider liability cannot be based on mere use of its facilities by a primary infringer. See Austin (n. 19).



(b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.
2. Paragraph 1 shall not apply when the recipient of the service is acting under the authority or the control of the provider.

The Court of Justice interpreted this provision in Google France v Louis Vuitton Malletier and L'Oréal v eBay.32 The Court held that whether the intermediaries in question—Google or eBay—came within the safe harbour would depend on two considerations. First, as a threshold matter, the intermediary could not have been 'active' in the allegedly illegal activity; the safe harbour protects conduct of a 'mere technical, automatic and passive nature'.33 To take advantage of the safe harbour, an intermediary must be a 'neutral' actor. The Advocate General in eBay had questioned the application of the neutrality condition to immunity under Article 14; the concept is referenced in a recital arguably addressing another provision (the conduit safe harbour).34 But the Court of Justice adopted the requirement in both Google France and eBay.35 Thus, Google's immunity would depend on the role it played in the selection of keywords.36 Google's Keyword Suggestion Tool might render its activities 'non-neutral' and make it vulnerable to loss of immunity under Article 14.

Likewise, in eBay L'Oréal argued that Article 14(1) could not apply because the activities of eBay went 'far beyond the mere passive storage of information provided by third parties'.37 On the contrary, L'Oréal alleged, eBay 'actively organized and participated in the processing and use of the information to effect the advertising, offering for sale, exposing for sale and sale of goods (including infringing goods)'.38 The Court of Justice found first that eBay was potentially entitled to the benefit of Article 14 as an information society service provider. Whether eBay was such a provider within the protection afforded by the safe harbour would as a threshold matter depend on how active it was in the allegedly illegal activity.39 If it had been involved in 'optimising the presentation of the offers for sale in question or promoting those offers, it [would] be

32  See C-236–238/08 Google France SARL v Louis Vuitton Malletier SA [2010] ECLI:EU:C:2010:159, para. 20; C-324/09 (n. 13) para. 113. 33  ibid.; ibid. para. 115 ('[T]he mere fact that the operator of an online marketplace stores offers for sale on its server, sets the terms of its service, is remunerated for that service and provides general information to its customers cannot have the effect of denying it the exemptions from liability provided for by [the e-Commerce Directive]'). 34  See C-324/09 (n. 13) [2010] ECLI:EU:C:2010:757, AG's Opinion 139–46. 35  See C-236–238/08 (n. 32) paras 113–16. 36  See C-324/09 (n. 13) para. 119. 37  L'Oréal SA v eBay International AG [2009] EWHC 1094 (Ch) [437] (UK). 38 ibid. 39  See C-324/09 (n. 13) para. 113; ibid. para. 115. The fact that certain services are automated should not always mean that the provider of those services is of itself not 'active'. Algorithmic development is surely relevant behaviour in assessing degrees of activity. Cf. Lenz v Universal, 815 F.3d 1145 (9th Cir. 2016) (US) (amending opinion to reflect algorithmic review). But see Annette Kur, 'Secondary Liability for Trademark Infringement on the Internet: The Situation in Germany and Throughout the EU' (2014) 37 Colum. J. of L. & Arts 525, 532 (noting the majority position in German case law that has treated suggestions proffered by operation of algorithms as being within the protection of Art. 14).


considered not to have taken a neutral position . . . [and thus unable] to rely, in the case of those [offers]', on the Article 14 exemption.40 Determination of that question was, however, left to national courts.41 Secondly, by the terms of Article 14, even an intermediary that was sufficiently passive to qualify as a protected information society service provider would lose immunity if it was put on notice of a wrong and did not act expeditiously. Thus, for example, if eBay 'was aware of facts or circumstances on the basis of which a diligent economic operator should have realised that the offers for sale in question were unlawful and, in the event of it being so aware, failed to act expeditiously'.42 Google could invoke Article 14 only if it disabled ads upon receiving notice.43 The Court of Justice's reading of the second condition imposes 'a hybrid standard that assesses what the defendant actually knew according to the standards of a reasonable service provider in the defendant's position'.44

The Court of Justice has not finally decided on the availability of the EU safe harbour for either search engines or auction sites, leaving that to be fought out among the lower national courts. In decisions handed down on similar facts to eBay v L'Oréal, the French Supreme Court held that eBay could not avail itself of the protection under Article 14 because it had played too active a role by assisting sellers in the promotion and fostering of sales.45 In contrast, the Madrid Court of Appeals has held that Google (as the owner of YouTube) was acting as a host under Article 14 in the context of a copyright infringement case.46 The Italian courts have reached similar outcomes regarding YouTube.47 Variations clearly remain at the national level within Europe.48 The issue is now before the Court of Justice.49

40  C-324/09 (n. 13) para. 116. 41  ibid. para. 117. 42  ibid. para. 124 (quoting Art. 14 of the e-Commerce Directive). 43  See ibid. para. 120. 44  See Graeme Dinwoodie, 'A Comparative Analysis of the Secondary Liability of Online Service Providers' in Dinwoodie (n. 14) 37 (quoting Riordan). 45  See Cour de cassation [French Supreme Court] eBay Inc v LVMH, Parfums Christian Dior [3 May 2012] Bull. civ. IV, no. 89 (Fra.). See also Beatrice Martinet Farano, 'French Supreme Court Denies eBay Hosting Protection' (2012) 3 TTLF Newsletter on Transatlantic Antitrust & IPR Developments (discussing the French Supreme Court's decision, in which the court specifically relied on the 'active role' standard from the Court of Justice's L'Oréal and Google France decisions). 46 See Audiencia Provincial Sentencia [APS] [Provincial Appellate Court Sentence] Madrid Gestevision Telecinco, SA v YouTube, LLC [14 January 2014] no. 11/2014 (ES) translated in 'Decision no. 11/2014 on YouTube v Telecinco' (Hogan Lovells Global Media & Comm Watch, 14 February 2014). 47  See Martini Manna and Elena Martini, 'The Court of Turin on the Liability of Internet Service Providers' (Lexology, 10 June 2014) (reporting on similar outcomes in Italy on similar grounds in Tribunale Torino [Ordinary Court of First Instance of Turin] Delta TV Programs srl v Google Inc [23 June 2014] Docket no. 38113/2013 (It.)). 48  See Kur (n. 39) 531–2 (noting different approaches in Germany and France). 49  See C-682/18 LF v Google LLC, YouTube Inc., not yet decided.



2.  Alternative Terms

The literature discussing online intermediaries frequently uses alternative terms that are often assumed to be synonymous both with each other and with 'online intermediaries'. The most frequent alternative would be 'online service provider' (OSP) or 'internet service provider' (ISP). These terms are used largely because they are commonly defined terms in statutes; indeed, if they were not defined by the statute, the concept would almost be without limit.50 As a result, those terms have no consistent meaning across national borders. To be sure, the term 'online intermediary' also lacks a single, common, and consistent usage. But 'online intermediary' is less frequently used as a defined statutory term; the variation in that case arises more from conceptual ambiguities than statutory definitions.

Each of these terms has in recent years received some legislative definition (along with yet other synonyms) in provisions creating safe harbours, or immunity from liability, for such actors.51 Just before the EU adopted the e-Commerce Directive, the United States enacted the DMCA (and two years before that the Communications Decency Act). Section 230 of the Communications Decency Act provides the strongest and most unconditional form of immunity from liability for providers and users of an 'interactive computer service' who publish information provided by others.52 However, that provision does not apply to federal intellectual property claims. Instead, the DMCA introduced a detailed set of provisions into section 512 of the Copyright Act establishing a series of immunities from damages for copyright infringement for intermediaries engaged in certain types of behaviour: providing internet access as conduits, hosting websites, or offering 'information location tools'.53 It also provided a safe harbour for caching, from which many intermediaries benefit. The parallels with the e-Commerce Directive are obvious, even though the DMCA is formally restricted to copyright law.54

50  See Riordan (n. 1) s. 2.04, at 27 (noting that the term '[service providers] is so general that it described almost any commercial entity'). 51  The terms make more fleeting appearances outside this context, primarily in defining which actors are subject to certain disclosure obligations vis-à-vis customers and enforcement authorities. For example, the UK's Digital Economy Act 2010 required an 'internet service provider' to participate in a subscriber notification regime, and for that purpose defined an 'internet service provider'. See Digital Economy Act 2010, s. 16 amending Communications Act 2003, s. 124N (defined as 'a person who provides an "internet access service", which in turn means an electronic communications service consisting wholly or mainly of access to the internet, where an IP address is allocated to each subscriber to enable access'). See also Digital Economy Act 2010, ss. 3–4. 52  See Communications Decency Act of 1996, 47 USCA s. 230(c)(1) ('No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider'). The protected intermediaries—interactive computer service providers—are a broad category. 53  17 USC § 512 (2000). 54  It is on occasion improperly used to further different claims.


Each safe harbour under the DMCA (e.g. access vs hosting vs caching) has its own specific requirements that vary slightly.55 Immunity as an access provider requires, to oversimplify, a basic passivity. More formally, the storage of material must be 'intermediate and transient . . . in the course of transmitting, routing, or providing connections'. Moreover:

• the event must be initiated by a third party;
• the event must be carried out through an automatic technical process without selection of the material by the service provider;
• the service provider must not select recipients of the material except as an automatic response to the request of another person;
• no copy of the material made by the service provider must be maintained on the system in a manner ordinarily accessible to anyone other than anticipated recipients or for a longer period than is reasonably necessary for the transmission, routing, or provision of connections; and
• the material is transmitted through the system or network without modification of its content.

The crucial hosting immunity in section 512(c) of the Copyright Act is based on, inter alia, the operation of a notice-and-takedown system by the intermediary. Section 512(c) is an elaborate provision that (like each safe harbour in the statute) imposes a series of specific conditions—in addition to the general conditions noted above—that are quite detailed. Thus, 'the (hosting) provider must not have actual knowledge that the material or an activity using the material is infringing, is not aware of facts or circumstances from which infringing activity is apparent, and upon obtaining such knowledge or awareness, it acts expeditiously to remove or disable access to the material'. In addition, where the provider has the right and ability to control such activity, it must not receive a financial benefit directly attributable to the infringing activity.56 Most importantly, where an ISP receives a takedown notice alleging infringement that complies with the detailed provisions of the statute, it must respond expeditiously to remove or disable access to the material alleged to be infringing. And the statute then orchestrates in precise terms the actions that the ISP must take to preserve its immunity, including how it must respond to any counter-notice that it receives from any person in response to its good-faith disabling of access to, or removal of, material.57 The counter-notice procedure of the DMCA is intended to preserve some balance between the rights of the customers of the ISP, who might have valid grounds for believing that their conduct is not infringing, and those of the copyright owner (with whose notice the ISP is

55  To avail oneself of any of the different immunities under the DMCA, the statute created two general conditions. Thus, OSPs must 'adopt and reasonably implement, and inform its subscribers and account holders, of a policy for termination of repeat infringers'; and they must 'accommodate and not interfere with standard technical measures used by copyright owners to identify or protect copyrighted works'. See 17 USC § 512(i)–(j). 56  ibid. § 512(c). 57  ibid. § 512(g).


likely to comply in order to maintain immunity). These provisions take the form of conditions which an ISP must satisfy to protect itself from liability for hosting infringing content or direct liability to a customer for removing what turns out to be lawful material. But the benefits of compliance are so strong that the conditions in the statute function as a form of business regulation.

3.  A Functional Taxonomy

The Organisation for Economic Co-operation and Development (OECD) has described (but not defined) internet intermediaries in these terms: 'Internet intermediaries bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.'58 As with the statutory definitions noted earlier, this is hardly limited. And it risks treating quite diverse actors as one monolith.

In The Liability of Internet Intermediaries, Jaani Riordan offered a compelling taxonomy of internet services, which 'classifies their technical activities into the physical, network, and application layers' of the internet.59 Physical layer services provide the basic infrastructure necessary for communication; network layer services are responsible for data being routed from one IP address to another;60 and application layer services are those where 'content is transacted'.61

Riordan consciously sought to offer a more nuanced classification than one could find in the statutory schemes noted earlier. One of the criticisms of the schemes found in, for example, the e-Commerce Directive or the DMCA, is that they reflect a static twentieth-century view of internet intermediaries. The separation between access providers and hosts, and our expectation of what each category of intermediaries does or should do, has moved on. Many intermediaries now straddle these classifications. And those classifications also group together actors who might in sometimes important ways differ dramatically from each other. More nuance can only help.

Without rehashing Riordan's model, it is worth noting two features. First, within each category there is a remarkable range of different actors performing different roles. The legal regimes operate with a far smaller number of categories, as is not surprising (and surely appropriate for enforcement-cost reasons).62 Secondly, one actor can perform

58  See OECD, 'The Economic and Social Role of Internet Intermediaries' (2010) 9. 59  See Riordan (n. 1) s. 2.02, at 26. 60  In this category, Riordan creates sub-classes of ISPs (akin to access providers); hosts; cloud services; domain name controllers; and certificate authorities. See Riordan (n. 1) s. 2.46, at 38. 61  See Riordan (n. 1) s. 2.57, at 40. Riordan splits this last extremely diverse category into platforms, gateways, and marketplaces. See ibid. 62  See Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Wolters Kluwer 2016) s. 1.3.2, at 18 (noting that the oversimplification of categories might have been appropriate in the early days of the internet, but might now be questioned).


more than one type of service; indeed, this has become evident even in working with the smaller number of legal categories, as defendants have often asserted different safe harbours to immunize different parts of their operation.

It must be understood that perfecting the taxonomy is an essentially empirical task. Given the fast-changing and diverse nature of online intermediaries, there might appear to be a risk in placing too much analytical weight on a (2016) technical description of the role and capacity of current actors. And it is an open question whether that empirical reality should be hardwired into legal doctrine, or even given dispositive weight at the moment it is assessed. The dynamic between technological form and legal liability has ebbed and flowed over the years. For example, the US Supreme Court has recognized the relevance of design choice to determinations of copyright inducement liability when combined with other evidence.63 Yet, it has also refused to allow technology clearly designed around doctrine to secure a clean bill of health.64

As long as this potential for evolution is understood and the functional features of today do not become embedded in fixed legal doctrine, this effort at taxonomy is valuable.65 There is a need for greater granularity in this debate. This sensitivity would be valuable, because the legitimacy of the behaviour of intermediaries occupies a spectrum that requires greater flexibility (and room for more subtle calibration) than formal secondary liability doctrine might seem to allow.66 Indeed, such flexibility is also necessary to account for the different demands that might be imposed on smaller entities without the capital or sophistication of eBay.67

To give just one example of where a factual taxonomy might inform legal analysis, there remains disagreement among national courts in Europe whether web-blocking orders must as a matter of proportionality be sought first from host providers before actions are brought against access providers. The Court in Telekabel did not address

63 See MGM Studios Inc. v Grokster Ltd, 545 US 913, 939 (2005) (US) ('[T]his evidence of unlawful objective is given added significance by MGM's showing that neither company attempted to develop filtering tools or other mechanisms to diminish the infringing activity using their software'). 64  See Joseph Fishman, 'Creating Around Copyright' (2015) 128 Harv. L. Rev. 1333; Rebecca Giblin and Jane Ginsburg, 'We Need to Talk About Aereo: Copyright-Avoiding Business Models, Cloud Storage and a Principled Reading of the "Transmit" Clause', Columbia Law and Economics Working Paper no. 480 (29 May 2014). 65  Riordan recognizes this risk, but believes that being organized around network architecture, his taxonomy is 'flexible and capable of adaptation'. See Riordan (n. 1) s. 2.42, at 37. 66  See generally Jane Ginsburg, 'Separating the Sony Sheep from the Grokster Goats: Reckoning the Future Business Plans of Copyright-Dependent Technology Entrepreneurs' (2008) 50 Ariz. L. Rev. 577 (noting the need for copyright law to have tools by which to recognize the different culpability of a range of intermediaries even though all operate dual-purpose technologies). 67  cf. Frederick Mostert and Martin Schwimmer, 'Notice and Takedown for Trademarks' (2011) 101 Trademark Rep.
249, 264 (speculating as to some of the misjudgments that might have prompted the lack of responsiveness of the defendant in Louis Vuitton Malletier SA v Akanoc Solutions Inc., 658 F.3d 936 (9th Cir. 2011) (US)). In Louis Vuitton v Akanoc, the Ninth Circuit affirmed a jury verdict of over $10 million against a web-hosting business that leased, inter alia, server space to customers trafficking in counterfeit goods. In that case, the defendant (a small company) failed to respond expeditiously to takedown requests, and thus fell afoul of Tiffany because it had received actual notice. See ibid.


the suggestion by the Advocate General that the principle of proportionality might, however, warrant the infringers or the host ISP being pursued first.68 But if such a form of analysis were pursued then courts would need to be informed about the different roles played by different intermediaries (and potential defendants). That is to say, while resisting subservience to technical function as determinant of how to classify a commercial actor, it is important that technical realities inform assessment of liability.69 This is particularly true if the responsibility of an intermediary to engage in particular conduct turns on its technological capacity to do so.70

Definition by exemplar might also be useful, so it is worth briefly setting out the types of defendants that have been considered intermediaries in litigation. Claims against intermediaries have arisen not only when claimants are seeking accountability orders as discussed previously. In particular, claims for primary or secondary liability have been asserted—and characterized as being in essence against intermediaries—under copyright law, trade mark law, defamation law, and privacy law, to name but the most prominent examples.71

The principal cases have revolved around relatively similar fact patterns in different jurisdictions. For example, copyright owners have sued manufacturers of copying technologies for infringements caused by those who use their equipment,72 purveyors of peer-to-peer file-sharing software for the activities of those who download material without rightholders' permissions,73 and social media sites (such as YouTube) that host allegedly infringing clips from copyrighted audiovisual works.74 In the context of trade mark law, the leading modern exemplars of such claims are actions brought against online auction sites, each essentially alleging that the auction site could have done more to stop the sale of counterfeits or other allegedly infringing items by third parties on its website; and claims brought against search engines alleging

68  See C-324/09, AG's Opinion (n. 34) para. 107 ('[I]t should be noted that the ISP is not in a contractual relationship with the operator of the copyright-infringing website. As a consequence . . . , a claim against the ISP is, admittedly, not completely out of the question, but the originator must, as a matter of priority, so far as is possible, claim directly against the operators of the illegal website or their ISP'); see also '"Disturber Liability of an Access Provider" (Störerhaftung des Access-Providers) Decision of the Federal Supreme Court (Bundesgerichtshof) 26 November 2015—Case No. I ZR 174/14' (2016) 47(4) IIC 481 (para. 83).
2001) (US); see generally Jane Ginsburg, ‘Separating The Sony Sheep from the Grokster Goats: Reckoning The Future Business Plans of Copyright-Dependent Technology Entrepreneurs’ (2008) 50 Ariz. L. Rev. 577. 74 See Viacom International Inc. v YouTube Inc., 676 F.3d 19 (2d Cir. 2012) on remand 940 F.Supp.2d 110 (SDNY 2013) (US).


that the sale of keyword advertising consisting of the trade marks of parties other than the mark owner resulted in infringement (normally, by causing actionable confusion).75 In defamation or libel law, websites (e.g. the retailer Amazon.co.uk) have been sued where a third party posted an allegedly defamatory book review on the claimant's book product page.76 And search engines such as Google have been sued for allegedly 'publishing' defamatory material that appeared within 'snippets' summarizing search results for the claimant,77 or 'processing' personal data the publication of which within snippets violated the privacy of individuals to whom the personal data related (even if the data is not removed from the actual publisher's website).78

This illustrates the wide variety of internet intermediaries pursued as liable for enabling wrongs perpetrated by others, but core ISPs, such as companies providing access to the internet or web-hosting services, are also potential defendants in any of these scenarios.79 And as rightholders—and potentially policymakers—adopt 'follow the money' or 'least cost avoider' strategies to identify defendants of first resort, the list of relevant online intermediaries may grow further. Thus, companies that process credit card payments have also been sued for facilitating unlawful transactions,80 and companies merely providing customers with access to the internet have been required to block websites in other countries where allegedly infringing content resides.81

4.  Typological Considerations

4.1  Intermediaries in Distinct Fields

Some of the definitions of intermediary noted earlier are drawn from regimes regulating intermediaries horizontally. Others are taken from particular legal regimes where provision has been made for the treatment of intermediaries. Insofar as the concept is defined for a particular field of law, it will almost inevitably come to be shaped by the

75  See generally Graeme Dinwoodie, 'Secondary Liability for Online Trademark Infringement: The International Landscape' (2014) 36 Colum. J. of L. & Arts 463–501. See also Free Kick Master LLC v Apple Inc., 2016 WL 77916 (ND Cal. 2016) (US) (app store sued for trade mark infringement with respect to apps sold by third parties in its app store). 76 See McGrath v Dawkins [2012] EWHC B3 (QB) (UK). 77  See e.g. Metropolitan Schools v DesignTechnica [2009] EWHC 1765 (QB) (UK); A v Google New Zealand Ltd [2012] NZHC 2352 (NZ). 78  See Case C-131/12 Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González [2014] ECLI:EU:C:2014:317. 79  See e.g. Louis Vuitton v Akanoc (n. 67). 80  See e.g. Gucci Am Inc. v Frontline Processing Corp., 721 F.Supp.2d 228 (SDNY 2010) (US) (allowing secondary infringement claim to proceed against credit card processing companies who provided services to online merchant allegedly selling counterfeit goods); but cf. Perfect 10, Inc. v Visa Int'l Serv. Ass'n, 494 F.3d 788 (9th Cir. 2007) (US) (affirming dismissal of actions against credit card companies). 81  See e.g. Cartier Int'l AG v British Sky Broadcasting Ltd [2016] EWCA Civ 658; [2018] UKSC 28 (UK).


purpose of that field. The capacity of an intermediary to avoid confusion (and thus further the goals of trade mark law) might differ from its capacity to prevent reproduction (and thus assist in the prevention of copyright infringement). Likewise, establishing an understanding of intermediary that extends further than the norm might bring costs that undermine the purpose of the regime in question in ways that vary among regimes. This consideration has to inform some of the debate surrounding whether to develop schemes of liability for intermediaries that are tied to particular legal claims or regimes. There may be pragmatic reasons for doing so. Overlapping and inconsistent regulation can increase costs and uncertainties and undermine the proper functioning of a legal regime (especially where the regime, such as one providing safe harbours, is aimed at providing the necessary certainty for electronic commerce).82

In some countries, immunity provisions are restricted to particular types of claims. In the United States, there is a remarkably complex matrix of immunity.83 Most claims are covered by section 230 of the Communications Decency Act.84 Copyright claims are covered by the DMCA. But trade mark claims are encompassed within neither US safe harbour regime;85 liability of intermediaries for trade mark infringement is thus determined by the positive standard articulated in Inwood v Ives86 and applied in eBay v Tiffany.87

Thus, one choice to bear in mind as we classify intermediaries is whether status as an 'intermediary' is a free-standing assessment about the technological or social function

82  See Mark Lemley, 'Rationalizing Internet Safe Harbors' (2007) 6 J. Telecomm's & High Tech. L. 101 (arguing that at least some aspects of liability—e.g. safe harbours—be standardized to afford the requisite certainty to ISPs and to prevent opportunistic distortion of the claims asserted against them). 83  ibid. (calling for rationalization of the matrix). 84  See text accompanying nn. 52–3. 85  Some have argued for a DMCA-like system of notice and takedown, tied to immunity, to be extended to trade marks. See Susan Rector, 'An Idea Whose Time Has Come: Use of Takedown Notices for Trademark Infringement' (2016) 106 Trademark Rep. 699; Mostert and Schwimmer (n. 67) 265. The ISP community is split. See the comments of Etsy, Foursquare, Kickstarter, Meetup, Shapeways, In the Matter of Joint Strategic Plan on Intellectual Property Enforcement (16 October 2015) (asking for consideration of trade mark safe harbour). 86 See Inwood Labs Inc. v Ives Labs Inc., 456 US 844 (1982) (US). 87 See Tiffany (N.J.) Inc. v eBay Inc., 600 F.3d 93 (2d Cir. 2010) (US). In the United States, the Lanham Act does contain provisions that immunize certain intermediary conduct from liability. See e.g. 15 USC § 1114(2)(B)–(C) (protecting innocent publisher or distributor of newspaper, magazine, or electronic communication from liability for trade mark infringement based on use in paid advertising matter against liability for damages); see also 15 USC § 1125(c)(3) (defence against dilution liability based on facilitating fair use, although if the use facilitated was fair it is hard to see much need for this provision). Some scholars have seen s. 32(2) as possessing the potential for operating as a general trade mark immunity. See e.g. Lemley (n. 82) 105–6.
Indeed, Mark Lemley has argued that that immunity provision should provide the model for a uniform safe harbour for intermediaries across all causes of action, in preference to that found in the Communications Decency Act or the DMCA. See ibid. 115–18. However, the courts have rarely found reason to apply s. 32(2). Cf. Hendrickson v eBay Inc., 165 F.Supp.2d 1082 (CD Cal. 2001) (US). Domain name registrars also benefit from provisions drafted in ways that confer limited immunity from liability for trade mark infringement under US law. See 15 USC § 1114(2)(D)(iii) (‘A domain name registrar, a domain name registry, or other domain name registration authority shall not be liable for damages under this section for the registration or maintenance of a domain name for another absent a showing of bad faith intent to profit from such registration or maintenance of the domain name’). See also Lockheed Martin Corp. v Network Solutions, Inc., 141 F.Supp.2d 648 (ND Tex. 2001) (US).


of the actor, or whether we denominate an actor as an intermediary for a particular type of legal claim in respect of which that status results in legal consequences.88 It might be appropriate to consider an actor an 'intermediary'—insofar as legal consequences attached to this characterization—for one type of legal claim but not another.89

Martin Husovec has noted that 'in its case law, the CJEU doesn't synchronize [the usage of the term "intermediary"] across the InfoSoc Directive, the Enforcement Directive, and the E-Commerce Directive'.90 Husovec suggests that the Court has perhaps resisted harmonizing the concept because it wanted the provisions in Articles 8(3) and 11 to apply to as broad a category of providers as possible, and that applying the understandings developed for host provider immunity under Article 14 of the e-Commerce Directive (e.g. 'passive' behaviour) would unduly restrict those orders.91 But, as a descriptive matter, one can read L'Oréal not as saying that active behaviour renders a provider something other than an intermediary; rather, it would be one not entitled to immunity under Article 14, just as an access provider against whom an order is not made under Article 8(3) is no less an intermediary. The Court found first that eBay was potentially entitled to the benefit of Article 14 as an information society service provider.92 Whether eBay was such a provider within the protection afforded by the safe harbour would as a threshold matter depend on how active it was in the allegedly illegal activity.93 If it had been involved in 'optimising the presentation of the offers for sale in question or promoting those offers, it [would] be considered not to have taken a neutral position . . . [and thus unable] to rely, in the case of those [offers]', on the Article 14 exemption.94

Moreover, the range of intermediaries encompassed by Articles 8(3) and 11 might appropriately be broader than those addressed in the safe harbour immunities, because the safe harbours shield intermediaries from monetary liability. Under prevailing standards for liability, it is hard to imagine plausible liability (as opposed to accountability) claims against many of the actors potentially susceptible to orders under Article 8(3) or 11. But perhaps there is merit in trying to articulate the concept apart from the susceptibility of a particular actor in a particular setting to be the subject of an accountability order or the beneficiary of safe harbour immunity?

88  The scope of the immunity provision in a particular country will not necessarily map to the scope of the secondary liability standard. Nor logically must it do so. 89  cf. Martin Senftleben, ‘An Uneasy Case for Notice and Takedown: Context Specific Trademark Rights’, SSRN Research Paper no. 2025075 (16 March 2012) . 90  See Husovec (n. 1) 88. 91 ibid. 92  See C-324/09 (n. 13) para. 109 (‘an internet service consisting in facilitating relations between sellers and buyers of goods is, in principle, a service for the purposes of Directive 2000/31. That directive concerns, as its title suggests, ‘information society services, in particular electronic commerce’. It is apparent from the definition of ‘information society service’ . . . that that concept encompasses services provided at a distance by means of electronic equipment for the processing and storage of data, at the individual request of a recipient of services and, normally, for remuneration’). 93  ibid. paras 113 and 115. 94  ibid. para. 116.



4.2  Status as a Measure of Legal Applicability

As this last comment suggests, to some extent the apparent variation in decisions as to whether particular actors are intermediaries under the e-Commerce Directive may be because the definition of those actors who are potentially immune may implicitly contain some of the conditions for availing oneself of immunity.95 Strictly, these conditions do not define who is a service provider but rather whether the service provider is acting in a way that will allow them to take advantage of the immunities conferred.96 Thus, for example, the Italian courts have developed a distinction between passive and active intermediaries that has its roots in the conditions under which the protections of the e-Commerce Directive will be available (and given greater weight than anticipated by the Court of Justice).97 Often the term 'intermediary' is used in ways that are an attempt to link the actor with a particular legal consequence or immunity.98 One could read this case law as refining the notion of service provider (for these purposes) or simply imposing conditions on when immunity will be available. The latter is probably the better reading because a service provider may be active in one scenario but passive in another.

Likewise, and relatedly, statutes providing immunity for different kinds of service provider performing different online roles will frequently define the term 'service provider' in varying ways to accommodate those differences. For example, an 'access provider' will inevitably be defined differently from a host provider at a certain level of detail.99 Both are intermediaries, but what we expect them to do might differ. Should we define intermediaries by reference to what they do or by what we want them to do (or by the legal exposure to which we wish to subject them)?

One sees an inversion of this, in fact, in the way that the conduct of actors who might as a matter of functional taxonomy be regarded as intermediaries has been treated as implicating primary liability. For example, search engines such as Google have been formally found not liable for defamatory material that appeared within 'snippets' summarizing search results for the claimant because they were not a publisher of the material.100

95  Even within a single jurisdiction, different safe harbours may also contain different specific conditions. Thus, a service provider may be protected under one safe harbour but not another. See Viacom (n. 74) (discussing immunity under the different safe harbours of the DMCA). 96  The same approach can be found in provisions defining service provider for the purpose of imposing regulatory obligations. Thus, the UK Digital Economy Act only required 'qualifying ISPs' to participate in its subscriber notification regime. 97  See Dinwoodie (n. 44). 98  See Llewelyn (n. 3) 18 ('It is impossible to draw a precise line to categorize . . . virtual platforms into either "facilitators" or "intermediaries", as it will depend upon what these platforms actually do and provide, and their exact involvement in, or relationship with, the act of infringement of IP . . . This paper regards "intermediaries" as those who have played an active role in the transaction, providing an essential element for the act of infringement; "facilitators", on the other hand, do not play this role at all, they are mere conduits and their role is a passive one').
99  See 17 USC § 512(k)(1)(A)–(B) (providing narrower definition of providers able to come within the scope of the copyright infringement immunity conferred by § 512(a) on access providers). 100  See e.g. Metropolitan Schools v DesignTechnica [2009] EWHC 1765 (QB) (UK); A v Google New Zealand Ltd [2012] NZHC 2352 (NZ).


They were merely intermediaries. Similarly, the analysis of the US Court of Appeals for the Second Circuit in Rescuecom v Google turned on the question of the extent to which Google's Keyword Suggestion Tool effectively induced the primary infringing conduct or was itself trade mark use.101 Here, the classification is being used to trigger a particular form of analysis, rather than analysis resulting in a classification. To the extent that these cases expand the elements of primary liability, they are distorting what we might think of as outside the category of 'internet intermediary'. And in some cases this has been a stretch, prompting Stacey Dogan to label the Rescuecom-endorsed cause of action as a 'curious branch of direct trade mark infringement designed to distinguish between the innocent intermediary, and one whose technology and business model deliberately seeks to confuse'.102

But these legal assessments do not (and perhaps should not) define the outer boundaries of the essentialist concept. Thus, while one of the reasons for an 'intermediary' being held accountable under the Enforcement Directive or Information Society Directive—and thus obliged to assist in preventing infringement—is that they are 'best placed' to do so, the concept of 'intermediary' cannot be limited to least-cost avoiders. (Indeed, textually, the recital from which that justification is gleaned suggests that there will be some intermediaries who are not best placed.) One could simply relegate the concept of 'intermediary' to denote a party against whom an 'accountability order' issues. But limiting the concept to mere recognition of the legal consequence prevents the concept itself doing much useful broader legal work; an actor might be an 'intermediary' in one instance but not in another despite performing precisely the same role.103 And it would obscure an important policy concern, namely that intermediaries might—but might not—be required to assist in preventing infringement under the two provisions in question.

4.3  The Directive on Copyright in the Digital Single Market

The question posed earlier—should we define intermediaries by reference to what they do or by what we want them to do (or by the legal exposure to which we wish to

101 See Rescuecom Corp. v Google Inc., 562 F.3d 123, 129 (2d Cir. 2009) (US). On this latter point, the Second Circuit distinguished between the practice of the defendant in 1-800 Contacts, which had sold advertising keyed to categories of marks (e.g. selling the right to have an ad appear when a user searches for a mark connected to eye-care products, but not disclosing to the advertiser the proprietary mapping of marks and categories), and that of Google (which sold advertising tied directly to single marks). See ibid. 128–9. 102  Stacey Dogan, '"We Know It When We See It": Intermediary Trademark Liability and the Internet' [2011] Stan. Tech. L. Rev. 7, 19. 103  Recital 59 of the Information Society Directive explains Art. 11 orders as flowing from the fact that 'in many cases . . . intermediaries are best placed to bring . . . infringement activities to an end'. However, that proposition turns in part on factual questions—e.g. the technological capacity of the intermediary and the location of the infringers—as well as policy preferences. Thus, we might think that intermediaries are lowest cost avoiders, but that assessment will depend on what items of cost are to be included in the calculus, and what 'costs' one assigns to concerns such as overblocking and the like. (To some extent, the proportionality analysis conducted by courts before issuing orders should function to bring these assessments together.)


subject them)—is raised by the 2019 Directive on Copyright in the Digital Single Market. This Directive legislated on the obligations of certain intermediaries to filter content uploaded by users, or more formally to make 'in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works . . . for which the right-holders have provided the service providers with the relevant and necessary information'.104 These obligations will, however, be imposed only on a certain subset of intermediaries—'online content-sharing service providers' (OCSSPs).105

Article 2(6) defines an online content-sharing service provider as 'a provider of an information society service of which the main or one of the main purposes is to store and give the public access to a large amount of copyright-protected works or other protected subject matter uploaded by its users, which it organises and promotes for profit-making purposes'.106 As part of the legislative compromise, it was also agreed that 'providers of services, such as not-for-profit online encyclopedias, not-for-profit educational and scientific repositories, open source software-developing and -sharing platforms, [access providers], online marketplaces, business-to-business cloud services and cloud services that allow users to upload content for their own use, are not "online content-sharing service providers" within the meaning of this Directive'.

Moreover, Article 17(6) provided exemption from some of the obligations imposed on OCSSPs. Thus:

Member States shall provide that, in respect of new online content-sharing service providers the services of which have been available to the public in the Union for less than three years and which have an annual turnover below EUR 10 million . . ., the conditions under the liability regime set out in paragraph 4 [of Art. 17] are limited to compliance with point (a) of paragraph 4 and to acting expeditiously, upon receiving a sufficiently substantiated notice, to disable access to the notified works

104  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17(4)(b). 105  Art. 2(6) defines 'online content-sharing service provider' as 'a provider of an information society service of which the main or one of the main purposes is to store and give the public access to a large amount of copyright-protected works or other protected subject matter uploaded by its users, which it organises and promotes for profit-making purposes'. It also excludes specifically 'Providers of services, such as not-for-profit online encyclopedias, not-for-profit educational and scientific repositories, open source software-developing and -sharing platforms, [access providers], online marketplaces, business-to-business cloud services and cloud services that allow users to upload content for their own use.' 106  The definition is made more complex still by the recitals. Recitals 62–3 provide: (62) The definition of an online content-sharing service provider laid down in this Directive should target only online services that play an important role on the online content market by competing with other online content services, such as online audio and video streaming services, for the same audiences . . .
(63) The assessment of whether an online content-sharing service provider stores and gives access to a large amount of copyright-protected content should be made on a case-by-case basis and should take account of a combination of elements, such as the audience of the service and the number of files of copyright-protected content uploaded by the users of the service.


or other subject matter or to remove those works or other subject matter from their websites. Where the average number of monthly unique visitors of such service providers exceeds 5 million, calculated on the basis of the previous calendar year, they shall also demonstrate that they have made best efforts to prevent further uploads of the notified works and other subject matter for which the rightholders have provided relevant and necessary information.

5. Conclusions

Who are 'internet intermediaries'? Even after analysis of a range of legal instruments, we find little consensus. It is perhaps thus appropriate to adopt as broad a view as possible so as to admit of the greatest possibility of legal intervention. But the breadth of the term should not obscure—indeed, it demands—an attempt to classify and differentiate among the different actors who are encompassed by the term. Essentialist taxonomies may well assist in supplying us with important facts that feed into the development of legal policy or the resolution of legal disputes. But we must not become hostage to that essentialism, both because it is static and because law need not be purely responsive to technology. But likewise, we need not allow technical legal definitions of 'internet intermediary' devised for particular purposes to obscure a search for a broader understanding of the phenomenon that we seek to regulate.


Chapter 3

A Theoretical Taxonomy of Intermediary Liability

Jaani Riordan

Given the rapid emergence of internet technologies in modern life and commerce, it is unsurprising that a growing array of regulatory schemes, statutory rules, judicial doctrines, procedures, and remedies have something to say about internet intermediaries and their legal duties. No discussion of intermediaries' liability is complete without an understanding of the many forms it may take, the policy levers it may serve, and the areas in which liability may arise. This is an essentially cartographic exercise, as it involves mapping the legislative, judicial, and regulatory landscape to identify and classify legal norms of relevance to internet wrongdoing.

Far from being a lawless wasteland or ungovernable dominion, it is now well recognized that the internet is, or should be, governed by the rule of law. This gives rise to new challenges, many of which are discussed elsewhere in this Handbook: most acutely, how to develop liability rules which properly encourage intermediaries to avoid harmful uses of their technologies without creating disproportionate or chilling effects; and how to craft effective remedies that are scalable to meet swifter and more widespread forms of online wrongdoing. Meeting these challenges also requires a degree of sensitivity on the part of national courts and legislators in shaping and applying legal norms which may have extraterritorial effects or may profoundly shape the development and use of future technologies and nascent industries, often in unpredictable ways.

In all of this, it is beneficial to have a conceptual architecture within which to analyse and apply national and supranational liability rules to intermediaries, and to consider the policy objectives that may be served by imposing (or excluding) liability of different kinds. This is far from a simple task, both for the myriad forms that liability rules may take and because it requires careful consideration of what 'liability' actually means in different contexts.

© Jaani Riordan 2020.


This chapter therefore aims to provide a taxonomy of the different types of liability which may be imposed upon internet intermediaries, and a theoretical framework for talking about problems of liability. This taxonomy is necessarily modest: it is not intended to provide an exhaustive description of the diverse areas of civil and criminal wrongdoing which may give rise to liability, but rather to situate each of these areas within an overall framework for analysing legal responsibility and the different ways in which it may be imposed upon facilitators of harm. Nor can it hope to address this question wholly unconstrained by the liability structure of the English common law system, though it will endeavour to propose a theoretical account which is of wider application.

First, this chapter begins by considering what is meant by ‘liability’ and identifying the different forms that it may take, consistently with widespread assumptions about moral agency and individual responsibility. Secondly, this chapter analyses how liability may be attributed or imputed for the acts and omissions of others, noting a distinction between two models of responsibility which may be termed ‘primary’ and ‘secondary’ (or accessory) liability. Thirdly, this chapter considers the functions and policy justifications for imposing liability onto intermediaries for the acts or omissions of others. Finally, this chapter provides an overview of the main kinds of wrongdoing for which intermediaries may, in principle, be liable, by reference to English and EU law.

1.  What is ‘Liability’?

Many usages of the word ‘liability’ can be identified, making it necessary to consider what this term is commonly assumed to mean. In its most conventional sense, liability describes the consequence of a person being held legally responsible for an event characterized as civil or criminal wrongdoing. Thus, the pithy phrase ‘D is liable’ is shorthand for a legal formula which refers to the obligation imposed (or recognized) by a court or administrative authority of competent jurisdiction to supply a prescribed remedy, or take (or cease taking) a prescribed action, in response to an event.1 That event is usually, but need not always be,2 characterized as a legal or equitable wrong, or breach of some other legal duty owed by D. The consequence of holding D liable is that C can go to court and obtain an order for a remedy against D. This section begins by considering the principles of moral agency and individual responsibility upon which most instances of liability are founded. This section then analyses the two main forms of obligations which may be imposed on an intermediary who is found liable for civil wrongdoing: monetary and non-monetary liability.

1  Peter Birks, ‘Rights, Wrongs, and Remedies’ (2000) 20 OJLS 1, 23. 2  Robert Stevens, Torts and Rights (OUP 2007) 58 (giving the example of an interim injunction which prohibits lawful conduct).



1.1  Moral Agency and Individual Responsibility

In common law systems, the traditional function of tort law was to determine which events generated remedial obligations and which did not.3 Liability rules could be used to make a defendant answerable to the claimant ‘under the rules to be blamed, punished, or made to pay.’4 In these and most other western legal systems, liability is traditionally premised on the fundamental assumption that a natural or legal person is responsible for her (and only her) own voluntary acts and omissions, subject to limited exceptions. The traditional principle was articulated by Lord Sumner in Weld-Blundell v Stephens:

In general (apart from special contracts and relations and the maxim respondeat superior), even though A is at fault, he is not responsible for injury to C which B, a stranger to him, deliberately chooses to do.5

This principle reflects the intuitive claim of moral philosophers that a person is responsible for ‘all and only his intentional actions’.6 Actions (or, it might be added, inactions) by others are not ordinarily our responsibility; they are theirs to bear alone. This may be thought of as the basic principle of moral agency on which liability is ordinarily founded. Within each individual’s area of ‘personal moral sovereignty’, we treat that person as a moral agent whose conduct may be assessed against the applicable liability rules.7 Because of this, an individual is normally responsible ‘only for conduct (and, within some bounds, its consequences) that he has directed for himself’.8 Thus, we regard it as intuitively unjust to impose liability upon an innocent person for the wrongful acts or omissions of others, absent something more. The liability flowing from an individual’s own moral agency may be augmented by concepts of collective or secondary fault or responsibility, where ‘by applying facsimiles of our principles about individual fault and responsibility’ an individual may be held liable for the activities of another moral agent for whom they share fault or responsibility.9 It is this latter basis for personal responsibility which is most relevant in discussions about intermediary liability. However, this frequently gives rise to difficult questions in demarcating the sphere within which an intermediary can legitimately be said to bear responsibility for the acts and omissions of others. Part of the difficulty underlying many discussions of intermediary liability is that they presuppose a particular model of liability or treat the term ‘liability’ as a form of

3  Oliver Wendell Holmes, The Common Law (first published 1881, 1963 Belknap Press edn) 64. Now, of course, the constellation of statute often displaces, codifies, or abrogates earlier common law rules. 4  H.L.A. Hart and Tony Honoré, Causation in the Law (OUP 1985) 65. 5  [1920] AC 956, 986 (Lord Sumner). 6  John Mackie, Ethics: Inventing Right and Wrong (Penguin Books 1977) 208. 7  Ronald Dworkin, Law’s Empire (HUP 1986) 174. 8  Lloyd Weinreb, Natural Law and Justice (HUP 1987) 200. This assumption appears to be common to many philosophical discussions of individual responsibility: see e.g. Isaiah Berlin, Liberty (Cohen 1995) 6; Emmanuel Kant, The Metaphysics of Morals (first published 1785, 1996 CUP edn, Gregor trans.) ss. 6:223, 6:389–6:390, 152–3. 9  Dworkin (n 7) 170.


metonymy denoting a much wider spectrum of liability rules. Thus, it is typical to speak of ‘liability’ as a binary and monolithic concept: either an intermediary is liable or it is not. To avoid falling into this trap, it is helpful to disambiguate liability into its constituent forms. In this section, we identify two main branches of liability: monetary and non-monetary. Monetary liability may be further divided along a spectrum comprising four main standards: strict liability; negligence or fault-based liability; knowledge-based liability; and partial or total immunity. Non-monetary liability may be further divided into prohibitory and mandatory obligations.

1.2  Monetary Liability

First, remedies for liability can be monetary, as in the case of orders to pay compensatory damages or to disgorge profits.10 Such orders enforce secondary duties to correct losses or gains resulting from the breach of a primary duty. These obligations to pay are backed by the threat of executive enforcement and asset seizure. Almost all information torts recognize an obligation on the legally responsible party to pay money.11 The preconditions for obtaining a monetary remedy vary between types of wrongdoing and of course between different legal systems; however, in broad terms they may be divided on a spectrum from absolute liability to total immunity, and grouped under four headings.

1.2.1  Strict Liability

Strict liability requires intermediaries to internalize the cost of user misconduct without proof of fault. By requiring intermediaries to pay for the social harms of third party wrongdoing, a liability rule may increase the expected penalty—and thereby the deterrent effect—of facilitating that wrongdoing, with the effect that intermediaries adjust their activities to reduce wrongdoing to an optimal level.12 Strict liability has the advantage of being simple for courts, intermediaries, and claimants to assess, thereby allowing efficient ex ante pricing decisions. However, although strict primary liability rules are common, strict secondary liability rules are rare, mainly because it is unfeasibly costly for intermediaries to monitor the lawfulness of all their users’ activities. This may also be because strict secondary liability would pose a more direct challenge to the principles of moral agency and individual responsibility considered earlier.

1.2.2  Negligence-Based Standards

Liability may alternatively be conditioned upon a finding of fault or limited to circumstances in which an intermediary is said to owe a legal duty of care. For example, such a

10 See Attorney General v Blake [2001] 1 AC 268, 278–81 (Lord Nicholls) (UK). 11 See John v MGN Ltd [1997] QB 586, 608–9 (Sir Thomas Bingham MR) (UK) (defamation); Copyright, Designs and Patents Act 1988, ss. 96(1), 97–100, 103, 184(2), 191I (UK) (copyright and performers’ right). 12  See Jennifer Arlen and Reinier Kraakman, ‘Controlling Corporate Misconduct: An Analysis of Corporate Liability Regimes’ (1997) 72 New York U. L. Rev. 687; Gary Becker, ‘Crime and Punishment: An Economic Approach’ (1968) 76 J. of Political Economy 169, 178–80, 184.


duty may require intermediaries to act reasonably to prevent, deter, or respond to primary wrongdoing. This would ordinarily represent a lower level of monitoring than a strict liability standard, and thereby reduce the risk of over-deterrence by holding intermediaries to an objectively determined but imperfect standard of conduct—for example, a rule which requires a website or platform operator to remove defamatory postings within a reasonable period.13 Fault-based duties which are fixed by reference to external standards such as industry practices can operate more stringently than knowledge-based duties; for example, by imputing constructive knowledge of tortious material where an intermediary is under a duty to seek it out or prevent its reappearance, or imposing liability for conduct of which an intermediary is wholly unaware on the basis of a duty to control the wrongdoer.

1.2.3  Knowledge-Based Standards

By contrast, knowledge-based standards impose obligations upon intermediaries to respond to wrongdoing only once they receive sufficient information to reach a threshold mental state; for example, that they know or reasonably infer that wrongdoing has occurred. This type of liability rule furnishes the dominant mechanism for European internet content regulation: notice and takedown. Less wrongdoing must be internalized, which encourages optimal ex post enforcement. To prevent wilful blindness, knowledge usually incorporates an objective measure, by which an intermediary is taken to know facts which would have been discovered with reasonable diligence by a competent operator standing in its shoes.14 In this way, liability rules which are premised on a standard of knowledge being attained delimit liability according to whether the defendant’s mental state was objectively culpable.

1.2.4  Immunity

At the other end of the liability rule spectrum, intermediaries can be partially or wholly exempted from monetary liability. Immunity has the advantages of certainty, subsidizing nascent technology industries and promoting ‘market-based self-help’,15 but has been heavily criticized by some scholars as removing any incentives for least-cost avoiders to intervene in enforcement, even where that might be the most efficient way to prevent wrongdoing or bring it to an end,16 or is simply necessary to uphold claimants’ rights. Conversely, immunity may serve to uphold countervailing public policy objectives, such

13 Cf. Emmens v Pottle (1885) QBD 354 (UK) (common law defence of innocent dissemination). 14  See e.g. Directive 2000/31/EC of the European Parliament and of the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1, Arts 13(1)(e), 14(1)(b). 15 Doug Lichtman and Eric Posner, ‘Holding Internet Service Providers Accountable’ (2006) 14 Supreme Court Economic Rev. 221, 226. 16  See e.g. Directive 2001/29/EC of the European Parliament and the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10, recital 59; Michael Rustad and Thomas Koenig, ‘Rebooting Cybertort Law’ (2005) 80 Washington L. Rev. 335, 390–1.


as constitutional protections for freedom of expression.17 In Europe, partial immunity in respect of monetary liability is conferred within three classes of passive, neutral, and technical activities (hosting, caching, and transmission).18

1.3  Non-Monetary Liability

A second set of liability outcomes imposes non-monetary obligations upon intermediaries, most commonly an injunction to do, or refrain from doing, certain acts (mandatory and prohibitory injunctions, respectively). Such obligations may arise in circumstances where the intermediary is itself liable as a primary wrongdoer, or where it is not so liable but instead must discharge a more limited duty to prevent or cease facilitating wrongdoing, assist a victim of wrongdoing, or uphold the administration of justice. Most commonly, this latter class of non-monetary duties requires intermediaries to take reasonable and proportionate steps to prevent third parties’ wrongful activities from being facilitated by the use of their services, when requested to do so.19 As has been observed on a number of occasions, the underlying duty is ‘of a somewhat notional kind’:

The duty of a person who had become involved in another’s wrongdoing was held . . . to be to ‘assist the person who has been wronged by giving him full information and disclosing the identity of the wrongdoers’. . . . It is, however, clear that this duty was of a somewhat notional kind. It was not a legal duty in the ordinary sense of the term. Failure to supply the information would not give rise to an action for damages. The concept of duty was simply a way of saying that the court would require disclosure.20

Although both monetary and non-monetary forms of liability require some kind of recognized legal wrongdoing to have occurred, to impose non-monetary liability only requires wrongdoing on the part of a third party, ‘and not that of the person against whom the proceedings are brought’.21 For this reason, non-monetary remedies are described as imposing accountability without liability,22 though of course both monetary and non-monetary liability may flow from the same wrongful activity. Liability in this second, non-monetary sense is both broader and narrower than monetary liability: as noted earlier, it can be imposed without proof of wrongdoing on

17  See, inter alia, Chapters 7, 27, and 29. 18  See, inter alia, Chapters 15 and 16. 19  See e.g. Cartier International AG v British Sky Broadcasting Ltd [2014] EWHC 3354 (Ch) [106] (Arnold J) (UK); aff ’d [2016] EWCA Civ 658 [52] (Kitchin LJ) (UK). 20  Singularis Holdings Ltd v PricewaterhouseCoopers [2015] AC 1675 [22] (Lord Sumption JSC) (UK) (citations omitted). 21  Ashworth General Hospital Ltd v MGN Ltd [2002] 1 WLR 2033 [26] (Lord Woolf CJ) (UK). 22  See Martin Husovec, ‘Accountable, Not Liable: Injunctions against Intermediaries’, Tilburg Law & Economics Research Paper Series, Discussion Paper No. 2016-012 (2016).


the part of the intermediary, but it only protects limited categories of interests and enforces limited types of duties. Further, at least in Europe, injunctive remedies are impervious to safe harbours, which ‘do not affect the possibility of injunctions of different kinds’, whereas monetary remedies may be unavailable in relation to protected activities of an intermediary.23 Finally, while they may not impose a direct obligation to compensate a claimant or disgorge profits, injunctions are enforced, ultimately, by the criminal law of contempt and the associated machinery of incarceration and, in some jurisdictions, monetary penalties.

2.  Classifying Liability

So far, we have seen that two main forms of obligations may be imposed upon intermediaries held to be ‘liable’, each falling within a spectrum of liability rules sharing certain common features and underlying assumptions. A further distinction lies between liability rules which impose ‘primary’ (or direct) liability, and those which impose ‘secondary’ (or accessory) liability.24 Primary liability arises where all elements of wrongdoing are fulfilled by the intermediary’s own acts or omissions; conversely, secondary liability is liability which is at least partly conditioned upon proof of prima facie wrongdoing by a third party. Principles of primary and secondary liability reflect a consistent policy of holding intermediaries accountable for harms that are caused or contributed to by third parties when the intermediary has a normatively and causally significant relationship with primary wrongdoing, typically constituted by an assumption of responsibility for the primary wrongdoer’s actions. This policy may partly explain the development of common law liability rules involving internet intermediaries, but—as is demonstrated by the English principles of joint tortfeasorship considered later—seems unlikely to offer a sufficiently granular or responsive means of regulating their obligations and business models.

2.1  Primary Liability

A distinction lies between two ways in which the law classifies wrongdoing. First, a person may engage in wrongful activity by his or her own acts or omissions. Such conduct is intended and directed by that person, who carries it out personally. This type

23 Directive 2000/31/EC (n. 14) recital 45. See, in the UK, Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013), reg. 20(1)(b), (2). 24  Some judges and scholars argue that ‘accessory’ or ‘indirect’ are better labels to describe this type of liability: see e.g. Paul Davies, ‘Accessory Liability: Protecting Intellectual Property Rights’ [2011] 4 Intellectual Property Quarterly 390, 396. ‘Secondary’ is preferred here for its neutrality and ability to capture the range of standards according to which intermediaries may be held liable.


of wrong is described as ‘primary wrongdoing’ and its originating agent is the ‘primary wrongdoer’ or tortfeasor. The consequences of such conduct are caused by the person who engages in it; in other words, there is strict identity between actor and acts. This form of liability typically poses little in the way of challenge to conventional understandings of moral agency or personal responsibility. Breaches of primary duties owed by intermediaries are properly treated as primary wrongs. Common to these instances is that the definition of primary liability is sufficiently wide to accommodate acts or omissions caused by the relevant use of the intermediary’s services. For example, an intermediary may face primary liability in its capacity as a contracting party (perhaps under its terms of service or a distance contract entered into with a consumer),25 as a party that has assumed responsibility for the safety or security of its users or their data,26 to prevent harm which is likely to occur,27 for injury caused by something or someone that the intermediary has a duty to control,28 or under a statutory data protection scheme.29 Other attempts to impose liability raise more difficult questions concerning the scope of primary wrongdoing; for example, whether an act of reproducing a copyright work which occurs when a user uploads an infringing video to a video-sharing platform’s server is to be treated as performed by the intermediary, its user, or both.30 In some cases, the boundary between primary and secondary liability is not wholly distinct. This is most apparent in areas such as copyright, where the right of communication to the public has progressively been expanded to encompass activities which are traditionally thought of as the province of secondary liability.31 The dividing line will often depend on how widely the relevant primary wrong is defined, and may become blurred at the margins. However, despite (or perhaps because of) erosion in certain areas, it is suggested that it remains important to think of primary and secondary responsibility as distinct concepts.

25  See e.g. Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights [2011] OJ L304/64, Arts 8–9 (regulating distance contracts); Directive 2013/11/EU on alternative dispute resolution for consumer disputes, Art. 2(a) (online dispute resolution). 26  See e.g. William Perrin and Lorna Woods, ‘Reducing Harm in Social Media through a Duty of Care’ (LSE Media Policy Project Blog, 8 May 2018) . 27  See by analogy Dorset Yacht Co. Ltd v Home Office [1970] AC 1004, 1030 (Lord Reid) (UK) (recognizing a duty to prevent harm which was ‘very likely’ to occur). 28  See by analogy Haynes v Harwood [1935] 1 KB 146 (driver for bolting horse) (UK); Newton v Edgerley [1959] 1 WLR 1031 (UK) (father for child’s use of weapon). 29  As in the case of an intermediary who is a data controller vis-à-vis personal data. See Regulation 2016/679/EU of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L119/1 (‘GDPR’), Arts 5–6. 30  See e.g. National Rugby League Investments Pty Ltd v Singtel Optus Pty Ltd [2012] FCAFC 59 [75]–[78] (Aus.) (concluding that copies were made jointly by users and the platform). 31  See e.g.
C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644 (provision of hyperlinks to infringing content); C-527/15 Stichting Brein v Jack Frederik Wullems [2017] ECLI:EU:C:2017:300 (sale of pre-programmed IPTV set-top boxes).



2.2  Secondary Liability

A second form of liability is ‘secondary’ in the sense that it requires proof of at least prima facie wrongdoing by a person other than the claimant or defendant. As Lord Hoffmann explained in OBG Ltd v Allan, secondary liability is concerned with ‘principles of liability for the act of another’.32 Lord Nicholls described it as ‘civil liability which is secondary in the sense that it is secondary, or supplemental, to that of the third party who committed [the primary tort]’.33 A more precise definition may be that secondary liability is liability having as one of its conditions a finding of at least prima facie wrongdoing by a third party. For example, liability for authorizing copyright infringement requires proof of actual infringement by the party so authorized.34 By contrast, liability for breaching a contract is primary, as it does not matter whether any third party has also breached it. Doctrines of secondary liability determine the threshold at which intermediaries may become legally responsible for primary wrongs perpetrated by others, even though they do not independently satisfy the definition of primary wrongdoing. Their operation begins at the penumbra of primary liability and ends at the limits of the connecting factors through which secondary liability may attach. Secondary liability is thus closely related to the definition of a primary wrong, whose boundaries can be adjusted to encompass a wider or narrower range of conduct within it. Secondary liability is not harmonized within the EU and national approaches tend to evolve alongside national doctrines of civil procedure, non-contractual and criminal responsibility.35 Even within national legal systems, there is often little to unite the disparate instances of secondary liability doctrines, except that they express common patterns of attribution in private law and reflect shared policies about the proper limits of personal responsibility. As Birks has observed, secondary liability is an ‘obscure and under-theorised’ part of private law,36 and is one frequently characterized by the use of undefined, inconsistent, or misleading terminology. This significantly complicates the task of discerning common structural principles. In the most general terms, secondary liability may be thought of as attaching to acts or omissions by A, the secondary actor, which (1) are not independently a primary wrong, but either: (2) cause B, a primary wrongdoer, to engage in primary wrongdoing against C in a recognized way (‘causative secondary wrongdoing’); or (3) establish a recognized relationship between A and B within the scope of which B engages in primary wrongdoing against C (‘relational secondary wrongdoing’). Here the key questions for national legal systems are: (i) when will harm caused by A to C fall within a recognized category

32  [2008] 1 AC 1 [27] (Lord Hoffmann) (UK). 33  ibid. [59] (Lord Nicholls). 34  As there is no liability for attempted or inchoate copyright infringement, it would plainly be insufficient for an authorization to be given which fails to lead to any infringement. 35  See e.g. French Civil Code (2016 French Ministry of Justice edn, Cartwright et al. trans.) arts 1240–2; German Civil Code (1 October 2013) Arts 823, 1004. 36  Peter Birks, ‘Civil Wrongs: A New World’ in Butterworths Lectures 1990–1991 (Butterworths 1992) 55, 100.


of duty or otherwise be sufficiently culpable to justify imposing secondary liability; and (ii) what relationships are sufficiently proximate to justify treating A as responsible for the actions of B? Different legal systems understandably formulate distinct answers to these questions, and draw the line in different places. In English law, these principles are reflected primarily in doctrines of joint tortfeasorship in tort, equitable accessory liability, criminal accessory liability, and vicarious liability. Although these doctrines developed in radically different institutional, doctrinal, and historical settings, some scholars have attempted to derive ‘general principles of accessory liability’ from these disparate instances.37 However, with limited exceptions, English courts have consistently rejected those attempts, and the area is instead characterized by ‘systematic failure’ to explain the basis of principles which are frequently ‘unstructured, unprincipled and incoherent’.38 Partly this reflects terminological confusion,39 and partly the diverse policies, fact-specific circumstances, and remedial values that these principles uphold in different areas of law. If any unifying principle can be identified, it is that an intermediary may be made to answer for the wrongs of others where it has by its own conduct become so involved as to ‘make those wrongful acts [its] own’.40 In other words, these parallel criteria determine whether a secondary actor has voluntarily assumed responsibility for the primary wrongdoer’s conduct.41 Collectively, these doctrines operate as limited exceptions to the general principle that a claimant’s rights extend only to those who have done him wrong, and—the obverse proposition—that a defendant’s liabilities extend only to his own wrongdoing.

2.2.1  Causative Secondary Liability

Under English law, establishing secondary liability in tort requires the claimant to show two things. First, reflecting its ‘parasitic’ nature, there must be some primary wrongdoing, without which it is ‘self-evident’ that no liability can attach to other parties.42 For example, an online platform could not plausibly face secondary liability for trade mark infringement without a finding that there has been some primary infringement: ‘No secondary liability without primary liability’, as Lord Hoffmann surmised in OBG.43

37  See e.g. Philip Sales, ‘The Tort of Conspiracy and Civil Secondary Liability’ (1990) 49 Cambridge L.J. 491, 502. 38  Claire McIvor, Third Party Liability in Tort (Hart 2006) 1. 39  Confusingly, secondary wrongdoing can often lead to primary liability. For example, tortious secondary liability is primary in the sense that all wrongdoers are jointly liable for the same tort, subject to rights of contribution. Joint tortfeasors are therefore ‘principals’ rather than ‘accessories’ in the strict sense. See Pey-Woan Lee, ‘Inducing Breach of Contract, Conversion and Contract as Property’ (2009) 29 OJLS 511, 521. 40  Sabaf SpA v Meneghetti SpA [2003] RPC 264, 284 (Peter Gibson LJ) (UK). 41 Cf. Caparo Industries plc v Dickman [1990] 2 AC 605, 628–9 (Lord Roskill) (UK) (describing a duty arising from assumptions of responsibility for the performance of an activity). 42  Revenue and Customs Commissioners v Total Network SL [2008] 1 AC 1174, 1255 (Lord Walker) (UK). 43  OBG (n. 32) [31] (Lord Hoffmann).


Secondly, the secondary wrongdoer’s conduct must fall within a recognized connecting factor. This specifies a threshold of causative participation and knowledge which are, in combination, normatively sufficient for ‘concurrent fault’.44 The two most common categories are procurement and participation in a common design. However, these are non-exhaustive and it would, as Bankes LJ observed in The Koursk, ‘be unwise to attempt to define the necessary amount of connection’ in the abstract.45 Despite some confusion,46 these connecting factors are alternatives.47 Together, they identify the situations when a sufficient nexus exists between secondary and primary wrongdoers to justify extending liability to an intermediary or other third party. They are also routinely supplemented by statutory forms of secondary liability, most notably in the case of copyright.48

2.2.1.1 Procurement

Secondary liability for procuring arises where A intentionally causes B ‘by inducement, incitement or persuasion’ to engage in particular acts infringing C’s rights.49 Procurement of a tort is not a separate tort. Instead, it makes the secondary wrongdoer liable for the primary wrong as a joint tortfeasor. It is necessary, but not sufficient, that A’s conduct cause the primary wrong, in the sense that, ‘but for his persuasion, [the primary wrong] would or might never have been committed’.50 However, merely ‘aiding’ wrongdoing is not procurement: ‘[f]acilitating the doing of an act is obviously different from procuring the doing of the act.’51 On the basis of this case law, in L’Oréal SA v eBay International AG, Arnold J held that eBay had not procured infringements by sellers who offered for sale and sold counterfeit goods on its platform.52 Procurement also has a mental element, as the procurer must ‘wilfully’ have sought to induce the primary wrongdoer to act wrongfully. Ordinarily, A must intend B to engage in wrongful conduct in a particular way, which requires knowledge of at least the existence of the primary right to be interfered with and the acts to be performed, while possessing

44  Glanville Williams, Joint Torts and Contributory Negligence. A Study of Concurrent Fault in Great Britain, Ireland and the Common Law Dominions (Stevens and Sons 1951) 2. 45  The Koursk [1924] P 140, 151 (Bankes LJ) (UK). For example, some scholars argue that authorization or ratification of a wrong is a further basis for imposing secondary liability in tort: see further Patrick Atiyah, Vicarious Liability in the Law of Torts (Butterworths 1967) 292–4. 46  See e.g. CBS Songs Ltd v Amstrad Consumer Electronics plc [1988] 1 AC 1013, 1058 (Lord Templeman) (UK) (D liable if ‘he intends and procures and shares a common design that infringement shall take place’) (emphasis added); cf. MCA Records Inc. v Charly Records Ltd [2002] FSR 26 [424] (Chadwick LJ) (UK) (treating the test as disjunctive). 47  Unilever plc v Gillette (UK) Ltd [1989] RPC 584, 595 (Mustill LJ) (UK). 48  For example, in the UK statutory authorization liability extends the scope of primary liability for copyright infringement, and impliedly abrogates common law authorization as a connecting factor. See Copyright, Designs and Patents Act 1988 (n. 11) s. 16(2). 49  CBS (n. 46) 1057–8 (Lord Templeman). 50  Allen v Flood [1898] AC 1, 106–7 (Lord Watson) (UK). 51  Belegging-en Exploitatie Maatschappij Lavender BV v Witten Industrial Diamonds Ltd [1979] FSR 59, 65–6 (Buckley LJ) (UK). 52  L’Oréal SA v eBay International AG [2009] RPC 21, 770–1 (Arnold J) (UK). See Chapter 20.


any mental element necessary for primary liability.53 This requirement distinguishes fault-based procurement liability from primary liability, which may be strict (as in the case of most infringements of intellectual property rights and torts such as defamation). However, it also means that procurement has little role to play in cases where an intermediary is passive and neutral, because it is unlikely to have induced, still less intended, the commission of the primary wrong.

2.2.1.2  Common Design

Where an intermediary combines with others ‘to do or secure the doing of acts which constituted a tort’, it will be liable as a joint tortfeasor.54 This requires consensus between an intermediary and another party to cause wrongdoing, and participation, in the sense that both ‘are active in the furtherance of the wrong’.55 In other words, secondary liability arises where parties ‘agree on common action, in the course of, and to further which, one of them commits a tort.’56 Common design is a broader category than procurement, since consensus is more easily demonstrated than inducement.57 However, simply lending assistance to a primary wrongdoer, without more, is insufficient to impose secondary liability.58 Like procurement, common design comprises physical and mental elements. The required causal link is ‘concerted action to a common end’,59 rather than independent but cumulative or coinciding acts. In other words, there must actually be agreement, whether express or implicit, which includes within its scope the tortious act or omission.60 However, mere sale of goods does not entail such an agreement without more, as the House of Lords held in CBS v Amstrad.61 There the seller ‘did not ask anyone’ to infringe copyright (which would have been procurement), and there was no common design to infringe, because Amstrad did not decide the purpose for which its cassette recorders should be used; purchasers did, without any agreement between them and the vendor. Secondly, there must be action: ‘some act in furtherance of the common design—not merely an agreement’.62 This requires that the secondary party actually take part in the plan to some more than ‘de minimis or trivial’ extent.63 The required mental element is that each secondary party intended that the events constituting the primary wrong occurred,64 and additionally meets any state of mind required of a primary tortfeasor.65 As Davies has argued, this sets a high bar, and courts

53  See Atiyah (n. 45) 290–1. 54  Fish & Fish Ltd v Sea Shepherd UK [2014] AC 1229 [21] (Lord Toulson JSC) (UK). 55  Glanville Williams, Joint Torts and Contributory Negligence (Stevens and Sons 1951) 10. 56  The Koursk (n. 45), 155 (Scrutton LJ). 57  eBay (n. 52), 766–7 (Arnold J). 58 See Credit Lyonnais Bank Nederland NV v Export Credits Guarantee Department [2000] 1 AC 486, 500 (UK). 59  The Koursk (n. 45), 156 (Scrutton LJ); Credit Lyonnais (n. 58), 493, 499 (Lord Woolf MR). 60  Unilever v Gillette (n. 47) 608 (Mustill LJ). 61  CBS (n. 46) 1055–7 (Lord Templeman). 62  Unilever plc v Chefaro Properties Ltd [1994] FSR 135, 138, 141 (Glidewell LJ) (UK). 63  Sea Shepherd (n. 54) [57] (Lord Neuberger PSC). 64 See CBS (n. 46) 1058 (Lord Templeman). 65  C. Evans & Son Ltd v Spritebrand Ltd [1985] 1 WLR 317, 329 (Slade LJ) (UK).


have not abandoned ‘the shackles of CBS’ in subsequent decisions.66 Although intent includes wilful blindness, it does not extend to reckless or negligent failures to know.67 By analogy, only a specific subjective intention to bring about the acts constituting the wrong will suffice.

2.2.1.3  Criminal Accessory Liability

It is also possible to impose criminal liability upon intermediaries as accessories where they participate in criminal wrongdoing (subject to the effect of safe harbour protection).68 In the UK, criminal accessory liability is defined both by statutory and common law rules. Any accessory who ‘shall aid, abet, counsel or procure the commission of any indictable offence . . . shall be liable to be tried, indicted and punished as a principal offender’.69 The exact boundaries of secondary participation in crime are defined judicially. These connecting factors are in some ways broader than those of the civil law; for example, they extend to some forms of deliberate assistance. However, they are also narrower, since the accessory must know or believe that the primary acts will occur. Collectively they may be thought of as another example of causative secondary liability, since they define ways in which an intermediary may contribute to the commission of criminal wrongdoing.

2.2.2  Relational Secondary Liability

Secondary liability may also be imposed upon intermediaries who stand in some relationship with a primary wrongdoer. For example, vicarious liability can be used to hold employers and principals liable for wrongful acts and omissions of their employees and agents that are carried out within the scope of their employment or agency.70 In this setting, it is the status of the secondary actor and the proximity of its relationship with the primary wrongdoer which justifies the imposition of liability, rather than the materiality of her causal contribution to primary wrongdoing. Examples of non-causative relational secondary conduct include: wrongdoing by B which occurs within the scope of her employment or agency to A; unauthorized wrongdoing carried out by B but subsequently ratified by A;71 and primary wrongdoing done on premises controlled by A.72 Relational attribution encompasses all tortious conduct that occurs within the scope of the relationship. It is not restricted by medium and could in theory apply to internet

66  See Davies (n. 24) 403. 67  OBG (n. 32) [29]–[30] (Lord Hoffmann). 68  See Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013), regs 17–19, 21 (UK) (which immunize against ‘any criminal sanction’ otherwise resulting from conduct falling within the mere conduit, caching, and hosting safe harbours). 69  Accessories and Abettors Act 1861, s. 8 (UK). See also Serious Crime Act 2007, ss. 44–6 (UK). 70  See generally Lister v Hesley Hall Ltd [2002] 1 AC 215 (UK). 71 See Eastern Construction Co. v National Trust Co. [1914] AC 197 (UK). 72  See e.g. Famous Music Corp. v Bay State Harness Racing and Breeding Association Inc., 554 F.2d 1213 (1st Cir., 1977) (US) (imposing liability for infringing performances on property controlled by the defendant). These cases can be viewed as breaches of a primary duty to take reasonable steps to prevent land from causing harm to others: see Leakey v National Trust for Places of Historic Interest or Natural Beauty [1980] QB 485, 517–19 (Megaw LJ) (UK).


intermediaries where primary wrongdoing is carried out by an employee or agent. This makes the boundaries of the employment relationship of considerable importance, particularly in the context of online platforms which function as marketplaces for services supplied by workers to consumers, most notably transportation, delivery, and accommodation services.73

3.  Justifying Intermediary Liability

Intermediary liability rules often sit uneasily within the normative and conceptual structure of private law. This is partly because they can constitute exceptions to the deeply ingrained principle of individual moral agency—that a person should normally be responsible only for her own voluntary behaviour—a problem felt most acutely when intermediaries are made liable for unlawful material uploaded or transmitted by third parties over whom they otherwise lack direct control. It is also partly because the imposition of liability upon intermediaries frequently pushes at the boundaries of established legal categories. This section provides an overview of the two main sets of justifications for imposing secondary liability upon intermediaries, which are commonly invoked to overcome the problem of moral agency (or at least to defend its curtailment). The first understands these liability rules as methods of attributing blame to secondary actors who have in some way assumed responsibility for primary wrongdoers or their actions. This account is entirely consistent with conventional principles of tortious responsibility, because it treats intermediary liability rules as reflecting the secondary actor’s own culpability. Secondly, at the level of consequentialist analysis, secondary liability rules are justified as reducing enforcement costs and encouraging optimal policing by secondary actors who are likely to be least-cost avoiders.

3.1  Normative Justifications

Modern accounts of tort law describe a system of relational directives which impose responsibility for acts or omissions which interfere with the rights of others in prescribed ways.74 Primary tortious liability reflects the defendant’s violation of an obligation not to do wrong to the claimant; remedies are therefore normally only available against ‘the rights violator’, for wrongdoing ‘at the hands of the defendant’.75 Thus, as Goldberg and

73  See e.g. Uber BV v Aslam [2018] EWCA Civ 2748 [95]–[97] (Etherton MR and Bean LJ) (UK) (concluding that the relationship between Uber and its drivers was that of transportation business and workers). 74  See e.g. John Goldberg and Benjamin Zipursky, ‘Rights and Responsibility in the Law of Torts’ in Donal Nolan and Andrew Robertson (eds), Rights and Private Law (Hart 2012) 251, 263. 75  ibid. 268.


Zipursky argue, the power to exact a remedy is available against wrongdoers ‘only if they have violated the victim’s right’.76 The availability of remedies against non-violators such as intermediaries poses a challenge to an account premised on rights or civil recourse. Either victims of wrongdoing have an entitlement to relief against parties who have not themselves infringed their rights, or—perhaps more plausibly—tort law must embed additional rights against secondary wrongdoers. Theoretical responses to this challenge fall under four main headings. These are not mutually exclusive categories; instead, they supply related but distinct explanations for extending responsibility.

3.1.1  Holding Causes of Harm Accountable

The first category points to the secondary wrongdoer’s causally significant conduct—be that inducement, conspiracy, or some other recognized form of enablement—as justifying personal responsibility for the consequences. As Hart and Honoré argue, to instigate or supply the means or other assistance ‘may in a broad sense be said to give rise to a causal relationship’.77 Gardner identifies causality—that is, actually ‘making a difference’ to the primary wrong—as the defining attribute of secondary responsibility and the essential difference between primary and secondary wrongdoers: while both contribute to wrongdoing, only secondary wrongdoers make their contribution through primary wrongdoers.78 This explains why primary wrongdoing must be a sine qua non of secondary wrongdoing.79 Superfluous, ineffectual, or inchoate contributions are ignored. Similarly, contributions which might have been effective, but which do not ultimately eventuate in wrongdoing, are forgotten. Causation thus offers a normative justification for imposing tortious liability upon a secondary party: if we are morally responsible for our voluntary conduct, then we ought also to be held responsible for wrongful consequences that conduct causes.80 Causation supplies a rich vocabulary with which to analyse the ‘substitutional visiting of sins’ upon those who set others in motion.81 However, the romanticization of wrongs as billiard balls, which follow deterministic paths of cause and effect, hides a great deal of complexity, and fails to supply ready answers to problems involving intermediaries. First, causation does not always appear necessary for civil secondary liability: ratification may occur after the tortious conduct and have no effect on its occurrence; relational doctrines may impose liability regardless of the principal’s causative role. Stevens goes further and argues that only procuring requires a causal link82—though this ignores the causal element of authorization and common design, which may also ‘bring about’

76  ibid. 273 (emphasis added). 77  Hart and Honoré (n. 4) 388. See also John Wigmore, ‘A General Analysis of Tort-Relations’ (1895) 8 Harv. L. Rev. 377, 386–7. 78  See e.g. John Gardner, Offences and Defences: Selected Essays in the Philosophy of Criminal Law (OUP 2006) 58, 71–4. 79  See e.g. K.J.M. Smith, A Modern Treatise on the Law of Criminal Complicity (OUP 1991) 6–7, 66, 82. 80  See Jerome Hall, ‘Interrelations of Criminal Law and Torts: I’ (1943) 43 Colum. L. Rev. 753, 775–6. 81  See Philip James and David Brown, General Principles of the Law of Torts (4th edn, Butterworths 1978) 356. 82  See Stevens (n. 2) 254.


harm by clothing the primary wrongdoer in authority or giving a plan the legitimacy of consensus. Secondly, causation is an incomplete explanation, since merely causing or contributing to primary harm is never sufficient for secondary liability. Instead, as Hall observes, further principles of culpability—‘a body of value-judgments formulated in terms of personal responsibility’—are needed to determine which consequences individuals should be accountable for causing.83 These principles (reflected in the secondary liability rules examined earlier) ultimately rest on normative claims about justice, personal responsibility, and the allocation of losses which cannot be defended using causation alone.

3.1.2  Fictional Attribution to Secondary Wrongdoers

Some scholars argue that secondary liability rules attribute actions to the secondary wrongdoer, as expressed by the maxim qui facit per alium facit per se.84 Under this fiction, secondary wrongdoers are held responsible for conduct they are deemed to carry out which infringes the claimant’s rights. Older cases lend some support to the view that the acts of any participant in a common design are to be imputed to all other participants, or of employee to employer.85 Some theorists have embraced this fiction to explain joint tortfeasorship: Atiyah argues that the secondary wrongdoer has ‘effectively committed the tort himself, and the liability is not truly vicarious’; while Stevens argues that all secondary liability involves attributing actions, leading to liability ‘for the same tort’.86 This amounts to an agency-based explanation: it treats primary wrongdoers as implied agents of secondary wrongdoers, where the ‘physical acts and state of mind of the agent are in law ascribed to the principal.’87 This would rest on an implied manifestation of assent that the primary wrongdoer should act on behalf of the secondary actor insofar as he unlawfully causes loss to others.88 However, not all secondary liability is relational: consider a website that procures infringement undertaken by users solely for their own benefit. The agency account is directly contradicted by more modern authorities, which impute liability for the wrong of the primary wrongdoer.89 Further, it cannot be that acts constituting primary wrongdoing are literally attributed to joint tortfeasors; otherwise there would be two sets of tortious acts and two torts. Instead, ‘if one party procures another to commit a tort . . . both are the principal wrongdoers of the same tort’.90 Given that there is a single tort, it must be that joint tortfeasors are liable separately and together for the same act of wrongdoing, rather than liable for the notional acts of two people. This explains the requirement that the secondary actor

83  See Hall (n. 80) 775–6. 84  He who employs another to do it does it himself. 85  See e.g. Launchbury v Morgans [1973] AC 127, 135 (Lord Wilberforce) (UK). 86  Stevens (n. 2) 245. 87  Tesco Supermarkets Ltd v Nattrass [1972] AC 153, 198–9 (Lord Diplock) (UK). 88  See Gerard McMeel, ‘Philosophical Foundations of the Law of Agency’ (2000) 116 LQR 387, 389–90, 410–11. 89 See Majrowski v Guy’s and St Thomas’ NHS Trust [2007] 1 AC 224, 229–30 (Lord Nicholls), 245 (Baroness Hale), 248 (Lord Brown) (UK). 90  Credit Lyonnais (n. 58) 549 (Lord Woolf MR) (emphasis added).


must ‘make the wrong his own’. If the acts were already his own, this addition would be superfluous. The better answer is that a claimant’s rights in tort against one wrongdoer extend to any secondary actors who adopt the primary wrongdoer’s acts as their own. Secondary liability rules merely recognize that we are all under sub-duties—to avoid inducing, granting authorization, or conspiring with others to commit wrongs—as elements inherent in primary duties.91

3.1.3  Upholding Primary Duties

A third set of justifications argues that secondary liability rules are necessary to protect the integrity of an underlying primary right, such as a promise, fiduciary relationship, or property. Such rules prevent secondary actors from devaluing primary rights by removing pre-emptive reasons for compliance. This ensures that moral lacunae do not arise where morally culpable parties interpose ‘innocent’ intermediaries. Secondary liability is said to ‘strengthen’,92 ‘extend’,93 or ‘reinforce’ duties owed by primary actors, or to protect ‘species of property which deserve special protection’,94 thereby protecting the claimant’s primary interest in performance. The problem with this account is that doctrines of secondary liability operate throughout private law, so it cannot easily be said that a single species of right is singled out for ‘special protection’. Moreover, the added protection afforded by secondary remedies is incomplete; for example, it would not make conceptual sense to require the secondary party to disgorge profits retained only by the primary wrongdoer. This set of justifications lends itself more naturally to non-monetary liability. There secondary duties may be seen to arise which require the intermediary to take reasonable steps to disable or prevent wrongdoing by others. Alternatively, there may be circumstances in which the claimant’s fundamental rights are engaged by a third party’s wrongdoing and an injunction against an intermediary is a proportionate remedy to protect them. In these circumstances, one may think of the resulting duty to assist claimants to halt wrongdoing as a vehicle for protecting the claimant’s interest in ensuring protection of his or her rights.

3.1.4  Upholding Duties Voluntarily Assumed

Finally, secondary liability may be understood in terms of the responsibility which intermediaries and other secondary wrongdoers assume for the actions of primary wrongdoers: for example, by helping, requesting, authorizing, or ratifying them—or, more pertinently to large internet platforms, by putting themselves in a position to regulate, and in practice regulating, such activities. To view secondary liability as premised upon an assumption of responsibility overcomes the basic objection that secondary liability rules interfere with a person’s liberty by holding them accountable for conduct

91  Agency-based explanations may be more useful in cases where authority is specifically delegated to a primary wrongdoer—something which internet intermediaries rarely do. 92  Davies (n. 24) 404, 409. 93  Hazel Carty, ‘Joint Tortfeasance and Assistance Liability’ (1999) 19 Legal Studies 489, 668. 94  OBG (n. 32) [27] (Lord Hoffmann).


which is not theirs. As Bagshaw argues, there must be ‘special reasons’ for holding someone responsible for third parties’ conduct.95 Attribution is justified where responsibility stems from a person voluntarily undertaking an obligation which can properly be upheld.96 Secondary liability may actually promote the concept of individual responsibility and the purposes of tort law since it enforces secondary wrongdoers’ duties to control primary wrongdoers with whom they share a nexus of causation and responsibility. (In this regard, voluntary assumption of responsibility shares considerable overlap with causation-based justifications.) This account must be clarified in two ways. First, it will often be the case that a secondary wrongdoer wishes to avoid rather than assume responsibility for the primary wrongdoing; accordingly, the responsibility assumed is here notional—it reflects an expectation imposed by tort law having regard to the secondary wrongdoer’s conduct, knowledge, and control. The further riposte, that this simply involves ‘a policy of conscripting “controllers” into the ranks of [tort] prevention authorities’,97 can be met by observing that those who facilitate harm play a part in violating the claimant’s rights. While this may in itself be insufficient for monetary liability, it justifies some level of blame. As Cane argues, wilful disregard for primary rights justifies restricting secondary actors’ choices by imposing liability:98 the choice to be involved in others’ wrongful conduct forfeits any initial right of moral autonomy they once enjoyed. Second is the charge of circularity: to say that the secondary actor is liable because she owes (or has assumed) a duty of care for the primary wrongdoer’s actions begs the question, since whether such a duty exists is the very issue to be determined. Ultimately, the answer is a function of tort law more generally: duties may be assumed expressly—for example, by conducting risk-taking activity,99 giving advice,100 or exercising control101—or by satisfying a connecting factor sufficient for secondary liability to arise.

3.2  Practical Functions

Consequentialist justifications of intermediary liability argue that it promotes efficient internalization of wrongdoing, thereby deterring wrongs and lowering both individual and overall enforcement costs. Without expressing a view on whether these distributive arguments are valid normative justifications for imposing liability in particular cases, at

95 Roderick Bagshaw, ‘Inducing Breach of Contract’ in Jeremy Horder (ed.), Oxford Essays in Jurisprudence (OUP 2000) 131, 148. 96  See J.C. Smith and Peter Burns, ‘Donoghue v Stevenson—The Not So Golden Anniversary’ (1983) 46 Modern L. Rev. 147, 157; Stovin v Wise [1996] AC 923, 935 (Lord Nicholls) (UK). 97  K.J.M. Smith (n. 79) 44–5. 98  Peter Cane, ‘Mens Rea in Tort Law’ (2000) 20 OJLS 533, 546. 99  See Harrison Moore, ‘Misfeasance and Non-Feasance in the Liability of Public Authorities’ (1914) 30 LQR 276, 278. 100  See e.g. Hedley Byrne & Co. Ltd v Heller & Partners Ltd [1964] AC 465, 494–5 (Lord Morris), 487 (Lord Reid) (UK). 101  See e.g. Dorset Yacht (n. 27) 1030 (Lord Reid).


least three plausible accounts may be identified: first, reducing claimants’ enforcement costs by conscripting least-cost avoiders; secondly, encouraging innovation; and, thirdly, regulating communications policy. These parallel streams inform and are shaped by the considerations of fault and personal responsibility considered previously.

3.2.1  Reducing Claimants’ Enforcement Costs

Enforcement against secondary parties is cheaper than suing primary wrongdoers if the aggregate costs of identifying each primary wrongdoer, proving liability, and recovering judgment outweigh the total costs of recovery against enabling intermediaries. Without a way to target facilitators, inducers, and conspirators, claimants face the Sisyphean task of suing every tortfeasor. Moreover, without the cooperation of secondary actors, claimants may lack the information necessary even to identify them. To solve this problem, doctrines of secondary liability create ‘gatekeeper’ regimes, allowing claimants to exploit natural enforcement bottlenecks and reduce overall costs.102 The function of secondary liability is to set default rules where high transaction costs would otherwise prevent optimal private ordering between claimants and wrongdoers. Such rules encourage intermediaries to internalize the cost of negative externalities their services create—for example, by using contractual mechanisms to allocate liability to primary wrongdoers, increasing service prices, or policing wrongdoing.103 This has two consequences: first, it increases the net value of primary rights, with corresponding increases in any social benefits which those rights were designed to incentivize (for example, the creation of beneficial works); and secondly, it deters primary wrongdoing, which may reduce related social harms (for example, by increasing the quality of speech). The threat of liability incentivizes secondary actors to act in ways that reduce total costs and increase net savings to society—in other words, to act as least-cost avoiders—just as the tort of negligence conditions liability for accidents upon a failure to take optimal precautions.104 In aggregate, this is said to reduce the cost of preventing and correcting breaches of primary obligations. Consequentialists regard secondary liability as appropriate only when these benefits outweigh its costs, such as lost positive externalities caused by restricting non-tortious conduct.105 However, those costs can elude quantification where they produce indirect harms to ‘soft’ interests such as freedom of expression, innovation, and privacy. Insofar as it offers a descriptive account, this view is difficult to reconcile with the fault-based requirements for causative secondary wrongdoing in English law. As Mann and Belzley argue, true efficiency-based secondary liability ‘should have nothing to

102  See Reinier Kraakman, ‘Gatekeepers: The Anatomy of a Third-Party Enforcement Strategy’ (1986) 2 J. of Law, Economics, and Organization 53, 56. 103  Lichtman and Posner (n. 15), 229–30. 104  See Guido Calabresi, The Costs of Accidents: A Legal & Economic Analysis (Yale U. Press 1970); Stephen Gilles, ‘Negligence, Strict Liability and the Cheapest Cost-Avoider’ (1992) 78 Va. L. Rev. 1291. 105  See William Landes and Richard Posner, The Economic Structure of Intellectual Property Law (HUP 2003), 118–19; Lichtman and Posner (n. 15) 238–9.


do with a normative assessment of the . . . intermediary’—whether the secondary actor behaved ignorantly, dishonestly, or blamelessly—since its sole criteria are the relative costs and effectiveness of enforcement.106 Yet even least-cost avoiders are not liable to pay a monetary remedy unless they have intentionally induced, dishonestly assisted, authorized, or conspired in wrongdoing. This reflects the reasonable assumption that ignorant intermediaries are often unable to prevent wrongdoing without high social costs; fault is, in other words, a good heuristic for efficient loss avoidance. However, it means that English and European law are not solely concerned with efficient detection and prevention of primary wrongdoing; indeed, in eBay, secondary liability was refused notwithstanding the Court’s conclusion that eBay could feasibly do more to prevent wrongdoing. Conversely, courts have all but ignored the liability of payment and advertising intermediaries, despite their capacity to reduce the expected profit from online wrongdoing. Efficiency cannot account for these additional normative thresholds and therefore offers only a partial explanation of these doctrines’ aims.

3.2.2  Encouraging Innovation

Consequentialists explain safe harbours as mechanisms for ensuring that secondary liability is imposed upon intermediaries only when they are least-cost avoiders. They help efficiently apportion liability between primary and secondary wrongdoers by creating incentives for intermediaries to adopt low-cost procedures to remove material for which litigation is disproportionately costly, while recognizing that intermediaries are unlikely to be least-cost avoiders unless they are actually aware of wrongdoing.107 In other words, although ex ante monitoring may carry excessive social costs, intermediaries are usually more efficient at ex post removal than primary wrongdoers.108 When Parliament or courts intervene to impose or limit secondary liability, they use a retrospective mechanism to shift innovation and dissemination entitlements between incumbent industries and technological innovators. These interventions reflect an assessment of net social welfare that seeks to induce the inefficient party to internalize the cost of wrongdoing and so avoid future inefficient investments.109 Safe harbours also provide bright lines and clear zones of activity within which intermediaries may act without fear of potential liability.110 In supplying clear guarantees of immunity, they reduce uncertainty and (at least in theory) facilitate investment in new technologies. Further, they reduce the need for secondary actors to make decisions

106  Ronald Mann and Seth Belzley, ‘The Promise of Internet Intermediary Liability’ (2005) 47 William and Mary L. Rev. 239, 265. 107  See Rustad and Koenig (n. 16) 391. 108  See European Commission, Report on the Application of Directive 2004/48/EC (22 December 2010) 9. 109 See Dotan Oliar, ‘The Copyright–Innovation Tradeoff: Property Rules, Liability Rules, and Intentional Infliction of Harm’ (2012) 64 Stan. L. Rev. 951, 1001. 110  See Douglas Lichtman and William Landes, ‘Indirect Liability for Copyright Infringement: An Economic Perspective’ (2003) 16 Harv. J. of L. and Tech. 395, 406.


about primary wrongdoing, which reduces the risk of pre-emptive over-enforcement.111 Intermediaries might otherwise do so because they do not internalize the benefits of tortious activity or incur the social costs of excessive enforcement.112 For courts, these limits function as liability heuristics, reducing decision costs and ultimately the cost of supplying internet services to consumers—all of which encourages investment and innovation. However, safe harbours may not go far enough, since the marginal utility derived from servicing primary wrongdoers may lead intermediaries to abandon ‘risky subscribers’.113 Innovation-based accounts therefore acknowledge that the limits of secondary liability embody a compromise between strict and fault-based responsibility that reflects wider considerations of social policy and market forces.

3.2.3  Regulating Communications Policy

Finally, legal realists identify the wider role of secondary liability rules in regulating access to information. Secondary actors are natural targets for propagating communications policy and enforcing rights in and against information, since they have always been gatekeepers crucial for its reproduction and dissemination.114 Those policies serve numerous purposes, from preserving existing business models and protecting incumbent industries, to minimizing consumer search costs.115 Secondary liability rules are one method of regulating the interface between each generation of disseminating industries and those with an interest in what is being disseminated. They appoint judges as technological gatekeepers who assess the likely harms and benefits of new entrants’ technologies, deciding whether, on balance, they should be immunized or face extended liability.116 Following this assessment, Parliament may intervene to reverse or codify an emergent policy. That tort law specifies high thresholds for secondary liability reflects an underlying policy of entrusting regulation to market forces unless the harms of new technology clearly outweigh their benefits. Safe harbours partially codify these policies. If they are pragmatic compromises, this reflects the contested nature of modern communications policies.117 This approach views the limits of secondary liability as an evolving battleground of regulation which corrective theory cannot wholly explain; although principles of tortious responsibility inform doctrines of secondary liability, they are subservient to a Kronosian cycle of innovation, market disruption, and regulation in which courts and Parliament periodically rebalance wider interests of competition and economic policy,

111  See Mann and Belzley (n. 105) 274. 112  See Assaf Hamdani, ‘Gatekeeper Liability’ (2003) 77 Southern California L. Rev. 53, 73. Cf. Lichtman and Posner (n. 15) 225–6. 113  Neal Katyal, ‘Criminal Law in Cyberspace’ (2001) 149 U. Pa. L. Rev. 1003, 1007–8. 114  See Tim Wu, ‘When Code Isn’t Law’ (2003) 89 Va. L. Rev. 679, 712–13. 115  See e.g. Stacey Dogan and Mark Lemley, ‘Trademarks and Consumer Search Costs on the Internet’ (2004) 41 Houston L. Rev. 777, 795–7, 831. 116  See Tim Wu, ‘Copyright’s Communications Policy’ (2004) 103 Michigan L. Rev. 278, 348–9, 364. 117  ibid. 356.


human rights, innovation, regional and international trade policy,118 and the complex incentive structures underlying primary legal norms.

4.  Types of Wrongdoing

The preceding sections have provided a theoretical account of the distinct forms that liability rules may take, the relationship between primary and secondary wrongdoing, and the justifications for extending liability to intermediaries. Having done so, we can now complete the cartography of intermediary liability by identifying the main areas in which national legal systems have sought to impose duties upon intermediaries. Although this section is, by necessity, neither comprehensive nor detailed, it aims to provide a bird’s-eye view of the territory which will be explored elsewhere in this book, and to situate each area within the conceptual taxonomy of primary and secondary liability developed earlier.

4.1  Copyright Infringement

Traditional business models of copyright industries have been challenged by new forms of online distribution enabled by peer-to-peer protocols, user-generated content platforms, search engines, and content distribution networks. Confronted with this challenge, content creators and publishers have sought to use copyright norms to regulate the activities of intermediaries and to force them to internalize the costs of policing and preventing unlicensed exploitation of copyright works. In so doing, copyright has proven to be one of the hardest fought battlegrounds for intermediary liability. The predominant focus of copyright owners has been the imposition of monetary liability upon intermediaries who were obviously complicit in, or directly responsible for, the most egregious infringements. Into this category may be grouped cases seeking to impose liability upon website directories of infringing content,119 platforms structured around infringing content,120 and transmission protocols that induce infringement by their users and are overwhelmingly used to transmit infringing content.121 In most such cases, both primary and secondary liability are alleged, which reflects their increasingly indistinct boundary in copyright law. For example, in Twentieth

118  See Graeme Dinwoodie, ‘The WIPO Copyright Treaty: A Transition to the Future of International Copyright Lawmaking’ (2007) 57 Case Western Reserve L. Rev. 751, 757–8 (it might be added that intermediaries now represent a ‘fourth vector’ of balance). 119  See e.g. Cooper v Universal Music Australia Pty Ltd [2006] FCAFC 187 [42] (Branson J) (French J agreeing) (Aus.). 120  See e.g. Twentieth Century Fox Film Corp. v Newzbin Ltd [2010] EWHC 608 (Ch) [125] (Kitchin J) (UK); [2010] FSR 21. 121  See e.g. Metro-Goldwyn-Mayer Studios Inc. v Grokster Ltd, 545 US 913, 937 (2005) (US).


Century Fox Film Corp. v Newzbin Ltd,122 the operator of a Usenet binary storage service was liable both for itself communicating to the public copies of the claimants’ films that had been uploaded by third parties to Usenet newsgroups, and also for procuring and engaging in a common design with its paying subscribers to copy the films. It made little difference to the outcome in this case whether liability was classified as primary or secondary. A second strand of cases has sought to impose monetary liability upon intermediaries whose business models are not structured around infringement, but whose services nevertheless facilitate or enable infringing transmissions to occur: chiefly, internet service providers (ISPs) and platforms. In these cases, the difference between primary and secondary liability matters a great deal. Courts have generally rejected attempts to pin secondary liability (typically under the guise of authorization liability) upon ISPs and other mere conduits, on the basis that they lack knowledge or control over the infringing transmissions of their subscribers.123 Similarly, in The Newspaper Licensing Agency Ltd v Public Relations Consultants Association Ltd,124 the liability of an online news aggregator service was to be decided solely by reference to a question of primary liability: namely, whether the service engaged in acts of reproduction in respect of news headlines, and whether those acts satisfied the temporary copying defence. Claims against platforms have produced more equivocal outcomes: in the United States, litigation against YouTube was settled out of court after two first instance judgments in which summary judgment was granted in favour of YouTube on the basis of statutory safe harbours,125 a successful appeal against summary dismissal, and findings upon remission that YouTube had no actual knowledge of specific infringements or any ability to control what content was uploaded by users.126 In another decision, the online video platform Veoh was held not to have sufficient influence over user-uploaded video content to fix it with liability for infringement:127 ‘something more than the ability to remove or block access to materials posted on a service provider’s website’ was needed.128 Both these claims appear to have been focused solely on secondary liability standards. By contrast, European claims against platforms have led to divergent and inconsistent results, which appear to stem from confusion concerning the proper boundaries of primary liability rules. In the United Kingdom, an app that allowed users to upload clips taken from the claimants’ broadcasts of cricket matches infringed copyright by reproducing and communicating to the public a substantial part of the broadcast works, and could not rely on the hosting or mere conduit safe harbours insofar as the clips were

122  [2010] EWHC 608 (Ch) [108]–[112] (Kitchin J) (secondary liability), [125] (UK) (primary liability). 123  See e.g. Roadshow Films Pty Ltd v iiNet Ltd (No. 3) (2012) 248 CLR 42 (Aus.). 124  [2013] RPC 19. See also C-360/13 Public Relations Consultants Association Ltd v Newspaper Licensing Agency Ltd and Others [2014] ECLI:EU:C:2014:1195. 125  See Digital Millennium Copyright Act 1998, s. 512(c) (US). 126  See Viacom International Inc. v YouTube Inc., 676 F.3d 42 (2d Cir., 2012) (US); Viacom International Inc. v YouTube Inc., 07 Civ 2103 (LLC), 1:07-cv-02103-LLS, 20–3 (SDNY 2013) (Stanton J) (US).
127 See UMG Recordings v Shelter Capital Partners LLC, No. 10-55732, 2013 WL 1092793, 12, 19 (9th Cir., 2013) (US). 128  Capital Records Inc. v MP3Tunes LLC, 821 F.Supp.2d 627, 635 (SDNY 2011) (US).


subject to editorial review.129 In Germany, the Bundesgerichtshof did not consider it acte clair whether a platform such as YouTube performs an act of reproduction or communication to the public where infringing videos are uploaded by a user automatically and without any prior editorial review by the platform operator, so referred several questions to the CJEU.130 Previously, the Oberlandesgericht Hamburg had held that, as a host, YouTube could avail itself of safe harbour protection irrespective of the answer to that question.131 Meanwhile, in Austria, a television broadcaster has reportedly succeeded in a claim for infringement against YouTube on the basis that YouTube could not rely on the hosting safe harbour.132 Such divergent and irreconcilable outcomes are the unfortunate by-products of using primary liability concepts to conceal differing value assessments of these platforms’ secondary responsibility. More recently, copyright owners and their licensees have shifted their focus towards non-monetary remedies to block or disable access to infringing material. In the United Kingdom, sections 97A and 191JA of the Copyright, Designs and Patents Act 1988 create statutory blocking remedies consequent upon a finding that a third party has infringed the claimant’s copyright or performers’ rights, respectively. These are in substance final injunctions which give effect to the obligation recognized by Article 8(3) of Directive 2001/29/EC.133 These remedies are discussed in more detail later in this book.134 Their growing use reflects a perception that injunctions of this kind can be significantly more valuable for copyright owners than traditional forms of monetary relief, despite the absence of a monetary remedy. This is well illustrated by the fact that the first blocking order made against a British ISP, Twentieth Century Fox Film Corp. v British Telecommunications plc, related to the Newzbin platform, which (despite the liability judgment in Newzbin) continued in operation as ‘Newzbin2’. Indeed, rather tellingly, it was only after being blocked that, finally starved of visitor traffic and advertising revenue, the platform eventually shut down in late 2012.135

129 See England and Wales Cricket Board Ltd v Tixdaq Ltd [2016] EWHC 575 (Ch) [169]–[171] (Arnold J) (UK), though it was common ground that the app maker was jointly liable for infringing acts committed by users: see ibid. [94] (Arnold J). 130  See Bundesgerichtshof [Supreme Court] (BGH) LF v Google LLC & YouTube Inc. [13 September 2018] case no. I ZR 140/15 (Ger.); C-682/18 LF v Google LLC, YouTube Inc., YouTube LLC, Google Germany GmbH [2019] Request for a preliminary ruling from the Bundesgerichtshof (Ger.) lodged on 6 November 2018, OJ/C 82, 2–3. 131  See Oberlandesgericht [Higher Court] (OGH) Hamburg, GEMA v YouTube II [1 October 2015] case no. 5 U 87/12 (Ger.). 132  See Handelsgericht [Commercial Court] Vienna, ProSiebenSat.1 PULS 4 GmbH v YouTube LLC [5 June 2018] (Aust.); Press release, ‘PULS 4 wins lawsuit against YouTube’ (APA-OTS, 6 June 2018) . 133  See e.g. Twentieth Century Fox Film Corp. v British Telecommunications plc (No. 1) [2011] EWHC 1981 (Ch) [146] (Arnold J) (UK) (‘Newzbin2’). See also Football Association Premier League Ltd v British Telecommunications plc (No. 1) [2017] EWHC 480 (Ch) (UK), in the context of live sports broadcasts. 134  See Chapters 4 and 29. 135  See ‘Piracy Site Newzbin2 Gives up and Closes 15 Months after Block’ (BBC, 29 November 2012) .



4.2  Trade Mark Infringement

Doctrines of trade mark ‘use’ present almost insurmountable difficulties to claims seeking to impose primary liability upon intermediaries such as search engines and online advertising networks for third parties’ trade mark infringements. As the CJEU explained in Google France SARL v Louis Vuitton Malletier SA, permitting third parties to select keywords which may be trade marks does not mean that a service provider itself ‘uses’ those signs for trade mark purposes.136 In this regard, merely ‘creating the technical conditions necessary for the use of a sign’ by a third party will not ordinarily be sufficient to lead to primary liability: the intermediary’s role must instead ‘be examined from the angle of rules of law other than [primary liability]’.137 In that case, the CJEU rejected claims that the claimants’ trade marks had been used by Google in keyword advertisements placed by third parties on the defendant’s search engine, as such advertisements were not used by Google. However, the Court did advert to the possibility of liability attaching under domestic secondary liability rules.138 In England, at least, this seems unlikely as a result of the combined effect of eBay and CBS, subject to the possibility of non-monetary liability under Article 11 of Directive 2004/48/EC. The relatively high thresholds of participation attaching to secondary liability under English tort law principles mean that it is difficult to impose monetary secondary liability upon marketplaces and platforms for third parties’ trade mark infringements. In eBay, the claimants had argued that eBay was jointly liable for trade mark infringement pursuant to a common design with registered users who sold counterfeit and parallel-imported versions of the claimants’ perfume and cosmetic goods. Arnold J rejected this argument, despite expressing ‘considerable sympathy’ for the suggestion that eBay could and should do more to prevent infringement.139 The starting position was that tort law imposed ‘no legal duty or obligation to prevent infringement’.140 Liability as a joint tortfeasor is the consequence of failing to discharge a duty (not to procure or participate in a tortious design), and not the source of such a duty. It followed that if eBay was under no duty to act, then whether or not it failed to take reasonable steps was irrelevant. The claimants’ argument was therefore circular: it assumed the duty it set out to prove. All that could be said was that eBay’s platform facilitated acts of infringement by sellers, but mere facilitation—even with knowledge of the existence of infringements and intent to profit from them—was not enough to create a common design. However, the Court expressly left open the possibility that eBay might be susceptible to an injunction under Article 11.141

136 C-236–238/08 Google France SARL v Louis Vuitton Malletier SA [2010] ECLI:EU:C:2010:159, paras 55–6. 137  ibid. para. 57. The CJEU clearly had in mind national rules of secondary liability, as indicated by the cross-reference to ‘the situations in which intermediary service providers may be held liable pursuant to the applicable national law’. See ibid. para. 107. 138  ibid. para. 57. 139  eBay (n. 52) [370] (Arnold J). 140  ibid. [375] (Arnold J). 141  ibid. [454], [464]–[465] (Arnold J).


As Aldous LJ observed in British Telecommunications plc v One in a Million Ltd, secondary liability is ‘evolving to meet changes in methods of trade and communication as it had in the past.’142 However, despite the inherent flexibility of such rules, cases like eBay suggest that secondary liability rules embed within them a default policy choice to exonerate passive or neutral intermediaries who play no intentional role in wrongdoing, despite knowingly facilitating it for profit, or making it a necessary or inevitable part of their business models. As in copyright, English courts have recognized the availability of blocking injunctions to prevent trade mark infringement by third parties.143 While these remedies are potentially effective to prevent sales of counterfeit goods from websites that are dedicated to advertising and selling such goods, they are less obviously available against general purpose platforms such as eBay (on which are also advertised a vast array of lawful goods) due to proportionality concerns and the technical difficulty of preventing access only to infringing material.

142  [1998] EWCA Civ 1272; [1999] FSR 1 [486] (Aldous LJ). 143  See e.g. Cartier International AG v British Sky Broadcasting Ltd [2014] EWHC 3354 (Ch) (UK); British Telecommunications plc v Cartier International AG [2018] UKSC 28; [2018] 1 WLR 3259 (UK).

4.3  Defamation

The tort of defamation proved one of the earliest battlegrounds for intermediary liability, as claimants sought to impose monetary liability upon hosts of Usenet newsgroups,144 online news publishers,145 social networks,146 bloggers,147 and ISPs148—with varying degrees of success. Defamation provides a good example of primary liability being remoulded to encompass certain categories of secondary publishers whose contribution to the publication of defamatory material is considered sufficiently blameworthy to merit liability. This process shares many parallels with earlier developments in the law of defamation which enabled it to accommodate previous technologies of reproduction and dissemination within established patterns of primary liability (for example, telegraphy and radio). Defamation claims involving intermediaries tend to have three focal points: prima facie liability, which asks whether the intermediary is to be considered a publisher at common law; defences, such as innocent dissemination, which exempt innocent secondary publishers from liability; and safe harbours, which immunize passive and neutral publishers who would otherwise be liable. The common law approach to prima facie liability has evolved rapidly in a series of judicial decisions involving intermediaries. By

144  See e.g. Godfrey v Demon Internet Ltd [2001] QB 201 (UK). 145  See e.g. Dow Jones & Co. v Gutnick (2002) 210 CLR 575 [44], [51]–[52] (Gleeson CJ, McHugh, Gummow, and Hayne JJ) (Aus.). 146  See e.g. Richardson v Facebook UK Ltd [2015] EWHC 3154 (QB) [28], [48] (Warby J) (UK) (noting obiter that the defendant could not be held to be a publisher). 147  See e.g. Bunt v Tilley [2006] EWHC 407 (QB) (UK). 148  See e.g. Kaschke v Gray [2010] EWHC 690 (QB) [40]–[41] (Stadlen J) (UK).


analogy with cases involving offline intermediaries, English courts developed a test premised upon knowledge of defamatory material coupled with a failure to remove it within a reasonable period, from which an inference of publication could be drawn. Early cases reasoned that an intermediary is to be treated as a publisher (and therefore within the scope of primary liability for defamation) where tortious material is uploaded by a third party to facilities under its control, the intermediary has been put on notice of the unlawfulness of the material, and fails to remove it within a reasonable period despite having the capacity to do so.149 This reasoning relied on an analogy with older cases involving offline intermediaries in which consent to publication was inferred from an occupier’s physical control over premises and its failure to remove statements displayed there.150 This approach was further developed in Tamiz v Google Inc., which concerned defamatory comments posted to a blog hosted by Google’s ‘Blogger’ service. The Court of Appeal laid down a test which asks whether the platform could ‘be inferred to have associated itself with, or to have made itself responsible for, the continued presence of [the defamatory] material on the blog’, after receiving notice of the material.151 The Court emphasized Google’s own voluntary conduct—in particular, by providing a platform for publishing blogs and comments, setting the terms and policies for the platform, and determining whether or not to remove any posting made to it—as a potential basis for inferring participation in publication. This suggests an approach which is consistent with principles of moral agency and individual responsibility, albeit one founded upon a failure to act after notification.152 Conversely, in Metropolitan Schools v DesignTechnica, the claimant alleged that Google Search was a publisher of defamatory material that appeared within automated ‘snippets’ summarizing search results for the claimant’s name. Google relied on the statutory defence of innocent dissemination and safe harbours, though ultimately it needed neither since it was held not to be a publisher of the material at common law, and therefore did not face even prima facie liability.153 Similarly, the Supreme Court of Canada has concluded that a mere hyperlink is not publication of the material to which it leads.154 Australian courts have been slower to reach the same conclusion: in Trkulja v Google LLC,155 for example, the High Court of Australia held that the question was not amenable to summary determination, as ‘there can be no certainty as to the nature and extent of Google’s involvement in the compilation and publication of its search engine results without disclosure and evidence’.156 In so holding, the Court appears to have

149  See Godfrey (n. 143) 209, 212 (Morland J). 150  See Byrne v Deane [1937] 1 KB 818, 830 (Greene LJ) (UK). 151  [2013] EWCA Civ 68 [34] (Richards LJ) (Lord Dyson MR and Sullivan LJ agreeing) (UK). 152  See also Pihl v Sweden App. no. 74742/14 (ECtHR, 7 February 2017) para. 37 (suggesting that this approach strikes an appropriate balance with claimants’ Art. 8 Convention rights). 153  [2009] EWHC 1765 (QB) [124] (Eady J) (UK). 154  See Crookes v Newton [2011] SCC 47 (Can); [2011] 3 SCR 269 [42] (Abella J), [48] (McLachlin CJ and Fish J). 155  [2018] HCA 25 [38]–[39] (Kiefel CJ, Bell, Keane, Nettle, and Gordon JJ) (Aus.). 156  ibid. [39].


disagreed with the proposition that a search engine can never be a common law publisher of materials included in search results or autocompleted queries. Intermediaries who are further removed from the defamatory material are intrinsically less likely to face primary liability. For example, in Bunt v Tilley the ISP defendant was held incapable of being a publisher at common law despite transmitting defamatory materials from third party website operators to its subscribers.157 Instrumental in this conclusion was the lack of any ‘assumption of general responsibility’ of an ISP for transmitted materials, which would be necessary to impose legal responsibility.158 The same conclusion would appear likely to exonerate other mere conduits, without any need to rely on defences or safe harbours. The hosting safe harbour is most commonly relied upon by intermediaries to shield against monetary liability for defamation.159 In McGrath v Dawkins, the retailer Amazon.co.uk was immunized against claims of liability for allegedly defamatory reviews and comments posted to the claimant’s book product page.160 Similarly, in Karim v Newsquest Media Group Ltd, a claim against a website operator arising from defamatory postings made by users was summarily dismissed on the basis of safe harbours.161 The hosting safe harbour tends to encourage over-compliance by intermediaries, since removal is in most cases the only realistic response to a complaint, even if a motivated defendant might well be able to plead a substantive defence (such as truth or qualified privilege). Nevertheless, in defamation cases an informal counter-notification procedure appears to be encouraged by the English courts: if an intermediary receives two competing allegations—one claiming that material is defamatory, the other defending the lawfulness of the material—then, faced with two irreconcilable notices, the intermediary will have ‘no possible means one way or the other to form a view as to where the truth lies.’162 It also appears to be the case that intermediaries will not be required to accept complaints at face value if they are defective or fail to identify facts from which unlawfulness is apparent.163 Such notices are ‘insufficiently precise or inadequately substantiated’ to deprive an intermediary of the hosting safe harbour.164 The Defamation Act 2013 (UK) introduced a new defence of exhaustion, which requires the claimant first to exhaust any remedies against a primary wrongdoer before proceeding against a secondary publisher, such as an intermediary. This prescribes a stricter relationship of subsidiarity between primary and secondary liability for defamation: a court cannot hear a defamation claim against a secondary party unless the court is satisfied that it is not reasonably practicable for the claimant to proceed against the

157 See Bunt v Tilley (n. 146) [21]–[26], [37] (Eady J) (UK). 158  ibid. [22] (Eady J); cf. McLeod v St Aubyn [1899] AC 549, 562 (Lord Morris) (UK). 159  In many respects, the safe harbours cover similar ground—and support similar policy goals—to the defence of innocent dissemination in the law of defamation. 160  [2012] EWHC B3 (QB) (UK). 161  [2009] EWHC 3205 (QB) [15] (Eady J) (UK). 162  See e.g. Davison v Habeeb [2011] EWHC 3031 (QB) [66] (HHJ Parkes QC) (UK). 163  Tamiz (n. 150) [59]–[60] (Eady J). 164  eBay (n. 52) [122].


primary party.165 In defamation claims, a successful claim against a secondary publisher (such as a service provider) is usually treated as an example of primary liability for a second and distinct publication, even though it is derivative from another wrong (the original publication).

4.4  Hate Speech, Disinformation, and Harassment

Discussion continues in the United Kingdom about how best to regulate and deter online abuse, disinformation and ‘fake news’, and cyber-bullying. Proposals have been made for intermediaries (especially social networks and media-sharing platforms) to take a more active role in policing content accessible on their platforms.166 A series of recent cases from Northern Ireland demonstrates how social networks can face non-monetary liability to remove materials which constitute unlawful harassment or otherwise interfere in an individual’s fundamental rights. In XY v Facebook Ireland Ltd, the Court held that a Facebook page entitled ‘Keeping Our Kids Safe from Predators’ created a real risk of infringing the claimant sex offender’s rights to freedom from inhuman and degrading treatment and to respect for private and family life. The Court granted an interim injunction requiring Facebook to remove the page in question (designated by URL), but refused to order it to monitor the site to prevent similar material from being uploaded in the future due to the disproportionate burden and judicial supervision that would entail.167 However, no monetary liability was imposed on Facebook, and it appears to have been assumed without any discussion that Facebook owed a duty to prevent interferences with the claimant’s rights even though it was not necessarily the publisher of the page.168 Conversely, in Muwema v Facebook Ireland Ltd, the Irish High Court made an order requiring Facebook to identify the author of a Facebook page, but refused injunctive relief because Facebook could avail itself of statutory defences and because it would be futile.169 Intermediaries and individuals can face criminal liability under section 127(1) of the Communications Act 2003 (UK), which makes it an offence to publish a message or other matter that is ‘grossly offensive’ or of an ‘indecent, obscene or menacing character’ over a public electronic communications network.170 Service providers have not been targeted directly, and it is difficult to envisage a situation in which an intermediary would possess the necessary mens rea for primary liability. In the absence of relevant primary or secondary liability standards, it seems likely that some form of sui generis statutory regulation would be necessary if policymakers wish to impose duties onto

165  Although the provision has not yet been litigated, it may have less application in internet disputes involving anonymous primary defendants. 166  See e.g. Digital, Culture, Media and Sport Committee, Final Report: Disinformation and ‘Fake News’ (18 February 2019) ch 2, paras 14–40. 167  [2012] NIQB 96 [16]–[20] (McCloskey J) (UK). 168  See also AB Ltd v Facebook Ireland Ltd [2013] NIQB 14 [13]–[14] (McCloskey J) (UK). 169  [2016] IEHC 519 [62]–[64] (Binchy J) (Ire.). 170  See e.g. Chambers v Director of Public Prosecutions [2012] EWHC 2157 (UK).


platforms, social networks, and other intermediaries involved in the dissemination of harmful material.171

4.5  Breach of Regulatory Obligations

Liability can also arise pursuant to statutory provisions that impose specific duties upon intermediaries in particular contexts. Intermediaries now face primary regulation in fields as diverse as network neutrality,172 network and information security,173 access to online pornography,174 and data protection.175 The consequences of breaching such a duty depend on the terms of the relevant legislative provisions, and range from monetary penalties and enforcement notices from regulators through to private claims for breach of statutory duty. These are most conventionally thought of as forms of primary liability, since they attach to breaches of primary duties owed by intermediaries under the terms of the relevant legislation. A particularly topical example concerns interception and retention of communications data. Most developed countries have legislation providing for the monitoring and interception of metadata by service providers, under various conditions and procedures, to assist in the detection and investigation of crime. Under the Investigatory Powers Act 2016 (UK), law enforcement and intelligence agencies have statutory powers to intercept ‘communications data’ (such as emails, instant messages, HTTP requests, and file metadata) carried by telecommunications service providers (broadly defined to encompass, for example, telephone carriers, web hosts, ISPs, and social networking platforms). Such interceptions can in most cases be authorized by executive warrant. Service providers who receive a valid request are obliged to grant access to the requested data, enforceable by means of injunction. Following the disclosure of mass interception capabilities and intelligence-sharing by GCHQ and other agencies, public debate has ignited about whether these capabilities are lawful, necessary, or desirable, and what conditions and safeguards are needed to uphold fundamental rights. In the meantime, intermediaries continue to be subject to primary duties to store communications metadata and to provide targeted and bulk access to law enforcement authorities.

171 The Final Report (n. 165) recommended the creation of a statutory body responsible for enforcing a Code of Ethics against intermediaries, including by requiring the removal of ‘harmful and illegal content’, ibid. para. 39. 172  See Regulation 2015/2120/EU of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No. 531/2012 on roaming on public mobile communications networks within the Union [2016] OJ L/310. 173  See Network and Information Systems Regulations 2018 (SI 2018/506) (UK). 174  See Digital Economy Act 2017, ss. 14, 20, 23 (UK). 175  See Regulation 2016/679/EU (n. 29).



4.6  Disclosure Obligations

Most legal systems recognize an obligation to give disclosure of information needed to bring a viable claim against an actual or suspected wrongdoer. In common law systems, disclosure of this kind is a discretionary equitable remedy that has been developed by courts of equity.176 The claimant must establish: (1) an arguable case of primary wrongdoing by someone; (2) facilitation of that wrongdoing by the defendant from whom disclosure is sought; (3) that disclosure is necessary to enable some legitimate action to be taken against the primary wrongdoer; and (4) that disclosure would, in the circumstances, be proportionate. Whether disclosure by an intermediary is proportionate will depend on a number of factors, including the nature and seriousness of the primary wrongdoing,177 the strength of the claimant’s claim against the primary wrongdoer,178 whether the intermediary’s users have a reasonable expectation that their personal data are private,179 whether the disclosure would only affect the personal information of arguable wrongdoers,180 whether there are any realistic alternatives, and the cost and inconvenience of giving disclosure. Although there is no presumption either way, the decided cases indicate that it is exceedingly rare for disclosure to be outweighed by the fundamental rights of an intermediary or their anonymous users. The nature of the legal duty recognized in these cases is relatively limited, and such claims give rise only to non-monetary secondary liability of the most nominal kind. The basis of the duty is that, where an intermediary is ‘mixed up in’ wrongdoing, once it has been given notice of the wrongdoing it comes under a duty to ‘stop placing [its] facilities at the disposal of the wrongdoer’.181 However, reflecting their status as a legally innocent party, intermediaries are, in an ordinary case, entitled to be reimbursed for their compliance costs and legal costs of responding to the claim for disclosure.182 Two cases suggest that the English courts are alive to the risk that blanket disclosure orders against ISPs may be abused by claimants for the collateral purpose of extracting monetary settlements from subscribers accused of copyright infringement. In one case,183 the Court commented critically on the use of disclosed information to perpetrate a scheme in which parties were given no realistic alternative but to submit to a demand of payment to avoid being ‘named and shamed’ as alleged downloaders of pornographic copyright works. In another case, the Court undertook a full proportionality analysis, balancing claimants’ and subscribers’ rights and the justifications for interference, before applying the ‘ultimate balancing test’ of proportionality. However, in that case, a refusal

176  See Norwich Pharmacal Co. v Customs and Excise Commissioners [1974] AC 133 (UK). 177  See Sheffield Wednesday Football Club Ltd v Hargreaves [2007] EWHC 2375 (QB) (UK). 178  See Clift v Clarke [2011] EWHC 1164 (QB) (UK). 179  See Totalise plc v The Motley Fool Ltd [2002] 1 WLR 1233 [30] (Aldous LJ) (UK). 180  See Rugby Football Union v Consolidated Information Services Ltd (formerly Viagogo Ltd) (in liq.) [2012] 1 WLR 3333 [43], [45] (Lord Kerr JSC) (UK). 181  ibid. [15]; Cartier (n. 142) [10] (Lord Sumption JSC) (UK). 182  See Totalise (n. 178) [29] (Aldous LJ); see Cartier (n. 142) [12] (Lord Sumption JSC). 183  See Media CAT Ltd v Adams (No. 2) [2011] FSR 29 [724]–[725] (HHJ Birss QC) (UK).


to extend disclosure to copyrights that had been licensed for enforcement purposes by third party copyright owners was overturned; this did not amount to ‘sanctioning the sale of the [subscribers’] privacy and data protection rights to the highest bidder’.184

5.  Conclusions

This chapter has proposed a theoretical framework for describing liability and a taxonomy for classifying the types of liability which may be imposed upon internet intermediaries. Such liability may be divided into monetary and non-monetary obligations, and further classified along a spectrum of liability rules. Additionally, all forms of liability may be described as either primary or secondary, where the latter is reserved to derivative wrongdoers whose conduct meets certain causal, relational, and normative thresholds. This chapter has also considered two sets of policy justifications for imposing liability upon intermediaries. The first seeks to bring secondary wrongdoers’ liabilities within traditional accounts of moral agency and individual responsibility in private law. This chapter argued that doctrines of secondary liability can be best explained, at least in the context of internet intermediaries, by intermediaries’ assumption of responsibility for primary wrongdoers. The second set of justifications uses law and economics to explain why certain secondary actors may be appropriate targets of loss-shifting. Although economic analysis provides a powerful vocabulary to describe the communications policy underlying many intermediary liability rules, the agnosticism of enforcement cost analysis offers few clear answers to underlying questions of fault, responsibility, and fundamental rights. Although our focus has been legal liability, it is also obvious that even in the absence of liability, intermediaries’ conduct may be important in a number of ways. First, voluntary self-regulation now accounts for the vast majority of online content enforcement undertaken by major platforms and search engines. For example, by the end of March 2019, Google had processed over 4 billion requests to de-index copyright-infringing URLs from search results,185 which is many orders of magnitude greater than any conceivable claims for injunctions or damages. Secondly, the norms and practices which intermediaries choose to adopt, and their approach to self-enforcing those norms, in turn affect the likelihood of tortious material being posted, transmitted, or accessed.186 Thirdly, the possibility of liability may cause intermediaries to internalize at least some of the harms caused by their services even if they are not under a strict legal duty to do

184  Golden Eye (International) Ltd v Telefónica UK Ltd [2012] EWHC 723 (Ch) [146] (Arnold J) (UK); rev’d in part [2012] EWCA Civ 1740 (UK). 185  See Google Inc., ‘Requests to remove content due to copyright’ in Transparency Report (March 2019) . 186  See e.g. Twitter Inc., ‘The Twitter Rules’ (Help Center, 2019) .


so.187 Finally, the threat of more stringent future regulation may deter intermediaries from more self-interested behaviour, may incentivize greater self-regulation,188 and may even lead intermediaries to develop new capabilities which are later available to be deployed in aid of legal remedies.189 Each of these forms of conduct by intermediaries can exert a strongly regulatory effect on internet communications, even though it can hardly be said to be a consequence of imposing liability. At least under English law, the traditional tort-based thresholds for imposing secondary liability are rarely met by passive and neutral intermediaries, which usually lack the required knowledge and intention even if they do facilitate wrongdoing. The high thresholds of intervention required for secondary liability suggest that these rules are unlikely to be effective at regulating internet intermediaries alone. Limitations derived from European law, examined elsewhere in this book, further entrench the principle that faultless intermediaries should not face monetary liability or onerous duties to police third parties’ wrongdoing. Despite this general aversion to imposing liability upon intermediaries and other secondary actors, it is clear that intermediaries may face liability in a growing number of areas. This chapter has provided an overview of the main developments from the perspective of English law, which will be further analysed in specific areas elsewhere in this book.

187 See e.g. eBay Inc., ‘About VeRO’ (2019) ; Google Ireland Ltd, ‘How Content ID Works’ (YouTube Help, 2019) . 188  See e.g. Rob Leathern, ‘Updates to Our Ad Transparency and Authorisation Efforts’ (Facebook Business, 29 November 2018) (noting a change to Facebook’s advertising policies in the UK). 189  See e.g. Lord Macdonald, ‘A Human Rights Audit of the Internet Watch Foundation’ (November 2013) 8–9 (noting the development of the IWF blacklist in response to government pressure).


Chapter 4

Remedies First, Liability Second: Or Why We Fail to Agree on Optimal Design of Intermediary Liability

Martin Husovec*

Over recent years, attention in the literature has largely focused on analysing how courts decide about the conditions leading to liability of intermediaries for their users’ actions. Numerous authors have studied issues related to the wide range of market players active in all areas of intellectual property (IP) law.1 What remains under-researched, however, is the question of the consequences that these legal qualifications entail. I am not aware of an in-depth comparative or empirical study that would try to map the extent of damages awarded, the scope and type of injunctions granted, the allocation of burden of proof, or similar highly practical issues. Angelopoulos gets the closest.2 For legal scholarship, very often establishing the liability itself is an end-station. It is somehow implicitly assumed that intermediaries cannot recover and will simply shut down their services if held liable. This is striking, as it basically presupposes a world

*  I would like to thank Christina Angelopoulos, Giancarlo Frosio, and Miquel Peguera for their valuable comments on the earlier draft of this chapter. All mistakes are solely mine. 1  See e.g. Miquel Peguera, ‘Hyperlinking under the lens of the revamped right of communication to the public’ (2018) 34(5) Computer Law & Security Rev. 1099–118; Christina Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law Int’l 2016); Jaani Riordan, The Liability of Internet Intermediaries (OUP 2016); Paul Davies, Accessory Liability (Hart 2015); Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017); Mark Bartholomew and Patrick McArdle, ‘Causing Infringement’ (2011) 64 Vand. L. Rev. 675. 2  See Angelopoulos (n. 1).

© Martin Husovec 2020.


of one-chance intermediaries which cannot err and improve. It also makes the debate overly dramatic when it does not have to be. While some of the recent developments in the area of injunctions against innocent third parties started challenging these assumptions,3 it seems to me that scholarly review is still lagging behind when considering the system in its entirety. To illustrate this in the European setting, let me offer a paradox. In the EU, while deciding on when to trigger liability is left to the Member States to legislate—although subject to the limitations of the e-Commerce Directive, human rights safeguards and increasing review of the Court of Justice of the European Union (CJEU)—the issue of ensuing consequences has comparably more European footing in the Enforcement Directive and its regulation of remedies. In other words, the what is more European than the when. Despite this, most of the work on intermediary liability tackles the question of when to trigger consequences rather than what consequences to trigger.

1.  Three Legal Pillars

IP scholars studying intermediary liability often distinguish between primary and secondary liability. While the first is delineated by the statutory exclusive rights, the second is shaped by doctrines which come to expand those rights. These doctrines are often grounded in general tort law. The basic notion is that while the primary infringers are wrongdoers due to their own exploitation of the protected objects, the secondary infringers only become wrongdoers when they contribute, in some relevant way, to other people’s infringing actions. What is considered relevant differs substantially across the globe. However, what constitutes primary infringement differs too. The line between primary and secondary liability is more fluid than one might think, as the latest developments in copyright law4 and trade mark law testify.5 In a comparative edited volume in 2017, Dinwoodie concludes that ‘an assessment of secondary liability cannot be divorced from (and indeed must be informed by) the scope of primary liability or other legal devices by which the conduct of service providers

3  See Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable, But Not Liable? (CUP 2017); Pekka Savola, ‘Blocking Injunctions and Website Operators’ Liability for Copyright Infringement for User-Generated Links’ (2014) 36(5) EIPR 279–88. 4  See C-348/13 BestWater v Mebes and Potsch [2014] ECLI:EU:C:2014:2315; C-279/13 C. More Entertainment AB v Sandberg [2015] ECLI:EU:C:2015:199; C-160/15 GS Media BV v Sanoma Media Netherlands BV [2016] ECLI:EU:C:2016:644; C-610/15 Stichting Brein v Ziggo BV and XS4All Internet BV [2017] ECLI:EU:C:2017:456. 5  In the area of trade mark law, the typical example is trade mark use by Google in its keyword advertising, which is infringing in the EU but allowed in the United States. See Graeme Dinwoodie, ‘A Comparative Analysis of the Secondary Liability of Online Service Providers’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017) 13. Another example is the emerging question about use of trade marks by Amazon on its platform. See C-567/18 discussed also in this Handbook by Richard Arnold in Chapter 20.


or their customers is regulated’.6 In other words, although categories of primary and secondary liability are important, they cannot be simply studied in isolation. Classification by either of them is made on the basis of our understanding of the breadth associated with each of them. Therefore, primary infringement and its scope cannot be uninformed by the scope of secondary infringement, and vice versa. In this chapter, I try to emphasize that this is true not only for the design of conditions of liability, but also of its associated consequences. In the EU, the question of the interface between primary/secondary liability is complicated by the fact that the choice between these two policy-layers is a reaction not only to associated trade-offs of scope, but also to EU integration. Because the CJEU has no explicit jurisdiction over non-harmonized domestic secondary liability laws, if it wants to assert some jurisdiction, its policy choices have to be made either (1) entirely within primary liability, or (2) outside it, by creating an additional layer of protection drawing on national traditions of accessory liability. What we have been observing in the last couple of years in EU copyright law could be explained in both ways.7 Only the future will tell how the Court will eventually conceptualize its own case law on communication to the public when facing further national requests for clarification.8 Similarly, some national doctrines that are dressed as primary liability in fact incorporate policy considerations of secondary liability, and hence would be more appropriate to be studied as such.9 Secondary liability doctrines differ across the globe. However, in IP scholarship, the term labels situations where the source of infringement is a third party’s behaviour. In the United States, doctrines of contributory liability, including forms of inducement, and vicarious liability incorporate this policy space.10 In the UK, it is liability of accessories.11 In Germany, it is liability of aiders and abettors, but also reaching into case law concerning inducement, injunctive ‘disturbance liability’,12 and patent law’s liability for wrongful omissions.13 All these doctrines consider different circumstances relevant.

6  See Dinwoodie (n. 5) 4. 7  For the debate, see Ansgar Ohly, ‘The broad concept of “communication to the public” in recent CJEU judgments and the liability of intermediaries: primary, secondary or unitary liability?’ (2018) 8(1) JIPLP 664–75; Jan Bernd Nordemann, ‘Recent CJEU case law on communication to the public and its application in Germany: A new EU concept of liability’ (2018) 13(9) JIPLP 744–56; Pekka Savola, ‘EU Copyright Liability for Internet Linking’ (2017) 8 JIPITEC 139. 8  See e.g. pending case C-753/18. 9  Dinwoodie (n. 5) 8 ff (discussing whether the concept of authorization also fits secondary liability). 10  See Mark Bartholomew and Patrick McArdle, ‘Causing Infringement’ (2011) 64 Vand. L. Rev. 675; Salil Mehra and Marketa Trimble, ‘Secondary Liability of Internet Service Providers in the United States: General Principles and Fragmentation’ in Dinwoodie (n. 5) 93–108; John Blevins, ‘Uncertainty as Enforcement Mechanism: The New Expansion of Secondary Copyright Liability to Internet Platforms’ (2013) 34 Cardozo L. Rev. 1821. 11  See Paul Davies, Accessory Liability (Hart 2015) 177 ff. 12  Disturbance liability still serves a dual role in German case law—i.e.
to supplement missing negligence-based tortious liability of accessories, on the one hand, and to implement the EU system of injunctions against innocent intermediaries, on the other. See Martin Husovec, ‘Asking Innocent Third Parties for a Remedy: Origins and Trends’ in Franz Hofmann and Franziska Kurz (eds), Law of Remedies (Intersentia 2019). 13  Accessory liability in German private law is generally based on the provision of s. 830(2) BGB (‘Instigators and accessories are equivalent to joint tortfeasors.’), which, according to the case law,


Remedies First, Liability Second   93 In the last decade, we have also witnessed developments that create a third pillar of liability, that of injunctions against innocent third parties, in our context, innocent intermediaries.14 Unlike primary and secondary liability, these injunctions impose obligations on parties that engage in no wrongdoing themselves. These injunctions are often not grounded in tortious actions, although they can relevantly cross-reference to tort law. For instance, the English concept of injunctions against innocent third parties is based on equity and the German concept of ‘disturbance liability’ is based on an analogy with protection in property law.15 Both are dependent on wrongdoings by someone, however not by the person held responsible.16 In other countries, these instruments take the form of new statutory provisions,17 or a separate administrative regulation.18 Whatever the legal basis, the characteristic feature of these orders is that they are not premised on one’s wrongdoing, and rather try to impose obligations due to con­sid­er­ations of efficiency or fairness. They treat intermediaries as accountable (for help), but not liable. While harm is not attributed to them, some form of assistance is required. Although some of these instruments can still come with a ‘liability’ label attached (e.g. German concept), the consequences clearly differ. Obligations are imposed on the services by courts or legislators without trying to attribute to them liability for individual instances of usertriggered harm. While pillars of liability are distinct—and should be understood as mutually exclusive— their application might overlap. Although an intermediary who is a secondary infringer should not also be a primary infringer for the same set of facts, the courts might test both grounds of liability in parallel to strengthen their decisions.19 Intermediaries in a requires, as in criminal law, double intent, e.g. the acts of an accessory as well as of the main tortfeasor have to be intentional. See Mathias Habersack and others, Münchener Kommentar zum BGB (6th edn, C.H. Beck 2013) s. 830(2) para. 15. Apart from this, the Bundesgerichtshof [Supreme Court] (BGH) recognized that inducement can lead to tortfeasorship. See BGH Cybersky [2009] I ZR 57/07 (Ger.) (inducement). On the other hand, so-called ‘adopted content’ seems to fall under the rubric of primary liability. See BGH Marions-kochbuch.de [2009] I ZR 166/07 (Ger.); BGH [2016] VI ZR 34/15 para. 17 (Ger.); Posterlounge [2015] I ZR 104/14 para. 48 (Ger.). In contrast, in patent law a different senate accepted negligence-based tortious liability for omissions. See BGH MP3-Player-Import [2009] Xa ZR2/08 (Ger.). 14  See, recognizing this, Martin Husovec, ‘Injunctions against Innocent Third Parties: The Case of Website Blocking’ (2013) 4(2) JIPITEC 116; Franz Hofmann, ‘Markenrechtliche Sperranordnungen gegen nicht verantwortliche Intermediäre’ [2015] GRUR 123; Riordan (n. 1) ss. 1.49–1.60; Husovec (n. 3); Dinwoodie (n. 5) 3; Tatiana Eleni Synodinou and Philippe Jougleux, ‘The Legal Framework Governing Online Service Providers in Cyprus’ in Dinwoodie (n. 5) 127; Ansgar Ohly, ‘The broad concept of “communication to the public” in recent CJEU judgments and the liability of intermediaries: primary, secondary or unitary liability?’ (2018) 8(1) JIPLP 672; Jan Nordemann, ‘Liability of Online Service Providers for Copyrighted Content—Regulatory Action Needed?’ (2018) 20 ; see also Richard Arnold in Chapter 20. 15  See Husovec (n. 
3) 145 ff. 16  Note that the German situation is complicated by the convergence with tortious principles. 17  In Australia, the Copyright Amendment (Online Infringement) Act 2015, effective 27 June 2015, created a novel s. 115A titled ‘injunctions against carriage service providers providing access to online locations outside Australia’. 18  See Chapters 13 and 30. 19 See Twentieth Century Fox Film Corp. & Anor v Newzbin Ltd [2010] EWHC 608 (Ch) (UK) (Justice Kitchin finding Newzbin service liable under the doctrines of direct liability, joint tortfeasorship, and the copyright-specific tort of authorization).


position of secondary infringers could be targeted by injunctions against innocent intermediaries when rightholders think it is easier not to discuss their wrongful contributions, and focus only on preventive steps to be imposed by injunctions. Given these dynamics, the extent to which the remedies can actually compete among each other when more of them potentially apply should not be underestimated. Remedies are tools given to the plaintiffs to choose from in order to solve their situation. The plaintiffs will select them also taking into account their costs and expected benefits, and the practical difficulties in obtaining and enforcing them. If policymakers and judges disregard this ‘remedial competition’, the result can be undesirable consequences, such as enforcement outcomes that put too much pressure, and thereby costs, on players who are innocent, as opposed to real wrongdoers.20

2.  Distinguishing Reasons from Consequences Primary and secondary liability tests both end with classifying the implicated persons as some type of infringer; that is, wrongdoer. Despite this, the legal consequences cannot be said to be identical. Different consequences match different policy situations that these doctrines try to resolve. Primary infringement is concerned with those who harm by their own behaviour. Secondary infringement is concerned with those who harm by relying on the behaviour of others to achieve the harmful outcomes. The economic analysis evaluates the preventive function of tort law rules by the parameters of precaution-levels and activity-levels.21 These essentially refer to the quality and quantity of someone’s behaviour. Precaution refers to instantaneous adjustment in behaviour, such as checking authorship before reusing content, or blocking third party content on notification.22 Activity then refers to the decision to participate in an event that may generate harm, such as uploading the content or providing a service al­together.23 The reason why the law wants to influence behaviour is that it aims to stimulate socially optimal levels of care so that harm is avoided at a reasonable cost for society. Hence, the law requires diligence of users and intermediaries when dealing with copyright content owned by rightholders. Users are asked to think about authorship and fair use; intermediaries to avoid inducing others to use protected subject matter without authorization or to remove it once they are notified of its existence on their services. If actors 20  See Husovec (n. 14) ss. 44 ff. 21  See William Landes and Richard Posner, The Economic Structure of Tort Law (HUP 1987) 66; Steven Shavell, Economic Analysis of Accident Law (HUP 1987) 25; Stephen Gilles, ‘Rule-Based Negligence and the Regulation of Activity Levels’ (1992) 21 J. of Legal Studies 319, 327 and 330 (noting that the distinction between the two, however, is not always very sharp). 22  See Keith Hylton, ‘Missing Markets Theory of Tort Law’ (1996) 90 Northwestern U. L. Rev. 977, 981. 23 ibid.


Remedies First, Liability Second   95 adopt a sufficient standard of care to respect others’ rights, the law allows them to engage in their activities. Precaution-levels and activity-levels are also crucial for a proper understanding of remedies, and their role in solving liability scenarios.24 First of all, the prevention of damage in online space is typically multilateral because the behaviour of at least three parties needs to be involved—the intermediary, users, and rightholders. And, secondly, unlike in the usual scenario of bilateral damages, the parameters of activity-level (quantity) and care-level (quality) are not necessarily vested in all the parties equally.25 IP rightholders cannot really directly influence the quantity of infringement, as they can only reduce the production of the protected subject matter or stop producing it al­together.26 Indirectly, however, by setting the prices and conditions of legitimate access to their works, some of the rightholders (but not all of them) might be able to influence the activity-level of the users. Intermediaries are mostly unable to directly influence the quantity of the infringements posted by their users. Indirectly, however, by creating a certain culture, or setting incentives, they might produce some outcomes. At the same time, by exercising a duty of care (e.g. taking content down or redesigning their service), they can also indirectly influence the number of infringements. However, unless they shut down the service or pre-moderate completely, they can never entirely control the infringing posts of their users. Usually, the ‘infringing ecosystem’ might be significantly broader than the service itself, which means that this ecosystem can be sustained by independent unrelated services such as pirate discussion fora, which are outside intermediaries’ control. Therefore, business models and technical set-up are only indirect ways of control over other people’s behaviour. Two services with identical governance structure can attract completely different use and communities. Similarly, two rightholders with identical access and pricing strategies might attract different infringement patterns. Both intermediaries and rightholders, in fact, try to influence users’ behaviour in order to reduce the extent of infringement. In both cases, they are not in the driver’s seat. To summarize: primary liability relates to the addressee’s own activity, while secondary liability relates mostly to the precautions the addressee has taken (or has failed to take) regarding the occurrence of someone else’s infringement. Even intentional inducement, where minds of an intermediary and its users are aligned in intentional wrong­doing, does not assume full control of other people’s behaviour.27 Wrongdoing users are not

24  Some of this debate is further developed in Martin Husovec, ‘Accountable Not Liable: Injunctions Against Intermediaries’, TILEC Discussion Paper No. 2016-012 (2016). 25  See also Assaf Hamdani, ‘Who’s Liable for Cyberwrongs’ (2002) 87 Cornell L. Rev. 901, 912. 26  See William Landes and Douglas Lichtman, ‘Indirect Liability for Copyright Infringement: An Economic Perspective’ (2003) 16 Harv. J. of L. and Tech. 395, 409, 405 (noting that intermediaries can only indirectly influence users). 27  This would amount to direct liability as those who assisted by not knowing would be considered instruments used to wrong, and thus innocent agents free of liability of their own. See, for common law, Paul Davies, Accessory Liability (Hart 2015) 68 ff.


mere instruments of intermediaries but autonomously acting individuals who might be encouraged or incentivized, but still exercise their own will. In a primary liability scenario, damages are imposed on the addressee to compensate the harm caused by its own activity, and injunctions aim at prohibiting him or her from engaging again in the infringing activity. In contrast, in a secondary liability scenario, damages are also imposed as a means for compensating the rightholder’s harm resulting from an infringement but are imposed on an addressee who did not carry out the infringement through its own activity. Damages are usually imposed on the ground that the addressee failed to take the precautions that he should have taken to avoid someone else’s infringement. Likewise, injunctions in this scenario are aimed at imposing the duty to take precautions to avoid or reduce the risk of users’ infringements (and thus other people’s activity). However, this is different in intentional inducement scenarios, where the problem is one’s own intention to stir up other people’s wrongdoing. Finally, in a scenario of mere accountability, damages are not imposed. Injunctions do not impose a duty to take precautions to avoid one’s own accessorship, but a duty to assist in preventing third parties’ infringements similar to regulatory duties known from areas such as anti-money-laundering legislation.
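The distinction between care and activity that underlies this section can also be stated compactly. What follows is a minimal sketch of the standard unilateral-accident model from the economic literature cited above (Landes and Posner; Shavell); the notation is introduced here purely for illustration and is not the chapter’s own. Let s ≥ 0 denote the activity level, x ≥ 0 the level of precaution per unit of activity, u(s) the benefit derived from the activity, h the harm per accident, and p(x) the probability of an accident per unit of activity, with p decreasing in x. Social welfare is

W(s, x) = u(s) − s(x + p(x)h).

Efficient care x* minimizes x + p(x)h, and efficient activity s* satisfies u′(s*) = x* + p(x*)h. A negligence rule that asks only whether the due-care standard x* was met induces efficient precaution but not efficient activity: an actor who satisfies the standard no longer bears the expected-harm term p(x*)h when deciding how much of the activity to undertake. In the terminology used above, liability standards discipline the quality of behaviour more directly than its quantity.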

3.  Typology of Consequences To demonstrate some of these differences in practice, in the following sections, I will briefly look at some of the consequences of applying each of the pillars to intermediaries. I try to demonstrate the difference by examining: (1) the scope of damages; (2) their aggregation; (3) the scope and goal of injunctions; and (4) their associated costs.

3.1  Scope of Damages Damages come with a different scope in each of the three pillars. To begin with, damages do not exist in the system of injunctions against innocent third parties (accountability pillar).28 With regards to infringers, the contributions of direct (primary) and indirect (secondary) infringers differ. While primary infringers exploit the protected subject matter themselves, secondary infringers usually only facilitate someone else’s unauthorized exploitation. In between the two are situations of intentional inducement, which act as an important trigger for other people’s unauthorized exploitation. From the rightholder’s perspective, there is a single harm caused by the availability of their protected subject matter. Against the primary infringers, the rightholders might often invoke two basic types of damage: (1) losses suffered by rightholders, and (2) illicit profit obtained by the primary 28  However, uncompensated compliance with injunctions leads to monetary costs.


Remedies First, Liability Second   97 infringer. Sometimes the damages are supplemented by licensing analogies, or statutory damages which are meant to approximate to the negative economic consequences. In the European system, damages are generally meant to put the rightholder in the pos­ ition it enjoyed prior to the infringement. This means that damages ought generally to compensate for the loss actually suffered.29 However, some other countries such as the United States also opt for statutory damages in order to emphasize deterrence.30 The actions of infringing actors and their intermediaries are not always easy to dissect. In the relationship of actions between an actor and an intermediary, however, the relevant wrongful contribution may often cover different parts of the resulting harm, or even be limited to one of them. For instance, imagine a negligent intermediary operating under a notice-based liability regime who becomes liable for not taking down a video after being notified. Such an intermediary acted lawfully during the entire period before the notification, however its actions became wrongful after a reasonable period of time to process the notification had elapsed. The wrongful action of an intermediary and its relevant harm then covers the period after failing to act, which takes place after notification, and not the pre-notification period. If the video was online for two years before being notified, but was taken down only two months after notification, the two harms (pre- and post-notification) can differ substantially. Similar situations might arise more often in instances of gross or minimal negligence, in which the intention to act in concert with the direct infringer ab initio is not present. In cases of intentional tortfeasorship, this might perhaps pose less significant problems, as the intention of both parties often covers identical harm from the outset.31 Therefore, an intermediary that intentionally induces others to do wrong can also be attributed to the harm that precedes any notification, since the notification does not trigger the wrongful behaviour and is thus irrelevant. To further complicate the problem of damages owed by secondary infringers, it is not always easy to calculate even clearly attributable harm. This is because any licence for secondary infringers is unlikely to exist and therefore to provide a benchmark price. Thus, a licensing analogy is not the most appropriate tool. The lost profits might be possible to estimate in general, but hardly practicable to prove specific fractions of time and contributions, especially if the cases are factually too complicated.32 Therefore, as Ohly suggests, targeting part of the illicit profit might be the most realistic way forward,33 29 See C-367/15 Stowarzyszenie Oławska Telewizja Kablowa v Stowarzyszenie Filmowców Polskich [2017] ECLI:EU:C:2017:36, para. 31. 30  See Pamela Samuelson and Tara Wheatland, ‘Statutory Damages in Copyright Law: A Remedy in Need of Reform’ (2009) 51 William and Mary L. Rev. 439. 31  Hence it is also less problematic to apply to it any type of damages aggregation (see later). 32  A related issue is a question of allocation of the burden of proof. It is not exceptional for tort law to presume the form of fault on the side of an alleged tortfeasor. Doing so for secondary infringers who act as accessories is another matter. 
After all, the very wrongfulness of their behaviour depends on their knowledge and thus, unlike with direct infringers, is not the usual situation but rather an exception to it. 33 See Ansgar Ohly, ‘Urheberrecht in der digitalen Welt—Brauchen wir neue Regelungen zum Urheberrecht und zu dessen Durchsetzung?’ [2014] NJW-Beil 50 (‘Gegen Intermediäre, die grob fahrlässig Verkehrspflichten verletzen, sollte ein Schadensersatzanspruch bestehen, der in der Höhe durch den aus der Vermittlung erzielten Gewinn begrenzt ist’, i.e. intermediaries that breach their duties of care with gross negligence should face a damages claim capped at the profit derived from the intermediation).


at least for negligent secondary infringers. Admittedly, estimating such profits is not without problems and equally lends itself to considerable judicial discretion.

3.2  Aggregation of Damages In relation to attribution of damages, it is crucial to determine whether or not the secondary infringer and primary infringer will be obliged to pay damages which are somehow aggregated. By damage aggregation, I mean a legal mechanism which connects individual debts and interlinks them at the performance stage; for example, an obligation to pay each other’s debt on request. An alternative to aggregation is when the liability of intermediaries and users is separated and is proportionate to their contributions. As Angelopoulos reports34 in her comparative survey of France, Germany, and the UK, existing European tort law usually favours solidary liability for any type of secondary liability. This means that secondary infringers often owe not only their own damages, but also the damages of ‘their’ primary infringers. Although they might recur against the primary infringers themselves, the practical relevance of this is questionable. Since intermediaries have deeper pockets and are easily identifiable, they end up paying the damages, with little prospect of recovering anything from their users who infringed. Does solidary liability therefore always make sense from a policy standpoint in the realm of IP infringements?35 Two key issues merit further consideration. First, online IP infringements are mass torts. Unlike in typical accessory liability scen­arios, litigation usually involves hundreds of cases of primary wrongdoing. This raises the stakes very high for every instance of such litigation. Secondly, the online environment is broadly anonymous. For accessories, this means that recovering any damages after paying them can be practically impossible not only due to the transactions costs involved in mass torts, but also due to the anonymity of the primary infringers. To think of alternatives, the level of aggregation of damages could depend on the sub-type of secondary liability. Especially in cases of intentional aiding and abetting,36 it is understandable that consequences are made more harsh by forcing the primary and secondary infringers into a solidary debtorship. However, the reason for doing the same for negligence-based secondary infringers is less clear, especially when one considers the mass-character and anonymity of primary wrongdoers. The question of aggregation cannot be disconnected from the question of calculation. If intermediaries are exposed to effective disgorgement of illicit profits for their own wrongful contributory acts, then imposing any additional compensatory burden stemming from solidary liability with the primary infringer’s actions transforms the entire liability scheme into damages with a punitive character. And as theory shows, punitive damages can have effects equivalent to prohibitory injunctions.37 This is why not only the calculation of the secondary 34  Angelopoulos (n. 1) 328 ff (asking the same). 35  ibid. (asking the same). 36  ibid. 488. 37 See Louis Kaplow and Steven Shavell, ‘Property Rules Versus Liability Rules: An Economic Analysis’ (1996) 109 Harv. L. Rev. 715; John Golden, ‘Injunctions as More (or Less) than “Off Switches”:


Remedies First, Liability Second   99 infringer’s damages is crucial. If secondary infringers are subject to potential aggregation of the damages caused by primary infringers, calculation of the damages of the primary infringers cannot be neglected either. Overlooking the issue of aggregation can drive suboptimal policy outcomes for the design of secondary liability doctrines. Davies, for instance, argues that it is precisely the issue of aggregation of damages for accessories that has held back the development of accessory tort liability in the UK. The worry is that a small proportion of their fault could expose them to the much larger damages of primary wrongdoers.38 And since they are more readily available, they might become easy targets for collecting the entire damages claims, with barely any recourse to the primary wrongdoers. He also points out that some jurisdictions, such as the United States, are already moving away from solidary liability towards a system of proportionate apportionment of liability.39 Perhaps a way forward would be to think about alternative ways of how to apportion damages, which are in line with the logic behind their initial calculation. In any event, designing any liability conditions without considering the effects of aggregation skips some very important questions.
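A deliberately simple hypothetical may help to fix the orders of magnitude at stake in the choice just described; the figures are invented for illustration only and are not drawn from the chapter or from any case. Suppose a platform is held liable as a negligence-based secondary infringer in respect of 1,000 user uploads, each causing harm of 100 to the rightholder, and that the platform’s own wrongful contribution is assessed at 10 per cent of that harm. Under proportionate apportionment, the platform owes 0.10 × 1,000 × 100 = 10,000, and the remaining 90,000 must be pursued against the (largely anonymous) uploaders. Under solidary liability, the rightholder may collect the full 1,000 × 100 = 100,000 from the platform, which is then left to seek recourse against uploaders it often cannot identify. The tenfold gap between the two figures is one way of seeing why systems that pre-load different consequences may struggle to agree on a common trigger for liability.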

3.3  Scope and Goals of Injunctions Injunctions have different scopes in all these pillars. Any injunction against primary (direct) infringers requests them to cease their own wrongful behaviour. However, any injunction directed against secondary (indirect) infringers asks them to stop other people’s behaviour by adjusting their own. Clearly, secondary infringers cannot usually completely stop other people’s behaviour, unless they shut down their non-editorial services completely. They cannot coerce others into choices they do not want to make.40 If this fact is not reflected in the scope of an injunction, the consequence is that the secondary infringer is prohibited from engaging in lawful acts. Even worse, there may be no second chance for these players, and any finding of liability will simply mean the

Patent-Infringement Injunctions’ Scope’ (2012) 90 Texas  L.  Rev. 1402; Paul Heald, ‘Permanent Injunctions as Punitive Damages in Patent Infringement Cases’, Illinois Public Law & Legal Theory Research Paper no. 10-38 (2011) (discussing the ‘analogy injunctions can bear to punitive damages’); Henry Smith, ‘Mind the Gap: The Indirect Relation Between Ends and Means in American Property Law’ (2009) 94 Cornell L. Rev. 966 (discussing ‘property rules’ as being ‘embodied in injunctions and punitive damages’). 38  See Davies (n. 27) 216. 39  See American Law Institute, Restatement of the Law: Torts—Apportionment of Liability ( 1999) s. 17. 40  In this regard, consider the Daimler case, in which the CJEU held that lack of control over removal of trade mark use means that the previously infringing behaviour stops being infringing with relinquished control over other party’s actions—see C‑179/15 Daimler AG v Együd Garage Gépjárműjavító és Értékesítő Kft [2016] ECLI:EU:C:2016:134, para. 41 (noting ‘[h]owever, only a third party who has direct or indirect control of the act constituting the use is effectively able to stop that use and therefore comply with that prohibition’).


100   Martin Husovec end of a service or a firm. Some courts are aware of this,41 but designing the wording of such injunctions is particularly difficult. Lemley and Weiser even argue that, for these ­reasons, we should favour damages over injunctions in such cases.42 Angelopoulos, on the other hand, argues the opposite; namely, that injunctions and not damages should govern situations of negligence-based secondary liability.43 An excellent example of how problematic it can be to issue an injunction is the wellknown Napster case in the United States, which continued after the decision of the Ninth Circuit44—upholding contributory and vicarious liability—before the District Court that had to decide on the form of the injunction.45 The District Court first enjoined Napster from copying, downloading, uploading, transmitting, or distributing copyrighted sound recordings, but the plaintiffs were obliged to provide titles of their works, the names of the artists, and one or more files on the Napster system, and a certification of ownership. Soon, however, the RIAA complained that Napster’s name-based filtering system was ineffective. The District Court ordered Napster to implement a better technical solution in order to also prevent misspelled and similar infringing files from appearing on the network. So, Napster purchased access to a database of song titles with common misspellings, and designed its own automated filter to search for those as well. It also licensed audio-recognition software based on fingerprinting technology from Relatable. Even though Napster’s filter success rate was claimed to be 99.4 per cent, it was insufficient for the District Court even if true. Judge Patel stated during one of the hearings that ‘[t]he standard is to get it down to zero, do you understand that?’46 As a consequence, Napster shut down. It is clear that the judges were trying to punish a previously negligent actor by requiring super-optimal care.47 However, as with damages, the question remains how to design injunctions that are not de facto terminal. Discussions would be less dramatic had we known that the defendants would simply need to learn their lesson, internalize the costs, and move forward. Compared to the previous two categories, innocent third parties are subject only to a special type of injunction. This can come in the form of orders to assist enforcement by blocking, filtering, degrading, or providing information. As the European system shows, the target of such orders is not full prohibition of the illegal behaviour of the addressee, but rather some form of positively defined assistance in enforcement. In other words, 41  See Oberster Gerichtshof [Austrian Supreme Court] (OGH) [2014] 4Ob140/14p (Aust.) (arguing that an accessory can only be prohibited from carrying out those that are wrongful). 42  See Mark Lemley and Philip Weiser, ‘Should Property or Liability Rules Govern Information’ (2007) 85 Texas L. Rev. 783 (‘the inability to tailor injunctive relief so that it protects only the underlying right rather than also enjoining noninfringing conduct provides a powerful basis for using a liability rule instead of a property rule’). 43  See Angelopoulos (n. 1) 491. 44 See A&M Records Inc. v Napster Inc., 239 F.3d 1004 (9th Cir. 2001) (US). 45 See A&M Records Inc. v Napster Inc., 2001 US Dist. LEXIS 2169 (ND Cal. 5 March 2001) (US). 46  Evan Hansen and Lisa Bowman, ‘Court: Napster filters must be foolproof ’ (Cnet News, 12 July 2001) . 
47  See Michael Einhorn, ‘Copyright, Prevention, and Rational Governance: File-Sharing and Napster’ (2001) 24 Colum. J. of L. & Arts 449, 459–60.


Remedies First, Liability Second   101 any order is mandatory rather than prohibitive. The CJEU preaches that prevention is the core goal of the measures. In Tommy Hilfiger, the Court stressed that ‘[n]or can the intermediary be required to exercise general and permanent oversight over its customers. By contrast, the intermediary may be forced to take measures which contribute to avoiding new infringements of the same nature by the same market-trader from taking place.’48 I have argued elsewhere49 that if we were to accept that an injunction against innocent third parties can restrict their full conduct, we would indirectly expand the scope of exclusive rights, since an injunction would put otherwise abstractly allowed acts under the reservation of a rightholder’s consent. In Europe, the CJEU, and at least some national courts, seem to be aware of this. The CJEU stresses that these measures are of different nature. The UK and Germany, despite different legal traditions, seem to converge on the consensus that these measures should prescribe positive actions, and not be worded or interpreted as prohibitory edicts.50

3.4  Costs of Injunctions A related question concerns the costs structure associated with the remedies. By costs, I mean both compliance and procedural costs. Compliance costs are costs incurred in carrying out orders granted by the courts. If those costs are incurred by infringers, there is little reason to question that they should be the ones bearing the costs of compliance. After all, they are to blame for the infringements, hence they should bear the consequences. The same can be argued for procedural costs, since the litigation is a consequence of their actions, too. It is, however, well known that some countries like the United States do not generally compensate procedural costs and that this drives prac­tical liability outcomes too. However, the situation is different for innocent third parties that are exposed to the third pillar of accountability for assistance, that of injunctions against innocent third parties. These parties are asked to assist in enforcement and not to respond for their own actions. It is therefore more understandable that policy should relieve them from pro­ ced­ural costs, and even compensate them for the inconvenience of compliance. In the absence of such a regime, the costs structures for infringers and innocent third parties become identical. Combined with the fact that no wrongfulness has to be proven on the side of the addressees, this creates a perverse incentive to also choose it as a cause of action in cases where the tortfeasors are known and could potentially be more appropriate candidates to correct the wrongdoing. Thus, in ‘remedial competition’ these measures 48 C-494/15 Tommy Hilfiger Licensing LLC and Others v Delta Center as [2016] ECLI:EU:C:2016:528, para. 34 (this builds on the earlier case law, in particular C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474). 49  See Husovec (n. 3) 107–8. 50  For the UK, see Chapter 20. For Germany, see analysis of case I ZR 64/17, para. 57 in Husovec (n. 12).


102   Martin Husovec can inflate the aggregate costs of the compliance of parties that had never themselves acted wrongfully. For these and other reasons, some countries distinguish the costs structures of actions against intermediaries as infringers and as innocent third parties. The courts in the UK and Canada recognized a need for compensation for compliance for the latter.51 On the other hand, the French Cour de cassation rejected the idea.52 The German legislator decided to intervene by immunizing some types of providers from out-of-court and pre-trial costs, although not offering any compensation for compliance.53

4.  Putting the Cart before the Horse As the previous debate has shown, there is a lot of ‘colour’ to any determination of the liability of intermediaries. Perhaps we should use these components to inform our design of conditions for such liability. A debate of this type might be less difficult than it might appear at first sight. In her book, Angelopoulos convincingly shows that intent-based accessory liability is resolved well through domestic laws, and offers the best starting point for European (and perhaps global) convergence.54 At the same time, her work shows that it is much harder to achieve convergence on standards of negligence-based secondary liability. A reason for this may have less to do with the factors triggering such liability than with the consequences that each country traditionally attaches to them. Two systems each assigning a different gravity of consequences are less likely to agree on when to trigger liability even if they have the same policy outlooks and goals. An agreement is slowed down by the fact that where one scholar anticipates damages subject to solidary liability with primary infringers, the other thinks of stand-alone compensatory damages subject to proportionate apportionment. To provide a metaphor, agreeing on when policemen in two countries should use their guns is hard when those guns are pre-loaded with different projectiles. Hitting a man with a plastic or bean-bag bullet is not the same as using a dum-dum bullet. Similarly, a firm hit with two different sanctions will internalize them differently. Discussing when without clarifying what then seems very confusing and, at the very least, incomplete. To facilitate convergence, we should, as a first-order policy question, understand the

51  In the UK, this was the outcome of the Cartier case—Cartier International AG and others (Respondents) v British Telecommunications Plc and another (Appellants) [2018] UKSC 28 (UK). In Canada, a similar principle was postulated in the Rogers case—see Rogers Communications Inc. v Voltage Pictures, LLC, 2018 SCC 38 (Can.). Both decisions relate to the so-called Norwich Pharmacal jurisdiction. 52  The Cour de cassation rejected compensation of such costs. See Cour de cassation, Civil Chamber 1 [2017] Case no. 16-17.217 16-18.298 16-18.348 16-18.595 (Fra.). 53  For a description of the current German approach, see Husovec (n. 12). 54  See Angelopoulos (n. 1).


ensuing consequences. Only as the next policy question can we discuss the trigger-factors of such individual consequences. In the comparative analysis, clearly the second pillar of liability matters the most. However, intent-based and negligence-based situations might present different policy challenges. We may want to distinguish intent-based accessories as they are the most implicated in the wrongs of others and are punished virtually everywhere. What kind of damages should they be asked to pay? Do we aggregate them with those of primary wrongdoers? How do we design injunctions against them? Can we agree on a common set of principles? In parallel, we might unpack the situation of negligent secondary infringers. Are they all the same? What kind of damages should they be asked to pay? Do we really want to aggregate their own obligation to pay damages with that of primary wrongdoers? If not, what alternatives can we work out? If yes, under what circumstances? And how do we design injunctions against them? Is it even possible to come up with injunctions that do not mean the shut-down of a service? And, lastly, if countries also developed the system of injunctions against innocent third parties, how should they differ and interact with the previous two? Who should bear the costs of what? And should burdens of proof differ across all three pillars? These all seem like very pressing questions to answer before we can design optimal conditions triggering anything.

5. Conclusions Adopting a consequences-based approach might help to achieve convergence of different models. To utilize this approach, the comparative work first needs to recognize the basic vocabulary of intermediary liability for user-generated content incorporated in three pillars of liability. Not all countries will rely on the same mechanisms, or offer them in the same form; however, a common vocabulary will allow us to more clearly communicate and discuss their contents and the policy-goals behind them. I do not think we are far from accepting such a vocabulary. Secondly, the legal analysis needs to better pair legal consequences (e.g. solidary debtorship under disgorgement of illicit profits) with their ability to tackle specific challenges within each liability mechanism (e.g. compensation for inducement, design of injunction for negligence). This will allow us to better see the regulatory toolkit, and thus respond more sensibly to any questions in the liability design. Only as a third step should we try to connect these consequence–mechanism pairs with the realities of online services and requirements triggering them. This way, the what and when questions are interconnected, allowing the latter to internalize the former’s findings. And, hopefully, such an approach could become more conducive to establishing a comparative consensus.


Chapter 5

Empirical Approaches to Intermediary Liability

Kristofer Erickson and Martin Kretschmer

Legal theory has failed to offer a convincing framework for the analysis of the responsibilities of online intermediaries. Our co-contributors to this Handbook have identified a wide range of contested issues, from due process to costs to extraterritorial matters. This chapter considers what empirical evidence may contribute to these debates. What do we need to know in order to frame the liability of intermediaries and, a fortiori, what does the relationship between theory and empirics imply for the wider issue of platform regulation? While the liability of online intermediaries first surfaced as a technical issue with the emergence of internet service providers (ISPs) during the 1990s, recently the focus has shifted to the dominance of a handful of global internet giants that structure everything we do online. Platform regulation has become the central policy focus. There is an awareness of the increasing importance of communication between users in constituting a digital public sphere, and simultaneously greater pressure to control online harm (be it relating to child protection, security, or fake news). The allocation of liability in this context becomes a key policy tool. But we know very little about the effects of this allocation for the different stakeholders. This is in part due to secrecy about the rules under which platforms operate internally. As online users, we are governed by processes and algorithms that are hidden from us. If the rules are the result of algorithmic or artificial intelligence (AI) decision-making, even service pro­ viders themselves may not fully understand why decisions are made. Still, platforms see their rules of decision-making as key to their competitive advantage as firms. We live in what has been catchily labelled a ‘black box society’.1 1  See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (HUP 2015). See also Chapter 35.

© Kristofer Erickson and Martin Kretschmer 2020.


Empirical Approaches to Intermediary Liability   105 So how can we as researchers open the box to let in some empirical air? There is a global trend towards increased reporting requirements for platforms. Most prom­in­ent­ly, the German platform law of 2017—NetzDG—requires all for-profit social media services with at least two million registered users in Germany to remove obviously il­legal content within twenty-four hours of notification.2 There are obligations to report every six months and high sanctions for failures to comply (up to €5 million fines that can be multiplied by ten). During the first six months of the law’s operation (January–June 2018), Facebook received a total of 1,704 takedown requests and removed 362 posts (21.2 per cent), Google (YouTube) received 214,827 takedown requests and removed 58,297 posts (27.1 per cent), and Twitter received 264,818 requests and removed 28,645 (10.8 per cent). These are much lower numbers than were expected.3 There are also a number of antitrust inquiries that have extracted sensitive data from platforms, in particular the European Commission’s investigations of Google under EU competition law.4 The Australian Competition and Consumer Commission’s digital platforms inquiry is considering the establishment of a new platform regulator with wide-ranging information-gathering and investigative powers. The regulator would monitor platforms with revenues from digital advertising in Australia of more than AU$100 million. These firms would be subject to regular reporting requirements with the aim of controlling whether they are engaging in discriminatory conduct, for ex­ample by predatory acquisition of potential competitors or advertising practices that favour their own vertically integrated businesses.5 So in the future, there is likely to be much greater public scrutiny and knowledgegathering about platforms’ practices. The paradox is that we will know more only by the time new regulatory regimes, and a new framework for online intermediary liability, have been selected. We appear to be in the midst of a paradigm shift in intermediary 2  See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.), s. 3(2)(2). The obligation to remove and block content relates to a specific list of criminal offences (not including intellectual property right infringements), and there is a carve-out, exempting platforms that support communications between individuals (this is designed to exempt professional networks, sales platforms, games, and messaging services). 3  See William Echikson and Olivia Knodt, ‘Germany’s NetzDG: A Key Test for Combatting Online Hate’, Centre for European Policy Studies (CEPS) Report no. 2018/09 (2018) . 4  See European Commission, ‘Press Release: Commission fines Google €1.49 billion for abusive practices in online advertising’ (Case 40411 Google Search (AdSense)) (20 March 2019) ; European Commission, ‘Press Release: Commission fines Google €4.34 billion for illegal practices regarding Android mobile devices to strengthen dominance of Google’s search engine’ (Case 40099 Google Android) (18 July 2018) ; European Commission, ‘Press Release: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service’ (Case 39740 Google Search (Shopping)) (27 June 2017) . 5  See Australian Competition and Consumer Commission (ACCC), Digital platforms inquiry (intermediary report, published 10 December 2018) . 
An independent regulator has also been proposed in the UK to oversee a new statutory ‘duty of care’ for platforms. See Department for Digital, Culture, Media & Sport and Home Department, Online Harm (White Paper, Cp 59, 2019) .


106   Kristofer Erickson and Martin Kretschmer li­abil­ity, moving from an obligation to act once knowledge is obtained to an obligation to prevent harmful content appearing. This imminent shift towards filtering, even general monitoring,6 is exemplified by the controversial Article 17 (formerly Art. 13) of the 2019 EU Directive on Copyright in the Digital Single Market that makes certain platforms (‘online content sharing services’) directly liable under copyright law for the content uploaded by their users.7 This chapter evaluates what we already know after almost two decades of operation of one liability regime: the so-called safe harbour introduced in the United States by the Digital Millennium Copyright Act (DMCA), and in a related form in EU Member States with the e-Commerce Directive.8 Immunity for ‘Online Service Providers’ that act ex­ped­itious­ly to remove infringing material was first introduced in the United States under section 512 of the US Copyright Act (as amended by the DMCA 1998). Section 512 specifies a formal procedure under which service providers need to respond to requests from copyright owners to remove material. Rightholders who wish to have content removed must provide information ‘reasonably sufficient to permit the service provider to locate the material’ (such as a URL) and warrant that the notifying party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed. The practice is known as ‘notice and takedown’. Importantly, ‘counter notice’ procedures are also specified under which alleged infringers are notified that material has been removed and can request reinstatement. Under the EU Directive on Electronic Commerce (2000/31/EC), hosts of content uploaded by users will be liable only on obtaining knowledge of the content and its il­legality. The safe harbour of the e-Commerce Directive applies to all kinds of illegal activity or information, not only copyright materials. But unlike the DMCA, the e-Commerce Directive does not regulate the procedure for receiving the necessary knowledge. This is left to the Member States. The regime is sometimes characterized as ‘notice and action’.9 Even though the notice and takedown regime established under the DMCA is narrow (applying to copyright only) and limited to one jurisdiction, it has become the dominant paradigm for organizing liability of online intermediaries. This is for at least two reasons: (1) through Google’s practices, notice and takedown has become a global standard even in jurisdictions that do not have safe harbour laws;10 (2) copyright liability affects a far 6 Expressly prohibited as an obligation on service providers by Art. 15 of the EU e-Commerce Directive and the case law of the Court of Justice of the European Union (CJEU). See also Chapter 28. 7  See Directive 2019/790/EU of the European Parliament and the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92, Art. 17. 8  See Digital Millennium Copyright Act, Pub. L. no. 105-304, 112 Stat. 2860; Directive 2000/31/EC of the European Parliament and the Council of 17 July 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1. 9  cf. Chapter 27. 10  According to Google’s 2019 Transparency Report, in total more than four billion copyright takedown requests have been received. 
They are processed under DMCA formalities, regardless of whether the country in which the request was filed prescribed those formalities or had any safe harbour laws: ‘It is our policy to respond to clear and specific notices of alleged copyright infringement. The form of


Empirical Approaches to Intermediary Liability   107 wider range of user practices than, for example, defamation, obscenity, and other forms of illegal use. So, the lessons from the operation of the notice and takedown regime may have a wider application, addressing questions of transparency, due process, cost allocation, and freedom of expression. We now proceed to review the body of empirical studies on copyright intermediary liability during the twenty-year period from 1998 (the year that the DMCA was passed) to 2018. We use a snowball sampling method enhanced by a seed sample of empirical papers drawn from the Copyright Evidence Wiki, an open-access repository of findings related to copyright’s effects.11 Based on the initial sample, we searched forwards and backwards in time among those articles to identify further published research. The sample is focused exclusively on work deemed empirical (containing new data gathered or constructed by the authors). It includes articles, books, reports, and published impact assessments. Based on our survey of this body of research, we identify five key sub-fields of em­pir­ ic­al inquiry pursued so far. These relate to: the volume of takedown requests, the ac­cur­ acy of notices, the potential for over-enforcement or abuse, transparency of the takedown process, and the costs of enforcement borne by different parties (see Table 5.1). Each of

Table 5.1  Thematic foci of empirical research on notice and takedown and key studies

Policy issue | Key studies
Volume of use of notice and takedown | Urban and Quilter (2006); Seng (2014); Karaganis and Urban (2015); Cotropia and Gibson (2016); Strzelecki (2019)
Accuracy of notices | Urban and Quilter (2006); Seng (2015); Bar-Ziv and Elkin-Koren (2017); Urban, Karaganis, and Schofield (2017)
Over-enforcement/abuse | Ahlert and others (2004); Nas (2004); Urban, Karaganis, and Schofield (2017); Bar-Ziv and Elkin-Koren (2017); Erickson and Kretschmer (2018); Jacques and others (2018)
Transparency/due process | Seng (2014); Perel and Elkin-Koren (2017); Fiala and Husovec (2018)
Balancing innovation/costs | Heald (2014); Schofield and Urban (2016); Urban, Karaganis and Schofield (2017)

notice we specify in our web form is consistent with the Digital Millennium Copyright Act (DMCA) and provides a simple and efficient mechanism for copyright owners from countries around the world. To initiate the process to remove content from Search results, a copyright owner who believes a URL points to infringing content sends us a take-down notice for that allegedly infringing material. When we receive a valid take-down notice, our teams carefully review it for completeness and check for other problems. If the notice is complete and we find no other issues, we remove the URL from Search results.’ See Transparency Report Google . 11 See Copyright Evidence Wiki, Empirical Evidence for Copyright Policy, CREATe Centre, University of Glasgow .


these areas is discussed in further detail in the remainder of the chapter. We conclude by identifying some of the gaps and limitations in this existing body of scholarship on intermediary liability for copyright, and offer some recommendations for future research.

1.  Volume of Notices In their 2006 study of notice and takedown, Urban and Quilter found that Google had received 734 notices between March 2002 (when the Chilling Effects12 database started collecting Google reports) and August 2005, the cut-off date of their study.13 The majority of takedown requests in their sample related to search engine links, but the overall quantity was relatively small. Since then, a number of follow-up studies have noted an explosion in the quantity of DMCA section 512 notices sent by rightholders to online service providers (OSPs), including Google. This sharp increase in volume did not occur immediately after the introduction of the DMCA and e-Commerce Directive; rather, this uptake took nearly a decade and accelerated from 2010 onwards.14 There are a range of explanations for this increase, ranging from rightholder frustration following rejection of the Stop Online Piracy Act (SOPA),15 to new technical affordances introduced by Google to receive and process requests. A general observation from this research is that notice and takedown processes can, at the very least, be considered successful in terms of uptake and use by rightholders and (OSPs), with the safe harbour it provides to internet intermediaries viewed as important for commercial innovation.16 However, the volume of notices may also produce challenges for OSPs in terms of cost of compliance, with implications for other thematic areas such as due process. Several empirical studies have examined these issues in significant detail.17 A 2014 study by Seng used data obtained from the Chilling Effects repository and from Google’s Transparency Report, on takedown notices received by Google, and other services which report to the database. The dataset consisted of 501,286 notices and some 12  The Chilling Effects (now Lumen) database was founded by Wendy Seltzer in 2001 and is currently maintained by the Berkman Klein Center for Internet and Society at Harvard University. The project website collects and enables analysis of takedown requests received by online intermediaries. 13  See Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act’ (2006) 22 Santa Clara Computer and High Tech. L.J. 621. 14  See Daniel Seng, ‘The state of the discordant union: An empirical analysis of DMCA takedown notices’ (2014) 18 Va. J. of L. & Tech. 369. 15  See Daniel Seng, ‘“Who Watches the Watchmen?” An Empirical Analysis of Errors in DMCA Takedown Notices’, SSRN Research Paper no. 25632023 (2015) 3 . 16  See Jennifer Urban, Joe Karaganis, and Brianna Schofield, ‘Notice and Takedown: Online Service Provider and Rightsholder Accounts of Everyday Practice’ (2017) 64 J. Copyright Soc. 371. 17  Recently see Artur Strzelecki, ‘Website removal from search engines due to copyright violation’ (2019) 71(1) Aslib J. of Information Management 54–71.


Empirical Approaches to Intermediary Liability   109 56,991,045 individual takedown requests.18 The dataset covered the period from 2001 to 2012 including the period of significant increase in volume. Seng notes that the increase not only affected Google, but other OSPs as well, lending support to his view that legislation was key to explaining the increase. For example, Twitter submitted less than 500 notices to the repository in 2010, but reported more than 4,000 requests the following year in 2011, and more than 6,000 in 2012.19 Seng found that a large majority of notices were sent by industry associations, collecting societies, and third party enforcement agencies. The British Phonographic Industry (BPI) was the top issuer of notices in the study, with 191,790 notices sent, or 38 per cent of the total sample. Agents such as Degban and WebSheriff also made up a significant portion of the total volume of notices sent.20 The music industry accounted for the lar­gest share of total notices sent, by sector, with nearly 60 per cent of the total notices. Seng observed an increasing concentration among the top issuers of notices over time. The top 50 issuers accounted for 23.9 per cent of notices in 2010, but reached 74.7 per cent of notices in the dataset for the year 2012.21 In other words, a smaller number of organizations, such as BPI and RIAA, generated the bulk of notices, skewing the average. Most providers identified in the dataset issued only one notice. Industry organizations and enforcement agents also crammed more claims and requests (sometimes numbering in the thousands) into each notice, while individual claimants tended to include fewer claims and requests together in the same notice. Other studies have examined the volume of takedown notices received by non-commercial institutions such as universities and academic libraries. Both groups receive fewer takedown requests than commercial internet companies, but empirical studies show changing practices over time and concern about possible volume increases in future. Cotropia and Gibson surveyed from a population of 1,377 four-year colleges and universities in the United States.22 They sent the survey to 680 institutions that had a DMCA agent registered with the US Copyright Office. The presence of registered DMCA agents was skewed towards large, higher ranked institutions. Among the 532 institutions that had working contact information and received the survey, the authors achieved a response rate of approximately 15 per cent (or 80 responses).23 The average number of takedown requests received per institution was 200 (std deviation 329.35). The majority of institutions surveyed (72.5 per cent) received between 0 and 270 notices, with a smaller number of schools receiving larger quantities of notices. Some 12.5 per cent of responding institutions spent more than 500 hours per year dealing with takedown requests, but most institutions (62 per cent) reported spending 50 hours or less.24 The majority of DMCA notices received (67.6 per cent) related to cases in which students used institutions’ 18  See Seng (n. 14) 382. 19  ibid. 444. 20  ibid. 448. 21  ibid. 393. 22  See Christopher Cotropia and James Gibson, ‘Commentary to the US Copyright Office Regarding the Section 512 Study: Higher Education and the DMCA Safe Harbors’, SSRN Research Paper no. 2846107 (2016) . 23  ibid. 4. 24  ibid. 12.


networks to download copyright infringing material onto personal computers (falling under the transitory communications safe harbour under section 512(a)).25 Universities were found to be using a number of technical measures to deter copyright infringing behaviour by users, in a similar manner to the DMCA-plus OSPs discussed by Urban and co-authors.26 Techniques used by universities included requiring individual network logins, port banning/firewalls, packet shaping, bandwidth throttling, and monitoring of suspicious network traffic. Some institutions reported that these techniques had reduced the number of DMCA notices received.27 The findings were limited by the low response rate and the potential for response bias this introduced. Since universities may be wary of reputational damage or increased liability for infringing behaviour, there are disincentives to disclose the number of DMCA requests received or to discuss copyright infringing behaviours in general. However, the data offer original and unique insight into the sophistication of legal representatives within higher education with regard to DMCA provisions, and the specific techniques institutions have developed to confront these challenges. 25  ibid. 13. 26  See Urban, Karaganis, and Schofield (n. 16) 382. 27  See Cotropia and Gibson (n. 22) 14.
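
Concentration of the kind Seng reports for the top fifty issuers (23.9 per cent of notices in 2010, rising to 74.7 per cent in 2012) is simple to measure on any notice dataset. The following is an illustrative sketch only: it assumes a hypothetical export with one row per notice and columns named sender and year, which do not correspond to any particular repository's schema.

```python
import pandas as pd

# Hypothetical export: one row per takedown notice, with the sender
# organization and the year the notice was filed.
notices = pd.read_csv("notices.csv")  # assumed columns: sender, year

def top_issuer_share(df, year, top_n=50):
    """Share of a year's notices sent by the top_n most prolific senders."""
    counts = df.loc[df["year"] == year, "sender"].value_counts()
    if counts.empty:
        return 0.0
    return counts.head(top_n).sum() / counts.sum()

for year in sorted(notices["year"].unique()):
    share = top_issuer_share(notices, year)
    print(f"{year}: top 50 issuers account for {share:.1%} of notices")
```

The same grouping logic also reproduces the related observation that most senders in such datasets issue only a single notice, while a handful of organizations account for the bulk of the volume.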

2.  Accuracy of Notices One important focus for empirical investigation has been the accuracy of notices received by service providers. Here, research is concerned with the incentives structure for various parties in the notice and takedown regime, and its effectiveness in identifying and removing actually infringing content. Since the volume of takedown requests being sent has increased substantially, the problem of accuracy is potentially amplified. Directly studying accuracy of notices is challenging without access to the notices themselves and, ideally, the targeted work. The Chilling Effects (now Lumen) database has been used extensively as a source of data for empirical research on accuracy.28 One finding consistent across studies is that a small number of issuers are responsible for a disproportionate amount of takedown requests received by platforms, with consequences for accuracy of notices. Bar-Ziv and Elkin-Koren found that 65 per cent of their sample of takedown notices were sent by a single entity,29 while a study by Urban, Karaganis, and Schofield identified a single individual responsible for 52 per cent of the takedown notices in their separately collected sample.30 The concentration of issuers appears to be related to both the ease of filing using automated web tools, as well as the emergence of third party services which aggregate and work on behalf of rightholders. In fact, only 1 per cent of the copyright notices analysed by Bar-Ziv and Elkin-Koren were filed by private individuals, while 82 per cent of the requests were filed by third party 25  ibid. 13. 26  See Urban, Karaganis, and Schofield (n. 16) 382. 27  See Cotropia and Gibson (n. 22) 14. 28  See Lumen (n. 12). 29  See Sharon Bar-Ziv and Niva Elkin-Koren, ‘Behind the Scenes of Online Copyright Enforcement: Empirical Evidence on Notice & Takedown’ (2018) 50 Connecticut L. Rev. 1. 30  See Urban, Karaganis, and Schofield (n. 16) 99.


Empirical Approaches to Intermediary Liability   111 enforcement services.31 The authors note this has a potentially negative side effect: increasing the number of steps between actual rightholders and recipients of takedown notices may contribute to a greater number of inaccuracies observed in bulk requests sent by third party agencies. As Seng put it, ‘[i]f the price of each arrow is low or min­ imal, to improve his chances, the reporter will fire off as many arrows as he could to hit a target, regardless of the accuracy or precision.’32 In his 2015 study of notice accuracy, Seng observed some improvements in accuracy rates since earlier studies, but worrying issues related to the substantive content of claims remained. He analysed 501,286 notices for the presence or absence of formalities such as rightholder signature, statement of good faith, statement of accuracy, and statement of authorization to act on behalf of a copyright owner. Seng found an extremely low quantity of errors among the notices analysed, decreasing over time: the error rate for formalities measured in 2012 was less than 0.1 per cent.33 This can be explained by Google’s adoption of a structured web form to intake notices, which requires senders to complete fields and provides instructions on how to complete them before sending. Seng then analysed substantive errors—those related to the nature of the copyright claim itself. In order to do this, he collected notices where the name of the copyright owner had not been redacted in the Chilling Effects dataset, to evaluate whether the claim was legitimate. He identified three specific cases in which an employee of a company (rather than the copyright owner herself) was erroneously identified in the request. Since individual notices can contain thousands of requests, these three systematic errors amounted to a total of 380,379 takedown requests, of which Google complied with over 90 per cent.34 As Seng elaborates, ‘what is alarming is the magnitude, frequency and systematic nature of these errors, which remained undetected and uncorrected for months on end. While we may excuse these errors on the basis that they arose from programs that are misconfigured with wrong information, automated systems propagated these errors across hundreds and thousands of takedown requests.’35 Seng reports that his findings represent a lower bound in the error rate, since he was unable to test accuracy in other ways, such as by observing removed content directly. Direct observation is rendered difficult by the swiftness with which requests are processed and content taken down. Overall, it appears on one hand that accuracy has been improved via automated systems, such as Google’s preferential ‘Trusted Copyright Removal Program’, by providing structured web forms, clear instructions, and negative consequences (revoked membership in the programme) for submitting inaccurate notices. On the other hand, ‘Robonotices’36 which may be generated in large numbers by third party enforcement agencies not closely tied to rightholders, can introduce and amplify errors affecting significant quantities of works. 31  See Bar-Ziv and Elkin-Koren (n. 29) 26. 32  Seng (n. 15) 48. 33  ibid. 19. 34  ibid. 36 35  ibid. 37. 36  Joe Karaganis and Jennifer Urban, ‘The Rise of the Robo Notice’ (2015) 58 Comm. ACM 28, 28–30.
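
Seng's distinction between formal and substantive errors can be illustrated with a small validation routine of the kind a structured intake form effectively enforces. This is a sketch under assumed field names, not any provider's actual form; the fields simply track the section 512(c)(3) formalities that the study coded for.

```python
# Formalities a s. 512(c)(3)-style notice is expected to contain.
REQUIRED_FIELDS = [
    "signature",             # physical or electronic signature
    "work_identified",       # identification of the copyrighted work
    "material_identified",   # identification of the allegedly infringing material
    "contact_information",   # how the OSP can reach the complainant
    "good_faith_statement",  # statement of good-faith belief
    "accuracy_statement",    # statement that the notice is accurate
    "authorisation",         # authority to act for the copyright owner
]

def formality_errors(notice):
    """Return the formalities missing from a notice (empty list = compliant)."""
    return [field for field in REQUIRED_FIELDS if not notice.get(field)]

# A structured web form simply refuses submission while this list is non-empty,
# which is consistent with the collapse in formal error rates once such forms
# are adopted. Substantive errors (a wrongly named owner, a mistaken claim)
# pass this check untouched, which is why they persist.
example = {"signature": "A. Sender", "good_faith_statement": True}
print(formality_errors(example))
```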



3.  Over-Enforcement and Abuse Over-enforcement occurs when non-infringing material is removed, for example because the content has been erroneously identified (e.g. a false positive), or because either the sender or receiver of a notice has not sufficiently considered exceptions such as fair use. Deporter and Walker argue that over-enforcement can be caused by a range of factors, including uncertainties in copyright law, the automation of enforcement, the frequent presence of both infringing and non-infringing uses on the same platform, and the high legal costs of defending one’s right to use copyright material.37 In the first major qualitative study of notice and takedown, Urban, Karaganis, and Schofield assessed the potential for over-enforcement by interviewing twenty-nine OSPs and six senders of high volumes of takedown notices. Respondents included ‘video and music hosting services, search providers, file storage services, e-commerce sites, web hosts, connectivity services, and other user-generated content sites’.38 Data were anonymized before publication. Overall, the authors found that notice and takedown procedures were taken seriously by both OSPs and rightholders. OSP respondents stated that the safe harbour provisions in the DMCA were central to their ability to operate, providing a stable framework for managing liability. On the other hand, some OSPs claimed that the fear of liability might lead them to over-enforce, as they struggled to decide what to do about inaccurate or invalid takedown requests. The authors differentiated OSPs into groups depending on the nature of the takedown notices they received. The group termed ‘DMCA Classic’ received individual takedown requests from single rightholders and dealt with them using human review.39 However, some service providers received massive amounts of takedown notices (more than 10,000 per year) often sent by electronic means, which the authors deemed ‘DMCA Auto’.40 These OSPs developed computerized systems for dealing with the large volume of requests, and made these tools available to rightholders. These OSPs were not always able to engage in human review of bulk requests. A final group, deemed ‘DMCA Plus’ took further steps to limit the upload of content that might trigger notices, at the same time as it offered more direct automated tools to rightholders to manage the detection and removal of potentially infringing content.41 Like the second group, these providers were unable to conduct human review of all requests, leading to issues of transparency, accuracy, and over-enforcement. In addition to the concerns raised by Urban, Karaganis, and Schofield, another area of focus for research has been protection of legitimate reuse of material, such as provided 37  Ben Depoorter and Robert Kirk Walker, ‘Copyright False Positives’ (2013) 89 Notre Dame L. Rev. 319. 38  Urban, Karaganis, and Schofield (n. 16) 376. 39  ibid. 379. 40  ibid. 382. 41  For additional discussion of ‘DMCA Plus’ behaviours, see Annemarie Bridy, ‘Copyright’s Digital Deputies: DMCA-plus Enforcement by Internet Intermediaries’ in John Rothchild (ed.), Research Handbook on Electronic Commerce Law (Edward Elgar 2016).


Empirical Approaches to Intermediary Liability   113 by fair use in the United States, and by specific copyright exceptions in the UK and Europe. Due to the high volume of requests observed by researchers, and the seemingly strong incentives for platforms to over-comply with requests, there is potential that limi­ta­tions and exceptions to copyright could be eroded in this system. In one study of copyright exceptions, Erickson and Kretschmer longitudinally examined a dataset of user-generated parody videos hosted on YouTube, recording at yearly intervals whether the videos had been taken down.42 The research was conducted in collaboration with Jacques and others, who observed additional time periods. In all, the research covered a dataset of 1,839 videos between 2011 and 2016. The overall takedown rate across the whole four-year period was 40.8 per cent of videos, with 32.9 per cent of all takedowns attributable to copyright requests.43 Parodies varied in terms of uploader skill, parodic intent, and the nature of material borrowed to make the parody. Underlying musical works varied in terms of genre, territory, size of publisher, and commercial success. A hazards model (a statistical technique relating survival over time to one or more covariates) was used to analyse the effect of those variables on the likelihood that a given parody would be taken down. The findings showed that parodist technical skill and production values were significant in reducing the odds of a takedown. More popular parodies with more views also had lower odds of being removed.44 Rightholder behaviour varied significantly by music genre: rock music rightholders were significantly more tolerant of parodies than hip-hop and pop music rightholders.45 The significance of borrowed sound recordings in predicting a takedown, while controlling for other aspects, suggests that algorithmic detection techniques such as Content ID are shaping rightholder behaviour. The availability of a parody exception to copyright in the UK did not appear to deter rightholders from issuing takedown requests. Artists originating from the United States were significantly more tolerant of parodies than their UK counterparts. In a related study, Jacques and others found that the diversity of content is potentially harmed by automated takedown. Using the same dataset of 1,839 parodies, the authors checked to see whether music video parodies had been manually removed or blocked via the Content ID automated system. Differences in the way that YouTube informs would-be viewers of these missing videos allowed the researchers to distinguish between regular takedowns and Content ID blocking.46 The researchers attributed 32.1 per cent of the takedowns measured in 2016 to algorithmic takedown and 6.4 per cent to manual takedown.47 The authors then examined the effect of takedown on cultural diversity. They differentiated between ‘supplied’ diversity, which they define as diversity in the array of messages 42 See Kris Erickson and Martin Kretschmer, ‘This Video is Unavailable: Analyzing Copyright Takedown of User-Generated Content on YouTube’ (2018) 9 JIPITEC 75. 43  ibid. 83. 44  ibid. 86. 45  ibid. 87. 46  See Jacques and others, ‘The Impact on Cultural Diversity of Automated Anti-Piracy Systems as Copyright Enforcement Mechanisms: An Empirical Study of YouTube’s Content ID Digital Fingerprinting Technology’ (2018) 15(2) SCRIPTed 277, 291. 47  ibid. 298.


114   Kristofer Erickson and Martin Kretschmer that could be watched on the platform, and ‘consumed’ diversity, which is measured in terms of what viewers actually choose to watch.48 They used the Simpson Index of diversity to calculate differences in the ‘effective number of parties’; that is, the concentration of availability of certain expressions, before and after takedowns are detected. The authors find that within the sample there is already a strong difference between ‘supplied’ and ‘consumed’ diversity, related to a skew in the preference by viewers for certain pop songs and specific parodies. This finding mirrors other research on the concentration (‘bottlenecking’) of content consumption online.49 Interestingly, the authors find that the application of automated takedown also reduces the effective number of parties in the sample, resulting in an overall loss of diversity of content. Their index of consumed diversity contracted from twenty-seven in 2013 to twenty in 2016 after takedowns had occurred.50 However, the effect of automated content blocking on diversity is overwhelmed by the built-in lack of diversity in demand (which occurs due to algorithmic sorting and the limited number of search results offered by YouTube). As the authors put it, ‘if [Content ID] removes the most popular parody, would the next most popular simply replace it, almost . . . as a forest fire which, in taking out the old trees, gives breathing space for new growth? If that is the case, then while the take-down . . . may be a personal tragedy for the creator of the most popular parody, this will have little to no effect on diversity or possibly even welfare.’51 Beyond studies of unanticipated effects on freedom of expression, another troubling possibility is that malicious actors could use the copyright claims to remove content they find politically disagreeable, or for other arbitrary reasons unrelated to copyright. In one pair of studies, researchers tested the level of scrutiny that ISPs gave to takedown notices before acting. These studies each created simulated web pages containing noninfringing content, and then sent notices to ISPs asking for it to be removed.52 In both cases, the authors used work clearly in the public domain (published pre-twentieth century), minimizing doubt that the content could be infringing. The study by Ahlert and others used portions of a chapter of John Stuart Mill’s On Liberty, while the experiment by Sjoera Nas used a work by Eduard Douwes Dekker dating from 1871. The researchers sent takedown notices, purporting to originate from the non-existent ‘John Stuart Mill Heritage Foundation’ and the ‘E.D. Dekkers Society’, from anonymous email addresses. In the two cases tested by Ahlert and others (one UK and one US ISP), the UK web host acted immediately to block the test web page, while the US ISP appeared willing to take action, but asked the researchers for further information before removing the content. Specifically, the US ISP asked the researchers to provide a statement ‘that the 48  ibid. 292. 49  See Matthew Hindman, The Myth of Digital Democracy (Princeton U. Press 2008). 50  See Jacques and others (n. 46) 299. 51  ibid. 303. 52 See Christian Ahlert, Chris Marsden, and Chester Yung, ‘How “Liberty” Disappeared from Cyberspace: the Mystery Shopper Tests Internet Content Self-regulation’ (2004) ; Sjoera Nas, ‘The Multatuli Project: ISP Notice and Takedown’ (2004) .


complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner' as well as a statement that the information contained in the takedown notice was accurate. The researchers decided not to pursue the experiment at that point. In the Dutch example, Nas carried out a similar experiment using web pages created on ten Dutch ISPs. Of those, seven removed or blocked the web page containing public domain material by Dekker, sometimes before notifying the owner of the website. A further two ISPs ignored the requests, while one ISP refused to take the material down because it correctly determined it to be in the public domain. These two experimental studies, while illustrative of the troubling fact that it 'takes only a Hotmail account to bring a website down',54 are limited in certain important respects. Employees of the targeted ISPs, having various levels of subject-area expertise, could not be expected to consistently identify a literary text in the public domain, even a well-known one. Because they targeted a limited number of ISPs using a controlled scenario, these studies were unable to provide data on the actual levels of abuse of notice and takedown mechanisms, which would considerably extend the usefulness of this research. However, both studies suggest that placing legal requirements on notice issuers (e.g. statements of 'good faith') may deter abusive behaviour.
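
The hazards model used in the parody study discussed earlier in this section can be sketched with a standard survival-analysis library. The snippet below uses the lifelines implementation of Cox proportional hazards on a hypothetical data file; the file name, column names, and covariates are illustrative stand-ins rather than the authors' actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical export: one row per parody video, with the time (in years)
# each video was observed online, an event flag (1 = takedown observed,
# 0 = still available at the last observation), and numeric covariates of
# the kind the study controlled for (production values, views, whether a
# sound recording was borrowed, genre dummies, and so on).
videos = pd.read_csv("parody_observations.csv")

cph = CoxPHFitter()
cph.fit(
    videos[["years_observed", "taken_down",
            "production_value", "log_views", "uses_sound_recording"]],
    duration_col="years_observed",
    event_col="taken_down",
)
cph.print_summary()  # hazard ratios below 1 indicate reduced takedown risk
```

A covariate with a hazard ratio below one (such as a proxy for uploader skill) lowers the estimated risk of takedown at any point in time, which is how the study expresses findings like 'higher production values reduce the odds of removal'.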
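
Similarly, the 'effective number of parties' that Jacques and others derive from the Simpson index is, in its common formulation, simply the inverse of the sum of squared consumption shares. The sketch below uses invented view counts purely to show the calculation.

```python
def effective_number(view_counts):
    """Inverse Simpson index: the number of equally popular items that would
    produce the same concentration of viewing as the observed distribution."""
    total = sum(view_counts)
    shares = [v / total for v in view_counts]
    return 1 / sum(p * p for p in shares)

# Invented view counts for parodies of one song, before and after takedowns.
before = [90_000, 40_000, 15_000, 9_000, 3_000, 1_500]
after = [90_000, 15_000, 9_000, 1_500]  # two mid-popularity parodies removed

print(round(effective_number(before), 2))  # higher = more diverse consumption
print(round(effective_number(after), 2))   # here, removals shrink the number
```

Computed over supplied videos and over actual views, the same formula yields the 'supplied' and 'consumed' diversity figures that the study compares year on year.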

4.  Due Process and Transparency Procedural justice, in particular relating to the interests of users, is an issue that surfaces in a number of chapters in this Handbook.55 This has also been investigated empirically in relation to notice and takedown. In a 2018 study, Fiala and Husovec studied the economic incentives that drive overremoval of content and under-use of the counter-notification mechanism among users.56 The main problem identified by the authors is that, ‘according to theory and empirical evidence, [notice and takedown] leads to many false positives due to over-notification by concerned parties, over-compliance by providers, and under-assertion of rights by affected content creators’.57 To investigate the causes of under-use of counter-notification, the authors designed a laboratory experiment to model the relationship between service providers (platforms) and content creators. In the experiment, players were given the task of evaluating whether or not a maze had a valid solution, in a 15-second time limit. This was intended to simulate the real-world decision by platform pro­viders about whether to comply with a takedown request. Creators were given more time to study the maze, simulating their familiarity with their own work. A subsequent round allowed the opportunity to ‘punish’ service providers for incorrect removal of content. A stimulus 53  Ahlert, Marsden, and Yung (n. 52) 21. 54  Nas (n. 52) 6. 55  See in particular Chapters 34 and 35. 56  See Lenka Fiala and Martin Husovec, ‘Using Experimental Evidence to Design Optimal Notice and Takedown Process’, TILEC Discussion Paper no. 2018-028 (2018) . 57  ibid. 1.


116   Kristofer Erickson and Martin Kretschmer condition simulated a hypothetical dispute-resolution process in which creators were given more power and financial incentive to dispute incorrect decisions by providers. The authors ran the experiment with eighty subjects drawn from university students in the Netherlands, and used real payouts. The researchers found that unlike the baseline condition in which providers tended to over-enforce, and creators tended not to dispute decisions, an alternative dispute reso­lution (ADR) treatment resulted in fewer mistakes by providers and a more profitable condition for creators overall.58 After repeated iterations of the game, the researchers observed that the existence of a credible mechanism through which mistakes can be identified and corrected, represented by the ADR process, decreased the rate of incorrect takedowns from 35 per cent to 19 per cent.59 Another problem for transparency in the notice and takedown process is the presence of automated or algorithmic methods of identification and removal. Algorithms are not subject to public oversight, often complex, walled off behind commercial secrecy, and unpredictable as they adapt to changing conditions over time. As Perel and ElkinKoren write, ‘proper accountability mechanisms are vital for policymakers, legislators, courts and the general public to check algorithmic enforcement. Yet algorithmic enforcement largely remains a black box. It is unknown what decisions are made, how they are made, and what specific data and principles shape them.’60 Perel and Elkin-Koren advocate ‘black box tinkering’ as a method for uncovering the hidden functionality of algorithms and holding them accountable. In the context of the intermediary liability regime, this means conducting experiments on live platforms under conditions controlled by the researcher to test how algorithms such as YouTube’s Content ID system react to various inputs. The authors did this by gathering data on the behaviour of OSPs over the life cycle of a typical uploaded work of user-generated content: from filtering at the moment of upload, to receipt and handling of takedown notices, to the removal of content and notification of the uploader. To accomplish this, they uploaded and tracked various purpose-made clips with paired controls. For ex­ample, one clip contained non-infringing footage but copyrighted music, while another contained only the footage. The researchers obtained ethical approval for the study from their university ethics committee, and notified the platforms at the conclusion of the study that they had conducted an experiment. They found that 25 per cent of videosharing websites and 10 per cent of the image-sharing websites tested in Israel made use of some ex ante filtering technology at the point of upload.61 Fifty per cent of the videosharing websites removed infringing content on receipt of a notice, while only 12.5 per cent of the image-sharing websites did so. After removing content, all of the video-sharing websites tested did notify the uploader about the removal, while only 11 per cent of the image websites did so.62 58  ibid. 15. 59  ibid. 17. 60  ibid; Mayaan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Fla. L. Rev. 181, 184. 61  ibid. 208. 62  ibid. 209.


Empirical Approaches to Intermediary Liability   117 The wide variation in practices between online platforms suggests problems with procedural justice in the Israeli setting observed by the researchers. They also noted the presence of false positives (removal of non-infringing content when asked to do so) as evidence of the failure of human/algorithmic systems of handling notice and takedown procedures.63 Methodologically, the authors noted that, ‘to study algorithmic enforcement by online intermediaries may often need to overcome different contractual barriers imposed by the examined platforms or software owners’.64 The authors highlight terms of use which prohibit such tinkering. Internet companies have not readily shared information with researchers about how they process and handle takedown requests, prob­ably because they are wary of increased scrutiny from rightholders and regulators, or because the technical filtering mechanisms are a source of competitive advantage. In fact, none of the studies reviewed in this chapter obtained data with the cooperation of private companies, other than those made available via the Chilling Effects/Lumen database, or independently through experimentation such as by Perel and Elkin-Koren.
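
The logic of this kind of 'black box tinkering' can be summarized in a few lines of analysis code. The records and field names below are invented; the point is only to show how paired test uploads, tracked across the upload, notice, and removal stages of the life cycle, translate into the per-platform rates reported by Perel and Elkin-Koren.

```python
from collections import defaultdict

# Invented observations: one record per test upload, following the life cycle
# probed by the study (ex ante filtering at upload, response to a takedown
# notice, and notification of the uploader). None marks a stage that never
# arose, e.g. no notice was needed because the upload was blocked outright.
observations = [
    {"platform": "video-A", "blocked_at_upload": True,  "removed_after_notice": None,  "uploader_notified": None},
    {"platform": "video-B", "blocked_at_upload": False, "removed_after_notice": True,  "uploader_notified": True},
    {"platform": "image-A", "blocked_at_upload": False, "removed_after_notice": False, "uploader_notified": False},
    {"platform": "image-B", "blocked_at_upload": False, "removed_after_notice": True,  "uploader_notified": False},
]

def rate(records, field):
    """Share of platforms answering True at a given stage, ignoring N/A."""
    answered = [r[field] for r in records if r[field] is not None]
    return sum(answered) / len(answered) if answered else float("nan")

for field in ("blocked_at_upload", "removed_after_notice", "uploader_notified"):
    print(f"{field}: {rate(observations, field):.0%}")
```

Because the uploads are paired (one clip with the contested element, one control), divergent treatment of the pair, rather than any single removal, is what exposes the behaviour of the filtering system.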

5.  Balancing of Responsibilities and Costs An ongoing debate in copyright policy relates to the burden of responsibility for identifying and removing potentially infringing works. While the original DMCA takedown mechanism placed responsibility on the shoulders of rightholders to monitor, identify, and request takedown of infringing material, recent policy discussions have brought focus to re-evaluating the prior balance. Some rightholder groups would like additional responsibilities placed on platforms (e.g. the obligation to ensure that content ‘stays down’ after removal).65 Legislation adopted in Europe in 2019 would add a licensing obligation that may lead service providers to filter content at the point of upload.66 Consequently, an important empirical question relates to understanding how costs of enforcement have been distributed so far, and what the effects of rebalancing those costs may be for internet companies, rightholders, and users. In a study of the market for out-of-commerce musical works, Heald proposed that notice and takedown regimes, in tandem with automatic detection systems, may create a market for previously unavailable works.67 The labour of digitizing, uploading, and disseminating the work is borne by the uploader, while the rightholder, once notified, 63  ibid. 210. 64  ibid. 215. 65 See Martin Husovec, ‘The Promises of Algorithmic Copyright Enforcement: Takedown or Staydown? Which Is Superior? And Why’ (2018) 42 Colum. J. of L. & Arts 53. 66  See Directive 2019/790/EU (n. 7) Art. 17. 67  See Paul Heald, ‘How Notice-and-Takedown Regimes Create Markets for Music on YouTube: An Empirical Study’ (2014) 83 UMKC L. Rev. 313L.


118   Kristofer Erickson and Martin Kretschmer may simply select to monetize the work and collect advertising revenue from it. Heald examined a dataset consisting of ninety songs which reached number one on the pop music charts in Brazil, France, and the United States between 1930 and 1960, and an additional set of 385 songs dating from 1919 to 1926 (which should be out of copyright in the United States).68 He found that 73 per cent of the in-copyright works in his sample from the United States were monetized by a rightholder, with a lower rate of monetization in France (62 per cent) and Brazil (39 per cent).69 New uploads were less likely to have been monetized, while older uploads, particularly those with a higher number of views, were more likely monetized.70 Similarly to findings by Erickson and Kretschmer, Heald found that uploader creative practices were important in determining rightholder response. Videos consisting of straight recordings were more likely to be mon­et­ized by rightholders than amateur creative videos or cover performances.71 However, even those preferences varied by territory: French rightholders monetized a higher proportion of amateur videos and a lower proportion of straight recordings. In general, Heald found that there were similarly high rates of availability of older incopyright works (77 per cent had an upload on YouTube) and public domain copyright songs from 1919 to 1926 (with 75 per cent availability on YouTube).72 This rate is high compared to other mediums such as books, for example, where only 27 per cent of New York Times bestsellers from 1926 to 1932 were found to have copies available to purchase.73 The higher availability of in-copyright works on YouTube, despite the availability of takedown to rightholders, leads Heald to conclude that the Content ID system creates an efficient form of licensing which reduces transaction costs and enables uploaders to communicate market demand to rightholders. Urban, Karaganis, and Schofield found that algorithmic ‘DMCA plus’ techniques might be a source of competitive advantage for large incumbent platforms. Based on their qualitative interviews with large and small firms, the authors note that ‘In some striking cases, it appears that the vulnerability of smaller OSPs to the costs of implementing large-scale notice and takedown systems and adopting expensive DMCA Plus practices can police market entry, success, and competition.’74 Respondents cited the high costs involved, for example, in replicating bespoke systems such as Google’s Content ID, or outsourcing to third party fingerprinting services such as Audible Magic which was quoted as costing up to $25,000 per month.75 The ability of larger incumbent firms such as Google to monetize all kinds of user-generated content via AdSense and share that revenue with rightholders via Content ID was also seen as a competitive advantage from the perspective of smaller OSPs. Rather than provide rightholders with the option of leaving such content on their platforms, OSPs without that technology were limited to taking down videos in their entirety in response to takedown requests.

68  ibid. 314. 69  ibid. 316. 70  ibid. 319. 71  ibid. 322. 72  ibid. 324. 73  See Paul Heald, ‘How Copyright Keeps Works Disappeared’ (2014) 11 J.  of Empirical Legal Studies 829. 74  Urban, Karaganis, and Schofield (n. 16) 64. 75  ibid. 64.


Empirical Approaches to Intermediary Liability   119 In addition to differences between large and small commercial enterprises, there are also concerns about the costs of complying with notice and takedown procedures for non-commercial institutions. For example, Schofield and Urban analysed the effects of DMCA and non-DMCA takedown requests on the practices of academic digital libraries. Since libraries have undertaken more digitization as part of their open-access public missions, and since public repositories increasingly allow contributions from useruploaders, this potentially exposes them to DMCA takedown requests. As the authors point out, libraries have historically had ‘sophisticated, careful and public-minded approaches to copyright’ through their handling of physical material.76 At the same time, libraries and academic repositories are not typically equipped with resources to handle large volumes of takedown requests such as those received by internet companies. Schofield and Urban surveyed respondents about institutional practices (how libraries dealt with notices once received, whether they forwarded to other departments, etc.), the volume of requests received, and the nature of those requests (copyright or noncopyright). In total, eleven libraries returned surveys and an additional five interviews were carried out.77 Since 2013, libraries had noted an increase in notices received, and some had put in place procedures to deal with DMCA takedown requests. Respondents reported that handling DMCA notices put pressure on staff time. Some libraries stated that a lack of legal confidence and a requirement to protect their reputation added uncertainty to their roles. Some erroneous takedown requests were reported to have been received. In one case, the IT department removed a deposited article, which the library later reinstated after careful review (the article was in the public domain).78 Overall, the authors found that librarians were more confident dealing with non-copyright removal requests. These included concerns about privacy, sensitivity, and security. Librarians had developed institutional norms over time to deal with these matters, but had not yet accomplished this in the realm of digital copyright. This, combined with the high degree of scrutiny and attention paid to evaluating DMCA notices, made institutions potentially vulnerable to an increase in costs related to handling takedown requests.

6.  Conclusion: Limitations, Gaps, and Future Research A number of studies reviewed here used data contained in the publicly accessible Chilling Effects/Lumen database.79 These are rich data, and the Lumen project provides an interface for researchers to sort and query the voluminous archive. However, use of this database could skew empirical findings in the direction of a small group of inter­medi­ar­ies who share their data (e.g. Google) and focus attention on particular units of observation 76  Brianna Schofield and Jennifer Urban, ‘Takedown and Today’s Academic Digital Library’ (2016) 13(1) I/S: A J. of L. and Policy 125, 130. 77  ibid. 131–2. 78  ibid. 138. 79  See Lumen (n. 12).


120   Kristofer Erickson and Martin Kretschmer (individual notices and claims). A wider range of publicly accessible datasets on takedown would enrich the possibilities for more diverse empirical work. As noted in this chapter, some research has already considered the way that takedown practices are handled in settings such as universities and public libraries by seeking out data directly from those organizations. The empirical studies reviewed here also demonstrate that the study of copyright notice and takedown is a moving target: patterns of behaviour measured by Urban and Quilter in 2006 had shifted in later observations using the same data source. There was an explosion in quantity and diversity of takedown requests and adoption of new practices by both intermediaries and issuers. Even if the current legal regime remains stable, it is likely that practices will continue to shift: new business models may emerge, and rightholders that were once keen users of notice and takedown procedures may drop off as new users appear. For example, the adoption of subscription-based revenue m ­ odels by firms such as Microsoft and Adobe may result in waning investment in enforcement focused on piracy websites. Other rightholders may find it advantageous to enforce in this manner. The concentration of takedown notices directed at Google Search and YouTube might also change if new platforms become dominant, or if new practices of sharing potentially infringing material emerge. Empirical analysis of such trends is held back by a lack of access to data. As we have seen, what is happening inside ‘black box’ systems often has to be reverse-engineered, or revealed by experimental approaches. However, standardized and transparent automated data-collection methods are entirely feasible. They will be increasingly demanded by regulators that are tasked with overseeing new obligations and duties on platforms imposed by legislators. It would have been more rational to first enable better understanding, before changes to the liability regime were enacted. The current state of evidence suggests that, despite its flaws, the notice and takedown regime is working. A significant (and after 2013, vast) number of takedown notices are being sent by rightholders of various types, and processed expeditiously by service pro­ viders large and small. The concept of providing safe harbour to innovators while en­ab­ ling a mechanism for rightholders to protect their copyrights, appears to be achieving its purpose. Links to infringing materials are being pushed out of the top search results, infringing videos are being removed from sharing websites, and institutions are removing infringing materials hosted on their networks. The problems, as outlined in this review, remain significant. They relate to redressing contextual imbalances between differently situated intermediaries, holding rightholders and platforms to account for accuracy of takedown issuance and compliance, and providing meaningful due process for users whose content is removed. These shortcomings may be addressed through tweaking, rather than overhauling, the safe harbour regime. There is a deep tension between bringing platforms into the regulatory sphere and delegating regulatory functions (e.g. monitoring and filtering) to platforms themselves. The first approach may establish liability rules that platforms cannot escape; the second approach may lead to due process being bypassed. 
Who, for example, oversees Facebook's machine learning, AI, and computer vision technology and their 30,000 human content moderators?80 The online world appears to have entered a phase of radical experimentation, exploring new liability rules and new powers for regulatory agencies at the same time without fully understanding the efficacy or shortfalls of the safe harbour paradigm established two decades ago. Our review indicates that designing effective reporting requirements is critical to enabling empirical assessment of changes to the liability regime. A range of regulatory agencies are now crowding the field, ranging from content, data, and competition to electoral regulators. Is notice and takedown still a valid mechanism for addressing these issues, or has it run its course? 80  See Casey Newton, 'The Trauma Floor: The secret lives of Facebook moderators in America' (The Verge, 25 February 2019) .


Chapter 6

The Civic Role of OSPs in Mature Information Societies

Mariarosaria Taddeo

Online service providers (OSPs) have gone from offering connecting and information-sharing services to their paying members to providing open infrastructure and applications that facilitate digital expression, interaction, and the communication of information. This evolution put OSPs in a peculiar position, for they design and provide key services on which information societies depend. This raises questions as to what role they have in our societies, what moral responsibilities this role entails, and how OSPs should discharge these responsibilities. Over the years, the discussion concerning the responsibilities of OSPs has gone from defining measures that OSPs should deploy to correct their market bias and ensure a pluralistic web, to the impact that OSPs have on the internet, on the flourishing of democratic values, and on societies at large.1 The debate spans different fields, from information and computer ethics, corporate social responsibilities, and business ethics, to computer-mediated communication, law,2 and public policy. Topics of analysis range from biases and skewing of information indexed by search engines,3 the protection of

1 See Mariarosaria Taddeo and Luciano Floridi, ‘The Debate on the Moral Responsibilities of Online Service Providers’ [2015] Science and Engineering Ethics . 2 See Giancarlo Frosio, ‘Why Keep a Dog and Bark Yourself? From Intermediary Liability to Responsibility’ (2018) 26 Int’l J’ of L. and IT 1. 3  See Lucas Introna and Helen Nissenbaum, ‘Shaping the Web: Why the Politics of Search Engines Matters’, Social Science Research Network Scholarly Paper ID 222009 (2006) ; Laura Granka, ‘The Politics of Search: A Decade Retrospective’ (2010) 26 The Information Society 364.

© Mariarosaria Taddeo 2020.


The Civic Role of Osps in Mature Information Societies   123 users’ privacy4 and security,5 to the impact of OSPs on democratic processes,6 and their duties with respect to human rights.7 Elsewhere,8 I have analysed the relevant literature on the moral responsibilities of OSPs and identified three important aspects of the debate concerning: (1) expectations about the conduct of OSPs; (2) an increasing consensus about the relevance of OSPs in our societies; (3) the lack of agreement on the values and principles that should inform their conduct. I shall deal with points (2) and (3) in the second section of this chapter. Let me focus now on point (1), as expectations about the behaviours of OSPs underpin much of the debate on their role and moral responsibilities. Users, but also scholars and policymakers, often expect OSPs to align their goals with the needs of our societies.9 OSPs are expected to perform their tasks well and according to principles of efficiency, justice, fairness, and respect of current social and cultural values (emphasis added).10 As Shapiro stresses: in democratic societies, those who control the access to information have a responsibility to support the public interest. . . . these gatekeepers must assume obligation as trustees of the greater good.11

However, what these obligations may be remains open question. These range from Google’s generic motto ‘don’t be evil’ to much more specific guidelines concerning the protection of the public interest and respect for basic democratic principles, for example openness, transparency, freedom of the internet, security, and legal certainty, as identified in the 2011 G8 Deauville Declaration.12 At the same time, the international and multi­ cul­tur­al contexts in which OSPs operate complicate the definition of their obligations, 4  See Chi Zhang and others, ‘Privacy and Security for Online Social Networks: Challenges and Opportunities’ (2010) 24 IEEE Network 13. 5  See Vinton Cerf, ‘First, Do No Harm’ (2011) 24 Philosophy & Technology 463; Mariarosaria Taddeo, ‘Cyber Security and Individual Rights, Striking the Right Balance’ (2013) 26 Philosophy & Technology 353; Mariarosaria Taddeo, ‘The Struggle Between Liberties and Authorities in the Information Age’ [2014] Science and Engineering Ethics 1. 6 See Eli Pariser, The Filter Bubble: What The Internet Is Hiding From You (Penguin 2012); Cass  R.  Sunstein, Republic.Com (Princeton  U.  Press 2001); Luciano Floridi, ‘Mature Information Societies—a Matter of Expectations’ (2016) 29 Philosophy & Technology 1. 7  See Dennis Broeders and Linnet Taylor, ‘Does Great Power Come with Great Responsibility? The Need to Talk about Corporate Political Responsibility’ in Mariarosaria Taddeo and Luciano Floridi (eds), The Responsibilities of Online Service Providers (Springer 2017) 315–23. 8  For a more extensive analysis of the debate on the moral responsibilities of OSPs, see Taddeo and Floridi (n. 1). 9  See Robert Madelin, ‘The Evolving Social Responsibilities of Internet Corporate Actors: Pointers Past and Present’ (2011) 24 Philosophy & Technology 455; Denis McQuail, Media Performance: Mass Communication and the Public Interest (Sage 1992) 47. 10  McQuail (n. 9) 47. 11  Granka (n. 3) 365. 12  See 2011 G8 Deauville Declaration .


for it requires an ethical framework able to square the different ethical views and stakeholders' interests that OSPs face. In this chapter, I will describe the debate on the moral responsibilities of OSPs with respect to managing access to information (Section 1) and human rights (Section 2). I will then analyse the role and the nature of the responsibilities of OSPs in mature information societies (Section 3).13 I will conclude the chapter by applying Floridi's soft ethics to consider what responsibilities the civic role of OSPs entails and how they should discharge them (Section 4).

1.  Managing Access to Information

The organization and management of the access to information available online raises problems concerning the way in which OSPs select and rank such information.14 The research on this topic initially focused exclusively on search engines but, with the emergence of the Web 2.0, social networks and news aggregators also became objects of analysis, for these OSPs too can skew users' access to online information. Introna and Nissenbaum analyse the role of search engines in defining the scope of access to online information and stress the relation between such a scope and the development of a pluralistic democratic web.15 They advocate diversity of the sources of information as a guarantee of the fairness of information-filtering processes and the democratic development of the internet.16 Corporate, market-oriented interests of the private companies running indexing and ranking algorithms can jeopardize both these aspects. In addition, Introna and Nissenbaum compare search engines to conventional publishers and suggest that, like publishers, search engines filter information following market regulations; that is, according to consumers' tastes and preferences, and favour

13  See Floridi (n. 6). 14  See Nicholas Negroponte, Being Digital (new edn, Coronet Books 1996). 15  See Introna and Nissenbaum (n. 3). 16  Other relevant contributions on the diversity of the sources and information available on the web have been provided in the literature in information and communication studies, law, and public policy. The interested reader may find useful the following articles: Sandeep Pandey and others, ‘Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results’ in Klemens Böhm and others (eds), Proc. 31st International Conference on Very Large Databases (VLDB 2005); Frank Pasquale, ‘Rankings, Reductionism, and Responsibility’, Social Science Research Network Scholarly Paper no. 888327 (2006) ; Eszter Hargittai, ‘The Social, Political, Economic, and Cultural Dimensions of Search Engines: An Introduction’ (2007) 12 J.  of Computer-Mediated Communication 769; Elizabeth Van Couvering, ‘Is Relevance Relevant? Market, Science, and War: Discourses of Search Engine Quality’ (2007) 12 J. of Computer-Mediated Communication 866; Amanda Diaz, ‘Through the Google Goggles: Sociopolitical Bias in Search Engine Design’ in Amanda Spink and Michael Zimmer (eds), Web Search (Springer 2008); Lawrence Hinman, ‘Searching Ethics: The Role of Search Engines in the Construction and Distribution of Knowledge’ in Spink and Zimmer ibid.; Dirk Lewandowski, ‘The Influence of Commercial Intent of Search Results on Their Perceived Relevance’ (8 February 2011) .


The Civic Role of Osps in Mature Information Societies   125 powerful actors.17 This promotes the so-called ‘rich gets richer’ dynamic.18 Namely, websites include links to popular websites in order to be ranked higher by search engines which makes the popular site even more well-known and, thus, ranked even higher. Conversely, this system makes less known those websites that are already poorly linked and hence ranked lower. This vicious circle eventually leads to expunging niche, less renowned sources of information from the web, thus endangering the plurality and diversity of the internet. Two corrective mechanisms are then suggested: embedding the ‘value of fairness as well as [a] suite of values represented by the ideology of the Web as a public good’19 in the design of indexing and ranking algorithms, and transparency of the algorithms used by the search engines. The call for transparency of the search and ranking algorithms is not uncontroversial,20 as disclosing the structure of the algorithms could facilitate malicious manipulation of search results, while not bringing any advantage to the average non-tech-savvy user. It is also unclear to what extent market regulation of the internet really threatens the diversity of information sources. On the contrary, Granka maintains that in a market-regulated environment companies will devote their attention to the quality of the search results, which will have to meet the different needs and expectations of each single user, thereby guaranteeing diversity of the sources and fairness of the ranking. In this respect, the article also objects to the analogy describing OSPs, search engines in particular, as publishers. Search engines: parse through the massive quantities of available information . . ., the mechanisms whereby content is selected for inclusion in a user’s search result set is fundamentally different than in traditional media—search engines universally apply an algorithm, whereas traditional news media makes case-by-case decisions.21
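
The 'rich gets richer' dynamic described here can be illustrated with a toy simulation that is not drawn from the chapter: if new links attach to sites in proportion to the links they already have, and visibility follows link counts, an early lead compounds into lasting dominance.

```python
import random

random.seed(1)

# Ten sites start with one link each; in every round a new link attaches to
# a site with probability proportional to its current link count, a crude
# proxy for link-based ranking feeding back into visibility.
links = [1] * 10
for _ in range(5000):
    winner = random.choices(range(len(links)), weights=links)[0]
    links[winner] += 1

top_share = max(links) / sum(links)
print(f"top site's share of all links: {top_share:.0%}")
print(sorted(links, reverse=True))  # a heavily skewed distribution
```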

OSPs’ editorial role is also analysed by Goldman who describes search engine bias as a necessary consequence of OSPs’ editorial work: to prevent anarchy and preserve credibility, search engines unavoidably must exercise some editorial control over their systems. In turn, this editorial control will create some bias.22

While the analysis recognizes that such filtering may reinforce the existing power structure in the web and bias search results towards websites with economic power,23 it also 17  See Introna and Nissenbaum (n. 3). 18  See Bernardo Huberman, The Laws of the Web: Patterns in the Ecology of Information (MIT Press 2003). 19  Michael Santoro, ‘Engagement with Integrity: What We Should Expect Multinational Firms to Do about Human Rights in China’ (1998) X(1) Business and the Contemporary World 30. 20  See Granka (n. 3). 21  ibid. 365. 22  Eric Goldman, ‘Search Engine Bias and the Demise of Search Engine Utopianism’ (2006) 8 Yale J. of L. and Tech. 188, 196. 23  See Niva Elkin-Koren, ‘Let the Crawlers Crawl: On Virtual Gatekeepers and the Right to Exclude Indexing’ (2001) 26 U. Dayton L. Rev. 179.


advocates that the correction of search bias will follow from the fine-tuning of the search results with users' preferences. No extra moral responsibilities should be ascribed to OSPs in that respect. A similar position has also been expressed by Lev-On and Manin.24 Their articles suggest that, given the huge set of data filtered by search engines, unintentional exposure to diverse and non-mainstream information cannot be excluded. The issue then arises as to whether incidental exposure to diverse information suffices to maintain an open, pluralistic web and unbiased access to information. Tailoring search results leads to an organic refinement of searching and ranking algorithms in order to accommodate users' preferences and, at the same time, it may correct the distortion introduced by OSPs and foster the diversity of the sources and the information circulating in the web. This is, for example, the argument proposed by Goldman.25 However, personalization of search results is far from being the solution to the problems of information-filtering. It has been objected to as a threat to democratic practices. The misuse of social media to tamper with US presidential elections, for example, has shown the concrete risks that personalization can pose to democratic processes.26 Custom-tailoring of search results challenges the affirmation of deliberative democracies, insofar as it undermines the possibilities of sharing different cultural backgrounds, views, and experiences and reduces the chances of users being exposed to sources, opinions, and information which may support or convey different world views. Several analyses have raised this issue.27 Sunstein, for example, criticizes any approach relying on users' preferences and market dynamics to shape information access and communication:

it is much too simple to say that any system of communication is desirable if and because it allows individual to see and hear what they choose. Unanticipated, unchosen exposures, shared experiences are important too.28

He argues that custom-tailored access to information leads to a world fragmented in different versions of ‘the daily me’29 in which each individual is isolated in her or his informational bubble,30 from which conflicting views are excluded. Pariser has proposed a similar argument, stressing that the personalization of access to online information

24  See Azi Lev-On and B. Manin, ‘Happy Accidents: Deliberation and Online Exposure to Opposing Views’ in Todd Davies and Seeta Peña Gangadharan (eds), Online Deliberation: Design, Research and Practice (CSLI Publications 2009) 105; Azi Lev-On, ‘The Democratizing Effects of Search Engine Use: On Chance Exposures and Organizational Hubs’ in Spink and Zimmer (n. 16). 25  Goldman (n. 22). 26  See Nathaniel Persily, ‘Can Democracy Survive the Internet?’ (2017) 28 J. of Democracy 63. 27  Concern for the implication that filtering of information may have for participative democracy and the nature of the web have also been expressed in Lawrence Lessig, Code: And Other Laws of Cyberspace (Basic Books 1999). 28  Sunstein (n. 6) 131. 29  Negroponte (n. 15). 30  Pariser (n. 6).


promotes the emerging personal informational ecosystems that undermine the emergence and fostering of democracy and pluralism.31 The combination of personalization of information with artificial intelligence (AI)-based profiling and nudging techniques has made the risks highlighted by Pariser and Sunstein even more serious. AI can undermine and erode human self-determination due to its invisibility and influencing power:

With their predictive capabilities and relentless nudging, ubiquitous but imperceptible, AI systems can shape our choices and actions easily and quietly. . . . AI may also exert its influencing power beyond our wishes or understanding, undermining our control on the environment, societies, and ultimately on our choices, projects, identities, and lives. The improper design and use of invisible AI may threaten our fragile, and yet constitutive, ability to determine our own lives and identities and keep our choices open.32

AI may empower OSPs to do more things, from preventing suicide33 to offering more tailored content and enhancing the filter-bubble mechanism. Establishing appropriate governance and defining the moral responsibilities of OSPs are necessary steps for ensuring that possible misuses34 of AI, alongside the other services that OSPs offer, will not trump the proper uses of this technology.

2.  Human Rights: Harmful Content and Internet Censorship

In a commentary, Vinton Cerf touched directly on the role of OSPs in preventing harmful uses of the web, stating that:

it does seem to me that among the freedoms that are codified . . . should be the right to expect freedom (or at least protection) from harm in the virtual world of the Internet. The opportunity and challenge that lies ahead is how Internet Actors will work together not only to do no harm, but to increase freedom from harm.35

Following this view, ascribing moral responsibilities to OSPs with respect to the circulation of harmful material may be desirable. However, this also raises further problems 31 ibid. 32  Mariarosaria Taddeo and Luciano Floridi, ‘How AI Can Be a Force for Good’ (2018) 361 Science 751, 752. 33  See Norberto Nuno Gomes de Andrade and others, ‘Ethics and Artificial Intelligence: Suicide Prevention on Facebook’ (2018) 31 Philosophy & Technology 669. 34 See Luciano Floridi and others, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds and Machines 689. 35  Cerf (n. 5) 465.


128   Mariarosaria Taddeo when considering the duties that those responsibilities could prompt; for example, policing and filtering the content available online, and the possible breaches of individual rights, such as freedom of speech and information.36 Striking the balance between security of users and users’ right to freedom of speech and information is problematic. While OSPs should be held responsible for respecting it, it should not be their duty to define arbitrarily and independently the balance and decide, for example, how much freedom of information can be sacrificed in the name of users’ safety and security. This is not desirable for OSPSs—who may find themselves standing between laws curtailing freedom of speech, information, and anonymity, and citizens’ right to internet freedom; nor is it desirable for societies since it could lead to privatization of the judging power and poses issues of transparency and ac­count­abil­ity.37 Consider, for example, OSPs acting as both ‘judge and jury’38 with respect to the decision of the Court of Justice of the European Union on the right to be forgotten.39 To avoid that risk, it is crucial to separate the responsibilities of OSPs from the duties and authority of the state and supranational authorities, which should set clear norms shaping OSPs’ conduct with respect to human rights. At the moment, however, the debate focuses on whether OSPs have any re­spon­si­bil­ ities with respect to human rights. The discussion was reignited in late 2018 when it became clear that Google was considering entering into the Chinese market again,40 and before that in 2012 when the UN Human Rights Council declared the right to ‘internet freedom’ as a human right. This right calls on states to promote and foster access to the internet and to ensure that the rights to freedom of expression and information, as presented in Article 19 of the Universal Declaration of Human Rights, would be upheld online as well as offline.41 In the same vein, a report released by the UN in 2011 stressed that: 36  Internet censorship and freedom of speech have also been at the centre of a debate focusing on the balance between individual rights and state power. The topic does not fall within the scope of this chapter. The interested reader may find useful Mariarosaria Taddeo, ‘Cyber Security and Individual Rights, Striking the Right Balance’ (2013) 26 Philosophy & Technology 353; Taddeo, ‘The Struggle Between Liberties and Authorities in the Information Age’ (n. 5). 37  See Felicity Gerry and Nadya Berova, ‘The Rule of Law Online: Treating Data like the Sale of Goods: Lessons for the Internet from OECD and CISG and Sacking Google as the Regulator’ (2014) 30 Computer Law & Security Rev. 465. 38  Matt Warman, ‘Google is the “Judge and Jury” in the Right to be Forgotten (Telegraph, 14 July 2014) . 39  See Jeffrey Rosen, ‘Protecting Privacy on the Internet Is the User’s Responsibility’ (Philadelphia Enquirer, 2015) ; Luciano Floridi, ‘Should You Have The Right To Be Forgotten On Google? Nationally, Yes. Globally, No.’ (2015) 32 New Perspectives Quarterly 24. 40  See, for the 2018 debate on Google’s Operation Dragonfly aiming to reintroduce Google in the Chinese market, Mark Bergen, ‘Google in China: When “Don’t Be Evil” Met the Great Firewall’ (Bloomberg, 8 November 2018) . 41  See UN General Assembly Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (2012) A/HRC/20/L13. See also Florian Wettstein, ‘Silence as Complicity:


[g]iven the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States.42

Some authors, such as Chen,43 have argued that OSPs, and in particular social networks, bear both legal and moral responsibilities to respect human rights because of the centrality of their role on the web and of their knowledge of the actions undertaken by other agents, for example governmental actors, in the network. At the same time, both the Universal Declaration of Human Rights and the Resolution on the Promotion, Protection and Enjoyment of Human Rights on the Internet mainly address state actors, making problematic the expectation that OSPs should be held responsible for respecting and fostering human rights.44 This problem not only concerns OSPs; it also involves several other private actors, especially those working in the international market,45 making this issue a central topic in the literature on business ethics. Consider, for example, the cases of human rights violations reported by Human Rights Watch concerning energy companies, such as Royal Dutch/Shell's operations in Nigeria, British Petroleum in Colombia, and Total and Unocal's construction works in Burma and Thailand.46 Santoro47 and Brenkert48 stress the need to consider the context in which companies act before assessing their moral responsibilities. Santoro proposes a 'fair share theory' to assess the moral responsibilities of multinational companies complying with the requests of an authoritarian state. According to that theory, the responsibilities for respecting and fostering human rights are ascribed differently depending on the capability of the company. In particular, Santoro poses two conditions for evaluating the capabilities of private companies and ascribing responsibilities: (1) the company has to be able to make a difference, that is, to change local government policies; and (2) it has to be able to withstand the losses and damages that may follow from diverging from local government directions and laws.

Elements of a Corporate Duty to Speak Out Against the Violation of Human Rights' (2012) 22 Business Ethics Quarterly 37; Nicola Lucchi, 'Internet Content Governance and Human Rights' (2013) 16 Vand. J. of Ent. & Tech. L. 809. 42  Frank La Rue, 'Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression' (16 May 2011) A/HRC/17/27, s. 85 . 43  See Stephen Chen, 'Corporate Responsibilities in Internet-Enabled Social Networks' (2009) 90 J. of Business Ethics 523. 44  See David Jason Karp, 'Transnational Corporations in "Bad States": Human Rights Duties, Legitimate Authority and the Rule of Law in International Political Theory' (2009) 1 Int'l Theory 87. 45  See Geraint Anderson, Just Business (Headline 2012). 46  See Human Rights Watch, The Enron Corporation: Corporate Complicity in Human Rights Violations (Human Rights Watch 1999) . 47  See Santoro (n. 19). 48  See George Brenkert, 'Google, Human Rights, and Moral Compromise' (2009) 85 J. of Business Ethics 453; Santoro (n. 19).


Both conditions shed little light on OSPs' responsibilities with respect to human rights, as they can be used to support both sides of the argument. For example, major OSPs may have the means to effect change and they could withstand the consequences of diverging from the directions of local governments. Facebook's CEO commented on this point, stating that:
Today we're blocked in several countries and our business is still doing fine. If we got blocked in a few more, it probably wouldn't hurt us much either.49

At the same time, however, condition (1) offers a justification for any private company which may breach human rights; it is hard to determine the (in)ability to make a difference in governmental policies which could then allow a company to claim no moral responsibility for any violation of the human rights in which it participated while collaborating or complying with a local government directive. Condition (2) is at best too generic, since it justifies breaches of (possibly any) human rights when respecting them would harm a company’s profit. Other scholars support a different view and hold private actors morally responsible for protecting and fostering human rights.50 The Preamble to the Universal Declaration of Human Rights is often mentioned to support this point. It states that: every individual and every organ of society, keeping this Declaration constantly in mind, shall strive by teaching and education to promote respect for these rights and freedoms . . .51

The responsibility of all members of societies to promote human rights is mentioned and further elaborated in the Declaration of Human Duties and Responsibilities (the so-called Valencia Declaration),52 which focuses on the moral duties and legal responsibilities of the members of the global community to observe and promote respect for human rights and fundamental freedoms. The global community encompasses state and non-state actors, individuals, and groups of citizens, as well as the private and the public sectors. Private companies are also expressly mentioned as responsible for promoting and securing the human rights set out in the Universal Declaration of Human Rights and in

49  Mark Zuckerberg (Facebook, 16 March 2015) . 50  See Denis Arnold, 'Transnational Corporations and the Duty to Respect Basic Human Rights' (2010) 20 Business Ethics Quarterly 371; Wesley Cragg, 'Business and Human Rights: A Principle and Value-Based Analysis' in George Brenkert and Tom L. Beauchamp (eds), The Oxford Handbook of Business Ethics (OUP 2010); Wettstein (n. 41). 51  Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res 217 A(III) (UDHR) Preamble. 52  Declaration of Human Duties and Responsibilities (1998) (DHDR).


the Preamble to the UN Norms on the Responsibilities of Transnational Corporations and Other Business Enterprises.53 One of the cases about the moral responsibilities of OSPs and respect of human rights (freedom of speech in particular) that has been most debated in the relevant literature concerns compliance by some OSPs, such as Google, Microsoft, and Yahoo!, with requests made by the Chinese government on internet censorship and surveillance. OSPs have responded in different ways. Some, like Google (in 2010) and Yahoo! (in 2015), decided not to comply with the requests and withdrew from the Chinese market. Others refer to the so-called consequentialist argument to justify their business in China or in a context in which human rights are under sharp devaluative pressure.54 The argument was first used by Google to support its initial compliance with the Chinese government's requests. It holds that, while the Chinese people cannot access some sources of information due to local censorship, they can still use Google's services to access much more online information. Facebook and Microsoft have proposed the same argument. As Facebook's CEO states:
I believe we have a responsibility to the millions of people in these countries who rely on Facebook to stay in touch with their friends and family every day. If we ignored a lawful government order and then we were blocked, all of these people's voices would be muted, and whatever content the government believed was illegal would be blocked anyway.55

Those who maintain that private companies ought to comply with human rights, because these take precedence over local governmental actions, criticize the consequentialist argument.
Multinationals . . . should respect the international rights of those whom they affect, especially when those rights are of the most fundamental sort.56

Dann and Haddow maintain the same position and ascribe moral responsibilities to company executives, who make the final decisions and shape their companies’ conduct.57 53  The document was approved on 13 August 2003 by the UN Sub-Commission on the Promotion and Protection of Human Rights. See UN Norms on the Responsibilities of Transnational Corporations and Other Business Enterprises (2003) E/CN.4/Sub2/2003/12/Rev2 . 54  Governmental censorship has spread throughout the globe with the internet; the literature on OSPs’ responsibilities in China casts an interesting light on a problem that concerns several other countries around the world. See Giuseppe Aceto and others, ‘Monitoring Internet Censorship with UBICA’ in Moritz Steiner, Pere Barlet-Ros, and Olivier Bonaventure (eds), Traffic Monitoring and Analysis (Springer 2015). 55  Zuckerberg (n. 49). 56  Thomas Donaldson, The Ethics of International Business (OUP 1992) 68. 57  See Gary Elijah Dann and Neil Haddow, ‘Just Doing Business or Doing Just Business: Google, Microsoft, Yahoo! And the Business of Censoring China’s Internet’ (2007) 79 J. of Business Ethics 219.


Brenkert provides a different account and suggests the notion of 'obedient complicity':
[t]his would occur when a business follows laws or regulations of a government to act in ways that support its activities that intentionally and significantly violate people's human rights.58

The notion rests on the idea of permissible moral compromise. This is the compromise that agents make with themselves to forgo or even violate some of their moral principles to fulfil other, more important, values. OSPs operating in countries requiring internet censorship face conflicting responsibilities towards different stakeholders, not just users but also local employees and shareholders. For that reason, those OSPs may be justified in engaging in a moral compromise that may violate human rights if it enables the achievement of more important objectives. Brenkert's article proposes the so-called 'all things considered' approach to assess whether an OSP may be in a position to violate its moral principles or universal rights. The article considers the immediate context in which OSPs operate and the multiple responsibilities that this implies. For example, an OSP may be put in a position to compromise its moral values or to disregard human rights and comply with local laws lest its employees working in a given territory be held liable for the company's decision, or to avoid damaging the shareholders' interest. According to Brenkert, a moral compromise is justified in such cases. As with any consequentialist approach, the 'all things considered' view enables a wide range of responsibilities of private companies to be covered and permits them to be assessed with regard to the company's maximum utility. This is problematic, because an assessment of the moral responsibilities of a company depends on the scope of the context being considered. If one focuses on the immediate context, for example a specific country and the company's interest in that country, the approach could facilitate the acceptance of moral compromise and justify disregarding human rights. But if a wider context is taken into consideration, for example the global reputation of the company and the impact that breaching human rights could have on the company's public image, then the approach may justify compromising shareholders' interests for the sake of human rights. Hence, while the approach was intended to mitigate the burden of OSPs' moral responsibilities, it actually offers a further argument in favour of the duty of OSPs to respect and foster human rights. Given the global relevance and impact that OSPs have on information societies, it is increasingly less acceptable to maintain that OSPs, as private companies, are only responsible to their employees and shareholders.59 This is a point highlighted, for example, in the report of the Special Rapporteur on freedom of expression to the Human Rights Council, David Kaye, who stressed that:
58  George Brenkert, 'Google, human rights, and moral compromise' (2009) 85(4) J. of Business Ethics 453, 459. 59  See Chen (n. 43); Taddeo and Floridi (n. 1); Emily Laidlaw, 'Myth or Promise? The Corporate Social Responsibilities of Online Service Providers for Human Rights' in Taddeo and Floridi (n. 7) 135–55.


Among the most important steps that private actors should take is the development and implementation of transparent human rights assessment procedures. They should develop and implement policies that take into account their potential impact on human rights.60

At the same time, the specification of the responsibilities of OSPs requires contextualizing the role of OSPs within the broader changes brought about by the information revolution and the role that they play in mature information societies.61 This will be the task of the next section.

3.  The Civic Role of OSPs in Mature Information Societies
Floridi defines mature information societies as societies whose members have developed an unreflective and implicit expectation to be able to rely on information technologies to perform tasks and to interact with each other and with the environment.62 Over the past two decades, we have witnessed a growing reliance on these technologies for performing a number of tasks, ranging from individual daily practices to matters relating to public life and the welfare of our societies. More recently, with big data and artificial intelligence, we have started to rely on computing technology to take sensitive decisions, ranging from medical diagnosis to the administration of justice, rather than just to perform tasks.63 As the main designers and developers of information technologies, OSPs play a central role in mature information societies. Some contributions to the literature identify this role as information gatekeeping.64 'Gatekeepers' are agents who have a central role in the management of resources and infrastructure that are crucial for societies.65 The notion of gatekeepers has been studied in business ethics, social sciences, and legal and communication studies since the 1940s. For example, in 1947 Lewin famously described mothers and wives as gatekeepers, as they were the ones deciding and managing the access and consumption of food for their families.
60  David Kaye, 'Freedom of Expression and the Private Sector in the Digital Age' (2016) . 61  Luciano Floridi, 'Mature Information Societies—a Matter of Expectations' (2016) 29 Philosophy & Technology 1. 62  ibid. 1. 63  See Stuart Russell, 'Robotics: Ethics of Artificial Intelligence: Take a Stand on AI Weapons' (2015) 521 Nature 415; Julia Angwin and Jeff Larson, 'How We Analyzed the COMPAS Recidivism Algorithm' (Propublica, 2016) ; Guang-Zhong Yang and others, 'The Grand Challenges of Science Robotics' (2018) 3 Science Robotics eaar7650. 64  See Taddeo and Floridi (n. 7). 65  See Kurt Lewin, 'Frontiers in Group Dynamics' (1947) 1 Human Relations 143.


According to Metoyer-Duran's definition, an agent is a gatekeeper if that agent: (a) controls access to information, and acts as an inhibitor by limiting access to or restricting the scope of information; and (b) acts as an innovator, communication channel, link, intermediary, helper, adapter, opinion leader, broker, and facilitator.66

Conditions (a) and (b) entail moral responsibilities insofar as gatekeepers have a regulatory function. The private nature of gatekeepers, along with the responsibilities entailed by (a) and (b), is one of the cruxes generating the problems concerning their moral responsibilities.67 In our societies, OSPs would be information gatekeepers as they control access to and flows of data and information.68 As gatekeepers, OSPs exercise a regulatory function69 which entails moral responsibilities towards the public good. Framing the discussion of the moral responsibilities of OSPs using the notion of gatekeepers unveils OSPs' public role, and explains the expectations that users and regulators have with respect to their behaviour. However, the gatekeeping role only partially describes the function that OSPs have acquired in our societies and hence the responsibilities that they bear. OSPs increasingly play a more central role in public and policy debate, working to influence national politics and international relations.70 In this respect, they differ quite radically from other transnational corporations.71 Broeders and Taylor argue that OSPs behave as political agents and thus they should bear corporate political responsibilities:
OSPs exercise power over their users and are a counter power to state power in all corners of the world. . . . they are also political actors who merit serious diplomatic attention owing to their vital role in digital life . . .72

This conceptualization of OSPs' role is also limited, since it focuses mostly on the impact of OSPs in the international arena and disregards their central role as designers of the
66  Cheryl Metoyer-Duran, 'Information Gatekeepers' (1993) 28 Annual Rev. of Information Science and Tech (ARIST) 111. 67  See Jody Freeman, 'Private Parties, Public Functions and the New Administrative Law', Social Science Research Network Scholarly Paper no. 165988 (1999) ; Julia Black, 'Decentring Regulation: Understanding the Role of Regulation and Self Regulation in a "Post-Regulatory" World' (2001) 54 Current Legal Problems 103. 68  See Craig J. Calhoun (ed.), Dictionary of the Social Sciences (OUP 2002); Andrew Shapiro, The Control Revolution: How the Internet Is Putting Individuals in Charge and Changing the World We Know (PublicAffairs 2000); Lawrence Hinman, 'Esse Est Indicato in Google: Ethical and Political Issues in Search Engines' (2005) 3 Int'l Rev. of Information Ethics 19; Emily Laidlaw, 'Private Power, Public Interest: An Examination of Search Engine Accountability' (2008) 17 IJLIT 113. 69  Metoyer-Duran (n. 66) 111. 70  See Broeders and Taylor (n. 7). 71  See Andreas Georg Scherer and Guido Palazzo, 'The New Political Role of Business in a Globalized World: A Review of a New Perspective on CSR and Its Implications for the Firm, Governance, and Democracy' (2011) 48 J. of Management Studies 899. 72  See Broeders and Taylor (n. 7) 322.


online environment. This is a key aspect that neither the gatekeeping nor the political conceptualization of OSPs fully grasps, and that can be better analysed when contextualizing the role of OSPs within the conceptual changes brought about by the information revolution.73 The blurring of the line dividing real and virtual is one of those changes. This blurring has been noted and analysed by social scientists74 and psychologists,75 as well as by philosophers.76 Before the information revolution, being real was tantamount to (coupled with) being tangible, perceivable, and physical in the Newtonian sense. The information revolution decoupled real and tangible and coupled real and virtual. Reality in the information age includes virtual entities and environments along with tangible (physical) ones, making interactability—and no longer tangibility—the mark of reality.77 Think, for example, of the way in which Alice and her grandfather Bob enjoy their music: Bob may still own a collection of his favourite vinyl, while Alice simply logs on to her favourite streaming service (she does not even own the files on her computer). E-books, movies, and pictures all serve as good examples of the case in point. This decoupling and recoupling process has widened the range of what we consider real and has also blurred the very distinction between online and offline environments. As Floridi put it:
'onlife' designates the transformational reality . . . in contemporary developed societies.78

One difference still stands, though; that is, the online environment is designed, shaped, and developed by humans to a far greater extent than the physical one, and tech companies, including OSPs, often lead this process. The providers of the services that enable our access to, and shape our activities in, the online environment have a role more central than that of gatekeepers or political actors. For, through their services, they shape our affordances. They contribute to shaping the space of opportunities in which individuals and societies can flourish and evolve, and eventually impact how we understand reality and how we interact with each other and with the environment. As leading designers of online environments, OSPs make decisions that impact private and public lives, social welfare, and individual well-being. For this reason,
73  See Luciano Floridi, The Fourth Revolution, How the Infosphere Is Reshaping Human Reality (OUP 2014). 74  See Monroe E. Price, Media and Sovereignty: The Global Information Revolution and Its Challenge to State Power (MIT Press 2002). 75  See Uwe Hasebrink, Comparing Children's Online Opportunities and Risks across Europe: Cross-National Comparisons for EU Kids Online [European Research on Cultural, Contextual and Risk Issues in Children's Safe Use of the Internet and New Media (2006–2009)] (EU Kids Online 2008) . 76  See Diana Coole and others, New Materialisms: Ontology, Agency, and Politics (Duke U. Press Books 2010); Mariarosaria Taddeo, 'Information Warfare: A Philosophical Perspective' (2012) 25 Philosophy and Technology 105; Luciano Floridi, 'Digital's Cleaving Power and Its Consequences' (2017) 30 Philosophy & Technology 123. 77  See Luciano Floridi, Ethics of Information (OUP 2013). 78  Luciano Floridi, The Onlife Manifesto—Being Human in a Hyperconnected Era (Springer 2014) 61.


OSPs play a civic role in mature information societies. Hence, they have civic responsibilities with respect to the way they conduct their business. These responsibilities require OSPs to consider the impact of their services and business models on the societies in which they operate and to take into account potential ethical benefits and risks. Ethical considerations need to become a constitutive part of their design process and business models. OSPs can discharge this civic responsibility by ensuring that:
Social acceptability or, even better, social preferability must be the guiding principles for any [digital innovation] project with even a remote impact on human life, to ensure that opportunities will not be missed.79

Given the international and multicultural contexts in which OSPs operate, the specification of what is socially acceptable and preferable will be effective—that is, it will be regarded as ethically sound, appropriate, and desirable—only insofar as it rests on an approach able to reconcile the different ethical views and stakeholders' interests that OSPs face. Human rights and other principles80 offer guidance as to what fundamental values should shape OSPs' practices, but these will have to be implemented considering different cultural and moral values. Frictions between fundamental and context-dependent values are to be expected, and resolving them will require collaboration between different stakeholders, including OSPs themselves, national and supranational political actors, as well as civil societies. In this scenario, OSPs (as well as policymakers and decision-makers) need to develop appropriate analyses to identify the ethical opportunities to be harnessed and the ethical risks to be avoided or mitigated. The civic role of OSPs requires them to develop such analyses in the first place and to establish processes to ensure the ethical governance of their services.

4.  Conclusion: The Duty of Ethical Governance
Ethical governance of the digital should not be confused with the legal regulations in place to shape the design and use of digital technologies, nor is it something that erodes the space of legal compliance. Floridi distinguishes between hard ethics and soft ethics.81 If hard ethics is what enables us to shape fair laws or to challenge unfair ones, soft ethics goes over and above legal compliance. In some corners of the world, where
79  Floridi (n. 77). See also Mariarosaria Taddeo, 'Cyber Security and Individual Rights, Striking the Right Balance' (2013) 26 Philosophy & Technology 353. 80  See Josh Cowls and Luciano Floridi, 'Prolegomena to a White Paper on an Ethical Framework for a Good AI Society', Social Science Research Network Scholarly Paper no. 3198732 (2018) . 81  Luciano Floridi, 'Soft Ethics and the Governance of the Digital' (2018) 31 Philosophy & Technology 1.


laws respect and foster fundamental values, the governance of the digital is a matter of soft ethics. As he put it:
[C]ompliance is necessary but insufficient to steer society in the right direction. Because digital regulation indicates what the legal and illegal moves in the game are, so to speak, but it says nothing about what the good and best moves could be to win the game—that is, to have a better society. This is the task of both digital ethics . . .82

At least when operating in open and democratic societies, the responsibilities of OSPs pertain to the ethical governance of the digital, and soft ethics is essential to discharge them. OSPs need to embed ethical83 considerations in the design and development of their services at the outset in order to consider possible risks, prevent unwanted consequences, and assess the cost of missed opportunities.84 OSPs need to develop ethical foresight analyses,85 which will offer a step-by-step evaluation of the impact of practices or technologies deployed in a given organization on crucial aspects—like privacy, transparency, or liability—and may identify preferable alternatives and risk-mitigating strategies. This will bring a dual advantage. As an opportunity strategy, foresight methodologies can help to leverage ethical solutions. As a form of risk management, they can help to prevent, or mitigate, costly mistakes, by avoiding decisions or actions that are ethically unacceptable. This will lower the opportunity costs of choices not made or options not seized for lack of clarity or fear of backlash. Ethical governance of the digital is a complex, but necessary, task. The alternative may lead to devaluation of individual rights and social values, rejection of OSPs, and missing the opportunities that digital technologies bring to the benefit of societies.

82  ibid. 4. 83  See Luciano Floridi and Mariarosaria Taddeo, ‘What Is Data Ethics?’ (2016) 374 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences . 84  See Luciano Floridi, ‘Technoscience and Ethics Foresight’ (2014) 27 Philosophy & Technology 499; Mariarosaria Taddeo and Luciano Floridi, ‘How AI Can Be a Force for Good’ (2018) 361 Science 751. 85  Floridi (n. 84).


Chapter 7

Intermediary Liability and Fundamental Rights
Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko

The ethical implications of the role of Online Service Providers (OSPs) in contemporary information societies raise social challenges. The role of OSPs is, of course, unprecedented in terms of their capacity to influence users' interactions within the informational environment. Therefore, fundamental rights protection lies at the core of any policy debate regarding OSPs.1 It is actually a trifecta of competing fundamental rights in tension with each other that exacerbates the conundrum of the regulation of online intermediary liability. As the gatekeepers of the information society, OSPs make regulatory choices—increasingly through algorithmic tools—that can profoundly affect the enjoyment of users' fundamental rights, such as freedom of expression, freedom of information, and the right to privacy and data protection. On the other side, OSPs will make choices and implement obligations dealing with the blocking and sanitization of a vast array of allegedly infringing content online that affect IP rightholders and creators. Finally, imposing too strict or expansive obligations on OSPs raises further fundamental rights challenges in connection with possible curtailment of their freedom to conduct business.
1  On the increasing influence of human and fundamental rights on the resolution of intellectual property (IP) disputes, see Christophe Geiger, 'Constitutionalising Intellectual Property Law?, The Influence of Fundamental Rights on Intellectual Property in Europe' (2006) 37(4) IIC 371; 'Fundamental Rights as Common Principles of European (and International) Intellectual Property Law' in A. Ohly (ed.), Common Principles of European Intellectual Property Law (Mohr Siebeck 2012) 223; 'Reconceptualizing the Constitutional Dimension of Intellectual Property' in Paul Torremans (ed.), Intellectual Property and Human Rights (3rd edn, Kluwer Law Int'l 2015) 115; 'Implementing Intellectual Property Provisions in Human Rights Instruments: Towards a New Social Contract for the Protection of Intangibles' in Geiger (ed.) (n. 1) 661.

© Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko 2020.


In particular, in online enforcement of IP and other rights, algorithms take decisions reflecting policy assumptions and interests that have very significant consequences for society at large, yet there is limited understanding of these processes.2 In this context, tensions have been highlighted3 between algorithmic enforcement and the European Convention on Human Rights (ECHR) and the Charter of Fundamental Rights of the European Union (EU Charter),4 with special emphasis on filtering, blocking, and other monitoring measures, which would fail to strike a 'fair balance' between copyright and other fundamental rights.5 This chapter will briefly map the complex conundrum triggered by the effects of intermediary liability and regulation on the competing fundamental rights of users, OSPs, and IP owners.
2  This very much recalls a previous lively discussion that emerged in the context of the 2001 Copyright Directive regarding technical protection measures (TPM) protected by that legislation, sometimes at the expense of the limitations and exceptions foreseen by the same Directive. In that context, it was the technique, not the law, which could decide what could or could not be used, leaving the balance of copyright law in the hands of technical systems blind to the fundamental rights of users and their privileged positions. This generated strong criticism from academia. History seems to have repeated itself. See on the issue, Christophe Geiger, 'The Answer to the Machine should not be the Machine, Safeguarding the Private Copy Exception in the Digital Environment' (2008) 30 EIPR 121; Christophe Geiger, 'Right to Copy v. Three-Step Test: The Future of the Private Copy Exception in the Digital Environment' (2005) 1 Computer L. Rev Int'l 7. 3  See Joan Barata and Marco Bassini, 'Freedom of Expression in the Internet: Main Trends of the Case Law of the European Court of Human Rights' in Oreste Pollicino and Graziella Romeo (eds), The Internet and Constitutional Law: The Protection of Fundamental Rights and Constitutional Adjudication in Europe (Routledge 2016); Akdeniz v Turkey App. no. 20877/10 (ECtHR, 11 March 2014); C-314/12 UPC Telekabel Wien [2014] ECLI:EU:C:2014:192; C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85, paras 36–8; Stefan Kulk and Frederik Zuiderveen Borgesius, 'Filtering for Copyright Enforcement in Europe after the Sabam Cases' (2012) 34 EIPR 791, 791–4; Evangelia Psychogiopoulou, 'Copyright enforcement, human rights protection and the responsibilities of internet service providers after Scarlet' (2012) 34 EIPR 552, 555. 4  See Charter of Fundamental Rights of the European Union, 2012 OJ (C 326) 391. 5  See Chapter 29.

1.  Users' Rights
Users' fundamental rights can be highly affected by regulation of intermediaries and obligations imposed on them that change the way in which users enjoy online services. Obviously, users' freedom of information and freedom of expression—or freedom to receive and impart information—are primarily affected, but so is the right to privacy.6
6  See Charter (n. 4) Arts 8, 11. The concept of user rights has received the most explicit development in the recently rendered CJEU judgments in Funke Medien and Spiegel Online (C-469/17 Funke Medien NRW GmbH v Bundesrepublik Deutschland [2019] ECLI:EU:C:2019:623, para. 70, and C-516/17 Spiegel Online GmbH v Volker Beck [2019] ECLI:EU:C:2019:625, para. 54). For further discussion of copyright user rights in the case law of the CJEU, including in the context of the judgments in Funke Medien and Spiegel Online, see Chapter 29, as well as Christophe Geiger and Elena Izyumenko, 'The Constitutionalization of Intellectual Property Law in the EU and the Funke Medien, Pelham and Spiegel Online Decisions of



1.1  Freedom of Information and Internet Access
The right to freedom of expression, as protected by Article 10 ECHR and Article 11 EU Charter, benefits from a privileged position in the European constitutional order and is sometimes even called the 'European First Amendment'.7 This right, which guarantees not only the right to impart information but also the right of the public to receive it,8 has in the course of recent years evolved towards inclusion of a genuine 'right to internet access', which has been increasingly construed as a fundamental right.9 The human rights nature of access to the internet has been sustained by noting that 'the Internet, by facilitating the spreading of knowledge, increases freedom of expression and the value of citizenship'.10 Former US President Barack Obama declared on a visit to Shanghai that 'freedom of access to information is a universal right'.11 The Council of Europe has specifically noted, also in response to three-strike legislation proposals, that access to the internet is a 'fundamental right'.12 In a decision of 10 June 2009 on the first HADOPI law, the French Constitutional Council, for instance, stated explicitly that '[i]n the current state of the means of communication and given the generalized development of public online communication services and the importance of the latter for the participation in

the CJEU: Progress, But Still Some Way to Go!’, Centre for International Intellectual Property Studies (CEIPI) Research Paper No. 2019-09, available at or ; IIC (forthcoming 2020). 7  Dirk Voorhoof, ‘Het Europese “First Amendment”: de vrijheid van expressie en informatie en de rechtspraak van het EHRM betreffende art 10 EVRM (1994–1995)’ (1995) Mediaforum (Amsterdam) 11. See also Dirk Voorhoof, ‘Freedom of expression and the right to information: Implications for copyright’ in Geiger (n. 1) 331; Christophe Geiger, Droit d’auteur et droit du public à l’information, Approche de droit comparé (Paris Litec 2004) 166. 8  See e.g. Times Newspapers Ltd (Nos 1 and 2) v United Kingdom App. nos 3002/03 and 23676/03 (ECtHR, 10 March 2009) para. 27; Ahmet Yildirim v Turkey App. no. 3111/10 (ECtHR, 18 December 2012) para. 50; Guseva v Bulgaria App. no. 6987/07 (ECtHR, 17 February 2015) para. 36; Cengiz and Others v Turkey App. nos 48226/10 and 14027/11 (ECtHR, 1 December 2015) para. 56. On the public’s right to receive information, see also Christophe Geiger, ‘Author’s Right, Copyright and the Public’s Right to Information: A Complex Relationship’ in Fiona Macmillan (ed.), New Directions in Copyright Law (Edward Elgar 2007), Vol. 5, 24. 9  See on this question, Nicola Lucchi, ‘Access to Network Services and Protection of Constitutional Rights: Recognizing the Essential Role of Internet Access for the Freedom of Expression’ (2011) 19(3) Cardozo J. of Int’l and Comp L. 645; Molly Land, ‘Toward an International Law of the Internet’ (2013) 54(2) Harv. Int’l L.J. 393. 10  See Marshall Conley and Christina Patterson, ‘Communication, Human Rights and Cyberspace’ in Steven Hick, Edward Halpin, and Eric Hoskins (eds), Human Rights and the Internet (Macmillan 2000) 211. 11  Transcript of President Barak Obama’s 16 November 2009 Town Hall meeting with Chinese students in Shanghai, as released by the White House (CBS News, 16 November 2016) . 12  See Monika Ermert, ‘Council of Europe: Access to Internet is a Fundamental Right’ (IPWatch, 8 June 2009); ‘Internet Access is a Fundamental Right’ (BBC News, 8 March 2010) .


democracy and the expression of ideas and opinions, [the right to freedom of expression] implies freedom to access such services.'13 Since then, the fundamental right nature of access to the internet has been stressed by several international and national bodies, such as the UN Human Rights Council,14 the ITU-UNESCO Commission,15 the Costa Rican Constitutional Court declaring internet access essential to the exercise of fundamental rights,16 or the Finnish government officially making broadband a legal right.17 In Europe, a survey carried out by the European Court of Human Rights (ECtHR) of the legislation of twenty Member States of the Council of Europe revealed that:
the right to Internet access is protected in theory by the constitutional guarantees applicable to freedom of expression and freedom to receive ideas and information. The right to Internet access is considered to be inherent in the right to access information and communication protected by national Constitutions and encompasses the right for each individual to participate in the information society and the obligation for States to guarantee access to the Internet for their citizens. It can therefore be inferred from all the general guarantees protecting freedom of expression that a right to unhindered Internet access should also be recognised.18

Accordingly, any measure that is bound to have an influence on the accessibility of the internet engages the responsibility of the state under Article 10 ECHR.19 Within the framework of that Article—and corresponding Article 11 EU Charter—the court deciding on website-blocking cases will have to look at the (1) manner of the site usage and (2) the effects of blocking on legitimate communication, but also (3) at the public interest in disabled information and (4) whether the alternatives to accessing such information 13  Conseil constitutionnel [Constitutional Council], Decision no. 2009-580 DC of 10 June 2009, s. 12 (Fra.). On this decision, see Christophe Geiger, ‘Honourable Attempt but (ultimately) Disproportionate Offensive against Peer-to-peer on the Internet (HADOPI), A Critical Analysis of the Recent AntiFilesharing Legislation in France’ (2011) 42(4) IIC 457, 467 ff. This ‘right to access internet’ was based on freedom of expression of Art 11. of the French Declaration on Human Rights, the court specifying that therefore any infringement of that right must be strictly limited. 14 UN General Assembly, Human Rights Council Resolution, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ A/HRC/RES/20/8, twentieth session, 16 July 2012. See also UN General Assembly Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’ (Resolution) (20 June 2014) A/HRC/26/L.24. Prior to that, the right to internet access was only implicitly considered by the UN as a human right, insofar as it is inherent in the ‘freedom . . . to seek, receive and impart information and ideas through any media and regardless of frontiers’. Universal Declaration of Human Rights (10 December 1948) 217 A (III), Art. 19. 15  See Kaitlin Mara, ‘ITU-UNESCO Broadband Commission Aims at Global Internet Access’ (IPWatch, 10 May 2010). 16  ‘Acceso a Internet es un derecho fundamental’ (Nacion, 8 September 2010) . 17  ‘Finland Makes Broadband a Legal Right’ (BBC News, 1 July 2010) . 18  Ahmet Yildirim v Turkey App. no. 3111/10 (ECtHR, 18 December 2012) para. 31. 19  ibid. para. 53.


were available. Under certain circumstances, it will further be pertinent to consider (5) the Article 10 implications not only for internet users but also for the intermediaries concerned. The European Commission stressed a similar point by noting:
Any limitations to access to the Open Internet can impact on end-users' freedom of expression and the way in which they can receive and impart information. Although operators need to manage Internet traffic in order to ensure the proper functioning of the networks (including managing network congestion, security threats, etc.), there are many instances when unjustified blocking and throttling occurs.20

The ECtHR provided some guidance on the potential implications of the general public interest in information affected by intermediaries’ actions, by recalling its established case law, in accordance with which: while Article 10 § 2 of the Convention does not allow much leeway for restrictions of freedom of expression in political matters, for example, States have a broad margin of appreciation in the regulation of speech in commercial matters . . ., bearing in mind that the breadth of that margin has to be qualified where it is not strictly speaking the ‘commercial’ expression of an individual that is at stake but his participation in a debate on a matter of general interest.21

The Court referred to its earlier findings in Ashby Donald—a case that concerned the conviction in France of three fashion photographers for copyright infringement by taking photographs of designers’ clothes and publishing them online without the consent of the rightholders. There, likewise, it was noted that, ‘although one cannot deny that the public is interested in fashion in general and haute couture fashion shows in particular, it could not be said that the applicants took part in a debate of general interest when restricting themselves to making photographs of fashion shows accessible to the public’.22 In the light of that case law, the Court was not convinced that the case of Akdeniz raised an important question of general interest. The ECtHR thus seemed to imply that in other cases with greater public interest in information the blocking might not be justified in terms of Article 10 ECHR. It is 20 European Commission, ‘Staff Working Document, Impact Assessment Accompanying the Document Proposal for a Regulation of the European Parliament and of the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent, and amending Directives 2002/20/EC, 2002/21/EC and 2002/22/EC and Regulations (EC) No. 1211/2009 and (EU) No. 531/2012’ SWD (2013) 331 final, s. 3.4. 21  Akdeniz (n. 3) para. 28. On this case (and other freedom of expression-related IP cases), see Christophe Geiger and Elena Izyumenko, ‘Intellectual Property before the European Court of Human Rights’ in Christophe Geiger, Craig Nard, and Xavier Seuba (eds), Intellectual Property and the Judiciary (Edward Elgar 2018) 9. 22  Ashby Donald and Others v France App. no. 36769/08 (ECtHR, 10 January 2013) para. 39, translation from French published in (2014) 45(3) IIC 354. For an extensive comment, see Christophe Geiger and Elena Izyumenko, ‘Copyright on the Human Rights Trial: Redefining the Boundaries of Exclusivity through Freedom of Expression’ (2014) 45(3) IIC 316, trying to extract guidelines of the of the case law of the ECtHR for an Art. 10 ECHR scrutiny of copyright enforcement measures.


notable that the general public interest in information is not limited to the political context. In effect, such interest had previously been recognized by the Court in information on, for example, sporting matters23 or performing artists,24 as well as when the material at issue related to the moral position advocated by an influential religious community.25 The standards of scrutiny would also typically be more stringent for artistic, cultural, or otherwise 'civil' expression.26 Similar issues have also been raised in connection with the right to be forgotten and, more broadly, data protection rights on the internet.27 In 2014, the Court of Justice of the European Union (CJEU) ruled that an internet search engine operator is responsible for the processing that it carries out of personal data which appear on web pages published by third parties.28 Thus, under certain circumstances, search engines can be asked to remove links to web pages containing personal data. The recognition by the EU of a so-called 'right to be forgotten' has ignited critical reactions pointing to the fact that the right to be forgotten would endanger freedom of expression and access to information.29 According to a Communication from the Organization for Security and Co-operation in Europe (OSCE) on open journalism, 'the legitimate need to protect privacy and other human rights should not undermine the principal role of freedom of the media and the right to seek, receive and impart information of public interest as a basic condition for democracy and political participation'.30 As Floridi and Taddeo noted:
[s]triking the correct balance between the two is not a simple matter. Things change, for example, depending on which side of the Atlantic one is. According to the European approach, privacy trumps freedom of speech; whereas the American view is that freedom of speech is preeminent with respect to privacy. Hence, defining the responsibilities of OSPs with respect to the right to be forgotten turns out
23  See Nikowitz and Verlagsgruppe News GmbH v Austria App. no. 5266/03 (ECtHR, 22 February 2007) para. 25 (society's attitude towards a sports star); Colaço Mestre and SIC—Sociedade Independente de Comunicação, SA v Portugal App. nos 11182/03 and 11319/03 (ECtHR, 26 April 2007) s. 28 (an interview by the president of the sports club); Ressiot and Others v France App. nos 15054/07 and 15066/07 (ECtHR, 28 June 2012) para. 116 (doping practices in professional sport). 24  See Sapan v Turkey App. no. 44102/04 (ECtHR, 8 June 2010) para. 34 (a book about a Turkish pop star). 25  See Verlagsgruppe News GmbH and Bobi v Austria App. no. 59631/09 (ECtHR, 4 December 2012) para. 76. 26  See David Harris and others, Harris, O'Boyle, and Warbrick: Law of the European Convention on Human Rights (OUP 2009) 457. 27  See Chapter 25. 28  See C-131/12 Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González [2014] ECLI:EU:C:2014:317. 29  See e.g. Miquel Peguera, 'The Shaky Ground of the Right to Be Delisted' (2015) 15 Vand. J. of Ent. & Tech. L. 507; Jeffrey Rosen, 'The Right to Be Forgotten' (2015) 64 Stan. L. Rev. Online 88; Stefan Kulk and Frederik Zuiderveen Borgesius, 'Google Spain v. González: Did the Court Forget About Freedom of Expression?' (2014) 3 European J. of Risk Regulation 389; Jonathan Zittrain, 'Don't Force Google to Forget' (New York Times, 14 May 2014) .
30  OSCE Representative on Freedom of the Media, Dunja Mijatović, 3rd Communiqué on Open Journalism, 29 January 2016, 2 .


to be quite problematic, as it involves the balancing of different fundamental rights as well as considering the debate on the national versus international governance of the Internet.31
In this regard, the CJEU stated that a person's right to privacy generally overrides 'as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject's name'.32 However, the CJEU also noted that this general rule should not apply if there is a preponderant public interest in having access to the information 'for particular reasons, such as the role played by the data subject in public life'.33 On 26 November 2014, the European data protection authorities (DPAs) assembled in the Article 29 Working Party (WP29) adopted guidelines on the implementation of the Google Spain judgment.34 They include common criteria to be used by national DPAs when addressing complaints. According to WP29, a balance must be struck between the nature and sensitivity of the data and the interest of the public in having access to that information.35 However, if the data subject plays a role in public life, the public interest will be significantly greater.36 Therefore, the guidelines concluded, the impact of delisting on individual rights to freedom of expression and access to information will be very limited. When DPAs assess the relevant circumstances, delisting will not be appropriate if the public interest overrides the rights of the data subject.37 The guidelines also contain thirteen key criteria which the national DPAs will apply to handle complaints following refusals of delisting by search engines. These criteria have to be read in the light of 'the interest of the general public in having access to [the] information'.38 Following in the footsteps of the WP29 clarifications and criteria, European national courts and privacy authorities further carried out the necessary balancing between personal privacy interests and the public interest in access to information by noting that the Costeja ruling 'does not intend to protect individuals against all negative communications on the Internet, but only against "being pursued" for a long time by "irrelevant", "excessive" or "unnecessarily defamatory" expressions'.39 Instead, for example, conviction for a serious crime will in general provide information about an individual that will remain relevant,40 users cannot obtain the delisting of search results of recent news with a relevant public
31  Mariarosaria Taddeo and Luciano Floridi, 'The Debate on the Moral Responsibility of Online Service Providers' (2015) Sci. Eng. Ethics 1, 18–19. 32  C-131/12 (n. 28) para. 81. 33  ibid. 34  See Art. 29 Data Protection Working Party, 'Guidelines on the Implementation of the CJEU Judgment on Google Spain v. Costeja' (2014) 14/EN WP 225 (hereafter WP29 Guidelines) . 35  ibid. 2. 36  ibid. 37  ibid. 38  ibid. 11, 13–19. 39  Rechtbank [District Court] Amsterdam [2014] ECLI:NL:RBAMS:2014:6118 (Neth.), as translated in Joran Spauwen and Jens van den Brink, 'Dutch Google Spain ruling: More Freedom of Speech, Less Right To Be Forgotten For Criminals' (Inforrm's Blog, 27 September 2014) .
40  ibid.
In this regard, the CJEU stated that a person’s right to privacy generally overrides ‘as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name’.32 However, the CJEU also noted that this general rule should not apply if there is a preponderant public interest in having access to the information ‘for particular reasons, such as the role played by the data subject in public life’.33 On 26 November 2014, the European data protection authorities (DPAs) assembled in the Article 29 Working Party (WP29) adopted guidelines on the implementation of the Google Spain judgment.34 They include common criteria to be used by national DPAs when addressing complaints. According to WP29, a balance must be made between the nature and sensitivity of the data and the interest of the public in having access to that information.35 However, if the data subject plays a role in public life, the public interest will be significantly greater.36 Therefore, the guidelines concluded, the impact of delisting on individual rights to freedom of expression and access to information will be very limited. When DPAs assess the relevant circumstances, delisting will not be appropriate, if the public interest overrides the rights of the data subject.37 The guidelines also contain thirteen key criteria which the national DPAs will apply to handle complaints following refusals of delisting by search engines. These criteria have to be read in the light of the ‘the interest of the general public in having access to [the] information’.38 Following in the footsteps of WP29 clarifications and criteria, European national courts and privacy authorities further operated the necessary balancing between personal priv­acy interest and public interest in access to information by noting that the Costeja ruling ‘does not intend to protect individuals against all negative communications on the Internet, but only against “being pursued” for a long time by “irrelevant”, “excessive” or “unnecessarily defamatory” expressions’.39 Instead, for example, conviction for a serious crime will in general provide information about an individual that will remain relevant,40 users cannot obtain the delisting of search results of recent news with a relevant public 31  Mariarosaria Taddeo and Luciano Floridi, ‘The Debate on the Moral Responsibility of Online Service Providers’ (2015) Sci. Eng. Ethics 1, 18–19. 32  C-131/12 (n. 28) para. 81. 33 ibid 34 See Art. 29 Data Protection Working Party, ‘Guidelines on the Implementation of the CJEU Judgment on Google Spain v. Costeja’ (2014) 14/EN WP 225 (hereafter WP29 Guidelines) . 35  ibid. 2. 36 ibid. 37 ibid. 38  ibid. 11, 13–19. 39  Rechtbank [District Court] Amsterdam [2014] ECLI:NL:RBAMS:2014:6118 (Neth.), as translated in Joran Spauwen and Jens van den Brink, ‘Dutch Google Spain ruling: More Freedom of Speech, Less Right To Be Forgotten For Criminals (Inforrm’s Blog, 27 September 2014) . 40 ibid

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

Intermediary Liability and Fundamental Rights   145 interest,41 or, again, the personal data included in the Commercial Registry ‘cannot be cancelled, anonymized, or blocked, or made available only to a limited number of interested parties’, given the prevalent interest in promoting market transparency and protecting third parties.42

1.2  Freedom of Expression Freedom of expression as freedom to impart information must also be considered among the counterposing rights that must be balanced within the intermediary liability dilemma. In Google v Louis Vuitton, the Advocate General of the CJEU pointed to the fact that general rules of civil liability (based on negligence)—rather than strict liability IP law rules—best suit the governance of the activities of internet intermediaries. His argument—crafted in the context of trade mark infringement online—stressed that: [l]iability rules are more appropriate, since they do not fundamentally change the decentralised nature of the internet by giving trade mark proprietors general— and virtually absolute—control over the use in cyberspace of keywords which correspond to their trade marks. Instead of being able to prevent, through trade mark protection, any possible use—including, as has been observed, many lawful and even desirable uses—trade mark proprietors would have to point to specific instances giving rise to Google’s liability in the context of illegal damage to their trademarks.43

According to this argument, a negligence-based system would better serve the delicate balance between protection of IP rights, access to information, and freedom of expression that the online intermediary liability conundrum entails. As Van Eecke mentioned, ‘the notice-and-take-down procedure is one of the essential mechanisms through which the eCommerce Directive achieves a balance between the interests of rightholders, online intermediaries and users’.44 In this regard, recital 46 of the e-Commerce Directive explicitly requires the hosting provider to respect the principle of freedom of expression when deciding a takedown request.45 Imperfect as it is, a notice-and-takedown mechanism embeds a fundamental safeguard for freedom of information as long as it forces intermediaries to actually consider the infringing nature of the materials before deciding whether to take them down. 41  See Garante per la Protezione dei Dati Personali [Data Protection Authority], Decision no. 618 (18 December 2014) (It.) . 42  See C-398/15 Camera di Commercio, Industria, Artigianato e Agricoltura di Lecce v Salvatore Manni [2016] ECLI:EU:C:2016:652, Opinion of AG Bot. 43 C-236–238/08 Google France, SARL & Google Inc. v Louis Vuitton Malletier SA, Viaticum SA, Luteciel SARL v Centre Bational de Recherche en Relations Humaines (CNRRH) SARL, Pierre‑Alexis Thonet, Bruno Raboin, Tiger, a franchisee of Unicis ECLI:EU:C:2009:569, AG’s Opinion, para. 123. 44  Patrick Van Eecke, ‘Online Service Providers and Liability: A Plea for a Balanced Approach’ (2011) Common Market L. Rev. 1455, 1479–80. 45  See Directive 2001/29/EC of the European Parliament and of the Council on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167, recital 46.

OUP CORRECTED PROOF – FINAL, 03/26/2020, SPi

146   Christophe Geiger, Giancarlo Frosio, and Elena Izyumenko In contrast, an ex ante mechanism based on filtering and automatic infringementassessment systems that online intermediaries might deploy to monitor potentially infringing users’ activities might disproportionally favour property rights against other fundamental rights, as consistent jurisprudence of the CJEU has highlighted.46 At the present level of technological sophistication, false positives might cause relevant chilling effects and negatively impact users’ fundamental right to freedom of expression. Automated systems cannot replace human judgment that should flag a certain use as fair or falling within the scope of an exception or limitation, in particular since the boundaries of the privileged uses are often blurred and difficult case-by-case analysis is needed to determine whether a particular use is infringing.47 Also, complexities regarding the public domain status of certain works might escape the discerning capacity of contentrecognition technologies. In the own word of the CJEU, these measures: could potentially undermine freedom of information, since that system might not distinguish adequately between unlawful content and lawful content, with the result that its introduction could lead to the blocking of lawful communications. Indeed, it is not contested that the reply to the question whether a transmission is lawful also depends on the application of statutory exceptions to copyright which vary from one Member State to another. In addition, in some Member States certain works fall within the public domain or may be posted online free of charge by the authors concerned.48

Following approval by the European Parliament of Article 17 of the Directive on copyright and related rights in the Digital Single Market,49 the implications of an increase in the general monitoring of content uploaded onto Online Content Sharing Service Providers’ services and the increased use of automated filtering and enforcement systems in this regard, raises important questions relating to the preservation of users’ fundamental rights to expression and information.50 46  See C-360/10 (n. 3) para. 52. 47  Furthermore, in the EU the manner in which the Member States implement even the very same exceptions can vary considerably from country to country. E.g. a quotation exception has very a different scope across Europe. See Bernt Hugenholtz and Martin Senftleben, ‘Fair Use in Europe: in Search of Flexibilities’, Institute for Information Law Research Paper no. 2012/33 (2012) 15–17. See also in this sense, Christophe Geiger and Franciska Schönherr, Frequently Asked Questions (FAQ) of Consumers in relation to Copyright, Summary Report (EUIPO 2017) (noting that ‘[c]opyright law throughout the EU does not give unanimous answers to Consumers’ 15 Frequently Asked Questions. . . . The result is the following: even if a few common basic principles can certainly be identified, the exceptions to these principles as well as their implementation vary significantly’). The study lists exceptions and limitations to copyright as one of the areas of major divergence in national copyright law. See ibid. 6–8. 48  See C-360/10 (n. 3) para. 50. 49  See Directive 2019/790/EU of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L130/92. 50  Many scholars have raised this issue. See recently in this sense, e.g. Martin Senftleben, ‘Bermuda Triangle—Licensing, Filtering and Privileging User-Generated Content Under the New Directive on Copyright in the Digital Single Market’ (April 2019) SSRN Research Paper no. 3367219 .


Some courts have also recognized that website blocking engages the freedom of expression of the internet service providers (ISPs).51 As noted in the Advocate General’s Opinion in Telekabel, ‘[a]lthough it is true that, in substance, the expressions of opinion and information in question are those of the ISP’s customers, the ISP can nevertheless rely on that fundamental right by virtue of its function of publishing its customers’ expressions of opinion and providing them with information.’52 In support of this contention, the Advocate General referred to an established body of ECtHR case law, in accordance with which ‘Article 10 guarantees freedom of expression to “everyone”, [with] [n]o distinction [being] made in it according to the nature of the aim pursued or the role played by natural or legal persons in the exercise of that freedom.’53 According to the ECtHR, although ‘publishers do not necessarily associate themselves with the opinions expressed in the works they publish, . . . by providing authors with a medium they participate in the exercise of the freedom of expression . . .’54 Freedom of expression claims by ISPs are not, as such, unusual in the practice of the ECtHR. In the Delfi AS case, the Grand Chamber considered this right in connection with the liability of Estonia’s largest internet portal for hosting unlawful content generated by its users.55 On the facts, however, no violation of Article 10 was established. That said, in a recent judgment on a similar issue the Court found that holding an ISP liable for user comments did indeed violate that ISP’s freedom of expression.56 At the national level, a series of UK cases has also highlighted that the freedom of expression of ISPs and website operators is affected by website blocking.57 Nevertheless, in these particular cases, the property rights of copyright owners clearly outweighed the ISPs’ competing rights.58

1.3  Right to Privacy and Data Protection

Intermediaries’ regulation and enforcement actions also struggle to find the proper balance between privacy and freedom of expression. On the one hand, courts have stressed the importance of imposing liability on intermediaries by noting that ‘violations of privacy of individuals and companies, summary trials and public lynching of innocents are routinely reported, all practiced in the worldwide web with substantially increased damage

51  See on this issue extensively, Christophe Geiger and Elena Izyumenko, ‘The Role of Human Rights in Copyright Enforcement Online: Elaborating a Legal Framework for Website Blocking’ (2016) 32(1) American U. Int’l L. Rev. 43; and from the same authors Chapter 29 in this Handbook. 52 C-314/12 UPC Telekabel Wien [2013] ECLI:EU:C:2013:781, Opinion of AG Villalón, para. 82. 53  Öztürk v Turkey App. no. 22479/93 (ECtHR, 28 September 1999) para. 49. 54 ibid. 55 See Delfi AS v Estonia App. no. 64569/09 (ECtHR, 16 June 2015). 56 See Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App. no. 22947/13 (ECtHR, 2 February 2016). 57 See Twentieth Century Fox Film Corp. & Others v British Telecommunications Plc [2011] EWHC 1981 (Ch), [200] (UK); EMI Records Ltd & Others v British Sky Broadcasting Ltd & Others [2013] EWHC 379 (Ch), [94] (UK). 58 See Twentieth Century Fox (n. 57) [200]; EMI Records (n. 57) [107].


because of the widespread nature of this medium of expression’.59 Again, upholding the protection of the right to privacy against freedom of expression, courts reinforce this ‘internet threat’ discourse by noting that, on the internet, ‘[d]efamatory and other types of clearly unlawful speech, including hate speech and speech inciting violence, can be disseminated like never before, worldwide, in a matter of seconds, and sometimes remain persistently available online.’60 On the other hand, courts have highlighted that the unqualified deployment of filtering and monitoring obligations may also impinge on users’ right to protection of personal data. The CJEU has outlined the disproportionate impact of these measures on users’ privacy by concluding that: requiring installation of the contested filtering system would involve the identification, systematic analysis and processing of information connected with the profiles created on the social network by its users. The information connected with those profiles is protected personal data because, in principle, it allows those users to be identified.61

In Scarlet and Netlog, the CJEU established for the first time a ‘fundamental right to the protection of personal data, which is not fairly balanced with copyright holders’ rights when a mechanism requiring the systematic processing of personal data is imposed in the name of the protection of intellectual property’.62 According to the ECtHR, which tends to be critical of systems that intercept communications, secrecy of communication or the right to respect for private life63 could also be impinged on by filtering technologies, especially when those systems monitor the content of communications.64

2.  OSPs, Freedom of Business, and Innovation

Apart from freedom of expression, another fundamental right relevant to the balancing exercise when imposing obligations on OSPs is their freedom to conduct a business as per Article 16 EU Charter.65 In contrast to the long constitutional tradition of freedom of expression, the freedom to conduct a business is not known to any other international 59  Delfi (n. 55) para. 110. 60  Google Brazil v Dafra, Special Appeal no. 1306157/SP (Superior Court of Justice, 24 March 2014) . 61  C-360/10 (n. 3) para. 49. 62  Gloria González Fuster, ‘Balancing Intellectual Property Against Data Protection: a New Right’s Wavering Weight’ (2012) 14 IDP 34, 37. 63  See EU Charter (n. 6) Art. 7. 64  See Kulk and Borgesius (n. 3) 793–4. 65  See EU Charter (n. 4) Art. 16. On this relatively recent fundamental right and its relation to IP, see Gustavo Ghidini and Andrea Stazi, ‘Freedom to conduct a business, competition and intellectual property’ in Geiger (n. 1) 410.


human rights instrument but the EU Charter, which also makes the freedom to conduct a business a relatively young right.66 As a consequence, ‘to date the case law has not . . . provided a full and useful definition of this freedom’.67 Both the textual context and the judicial history of Article 16, therefore, point to its heavily qualified nature and allow the state a wide power to interfere with it.68 This particularly ‘weak’ nature of the right69 has arguably allowed the CJEU to also rule in favour of rightholders in cases where the Advocate General had failed to find a ‘fair balance’.70 Obviously, new obligations imposed on online intermediaries increase barriers to innovation by making it more expensive for platforms to enter and compete in the market. Online intermediaries might be called on to develop and deploy costly technology to cope with legal requirements and obligations. The CJEU has emphasized the economic impact on OSPs of new obligations, such as filtering and monitoring obligations. The CJEU found that monitoring all the electronic communications made through a network, without any limitation in time, directed to all future infringements of existing and yet-to-be-created works ‘would result in a serious infringement of the freedom of the hosting service provider to conduct its business’.71 Hosting providers’ freedom of business would be disproportionately affected since an obligation to adopt filtering technologies would require the ISP to install a complicated, costly, and permanent system at its own expense.72 In addition, according to the CJEU, this obligation would be contrary to Article 3 of the Enforcement Directive, providing that ‘procedures and remedies necessary to ensure the enforcement of the intellectual property rights . . . shall not be unnecessarily complicated or costly [and] shall be applied in such a manner as to avoid the creation of barriers to legitimate trade’.73

3.  IP Owners and Property Rights

Another important fundamental right implicated by intermediary liability and regulation is the right to property of IP holders. This is also the right against which the rights 66  Note, however, that some national constitutions provided for the protection of the freedom to conduct a business long before these supranational developments. See e.g. Italian Constitution of 1947, Art. 41; Spanish Constitution of 1978, Art. 38; Croatian Constitution of 1990, Art. 49; and Slovenian Constitution of 1991, Art. 74. 67 C-426/11 Mark Alemo-Herron and Others v Parkwood Leisure Ltd [2013] ECLI:EU:C:2013:82, Opinion of AG Villalón, para. 49. 68  See C-283/11 Sky Österreich GmbH v Österreichischer Rundfunk [2013] ECLI:EU:C:2013:28, para. 46; T-587/13 Miriam Schwerdt v Office for Harmonisation in the Internal Market [2015] ECLI:EU:T:2015:37, para. 55. 69  See Xavier Groussot, Gunnar Thor Pétursson, and Justin Pierce, ‘Weak Right, Strong Court—The Freedom to Conduct Business and the EU Charter of Fundamental Rights’ in Sionaidh Douglas-Scott and Nicholas Hatzis (eds), Handbook on EU Law and Human Rights (Edward Elgar 2017) 326–44; EMI Records (n. 57) [107]. 70  See C-314/12 (n. 3) para. 49. 71  C-360/10 (n. 3) para. 46. 72 ibid. 73  See Directive 2004/48/EC of the European Parliament and of the Council on the enforcement of intellectual property rights [2004] OJ L157, Art. 3.


mentioned in the previous sections need to be balanced. In Europe, this balancing is predetermined by the fundamental-right status of IP as per Article 17(2) EU Charter and (somewhat implicitly) Article 1 Protocol No. 1 ECHR, as well as by considerations of how effective an enforcement of this fundamental right must be. The latter, in turn, would depend on what the European legislator regards as a valid objective of copyright protection. In 2013, the ECtHR noted that ‘[a]s to the weight afforded to the interest of protecting the copyright-holders, the Court would stress that intellectual property benefits from the protection afforded by Article 1 of Protocol No. 1 to the Convention.’74 Consequently, as stated in that case and further reiterated since, when balancing two competing interests both protected by the Convention on the level of human rights, states are afforded ‘a particularly wide’ margin of appreciation.75 The European position that IP by definition enjoys human rights protection was predetermined by a number of prior developments on both judicial and legislative levels. First of all, the EU Charter, unlike the ECHR, expressly enshrined the protection of the right to IP in its catalogue of rights under Article 17(2).76 The ECHR, on its part, although not containing a specific IP clause, has been interpreted by the Strasbourg Court as extending its Article 1 Protocol No. 1 protection to the entire range of traditionally recognized IP rights.77 The most indicative in this regard is the oft-quoted Anheuser-Busch case, which expanded Article 1 First Protocol protection to ‘mere’ applications for the registration of trade marks.78 Certain scholars have stressed, however, that this European approach to an autonomous fundamental-right nature of IP might lead to a more rigorous protection of IP interests vis-à-vis other fundamental rights involved in balancing, including the users’ information rights and the ISPs’ business freedom.79 74  Neij and Sunde Kolmisoppi v Sweden App. no. 40397/12 (ECtHR, 19 February 2013) para. 10. See also Geiger and Izyumenko (n. 22) 316. 75  Neij and Sunde Kolmisoppi (n. 74) para. 11. See also Akdeniz (n. 3) para. 28; Ashby Donald (n. 22) para. 40. 76 See Christophe Geiger, ‘Intellectual “Property” after the Treaty of Lisbon: Towards a Different Approach in the New European Legal Order?’ (2010) 32(6) EIPR 255; ‘Intellectual Property Shall be Protected!? Article 17(2) of the Charter of Fundamental Rights of the European Union: A Mysterious Provision with an Unclear Scope’ (2009) 31(3) EIPR 113; Jonathan Griffiths and Luke McDonagh, ‘Fundamental Rights and European IP Law: the Case of Art 17(2) of the EU Charter’ in Christophe Geiger (ed.), Constructing European Intellectual Property: Achievements and New Perspectives (Edward Elgar 2012) 75 ff. 77  See in the field of copyright: Akdeniz (n. 3); Neij and Sunde Kolmisoppi (n. 74); Ashby Donald (n. 22); Balan v Moldova App. no. 19247/03 (ECtHR, 29 January 2008); Melnychuk v Ukraine App. no. 28743/03 (ECtHR, 5 July 2005); Dima v Romania App. no. 58472/00 (ECtHR, 26 May 2005). In the field of trade marks: Paeffgen Gmbh v Germany App. nos 25379/04, 21688/05, 21722/05 and 21770/05 (ECtHR, 18 September 2007); Anheuser-Busch Inc. v Portugal App. no. 73049/01 (ECtHR, 11 January 2007). In the field of patent law: Lenzing AG v United Kingdom App. no. 38817/97 (ECommHR, 9 September 1998); Smith Kline & French Lab. Ltd v Netherlands App. no. 
12633/87 (ECommHR, 4 October 1990). 78  See Anheuser-Busch (n. 77). See also Klaus-Dieter Beiter, ‘The Right to Property and the Protection of Interests in Intellectual Property—A Human Rights Perspective on the European Court of Human Rights’ Decision in Anheuser-Busch Inc. v Portugal’ (2008) 39(6) IIC 714, 717. 79  See e.g. for a critique of the approach towards treating IP rights and other fundamental rights ‘as if they were of equal rank’, Alexander Peukert, ‘The Fundamental Right to (Intellectual) Property and the


However, the reach of the (human) right to property protection for IP should not be overestimated. As noted by the CJEU in Telekabel, ‘there is nothing whatsoever in the wording of Article 17(2) EU Charter to suggest that the right to intellectual property is inviolable and must for that reason be absolutely protected’.80 Article 17(2) EU Charter could then be considered to be nothing more than a simple clarification of Article 17(1), which clearly recalls that ‘[t]he use of property may be regulated by law in so far as is necessary for the general interest’.81 Likewise, the first paragraph of Article 1 First Protocol ECHR provides for the possibility of restrictions of the right ‘in the public interest’, while the second paragraph of the same provision allows the state ‘to enforce such laws as it deems necessary to control the use of property in accordance with the general interest . . .’82 This limited nature of the right to property was clearly envisaged by the drafters of the ECHR and the EU Charter. As the preparatory works of the First Protocol to the ECHR demonstrate, a newly introduced property paradigm was viewed as being of a ‘relative’ nature as opposed to the absolute right to own property.83 A similar logic, clearly excluding an ‘absolutist’ conception of IP, accompanied the preparatory documents of the EU Charter, insofar as the drafters took care to specify that ‘the guarantees laid down in paragraph 1 [of Art. 17] shall apply as appropriate to intellectual property’ and that ‘the meaning and scope of Article 17 are the same as those of the right guaranteed under Article 1 of the First Protocol to the ECHR’.84 This approach also matches the traditional construction of Article 27(2) of the Universal Declaration of Human Rights (UDHR) and Article 15(1)(c) of the International Covenant on Economic, Social and Cultural Rights (ICESCR), both of which secure for authors the benefits from the ‘protection of the moral and material interests resulting from [their] scientific, literary or artistic production’. As stressed in the report by the UN Special Rapporteur in the field of cultural rights, although it is tempting to infer from the wording of these provisions that Article 15(1)(c) recognizes a human right to Discretion of the Legislature’ in Geiger (ed.) (n. 1) 132; Robert Burrell and Dev Gangjee, ‘Trade Marks and Freedom of Expression: A Call for Caution’ (2010) 41(5) IIC 544; Christina Angelopoulos, ‘Freedom of Expression and Copyright: The Double Balancing Act’ (2008) 3 IPQ 328. See, however, Christophe Geiger, ‘Fundamental Rights, a Safeguard for the Coherence of Intellectual Property Law?’ (2004) 35 IIC 268; Christophe Geiger, ‘Copyright’s Fundamental Rights Dimension at EU Level’ in Estelle Derclaye (ed.), Research Handbook on the Future of EU Copyright (Edward Elgar 2009) 27; Christophe Geiger, ‘The Social Function of Intellectual Property Rights, or How Ethics Can Influence the Shape and Use of IP Law’ in Graeme Dinwoodie (ed.), Methods and Perspectives in Intellectual Property (Edward Elgar 2013) 153, underlining that the property aspects protected at a constitutional level refer to a property of a special kind, property with a strong social function, which should therefore not be equated with physical property since it is far more limited in its scope. 
Under this understanding, IP has to be considered as having a more limited nature than the right to physical property, which should be taken into account when balancing it with other fundamental rights. In fact, as the legislature has a vast margin of appreciation when defining the contours of rights on intangibles, the constitutional protection remains rather ‘weak’ in comparison with other rights. 80  C-314/12 (n. 3) para. 61. 81 ibid. 82 ibid. 83  See Council of Europe, ‘Preparatory Work on Article 1 of the First Protocol to the European Convention on Human Rights’ (13 August 1976) CDH (76) 36, 12 and 16. 84 Note from the Praesidium, ‘Draft Charter of Fundamental Rights of the European Union, Explanations on Article 17 of the EU Charter’ (2000), 19–20.


protection of intellectual property, ‘this equation is false and misleading’.85 According to the UN Committee on Economic, Social and Cultural Rights (CESCR)—the body in charge of the implementation of the ICESCR—an evident distinction exists in principle between standard IP rights and the human rights protection given to creators in accordance with Article 15(1)(c).86 Thus, Article 15(1)(c) guarantees some sort of protection; however, for the UN Committee it cannot be interpreted as guaranteeing IP rights or as elevating IP to the human rights regime.87

4. Conclusions

The implications of online intermediaries’ liability and regulation raise important questions relating to the preservation of the fundamental rights of users, OSPs, and IP rightholders. The tension between competing freedom of expression, property, privacy, and freedom of business rights leads to the unavoidable constriction of some rights in favour of others, depending on policy choices. This ‘conundrum’ brought about by the multiple competing rights, interests, and players involved in intermediary liability regulation has yet to find a sustainable policy solution. While legislators have so far tended to privilege some of the relevant interests against others, increasingly resorting to mandatory or voluntary promotion of filtering, monitoring, and automated enforcement technologies, courts have often found middle grounds based on a case-by-case analysis, thus securing a better balance between multiple fundamental rights. This flexibility needs to be preserved and could lead policymakers to reflect on adopting rules that allow the judiciary to adapt to the specific situation of a case, while also reflecting on other legal mechanisms based on remuneration rights which would ‘legalize’ several infringing situations.88 85  UN General Assembly, ‘Report of the Special Rapporteur in the field of cultural rights, Farida Shaheed, Copyright Policy and the Right to Science and Culture, Human Rights Council’ (2014) A/HRC/28/57, s. 26. On this report, see Christophe Geiger (ed.), Intellectual Property and Access to Science and Culture: Convergence or Conflict? (CEIPI/ICTSD 2016) (including a Foreword by the Special Rapporteur). 86  See CESCR, ‘General Comment no. 17 on Article 15(1)(c) of the ICESCR’ (12 January 2006) E/C12/GC/17. 87  ibid. For a detailed analysis of these documents, see Christophe Geiger, ‘Implementing Intellectual Property Provisions in Human Rights Instruments: Towards a New Social Contract for the Protection of Intangibles’ in Geiger (ed.) (n. 1) 661 ff. 88  See Christophe Geiger, ‘Challenges for the Enforcement of Copyright in the Online World: Time for a New Approach’ in Paul Torremans (ed.), Research Handbook on the Cross-Border Enforcement of Intellectual Property (Edward Elgar 2014) 704; Christophe Geiger, ‘Promoting Creativity through Copyright Limitations, Reflections on the Concept of Exclusivity in Copyright Law’ (2010) 12(3) Vand. J. of Ent. & Tech. L. 515. More recently, see Joao Pedro Quintais, Copyright in the Age of Online Access, Alternative Compensation Systems in EU Law (Kluwer Law Int’l 2017). On the possibility of achieving flexibility under European copyright law via the introduction of the open-ended flexible exception to copyright infringement based on the balancing factors of the European freedom of expression law, see Christophe Geiger and Elena Izyumenko, ‘Towards a European “Fair Use” Grounded in Freedom of Expression’ (2019) 35(1) Am. U. Int’l L. Rev. 1.


Part III

SAFE HARBOURS, LIABILITY, AND FRAGMENTATION


Chapter 8

An Overview of the United States’ Section 230 Internet Immunity

Eric Goldman

47 USC § 230 says that websites and other online services are not liable for third party content. This legal policy is simple and elegant, but is hardly intuitive, and it has had extraordinary consequences for the internet and our society. This chapter provides an overview of section 230 and how it compares to some foreign counterparts.

1.  Pre-Section 230 Law

Typically, liability for third party content attaches when the disseminator has the discretion to publish it or not. If a disseminator cannot exercise editorial control—such as telephone service providers functioning in their legal status as common carriers—the disseminator is not legally responsible for third party content it had to disseminate.1 In contrast, if the disseminator can exercise editorial control over what to disseminate—such as traditional publishers—the disseminator accepts legal liability for the decisions it makes.2 Thus, traditional publishers are usually liable for all content they, in their editorial discretion, choose to publish. With respect to third party content, many online intermediaries do not closely resemble either common carriers or traditional publishers. Unlike offline publishers, many online intermediaries do not have humans pre-screen online content before disseminating it. 1  See e.g. Restatement (Second) of Torts s. 612 cmt. g (1977) (US). 2  See Rodney Smolla, 1 Law of Defamation (Thomson Reuters 2018) s. 4:87.

© Eric Goldman 2020.


The volume of content often makes pre-screening too expensive or slow;3 and pre-screening is effectively impossible for real-time content like live-streamed video. At the same time, most online intermediaries are not legally defined as common carriers,4 and regulators expect them to routinely refuse services that support abusive, objectionable, or criminal outcomes.5 Given the ill-fitting paradigms, how should courts apply traditional editorial control/liability principles? Before Congress enacted section 230, two cases addressed this issue. The first was Cubby v CompuServe, a 1991 case from the Southern District of New York. At the time, CompuServe provided its subscribers with dial-up access to its network plus content resources and databases that they could browse. CompuServe charged subscribers per minute they were online, and CompuServe could share a portion of those subscriber charges as licence fees to content publishers. CompuServe had a licensing arrangement to carry a third party newsletter called Rumorville. Rumorville periodically uploaded its content electronically to CompuServe’s servers. CompuServe employees did not pre-screen these uploads. Perhaps not surprisingly (given its name), Rumorville was accused of defamation. The plaintiff sued CompuServe and others. The court dismissed CompuServe,6 holding that for defamation purposes, CompuServe acted as the ‘distributor’ of Rumorville, not as the publisher, and therefore could be liable only if it knew or had reason to know of the defamatory content. Due to Rumorville’s automated uploads, CompuServe lacked such scienter. Though CompuServe won the ruling, the court’s legal standard was not necessarily good news for other defendants. The ruling suggested that an online intermediary would be liable for third party content whenever it knew, or should have known, of the defamation. This created a notice-and-takedown standard for defamation, including ‘heckler’s vetoes’ where aggrieved individuals can easily get content removed by making unfounded allegations of defamation.7 Cubby also left unresolved what should happen with claims other than defamation or when humans pre-screen third party content before publication. Nevertheless, the Cubby ruling provided some guidance to the nascent industry. Many online intermediaries chose to take a light-handed approach to moderating or removing third party content to position themselves, like CompuServe, as the relatively passive recipient of third party content.

3  e.g. in 2015, YouTube got 400 hours of new uploaded video every minute. See Statista . 4  See e.g. Religious Technology Center v Netcom On-Line Communication Services, 907 F.Supp. 1361 (ND Cal. 1995) (US). 5  See Sen Ron Wyden, ‘The Consequences of Indecency’ (TechCrunch, 23 August 2018) . 6 See Cubby Inc. v CompuServe Inc., 776 F.Supp. 135 (SDNY 1991) (US). 7 See Zeran v America Online Inc., 129 F.3d 327 (4th Cir. 1997), cert. denied, 524 US 937 (1998) (US) (discussing the risks of hecklers’ vetoes for defamation).


The second pre-section 230 ruling shook up that practice. The 1995 New York state court case of Stratton Oakmont v Prodigy involved the online message boards of Prodigy, a CompuServe competitor. A subscriber allegedly posted defamatory remarks about Stratton Oakmont, an investment bank subsequently depicted unfavourably in the movie Wolf of Wall Street.8 The investment bank sued Prodigy for $100 million. The court held that Prodigy could be liable for the subscriber’s defamatory message board posts because Prodigy had marketed itself as a family-friendly service and had taken steps to remove objectionable content from its message board. The court said that collectively these efforts had turned Prodigy into a ‘publisher’ of the subscriber-supplied message board content and exposed it to liability.9

1.1  The Moderator’s Dilemma

In 1995, the United States experienced a techno-panic about children’s access to online pornography.10 Congress wanted and expected online intermediaries to aggressively screen out pornography and other objectionable content.11 It would be counterproductive for the law to deter online intermediaries from taking these socially valuable steps. Arguably, the Stratton Oakmont ruling did exactly that. According to Stratton Oakmont, endeavouring to remove objectionable content, but doing the job imperfectly, left the online intermediary legally exposed for everything it missed—with potentially business-ending legal exposure for each and every missed item. This dynamic creates the ‘Moderator’s Dilemma’.12 Stratton Oakmont drives online intermediaries to choose between:

• moderate or remove user content to promote a safe or family-friendly environment, but at the cost of accepting legal responsibility for anything it misses, which the service will manage by aggressively screening/removing third party content to mitigate that liability—or by eliminating third party content entirely; or

• do as little as possible to manage user content, in which case it can argue that it does not ‘know or have reason to know’ about any legally problematic content and possibly escape legal exposure for that content.

The latter tactic is exactly what Congress did not want online intermediaries to do, because it would lead to more children being exposed to pornography online. However, 8 See The Wolf of Wall Street (2013). 9 See Stratton Oakmont Inc. v Prodigy Services Co., 1995 WL 323710 (NY Sup. Ct. 1995) (US). 10  See e.g. Philip Elmer-Dewitt, ‘Cyberporn’ (Time, 3 July 1995) . 11  cf. 47 USC § 230(b)(4) (US) (‘It is the policy of the United States . . . to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material’). 12  See Eric Goldman, ‘Sex Trafficking Exceptions to Section 230’, Santa Clara U. Legal Studies Research Paper no. 13/2017 (2017) .


following the Cubby and Stratton Oakmont cases, many online intermediaries were likely to prioritize risk mitigation and reduce their efforts to screen out objectionable content. Section 230 eliminates the Moderator’s Dilemma. As the legislative history explains, section 230 was intended to overrule Stratton Oakmont ‘and any other similar decisions which have treated providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material’.13

2.  Section 230’s Protections for Defendants

Section 230(c)(1) says: ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’14 This translates into three prima facie elements of the immunity.

(1) The immunity applies to a ‘provider or user of an interactive computer service’. The statutory definition of ‘interactive computer service’15 contemplates the operations of CompuServe, Prodigy, America Online, and other mid-1990s services that packaged dial-up internet/server access with access to proprietary content. However, courts have interpreted ‘providers’ expansively to include virtually any service available through the internet.16 Furthermore, ‘users’ of interactive computer services should, in theory, cover everyone who is a customer of a provider.17 This means, in practice, that everyone online should satisfy this first element.

(2) The immunity applies to any claims that treat the defendant as a ‘publisher or speaker’. This standard makes sense in the context of defamation, where ‘publication’ is an express element of the plaintiff ’s prima facie case.18 However, courts usually read this element more broadly so that it applies regardless of whether the claim’s prima facie elements contain the term ‘publisher’ or ‘speaker’. Typically, courts find this element satisfied whenever the plaintiff seeks to hold the defendant responsible for third party content, regardless of which specific causes of action the plaintiff actually asserts, unless the claim fits into one of the statutory exceptions.19

(3) The immunity applies when the plaintiff ’s claim is based on information ‘provided by another information content provider’. This element seeks to divide the universe of content into first party and third party content, and defendants are 13  HR 104-458 (1996). 14  47 USC § 230(c)(1). 15  47 USC § 230(f)(2). 16  See Ian Ballon, 4 E-Commerce & Internet Law (2017 update) 37.05[2] (‘almost any networked computer would qualify as an interactive computer service, as would an access software provider’). 17 See Barrett v Rosenthal, 40 Cal. 4th 33 (2006) (US). 18  See Smolla (n. 2) s. 1:34. 19  See Ballon (n. 16) 37.05[1][C].


immunized only for the latter. Sometimes, the division between first party and third party content is obvious. Employee-authored content normally qualifies as first party content;20 user-submitted content normally qualifies as third party content. However, plaintiffs have deployed a vast array of legal theories—some more successful than others—to muddy the distinction and hold defendants liable for what is otherwise fairly obviously third party content. A recap of these three prima facie elements: typically, section 230(c)(1) applies to anyone connected to the internet for any claim (other than statutorily excluded claims) that is based on third party content. In other words, websites and online services are not liable for third party content. Section 230(c)(1) says nothing about the defendant’s scienter. As a result, a defendant can have scienter about problematic content—for example, they can ‘know’ that they are publishing tortious or illegal third party content—and still avoid liability for that content. For that reason, the section 230 jurisprudence has not been plagued by the same tendentious philosophical inquiries into what and when an online service ‘knows’ about user content that we have seen in the copyright context with the Digital Millennium Copyright Act’s (DMCA) online safe harbour.21 Furthermore, section 230(c)(1) applies even if the defendant does human pre-screening or post-publication reviews of third party content. This means section 230(c)(1) is equally available to a service that exercises the same level of editorial control as a traditional publisher—or zero editorial control. Thus, section 230 remarkably collapses the historical legal distinctions between traditional publishers and common carriers. Section 230(c)(1) says that online intermediaries can function like traditional publishers but receive the favourable legal treatment of common carriers. As you can imagine, this counterintuitive result frequently baffles plaintiffs and occasionally baffles judges. Section 230(c)(1) provides substantial immunity for defendants, but it is also highly valued by defendants for its procedural benefits.22 Frequently, section 230(c)(1) defendants win on a motion to dismiss. In those situations, the case ends quickly and at relatively low cost, without expensive discovery, summary judgment motions, or trials. Furthermore, because of its broad scope, plaintiffs’ attempts to work around section 230(c)(1) often do not work, so creative plaintiffs cannot artfully plead past a motion to dismiss. Contrast the quick and easy section 230(c)(1) dismissals with the often-arduous defence wins in section 512 cases.23 The procedural benefits make a huge substantive difference to defendants. 20  But see Delfino v Agilent Technologies Inc., 145 Cal. App. 4th 790 (Cal. App. Ct. 2006) (US) (employer qualified for s. 230 immunity for employee activity). 21  See Eric Goldman, ‘How the DMCA’s Online Copyright Safe Harbor Failed’ (2015) 18 Korea U. L. Rev. 103. 22  See Eric Goldman, ‘Why Section 230 Is Better Than the First Amendment’ (2019) Notre Dame L. Rev. Online (forthcoming) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3351323>. 23  See e.g. UMG Recordings Inc. v Shelter Capital Partners LLC, 667 F.3d 1022 (9th Cir. 2011) (US) (Veoh went bankrupt defending its eligibility for the safe harbour).


Online intermediaries also can benefit from the section 230(c)(2) safe harbour. That provision protects the intermediary’s good faith decisions to block or remove content the intermediary considers ‘to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable’,24 as well as the provision of blocking/filtering instructions (e.g. providing anti-spam or anti-spyware software).25 Compared to section 230(c)(1), online intermediaries take advantage of section 230(c)(2) relatively rarely for several reasons.

• In the mid-2000s, courts emphatically held that section 230(c)(2) protected anti-spyware vendors.26 This functionally eliminated lawsuits by software manufacturers who were aggrieved that their software had been classified as spyware.

• Otherwise, section 230(c)(2) applies to claims by a service’s content uploaders who are unhappy that the service removed their content. In many cases, any particular content item will not be significant enough for an aggrieved uploader to sue, but there are exceptions. For example, if a record label puts a lot of money behind marketing the URL of a YouTube video, and then YouTube removes the video, the marketing investments may be wasted.27 However, these plaintiffs—the content uploaders—almost always agree to the service’s user agreement, which will contain many protective provisions such as termination for convenience, a disclaimer of any obligation to publish (or a reservation of the service’s unrestricted editorial discretion), limitations of liability, and more. Section 230(c)(2) could be a helpful complement to the contract protections,28 but it is rarely essential.

• Unlike section 230(c)(1), section 230(c)(2) considers whether the defendant acted in ‘good faith’. Plaintiffs routinely allege bad faith removal or blocking by the defendant, even if the plaintiff does not actually have any evidence to back up the allegation. These unsupported allegations increase the odds that the case will survive a motion to dismiss and advance to discovery and summary judgment—an expensive and time-consuming proposition. If defendants can find a way to defeat the lawsuit on a motion to dismiss, section 230(c)(2)’s delayed resolution is not that helpful.

2.1  Section 230’s Statutory Exclusions

Section 230 has four statutory exclusions where it is categorically unavailable.

24  47 USC § 230(c)(2)(A). 25  ibid. § 230(c)(2)(B). 26  See e.g. Zango Inc. v Kaspersky Lab Inc., 568 F.3d 1169 (9th Cir. 2009) (US). 27  See e.g. the YouTube ‘remove-and-relocate’ cases, including Darnaa v Google LLC, 2018 WL 6131133 (9th Cir. 2018) (US); Kinney v YouTube LLC, 2018 WL 5961898 (Cal. App. Ct. 2018) (US); Song Fi v Google Inc., 2018 WL 2215836 (ND Cal. 2018) (US); Bartholomew v YouTube LLC, 17 Cal. App. 5th 1217 (Cal. App. Ct. 2017) (US); Lancaster v Alphabet Inc., 2016 WL 3648608 (ND Cal. 2016); Lewis v YouTube LLC, 244 Cal. App. 4th 118 (Cal. App. Ct. 2015) (US). 28 See Eric Goldman, ‘Online User Account Termination and 47 USC s230(c)(2)’ (2012) 2 U. C. Irvine L. Rev. 659.


(1) ECPA/state law equivalents. Section 230 does not apply to plaintiffs’ claims based on the Electronic Communications Privacy Act or state law equivalents.29 This exception is functionally a null set because it is almost impossible for a defendant to simultaneously violate the ECPA and qualify for section 230.

(2) Intellectual property (IP) claims. Section 230 does not apply to ‘intellectual property’ claims.30 ‘Intellectual property’ is not defined in the statute, and this exception is more complicated than it might appear. In Perfect 10 v CCBill,31 the Ninth Circuit held that the exclusion only applied to federal IP claims, so section 230 pre-empted all state IP claims. State IP claims can include state trade mark claims, state copyright claims, trade secret claims, publicity right claims, and possibly others. Thus, in the Ninth Circuit, the CCBill precedent has been used numerous times in lawsuits predicated on third party state IP claims.32 Courts outside the Ninth Circuit do not agree with the CCBill ruling,33 so state IP claims are still viable in those jurisdictions despite section 230. All courts agree that federal IP claims, such as federal copyright and federal trade mark claims, based on third party content are clearly excluded from section 230. However, Congress expressly said the Defend Trade Secrets Act (DTSA), the federal trade secret law, is not an IP law, so any DTSA claims based on third party content are immunized by section 230.34

(3) Federal criminal prosecutions. Prosecutions of federal crimes are not immunized by section 230.35 The US Department of Justice (DOJ) rarely pursues websites for third party content, but the exceptions are noteworthy. For example, in 2007, Google, Yahoo, and Microsoft paid a total of $31.5 million to settle complaints about running illegal gambling ads.36 In 2011, Google paid $500 million to settle complaints about running ads for illegal pharmacies.37 In the DOJ’s prosecution of Ross Ulbricht, the purported operator of the Silk Road marketplace for illegal items, Ulbricht received a life sentence.38 While federal criminal enforcement is excluded from section 230, civil claims based on federal criminal law are pre-empted to the extent they are predicated on

29  See 47 USC § 230(e)(4). 30  ibid. § 230(e)(2). 31 See Perfect 10 Inc. v CCBill LLC, 488 F.3d 1102 (9th Cir. 2007) (US). 32  See Ballon (n. 16) 37.05[5]. 33  See e.g. Atlantic Recording Corp. v Project Playlist Inc., 603 F.Supp.2d 690 (SDNY 2009) (US). 34  See Eric Goldman, ‘The Defend Trade Secrets Act Isn’t an Intellectual Property Law’ (2017) 33 Santa Clara High Tech. L.J. 541. 35  See 47 USC § 230(e)(1). 36  See Corey Boles, ‘U.S. Fines Google, Microsoft, Yahoo’ (Wall Street Journal, 20 December 2007) . 37  See ‘Google Forfeits $500 Million Generated by Online Ads & Prescription Drug Sales by Canadian Online Pharmacies’ (US DOJ, 24 August 2011) . 38 See US v Ulbricht, 858 F.3d 71 (2d Cir. 2017) (US).


third party content.39 Similarly, except for FOSTA’s limited exceptions, state criminal prosecutions are pre-empted to the extent they are predicated on third party content.40

(4) FOSTA. In 2018, Congress created several new exceptions to section 230 in the Fight Online Sex Trafficking Act (FOSTA). Those exceptions are beyond the scope of this chapter.41

3.  Section 230’s Implications

Section 230 is an exceptionalist statute: it treats the internet differently than other media. Indeed, section 230 is the quintessential exceptionalist statute from the mid-1990s exceptionalism peak.42 To demonstrate section 230’s exceptionalist nature, consider this example:

Scenario A: an author submits a ‘letter to the editor’ to her local newspaper. The letter contains defamatory statements. The newspaper publishes the letter in its ‘dead trees’ print edition. The newspaper will be liable for defamation alongside the letter’s author.43

Scenario B: the same author submits the same letter to her local newspaper, but the local newspaper only publishes the letter in its online edition, not in its print edition. The author will still be liable, but section 230 protects the newspaper from liability.

It is counterintuitive that scenarios A and B produce different outcomes. The letter contains the exact same content, by the exact same author, disseminated by the exact same publisher, yet the liability results diverge. As the maxim goes, the medium matters.44 Section 230 can be characterized as a type of legal privilege. Section 230 ‘privileges’ online publishers over offline publishers by giving online publishers more favourable legal protection. This leads to a financial privilege by reducing online publishers’ prepublication costs and post-publication financial exposure. These legal and financial 39 See Doe v Bates, 2006 WL 3813758 (ED Tex. 2006); Jane Doe No. 1 v Backpage.com LLC, 817 F.3d 12 (1st Cir. 2016) (US). 40  See Eric Goldman, ‘The Implications of Excluding State Crimes from 47 U.S.C. s 230’s Immunity’, Santa Clara U. Legal Studies Research Paper no. 23/13 (2013) . 41  For an explanation of FOSTA and its effect on s. 230, see Eric Goldman, ‘The Complicated Story of FOSTA and Section 230’ (2019) 17 First Amendment L. Rev. 279. 42  See Eric Goldman, ‘The Third Wave of Internet Exceptionalism’ in Berin Szoka and Adam Marcus (eds), The Next Digital Decade: Essays on the Future of the Internet (TechFreedom 2011) 165. 43  See Smolla (n. 2) s. 3:87; New York Times Co. v Sullivan, 376 US 254 (1964). 44  cf. Marshall McLuhan, Understanding Media: The Extensions of Man (MIT Press 1964).


privileges made sense in the 1990s context, where most major newspapers were de facto monopolies in their local communities45 and many online publishers of third party content were small hobbyists.46 But in the modern era, where internet giants like Google and Facebook are among the most highly valued companies in the world47 and most newspapers are struggling to survive,48 this allocation of privileges might seem even more counterintuitive. However, focusing on section 230’s benefit to the internet giants fundamentally misunderstands section 230’s immunity. Section 230 allows companies to dial up or down their level of editorial control to reflect the needs of their community. Other services can adopt different editorial practices than Google or Facebook, so there can be a wider array of options for authors and readers. More importantly, new marketplace entrants do not need to make the same upfront investments into content moderation that Google and Facebook make. If new entrants had to develop industrial-grade content moderation procedures from day one, we would see far fewer new entrants. That makes section 230’s benefit to Google and Facebook almost beside the point. Section 230’s real payoff comes from keeping the door open for new entrants that compete with—and hope ultimately to dethrone—Google and Facebook. Regulators often think curtailing section 230 will be a good way to hurt the internet giants like Google and Facebook, but they are completely mistaken. Google and Facebook can afford to accommodate new regulatory obligations in ways that new entrants cannot. Thus, reducing section 230’s immunity enhances Google and Facebook’s marketplace dominance by making it more difficult for new entrants to emerge and compete. Sometimes people say that ‘the internet’ no longer needs section 230’s 1990s-style exceptionalist privilege,49 but this makes sense only by equating ‘the internet’ with ‘Google and Facebook’. Unless we are at the end of the innovation curve for new internet services based on third party content—and we are likely still closer to the beginning than to the end—section 230’s immunity fosters and encourages the yet-to-be-born services that we will all benefit from. Another common complaint is that section 230 legally authorizes online intermediaries to do nothing about problematic content, which is what every profit-maximizing


service will choose to do. Accordingly, the critics believe section 230 facilitates the creation of ‘cyber cesspools’;50 that is, services that solicit, and profit from, anti-social content. It is true that section 230 protects all services from liability for third party content, even services that do not moderate content and therefore become hotbeds of anti-social content. Despite this, many online intermediaries spend a lot of money on content moderation. Why? As a practical matter, cyber cesspools tend to quickly fail in the market, despite section 230’s protection. In the late 2000s, for example, gossip-focused services like JuicyCampus and People’s Dirt generated substantial angst—until they failed.51 More recently, Yik Yak’s anonymous local service garnered substantial criticism. It, too, is gone.52 It turns out that ‘cyber cesspools’ develop terrible reputations and become poor business investments. People will always keep trying to build them, but section 230 does not ensure their survival. In contrast, any reputable service will invest in content moderation efforts as part of building trust with their users. These services will not just do the minimum. More conceptually, section 230 critics often assume that reducing the immunity will reduce the total incidence of anti-social content, but this is almost certainly false. All legitimate online intermediaries voluntarily undertake socially valuable efforts to screen out anti-social content—just as the designers of section 230 had hoped they would. If amendments to section 230 reinstate the Moderator’s Dilemma, some of those services will turn off or reduce their voluntary policing efforts, and we could counterintuitively see a net society-wide increase in anti-social behaviour due to the services that opt for the do-nothing approach.53 Stated differently, even if some rogue services abuse section 230’s immunity, section 230’s immunity might still produce the lowest net level of anti-social content of any policy option. There is another alternative: we could restore the offline publishers’ liability rule to all online services and hold online services liable for all third party content they publish. This would likely force online services to pre-screen all third party content. This would dramatically reduce the level of anti-social content, but it also would eliminate many of the internet’s best aspects.

50  See e.g. Saul Levmore and Martha Nussbaum (eds), The Offensive Internet: Speech, Privacy, and Reputation (HUP 2012). 51 See Jeffrey Young, ‘JuicyCampus Shuts Down, Blaming the Economy, Not the Controversy’ (Chronicle of Higher Education, 5 February 2009) ; Donna St. George and Daniel de Vise, ‘Slanderous Web Site Catering to Teens Is Shut Down’ (Washington Post, 10 June 2009) . 52  See Valeriya Safronova, ‘The Rise and Fall of Yik Yak, the Anonymous Messaging App’ (New York Times, 27 May 2017) . 53  See Eric Goldman, ‘Balancing Section 230 and Anti-Sex Trafficking Initiatives’, Santa Clara U. Legal Studies Research Paper no. 17/2017 (2017) .


For example, imagine how Twitter might work with human-pre-screened tweets. If Twitter could even afford to stay in business due to the labour costs, all tweets would have a delay of minutes, hours, or longer. There might still be some social role for a time-delayed Twitter, but more likely it would lose most of its value. There are many other services we enjoy daily that would not make any sense financially or functionally if humans had to pre-screen each item of content, with a substantial time delay to publication. Section 230 creates winners and losers, but so does every other policy alternative. Section 230 strikes a balance between promoting innovation and motivating industry players to voluntarily undertake socially valuable screening efforts. Its results are stunning: aided in large part by section 230, the internet has grown into an integral part of our society, companies have created trillions of dollars of economic wealth and millions of new jobs, and we have benefited from new services and content that never would have emerged with a different liability regime. Most people benefit from section 230-protected internet services on an hour-by-hour or even minute-by-minute basis.

4.  Comparative Analysis

Section 230 is a globally unique policy solution. No other country has yet adopted anything so categorically favourable to intermediaries. However, NAFTA 2.0 (the US–Mexico–Canada Agreement (USMCA)), agreed to in 2018, requires Canada and Mexico to adopt section 230-like protections.54 This creates the possibility that North America will move in a direction opposite to the rest of the world. This section contrasts section 230 with some examples of other jurisdictions’ rules.

4.1  EU’s ‘Right to Be Forgotten’

In the 2014 Costeja case, the Court of Justice of the European Union (CJEU) concluded that ‘the activity of a search engine consisting in finding information published or placed on the internet by third parties, indexing it automatically, storing it temporarily and, finally, making it available to internet users according to a particular order of preference’ constituted ‘the processing of personal data’.55 This means search engines are required to comply with the 1995 European Data Protection Directive (95/46/EC). Accordingly, search engines must ‘remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third 54  See Eric Goldman, ‘Good News! USMCA (a/k/a NAFTA 2.0) Embraces Section 230-Like Internet Immunity’ (Technology & Marketing Law Blog, 3 October 2018) . 55  See C-131/12 Google Spain SL and Google Inc. v Agencia Española de Protección de Datos and Mario Costeja González [2014] ECLI:EU:C:2014:317, para. 21. See also Chapter 25.


parties and containing information relating to that person’,56 even if that information remains on the third party website, and even if the third party lawfully published the information. The right of people to erase search results about them ‘override[s] . . . not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name’.57 However, the erasure right can be trumped by a ‘preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question’, such as the person’s role ‘in public life’.58 The court’s inscrutable language did not specify the exact grounds on which a person may request erasure of search results, putting the burden on search engines, and the data protection authorities who will hear complaints about the search engine’s decisions, to figure it out. Google59 decided it will remove links to ‘irrelevant, outdated, or otherwise objectionable’ information from search results for the individual’s name; but Google will not de-index a search result if ‘there’s a public interest in the information’, such as information ‘about financial scams, professional malpractice, criminal convictions, or public conduct of government officials’.60 A de-indexing request only affects results when the person’s name is searched. Searches for other search keywords could still display those search results. Meanwhile, even if Google de-indexes the search result, the original source material will remain online. Thus, if an online newspaper article is subject to a de-indexing request, Google might remove the search result from the name search, but the original article is still available at the online newspaper. Google’s de-indexing standards—‘irrelevant, outdated, or otherwise objectionable’ content that is not ‘in the public interest’—are obviously problematic. For example, if 99 per cent of searchers would find an information item irrelevant but 1 per cent of searchers would find it exceptionally relevant, what should Google do? Similarly, how can Google determine when information is outdated, ‘otherwise objectionable’, or not in the public interest? Frequently, it is not obvious what information has historical significance, and information’s historical significance might change over time. Something that appears irrelevant at one point in time might emerge as essential information at a different time.61 The CJEU made it clear that it does not want search engines to rely on third party publishers’ judgments of what is relevant or important. In Costeja’s case, he complained 56  ibid. para. 88. 57  ibid. para. 99. 58 ibid. 59  I focus on Google’s responses because it lost the case and Google has a 90+ per cent market share in most European countries. Google also receives the vast bulk of de-indexing requests submitted to all search engines. 60 See European Privacy Requests Search Removals FAQs, Google, . 61  e.g. Justice Kavanaugh’s Summer 1982 calendar (see ) appeared to be historically insignificant for decades—until it became a crucial piece of evidence in the Senate confirmation hearings for his appointment to the US Supreme Court.


Overview of the United States’ Section 230 Internet Immunity   167 about an article referencing him that was published in a traditional newspaper. Normally we would assume a newspaper only publishes items of ‘public interest’ (at least to its audience). Yet, the CJEU said that level of public interest was not sufficient. The CJEU ruling shocked many Americans because US law would not permit a similar result. As an online content publisher, Google is protected by the First Amendment’s free speech and free press clauses.62 Thus, any regulatory mandate that Google include or exclude information in its search index is almost certainly unconstitutional.63 Furthermore, section 230 (both (c)(1) and (c)(2)) statutorily immunize search engines for their indexing decisions, including their refusal to de-index content (even if that content is tortious).64 Collectively, US law makes it clear that search engines, including Google.com, cannot be legally compelled to implement a right to be forgotten (RTBF).65

4.2  EU Electronic Commerce Directive In 2000, the EU enacted the Electronic Commerce Directive (Directive 2000/31/EC), including safe harbours for ‘mere conduits’ (Art. 12), caching (Art. 13), and hosting (Art. 14). The provisions parallel the DMCA online safe harbours, in part because the Directive was enacted shortly after the DMCA and modelled on it. Like the DMCA’s section 512(c), the Directive’s hosting safe harbour is built around a notice-and-takedown scheme. However, this provision differs from the section 512(c) hosting safe harbour in several key ways. First, the Directive requires EU Member States to enact a law consistent with the Directive, but Member States are not required to 62  See US Const. Amend. 1. 63  See e.g. Search King Inc. v Google Technology Inc., 2003 WL 21464568 (WD Okla. 2003) (US); Langdon v Google Inc., 474 F.Supp.2d 622 (D. Del. 2007) (US); Zhang v Baidu.com Inc., 10 F.Supp.3d 433 (SDNY 2014) (US); Google Inc. v Hood, 96 F.Supp.3d 584 (SD Miss. 2015) (US) (vacated on other grounds); e-ventures Worldwide v Google Inc., 2017 WL 2210029 (MD Fla. 2017) (US); Eugene Volokh and Donald Falk, ‘First Amendment Protection For Search Engine Search Results’ (20 April 2012) . See also Martin v Hearst Corp., 777 F.3d 546 (2d Cir. 2015) (US) (publication cannot be obligated to remove article about an expunged arrest). 64 See e.g. Maughan v Google Technology Inc., 143 Cal. App. 4th 1242 (Cal. App. Ct. 2006) (US); Murawski v Pataki, 514 F.Supp.2d 577 (SDNY 2007) (US); Shah v MyLife.Com Inc., 2012 WL 4863696 (D. Or. 2012) (US); Merritt v Lexis Nexis, 2012 WL 6725882 (ED Mich. 2012) (US); Nieman v Versuslaw Inc., 2012 WL 3201931 (CD Ill. 2012) (US); Getachew v Google Inc., 491 Fed. Appx. 923 (10th Cir. 2012) (US); Mmubango v Google Inc., 2013 WL 664231 (ED Pa. 2013) (US); O’Kroley v Fastcase Inc., 831 F.3d 352 (6th Cir. 2016) (US); Fakhrian v Google Inc., 2016 WL 1650705 (Cal. App. Ct. 2016) (US); Despot v Baltimore Life Insurance Co., 2016 WL 4148085 (WD Pa. 2016) (US); Manchanda v Google Inc., 2016 WL 6806250 (SDNY 2016) (US). 65  In 2013, California passed an ‘online eraser’ law that requires user-generated content websites to let minors remove posts they have made. See Cal. Bus. & Profs. Code § 22580-82. This law has not been subject to a constitutional challenge, but it may violate the websites’ First Amendment interests. See Eric Goldman, ‘California's New “Online Eraser” Law Should Be Erased’ (Forbes Tertium Quid Blog, 24 September 2013) . Furthermore, in contrast to Europe’s RTBF, California’s law only applies to content posted by a minor; the statute explicitly says that the minor cannot remove any third party content.


follow this provision identically. As a result, there are national differences in the Directive’s enactment. Secondly, the Directive’s hosting provision governs all claims related to user-generated content, not just copyright. Thus, the Directive’s notice-and-takedown system applies to all claims asserted pursuant to all legal doctrines, including those that would be covered by section 230 in the United States. Thirdly, the Directive contemplates that hosts may be liable without ever receiving a takedown notice. In practice, courts in the EU find hosts liable in the absence of takedown notices more frequently than we see with section 512(c) cases. Fourthly, unlike section 512(c)(3), the Directive does not define what constitutes a proper takedown notice, so web hosts are more likely to take action in response to any notice submitted by anyone. Section 230 provides dramatically more protection for web hosts in the United States than the Electronic Commerce Directive provides for European hosts. Accordingly, US user-generated content websites launching localized services for European users typically screen and remove third party content in Europe using more rigorous standards than they use in the United States—usually, blocking and removing content that would have been left untouched in the United States.

4.3  The UK Defamation Law

The UK Defamation Act 2013, section 5 says that web hosts are not liable for user-generated defamatory content until they have received a takedown notice that meets statutorily specified criteria. However, a web host cannot qualify for the safe harbour if it ‘has acted with malice’ towards the offending post, setting up potential fights about web host scienter before any takedown notice is received. For web hosts to qualify for this safe harbour, they must be able to identify the offending user with enough specificity to enable a lawsuit against the user. In effect, the law eliminates unattributable user content in the UK.66 Any web host that wants to qualify for the safe harbour (and who would not?) must know enough about their users to turn them over to anyone who complains about their posts. The postings themselves can be publicly presented anonymously or pseudonymously, but users will know that their identity will be effectively revealed on request, and that may act as a deterrent to posting. To the extent unattributed public discourse has value, the UK Defamation Act forecloses that possibility. (Note also that the attributability of user-generated content only relates to a safe harbour for defamation, but plaintiffs with other types of claims can demand the poster’s identity because the web host will have everyone’s contact information on file.)

66  See Eric Goldman, ‘UK’s New Defamation Law May Accelerate the Death of Anonymous User-Generated Content Internationally’ (Forbes Tertium Quid, 9 May 2013) .


Overall, the UK Defamation Act is worse for web hosts than section 230 in at least two main ways. First, section 230 does not require content removal upon a takedown notice. Secondly, section 230 applies even if the web host does not know who is doing the posting. Also, for authors, section 230 indirectly preserves their ability to write on an unattributed basis, which may encourage a more robust public discourse.

4.4  Germany’s Network Enforcement Law (NetzDG)

In 2017, Germany enacted the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG). The law requires sites to maintain procedures to remove ‘unlawful’ user content, to remove ‘manifestly unlawful’ user content within twenty-four hours of getting complaints about it (and within seven days for merely ‘unlawful’ user content), and to make various public reports about content removals. The law imposes substantial fines of up to €50 million for non-compliance. The law nominally targeted social media platforms, but it applies to any online service with 2+ million registered users in Germany. ‘Unlawful’ content includes a wide range of content, including ‘public incitement to crime’, ‘violation of intimate privacy by taking photographs’, defamation, ‘treasonous forgery’, forming criminal or terrorist organizations, and ‘dissemination of depictions of violence’. The NetzDG supplements the EU Directive in several important ways, all of which section 230 would forbid. First, it provides a specific turnaround time for takedowns. Secondly, it starts to creep into regulating the mechanics of a service’s content moderation/removal operations. Thirdly, it requires the production of reports about complaints received and the service’s responses, which may spur further regulatory oversight. Finally, the statute specifies penalties that can be punitive in practice, which surely will spur the regulated services to over-remove content.

4.5  Brazil’s Internet Bill of Rights

In 2014, Brazil enacted an ‘Internet Bill of Rights’ (Marco Civil da Internet).67 ‘Providers of internet applications’—presumably including both websites and mobile apps—generally are not civilly liable for user-generated content until a court orders its removal. However, copyright claims apparently are not covered by the law; claims related to ‘honor, reputation or personality rights’ can be fast-tracked to a small claims court; and claims related to non-consensual pornography are subject to a notice-and-takedown regime. By generally deferring intermediary liability until a court order, the Marco Civil more closely resembles section 230 than the European notice-and-takedown rules. Still, it does not go as far as section 230. For example, a court cannot order a US web host to remove user-generated content; any lawsuit directly against the web host is pre-empted

67  See Chapter 10.


by section 230, and a court order in any other lawsuit cannot reach the web host due to Federal Rule of Civil Procedure 65(d)(2).68

4.6  Section 230 and Foreign Judgments

In 2010, Congress enacted the SPEECH Act.69 Among other provisions, the law says that a foreign court judgment for defamation cannot be enforced in the United States if the result would have violated section 230 if litigated in a US court; and unsuccessful plaintiffs must pay the attorneys’ fees of the section 230-immunized entity. Thus, even if foreign laws are less protective of web hosts or other intermediaries than section 230, a foreign court’s defamation judgment will not work in US courts if it violates section 230.70

5.  What’s Next for Section 230?

In 2018, Congress enacted FOSTA,71 its first significant reduction in section 230’s protective immunity in twenty-two years. While FOSTA is bad policy,72 its passage has emboldened many of section 230’s critics. Numerous victims’ advocacy groups now hope that they too can get specific exceptions to section 230 for their situation; and other critics now believe that the immunity can be undermined more broadly. A wide range of anti-social content—including non-consensual pornography, opioid promotions, election interference, and terrorist-supplied content—has been identified as possible new section 230 exceptions. Furthermore, a number of senators have prominently criticized section 230. For example, Senator Ted Cruz (R-TX) repeatedly (but completely falsely) claims that section 230 only applies to ‘neutral public forums’73 and thus should not protect any service that removes conservative-leaning contributions. Separately, Senator Mark Warner (D-VA) wrote a white paper of policy proposals, including some that would eviscerate section 230.74

68  See e.g. Blockowicz v Williams, 630 F.3d 563 (7th Cir. 2010) (US); Giordano v Romeo, 76 So.3d 1100 (Fla. Dist. App. Ct. 2011) (US). See also Hassell v Bird, 5 Cal. 5th 522 (2018) (US) (Yelp cannot be compelled to honour a removal injunction). 69  See 28 USC §§ 4101–5. 70  See e.g. Trout Point Lodge Ltd v Handshoe, 729 F.3d 481 (5th Cir. 2013) (US). 71  See Allow States and Victims to Fight Online Sex Trafficking Act of 2017, HR1865 (115th Cong. 2017–18) (US). 72  See Goldman (n. 41). 73  See Sen. Ted Cruz, ‘Sen. Ted Cruz: Facebook Has Been Censoring or Suppressing Conservative Speech for Years’ (FoxNews.com, 11 April 2018) . 74  See Sen. Mark Warner, ‘Potential Policy Proposals for Regulation of Social Media and Technology Firms’ (July 2018) .


Conspicuously, amendments to section 230 may have bipartisan appeal in an era where partisanship ordinarily leads to gridlock. Also, section 230 is inextricably linked to the widespread regulatory antipathy towards Google and Facebook. Each time those companies make highly visible missteps—which seems to be constantly—prevailing anti-section 230 views grow a little stronger. Thus, section 230 appears to be quite imperilled. Yet, amid this doom and gloom, the USMCA embraced section 230, potentially extending section 230’s reach to make it a North American legal standard and limiting Congress’s ability to undermine it significantly. USMCA was the first time in over two decades that the expansion of internet immunity was included in any trade agreement; and the ‘exporting’ of section 230-like immunity to other countries would seem to represent a major endorsement of its policy outcomes. So which way will section 230 go? Are we near the end of its functional life? Or are we on the cusp of more universally embracing it? The data points provide reason for both optimism and deep, deep pessimism about section 230’s future.


Chapter 9

The Impact of Free Trade Agreements on Internet Intermediary Liability in Latin America

Juan Carlos Lara Gálvez and Alan M. Sears

Free trade agreements (FTAs) signed by the United States in the current century have consistently included provisions attempting to harmonize copyright provisions, including the regulation of liability for internet intermediaries for online copyright infringement.1 The regulation in those agreements follows the model established in domestic federal law in the United States through the Digital Millennium Copyright Act (DMCA), which regulates certain conditions for liability exemptions for internet intermediaries for acts of copyright infringement taking place over the internet.2 1  These include free trade agreements with Chile (2004), Singapore (2004), Bahrain (2006), Morocco (2006), Oman (2006), Peru (2007), Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic (2005), Panama (2012), Colombia (2012), and South Korea (2012). 2  The DMCA also regulates the circumvention of controlled access to copyrighted works. Third party content posted online, outside copyrighted material, continues to be controlled by the Communications Decency Act 230 (47 USC § 230) (US), which was enacted shortly before the DMCA with the explicit goal of promoting internet development. Section 230 of the CDA gives internet intermediaries complete immunity for third party or user-generated content (UGC), immunizing ISPs and internet users from liability for torts committed by others using their website or online forum, even if the provider fails to take action after receiving actual notice of the harmful or offensive content. It also gives ‘good Samaritan’ immunity for restricting access to certain material or giving others the technical means to restrict access to that material, without receiving requests to do so.

© J. Carlos Lara and Alan M. Sears 2020.


The DMCA was a largely innovative piece of legislation when it became effective in 1998, and was enacted as part of the implementation of two international agreements known as the ‘Internet Treaties’, signed under the World Intellectual Property Organization (WIPO) in 1996: the WIPO Copyright Treaty (WCT) and the WIPO Performances and Phonograms Treaty (WPPT), which were meant to adapt a traditional copyright framework so as to be able to address novel issues brought about by the information age.3 The DMCA took the broad provisions of the WIPO Internet Treaties and implemented them with a higher degree of detail. At the time, several features were unique to the DMCA. Among them were its intermediary liability provisions. Every bilateral FTA—as well as a number of multilateral FTAs—the United States has entered into since 2002 has contained similar provisions, thus promoting their inclusion in the national law of other countries. However, these are controversial, and whether they drive the internet economy or create a more restrictive online space is a matter of debate. In this chapter, the impact of such provisions in Latin American countries will be analysed.

1.  Background: Notice and Takedown in the DMCA

The DMCA introduced a system that classifies obligations for different online service providers (OSPs) and sets out different requirements for each class to claim a safe harbour in the case of an alleged copyright violation through their systems. The OSPs covered are the intermediaries that provide services of: transitory digital network communications (§ 512(a)); system caching (temporary storage services, § 512(b)); information storage at the direction of users (hosting, § 512(c)); and information location tools (search engines and directories, § 512(d)). Of these, the most important provisions in practical terms are those aimed at hosting services (§ 512(c)).4 In general, OSPs are required to designate a representative to receive notifications of alleged infringement,5 to adopt a policy to terminate the accounts of repeat infringers,6 and to accommodate standard technical measures that prevent access to or reproduction of copyrighted works.7 The OSPs are under no obligation to monitor copyright infringement, and they must establish a notification/counter-notification system to address challenges to takedown decisions.8

3  See Ruth Okediji, ‘The Regulation of Creativity Under the WIPO Internet Treaties’, Minnesota Legal Studies Research Paper 30/09 (2009) .
4  For a detailed analysis of the characteristics and trends of the notice-and-takedown system, with a close view of the content of the notices, see Jennifer Urban and others, ‘Notice and Takedown in Everyday Practice’, UC Berkeley Public Law Research Paper no. 2755628 (2017) .
5  17 USC § 512(c)(2) (US).    6  ibid. § 512(i)(1)(A).    7  ibid. § 512(i)(1)(B).
8  A counter-notification system would be put in place to contest the removal of content, in order to restore it, if the original uploader challenges the content of the notice.


Safe harbours from liability for third party or user-generated content (UGC) provide a mechanism whereby OSPs can shield themselves from secondary copyright liability. The safe harbour is triggered by removing or blocking the allegedly infringing content from their systems, after receiving a notice that infringing content is made available on their networks. Section 512(c) provides that OSPs are not liable for copyright infringement when they satisfy the following legal conditions: (1) they do not receive a financial benefit directly attributable to the infringing activity; (2) they are not aware of the presence of infringing material or know any facts or circumstances that would make infringing material apparent; and (3) upon receiving notice from copyright owners or their agents, they act expeditiously to remove the purported infringing material. The last requirement has become a key feature of the American framework: secondary liability exemption is triggered by (a rightholder) providing knowledge through ([effective] ‘notice’) and the subsequent removal (‘takedown’) of content. The law is dependent on receiving a mere private communication, even an electronic one. This has led to a large number of notifications,9 and the automation of both the notification and the removal processes.10 As a result, both purposeful and inadvertent notifications have resulted in some criticism of actors abusing automated processes to remove or block non-infringing content.11 The DMCA system is, in effect, the set of rules under which platforms that disseminate UGC, such as YouTube or Facebook, have functioned for a long time in the United States, with content removals based on private notices of infringement happening every second, without a review of the merits of the allegation of infringement. A vast body of scholarship in the United States and abroad has criticized the DMCA provisions. The criticisms can be categorized as either general criticisms of the legal framework or criticism of its practical application,12 including concerns about transparency and accountability regarding the legal merit of the requests, leading to unfair removal of lawful content en masse.13 Many, if not most, of these takedowns are automated, and are in fact generated

9  See Urban and others (n. 4). 10  See Maayan Perel and Niva Elkin-Koren, ‘Accountability in Algorithmic Copyright Enforcement’ (2016) 19 Stan. Tech. L. Rev. 473. 11  A database of exemplary cases of content removal, with a large portion of cases of non-infringing content removed on copyright grounds, is the Lumen Database, which can be found at . 12  See Annemarie Bridy and Daphne Keller, ‘U.S. Copyright Office Section 512 Study: Comments in Response to Notice of Inquiry’, Social Science Research Network Scholarly Paper no. 2757197 (2016) . 13  See Urban and others (n. 4) (finding that 31 per cent of takedown requests were ‘potentially problematic’, in that they were either fundamentally flawed or of questionable validity). This may have even worsened with time. According to Google, for over 16 million notices sent by one submitter, 99.97 per cent of the related URLs were not in Google’s search index, and in total 99.95 per cent of the URLs were not in Google’s index. See Michael Geist, ‘Bogus Claims: Google Submission Points to Massive Fraud in Search Index Takedown Notices’ (Michaelgeist.ca, 22 February 2017) .


by bots,14 which may have contributed to the dramatic increase in takedown notices seen over time.15 Nevertheless, there has also been some discussion of its positive role for the growth of internet companies, as a balanced approach to internet regulation.16 Moreover, there are arguments that it is not a sufficiently strict model to prevent or deter infringement, and that more tools for copyright holders are needed.17 Its adoption, twenty years after its passage, is still a matter of debate.

14  See Urban and others (n. 4). 15  See Daniel Seng, ‘The State of the Discordant Union: An Empirical Analysis of DMCA Takedown Notices’ (2014) 18 Va. J. of L. & Tech. 369, 389–90 and 444 (referencing Table 1: Takedown Notices by Recipient and Year). 16  See David Kravets, ‘10 Years Later, Misunderstood DMCA is the Law That Saved the Web’ (Wired, 27 October 2008) . 17  See Donald Harris, ‘Time to Reboot?: DMCA 2.0’ (2015) 47 Arizona State L.J. 801. A similar debate has occurred in Europe, resulting in the passage of a Copyright Directive in early 2019 that mandates the use of ‘upload filters’. See James Vincent, ‘Europe’s controversial overhaul of online copyright receives final approval’ (The Verge, 26 March 2019) .

2.  Free Trade Agreements and Notice-and-Takedown Provisions

The DMCA model for notice and takedown was exported through FTAs signed between the United States and many other countries, including several in Latin America.18 The inclusion of intellectual property frameworks is a reflection of the overall strategy of the US government to use FTAs to further its goals.19 Intellectual property rights (IPRs) expanded from being negotiated in WIPO to include another forum, the World Trade Organization, a less egalitarian venue for participating countries. These negotiations happened shortly before the WIPO Internet Treaties, and led to the signature of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) in 1994. But after developing countries demanded flexibility to implement their TRIPs obligations, through the ‘Doha Declaration on the TRIPS Agreement and Public Health’ of the 2001 WTO Ministerial Conference, the forums expanded yet again. It has been suggested that this is why developed countries began promoting bilateral or regional treaties to advance their interests outside multilateral fora, in the form of investment protection agreements and FTAs.20

18  See Office of the United States Trade Representative, ‘Free Trade Agreements’ .
19  Jeffrey Schott, ‘Free Trade Agreements and US Trade Policy: A Comparative Analysis of US Initiatives in Latin America, the Asia-Pacific Region, and the Middle East and North Africa’ (2006) 20(2) Int’l Trade J. 95–138.
20  See Beatriz Busaniche, ‘Intellectual Property Rights and Free Trade Agreements: A Never-Ending Story’ in David Bollier and Silke Helfrich (eds), The Wealth of the Commons (Levellers Press 2012).


176   Juan Carlos Lara GÁlvez and Alan M. Sears What could not be obtained in multilateral fora could be achieved through FTAs. To that end, the Bipartisan Trade Promotion Authority Act of 2002 was passed,21 and similar to the earlier Trade Act of 1974,22 it allowed the executive branch of the US government to negotiate and sign FTAs as congressional-executive agreements (which need the majority vote of each house of Congress)23 as opposed to treaties (which need the approval of two-thirds of the US Senate, as mandated by the US Constitution).24 The Act included the requirement that ‘the provisions of any multilateral or bilateral trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law’. Such a provision promotes a model—based on the US version of intellectual property law— that benefits its own industries.25 Under this authority, the Office of the United States Trade Representative (USTR) negotiated several FTAs later signed by the US government, all of them containing intellectual property provisions that mirrored US law to varying degrees of detail, including internet intermediary liability for copyright infringement. It was under this authority that the United States signed such agreements with Chile (2003), CAFTA-DR (Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic, 2004), Peru (2006), Colombia (2006), and Panama (2007). Although the agreements are currently in force, their complete implementation in most cases is pending. Relatively recently, a new delegation of authority was signed into law—known as the Bipartisan Congressional Trade Priorities and Accountability Act of 2015—which extends the power of the executive to promote trade through 2021, with slight additions to the language of the previous Act from 2002. As discussed later, this was meant to be exercised in the passage of the Trans-Pacific Partnership Agreement (TPP), an agreement involving Latin American countries that included similar internet intermediary liability provisions.

3.  Implementation of FTA Intermediary Liability Provisions in Latin America

Provisions demanding the implementation of rules similar to those of the DMCA already exist in the aforementioned FTAs, which suggests that similar rules should become national law in each country, thus enacting the same system in several places of Latin America. However, because these agreements usually cover large portions of each


country’s economy and foreign trade (e.g. tariffs and importation restrictions), and because even the intellectual property provisions cover a large range of subject matter (e.g. trade marks, patents, pharmaceutical patents, substantive copyright issues, and enforcement mechanisms and penalties), intermediary liability provisions are generally not among the aspects of implementation given priority. Notwithstanding the amount of detail that the FTAs include for the regulation of these matters, implementation so far has still allowed for some degree of flexibility. As discussed further later, more recent efforts by the USTR have attempted to prevent such flexibilities in future agreements, as well as push to implement intellectual property provisions without further delay. One of the most frequent tools of pressure by the USTR is its annual Special 301 Report, published since 1989, which reviews intellectual property enforcement in foreign countries, in order to highlight those that in its view do not offer enough protection for intellectual property holders from the United States, and to explain where its concerns lie. In its 2018 edition, four Latin American countries were on its ‘Priority Watch List’, and eight were on the ‘Watch List’.26

3.1  A Comparison of Provisions on Effective Notice in FTAs with Latin American Countries In all agreements signed or negotiated by the United States since 2003, internet inter­ medi­ary liability provisions appear as virtually identical to those in the DMCA: the OSP must expeditiously remove the materials upon notification, there must be a publicly designated representative to receive notifications, the OSP must adopt a policy for repeat infringers and for accommodating standard technical measures, and there is no obligation to monitor copyright infringement, among other obligations. The text in all of the provisions is virtually identical between agreements, as exemplified by the rules that trigger secondary liability exemptions for hosting and search and indexing services, compared below in Table 9.1. Yet one agreement, negotiated by the USTR and signed into an agreement by all of its negotiating parties, included text with a slight variation even more in line with US law. The TPP, which included Chile, Mexico, Canada, the United States, and eight other countries around the Pacific Ocean, represented an attempt by the United States to establish rules even closer to those of the DMCA, and in fact to go beyond them to re­inforce enforcement.27 The internet intermediary liability provisions were strongly promoted 26   See USTR, 2018 Special 301 Report . The ‘Priority Watch List’ countries are those with serious IPR deficiencies that warrant increased attention concerning the problem areas, and the ‘Watch List’ countries are those with similar deficiencies but not rising to the level of ‘Priority Watch List’. See John Masterson, International Trademarks and Copyright: Enforcement and Management (ABA 2004) 20. 27   See Sean Flynn and others, ‘U.S. Proposal for an Intellectual Property Chapter in the Trans-Pacific Partnership Agreement’ (2012) 28(1) American U. Int’l L. Rev. 105–202, 202.



Table 9.1  Provisions on actual knowledge of infringement in free trade agreements with the United States

Chile

(c) With respect to functions (b)(iii) [hosting] and (iv) [search engines], the limitations shall be conditioned on the service provider: . . . (ii) expeditiously removing or disabling access to the material residing on its system or network upon obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, including through effective notifications of claimed infringement in accordance with subparagraph (f) [minimum notice requirements]; and . . .28

CAFTA-DR

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .29

Peru

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .30

Colombia

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .31

Panama

(v) With respect to functions referred to in clauses (i)(C) [hosting] and (D) [search engines], the limitations shall be conditioned on the service provider: . . . (B) expeditiously removing or disabling access to the material residing on its system or network on obtaining actual knowledge of the infringement or becoming aware of facts or circumstances from which the infringement was apparent, such as through effective notifications of claimed infringement in accordance with clause (ix) [minimum notice requirements]; and . . .32

28  United States–Chile Free Trade Agreement, Art. 17.11, subsection 23.
29  Central America–Dominican Republic–United States Free Trade Agreement (CAFTA-DR), Art. 15.11, subsection 27.
30  Peru–United States Trade Promotion Agreement, Art. 16.11, subsection 29.
31  Colombia–United States Trade Promotion Agreement, Art. 16.11, subsection 29(b).
32  United States–Panama Trade Promotion Agreement, Art. 15.11, subsection 27.



TPP (2016)

(a) With respect to the functions referred to in paragraph 2(c) [hosting] and paragraph 2(d) [search engines], these conditions shall include a requirement for Internet Service Providers to expeditiously remove or disable access to material residing on their networks or systems upon obtaining actual knowledge of the copyright infringement or becoming aware of facts or circumstances from which the infringement is apparent, such as through receiving a notice of alleged infringement from the rightholder or a person authorised to act on its behalf . . .33

USMCA

(a) With respect to the functions referred to in paragraph 2(c) [hosting] and paragraph 2(d) [search engines], these conditions shall include a requirement for Internet Service Providers to expeditiously remove or disable access to material residing on their networks or systems upon obtaining actual knowledge of the copyright infringement or becoming aware of facts or circumstances from which the infringement is apparent, such as through receiving a notice of alleged infringement from the rightholder or a person authorized to act on its behalf . . .34

by the USTR ever since entering the negotiations on behalf of the United States in 2008 (yet without authorization from Congress until 2015), which is evidenced by the proposal of such provisions to become part of the agreement, according to an early 2011 version of the text.35 The TPP had a long, controversial history, including harsh indictments of its internet intermediary liability provisions.36 Eventually, after the agreement was signed, President Trump withdrew the US participation,37 and the rest of the countries came to an agreement suspending many of the intellectual property provisions, as discussed in the following section. Notwithstanding the failure of the TPP to create new intermediary liability rules, the same language was again pushed for—and ultimately included—in the revision of the North American Free Trade Agreement (NAFTA), which became the US–Mexico–Canada Agreement (USMCA).38 The most salient difference between the language in the TPP and the USMCA and the language in prior trade agreements is the explicit specification of the source of the effective notice in the TPP and the USMCA. While previous agreements did not mention where the notice could come from (thus allowing for flexibility to, for instance, require a judicial or administrative notice), the TPP and the USMCA make it clear that such a

33  Trans-Pacific Partnership Agreement, Art. 18.82, subsection 3.
34  United States–Mexico–Canada Agreement, Art. 20.J.11, subsection 3(a). Text is subject to legal review for accuracy, clarity, and consistency, and is subject to language authentication.
35  See Knowledge Ecology International, ‘The Complete Feb 10, 2011 Text of the US Proposal for the TPP IPR Chapter’ (Keionline, 10 March 2011) .
36  See Matthew Rimmer, ‘Back to the Future: The Digital Millennium Copyright Act and the Trans-Pacific Partnership’ (2017) 6(3) Laws 11.
37  See ‘Trump Abandons Trans-Pacific Partnership, Obama’s Signature Trade Deal’ (New York Times, 23 January 2017) .
38  This agreement is known as CUSMA in Canada, and T-MEC in Mexico.


notice may come ‘from the rightholder or a person authorized to act on its behalf’. One possible cause of this small but important change in the USTR’s position is the current state of the implementation of prior FTAs, including those in Latin America, examined later.

3.2  Completed Implementation

Only two countries in Latin America have completed the process of implementing their FTA obligations concerning intermediary liability for copyright infringement: Chile and Costa Rica.

3.2.1 Chile In Chile, safe harbour for online copyright infringement was implemented in May 2010 through Law No. 20,435, which modified large portions of the existing copyright law. The draft bill, introduced in April 2007, was perceived by the executive (in charge of drafting the bill implementing the US–Chile FTA) as an opportunity to update existing copyright law well beyond trade agreement obligations, providing a more balanced approach between exclusive rights of authors, interests of copyright holders, and interests of the public in using copyrighted works for legitimate purposes. The executive introduced a number of limitations and exceptions to copyright claims along with new rights and tools for rightholders.39 With regards to internet intermediary liability, Chilean law mostly follows the structure and provisions of the Chile–US FTA. Liability limitations apply only where the service provider does not initiate transmission, or select material or recipients, and some additional formal conditions apply. Following the provisions of the FTA, the law ex­pli­ cit­ly excludes an obligation for service providers to actively monitor content.40 The system is intended to protect service providers who do not provide content directly but who act passively as intermediaries from liability.41 To that end, the law classifies providers of the services into different categories, depending on whether the entity is: (1) transmitting, routing, or connecting; (2) caching through an automatic process; (3) providing storage at the direction of a user of material (hosting); and/or (4) referring or linking users to an online location by using information location tools (search engines), including hyperlinks and directories.42 In general, 39  See Daniel Alvarez Valenzuela, ‘The Quest for Normative Balance: The Recent Reforms to Chileʼs Copyright Law’, International Centre for Trade and Sustainable Development Policy Brief 12 (2011) . 40  See Law no. 17, 336, Art. 85 P (Chile). 41  See Claudio Ruiz Gallardo and J. Carlos Lara Gálvez, ‘Liability of Internet Service Providers (ISPs) and the Exercise of Freedom of Expression in Latin America’ in Eduardo Bertoni (ed.), Towards an Internet Free of Censorship: Proposals for Latin America (Centro de Estudios en Libertad de Expresion y Acceso a la Informacion (CELE) 2011) 18. 42  It should be noted that the same provider could fall under any one of the categories, so long as the activity at the time fits the description.


The Impact of Free Trade Agreements in Latin AMERICA   181 caching and hosting providers are required to establish a policy to terminate the service of repeat infringers, not interfere with technological protection measures, and not be in the creation or selection of content or its recipients. They must also designate contact information to receive copyright infringement notices. The most important innovation from Chile’s framework, which moves away from the DMCA model, lies in the triggers for secondary liability exemption.43 The removal of infringing content is subjected to judicial proceedings, which allows for a court to order that removal, when several formal requirements are met. That is to say, the standard for ‘actual knowledge’ from the FTA, as implemented in Chilean law, is the reception of a court order for content removal or blocking. The procedure is ‘brief and summary’, conducted before a civil judge, and in the request for removal, the infringed rights must be carefully documented, as must be the copyright holder, infringing content, type of infringement, and its location in the service provider’s network. The court must then issue a resolution ordering removal of content. To be exempt from secondary liability, the intermediary is required to expeditiously remove or block content from its systems only when such judicial court order to remove or block content is received.44 The statute does not define ‘expeditious’, nor does it require content to be removed within a specific time period after notification. Private notices of infringement are not completely excluded from the Chilean implementation, but they do not hold nearly as much weight as in the United States. Private notices can be redirected towards infringers, and such notices have been redirected for ‘educational’ purposes by some rightholders,45 but it does not create a legal obligation to either remove content or stop providing services to the alleged infringer.46 The US government criticized Chile’s approach, from the time of its discussion in Congress, where alternative models to judicial enforcement were rejected.47 After its entry into force, the USTR denounced Chile’s implementation of its FTA obligations as an insufficient way to protect the interests of copyright holders. Since 2010, the USTR has kept Chile on the ‘Priority Watch List’ in its annual Special 301 Report, explicitly calling on the Chilean government to ‘correct’ its law, so as ‘to permit effective and ex­ped­itious action against online piracy’.48 The Chilean government has consistently either rejected the conclusions of the Special 301 Report or dismissed it as a unilateral instrument, outside the dialogue mechanism created in the FTA.49 43  See Alberto Cerda Silva, ‘Limitación de responsabilidad de los prestadores de servicios de Internet por infracción a los derechos de autor en línea’ (2015) 42 Revista de derecho (Valparaíso) 121–48. 44  See Law No. 17,336, Art. 85 Ñ (Chile). 45  The International Federation of the Phonographic Industry sent such notices to be forwarded by the thousands in 2013, explicitly mentioning Art. 85 U of the Intellectual Property Law. See ‘IFPI envía 4000 notificaciones por usar P2P a usuarios de internet en Chile’ (FayerWayer, 30 July 2013) . 46  See Law No. 17,336, Art. 85 U (Chile).    47  See Cerda Silva (n. 43). 48  See USTR (n. 26) 62. 49 See Dirección General de Relaciones Económicas Internacionales, Ministerio de Relaciones Exteriores, ‘Comunicado Oficial: Acerca del Reporte 301 EE.UU.’ (27 April 2016) .


A compromise was reached on the TPP that could redirect the issue to the implementation of the Chile–US FTA. In its Annex 18-F, the TPP allowed its parties to implement Article 17.11.23 of the Chile–US FTA instead of the TPP for ISP liability. Though this has been taken as a success by Chile in retaining its system (as Canada also did in Annex 18-E),50 it must be noted that the language does not make it explicit that the current law in Chile is seen as a sufficient implementation of the bilateral FTA.

3.2.2  Costa Rica The government of Costa Rica avoided a long discussion in Congress over the implementation of the internet intermediary liability provisions by approving Executive Decree No. 36,880 in December 2011.51 Interestingly enough, its validity as an act implementing CAFTA-DR is troubling for two reasons. First, as an executive order, it does not have the hierarchical status of a law within the Costa Rican system, and it was not the subject of Congressional debate. Secondly, the Decree limits its own reach from the outset. Article 2 limits its applicability to the service providers that voluntarily submit themselves to its rules, while also establishing measures to address potential copyright infringement. Similar to the CAFTA-DR, the Decree classifies service providers into four categories, distinguishing between providers of services of connection and routing, caching, hosting, and linking or referencing (including search engines).52 It does not require monitoring of infringing activities, notwithstanding those ordered by a court of law or those made through use of technological protection measures.53 It establishes general conditions for liability exemptions, and requires, inter alia, the service provider: (1) to have a policy to terminate the service of repeat infringers; (2) not to interfere with techno­logic­al protection measures, or with content, its origin, or recipients; (3) to designate a recipient of notices (for providers of hosting and referencing services); and (4) to remove infringing content in accordance with the Decree (for providers of caching, hosting, and linking and referencing services).54 The ‘actual knowledge’ standard set by Executive Decree 36,880 has a peculiar formulation. Liability limitations apply to hosting providers and search engines that ex­ped­ itious­ly remove or block access to allegedly infringing content, at the time of: obtaining actual knowledge of the infringement or noticing the facts [or circumstances, for hosting services] from which infringement is evident, including through a notice received according to Article 11 [containing notice requirements within a collaborative scheme] or through a notification from a competent judicial authority ordering the removal of content or blocking of access to it.55

In other words, actual knowledge comes either by fact or circumstances that stem from a court order (as in Chile) or from a private notice (somewhat similar to the United 50  See Michael Geist, ‘The Trouble with the TPP’s Copyright Rules’ in Scott Sinclair and Stuart Trew (eds), The Trans-Pacific Partnership and Canada: A Citizen’s Guide (Lorimer 2016) 158–68. 51  See Decreto Ejecutivo no. 36880-COMEX-JP, Reglamento sobre la limitación a la responsabilidad de los proveedores de servicios por infracciones a Derechos de Autor y Conexos de Acuerdo con el Artículo 15.11.27 del Tratado de Libre Comercio República Dominicana–Centroamérica–Estados Unidos (CR) (hereafter Executive Decree 36880). 52  ibid. Art. 7.    53  ibid. Art. 5.    54  ibid. Arts 6–10.    55 ibid.


The Impact of Free Trade Agreements in Latin AMERICA   183 States). As a result, two secondary liability exemption schemes exist in the Decree: judicial enforcement and a collaboration procedure. The first system in the Decree resembles proper judicial enforcement. The rightholders or their agents must file a petition, within a judicial procedure or as a preliminary injunction, detailing the rights infringed and their ownership, the nature of the in­frin­ging content and form of infringement, as well as its location on the networks or systems of the service provider. The court will then decide under general copyright rules and order appropriate measures such as cancelling the account, removing or blocking the content identified as infringing, or other measures deemed necessary as long as they are the least costly option for the service provider.56 In the second of these schemes, the collaboration procedure, rightholders can send a written sworn statement to the service provider (for caching, hosting, and indexing) if their rights under copyright law are infringed. The service provider has up to fifteen days to determine whether it requires additional information from the copyright holder. After this period, the service provider must send a notice to the content provider (the user) within thirty days of the first notice. The content provider has fifteen days to voluntarily remove the infringing content or file a counter-notice to the copyright holder. If the user does not respond to the first notice, the service provider may cancel their account, or remove the content or block access to it. The user may still file a counter-notice to challenge the removal. This collaborative procedure roughly resembles a ‘notice and notice’ system.57 In its Special 301 Report, the USTR has kept Costa Rica on its ‘Watch List’, stating that this forty-five-day term may have a negative effect on ‘online piracy’. A ‘good faith’ removal can also allow for a safe harbour, provided that the affected user is given a chance to file a counter-notice, unless the rightholder has informed the service provider it will commence judicial proceedings against the alleged infringer.58 The Decree explicitly mentions actual knowledge can derive from ‘facts and circumstances’ surrounding the upload.

3.3  Pending Implementation

Of all the other Latin American countries with FTA obligations for internet intermediary liability provisions, only Colombia has officially attempted to pass an implementation act. The Colombia–United States Trade Promotion Agreement (CTPA) has been the


184   Juan Carlos Lara GÁlvez and Alan M. Sears subject of several failed attempts to implement copyright-related provisions. Each one of them is publicly identified as ‘Ley Lleras’, followed by a number.59 The USTR objected to Colombia’s 2018 entry into the Organization for Economic Co-operation and Development (OECD), because of its outstanding and unimplemented copyright obligations.60 The first attempt to implement the internet intermediary liability provisions of the CTPA came through Bill No. 241 of 2011.61 The Bill contained language largely similar to that found in the FTA. The sponsors of the Bill attempted a quick turnaround in Congress, and an open public discussion of its contents was avoided.62 The Colombian Bill closely followed the DMCA model of notice and takedown, with a private notice required to trigger removal of content and activate limitation of liability for intermediaries. After a quick committee discussion and approval, some criticism over its likely unconstitutionality, and strong public outcry,63 the Bill was archived in November 2011. Subsequent attempts at copyright reform, each known as another iteration of ‘Ley Lleras’, addressed different parts of the law and did not include internet intermediary liability provisions. The USTR attempted to reduce flexibility in the intermediary liability framework of the CTPA. A side letter signed on 22 November 2006, the same date as the signature of the CTPA, was filed by the Deputy United States Representative at the time. The ‘ISP Side Letter’ gave a very detailed example of a model that would constitute an ‘effective written notice or counter-notification’ under the CTPA.64 Such a private notification as described in the side letter would be sufficient ‘effective notice’, thereby introducing a private notice as the proper way of building ‘actual knowledge’. The letter is understood by both parties (represented by the USTR and the Colombian Minister of Commerce) to be an ‘integral part of the [trade promotion] agreement’.65 The commitment under the CTPA to adopt the necessary legislation was scheduled to be completed by May 2013. As a result of this failure in implementation, the USTR placed Colombia on the ‘Priority Watch List’ in its Special 301 Report for the year 2018, citing ‘lack of meaningful progress’ in implementing the CTPA, including ISP liability provisions,66 downgrading its status after decades on the ‘Watch List’. 59  The most recent attempt, ‘Ley Lleras 6’, contains substantive provisions on copyright, strengthening exclusive rights as well as penalties and enforcement mechanisms, and implementing obligations against the circumvention of technological protection measures that prevent access to or reproduction of copyrighted works. 60 ‘Colombia invited to join OECD despite some US objections’ (Financial Times, 25 May 2018) . 61  See Proyecto de Ley 241 de 2011 (Col.). 62  See Marcela Palacio Puerta, Derechos de autor, tecnología y educación para el siglo XXI: El tratado de libre comercio entre Colombia y Estados Unidos (2016) 164–6. 63  See Carlos Cortés Castillo, ‘El debate pendiente en Colombia sobre la protección de derechos de autor en Internet. El caso de la “Ley Lleras”’, Fundación Karisma Working Paper 4/2013 (2013) . 64 See US–Colombia ISP Side Letter . 65 ibid.   66  See USTR (n. 26) 10.


The Impact of Free Trade Agreements in Latin AMERICA   185 A similar situation occurred in Peru. Following the signature of the Peru–United States Trade Promotion Agreement (PTPA), an ‘ISP Side Letter’—with similar content as the Colombian side letter—was sent by the USTR in April 2006, as an ‘understanding’ between the parties that would in essence become part of the agreement.67 The same text appeared years later in the February 2011 text of the TPP, as part of the proposal by the United States for an ISP Side Letter. Almost identical content may be found in similar side letters additional to FTAs already signed with Singapore (2003), Australia (2004), Bahrain (2004), Morocco (2004), and Oman (2006). However, the PTPA framework, including its side letter, has not yet been implemented in national law. Peru appeared in the 2018 Special 301 Report ‘Watch List’, with a very slight recommendation to implement the PTPA intermediary liability rules therein. By the end of 2018, there had been no movement through legislative or executive action to implement those FTA provisions. The remaining Latin American countries (Panama, the Dominican Republic, and the CAFTA bloc, with the exception of Costa Rica), have neither adopted new legislation, nor introduced draft bills in their respective legislatures, nor signed new executive decrees. Only time will tell whether the US government will push for further implementation of existing FTA commitments, and whether those efforts will face resistance.

3.4  The Current Promotion of the DMCA Model in FTAs

The TPP was an ambitious attempt to export DMCA rules into an agreement across continents. It was initially signed by Chile, Peru, and Mexico, and nine other countries.68 At the time Peru joined the negotiations for the TPP, it had yet to implement the DMCA-like provisions included in its earlier FTA with the United States. Mexico, on the other hand, did not have any pre-existing obligations despite being a signatory to NAFTA with the United States. The TPP may have represented a renewed effort by the United States to promote the DMCA model with Chile and Peru, and to expand its coverage so as to include Mexico. The withdrawal of the United States from the TPP in early 2017 was thought to derail the deal; it was the largest party in the agreement and a force driving the negotiations.69 However, the remaining parties reached agreement in March 2018 on an amended version of the TPP, now dubbed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), which notably suspended Article 18.82: Legal

67 See US–Peru ISP Side Letter . 68  The original parties were Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, Vietnam, and the United States. 69  See Xin En Lee, ‘What is the TPP?’ (CNBC, 3 April 2018) .


Remedies and Safe Harbours.70 In April 2018, the United States informally expressed some interest in joining the agreement,71 and it remains to be seen whether it does so and attempts to revive these ‘suspended’ provisions. More recently, the Trump administration has focused on renegotiating the North American Free Trade Agreement (NAFTA). NAFTA (the United States, Mexico, and Canada being the treaty’s signatories) has been in force since 1994; however, it does not include internet intermediary liability provisions, and neither does Mexican federal law. The renegotiation of NAFTA presented the US government with an opportunity to promote its DMCA system among its neighbours. The DMCA model had already been touted as a ‘foundation’ of the digital economy in the United States by internet industry representatives promoting its inclusion in the revised NAFTA.72 Although it appeared that the United States was willing to move forward on a bilateral trade deal with Mexico that included provisions on intermediary liability,73 Canada was able to agree to terms in late September 2018 to form the US–Mexico–Canada Agreement (USMCA).74 The final version contains language nearly identical to that used in the TPP in Article 20.J.11: Legal Remedies and Safe Harbors.75 As a consequence of Mexico not having any intermediary liability provisions,76 only Mexico must implement this article within three years of the USMCA’s entry into force.77 The USMCA marks a return to the mandatory counter-notice (and put-back) provisions found in the bilateral FTAs,78 whereas under the TPP such provisions were optional.79 If material is mistakenly removed or disabled, the service provider must restore the material upon receipt of a counter-notice, unless the rightholder who sent the original takedown notification initiates civil judicial proceedings within a reasonable amount of time.80 Under the

70  See Comprehensive and Progressive Agreement for Trans-Pacific Partnership, Annex 7(k) . It should be noted that the treaty needs six ratifications to come into effect, and currently there are only three. 71  See ‘Trump to reconsider joining TPP trade pact’ (BBC News, 13 April 2018) . 72  See Internet Association, ‘Modernizing NAFTA for today’s economy’, White Paper (2017) . 73  See William Mauldin, ‘Trade Deal Could Move Ahead Without Canada, U.S. Official Says’ (Wall Street Journal, 25 September 2018) ; United States–Mexico Trade Fact Sheet: Modernizing NAFTA into a 21st Century Trade Agreement . 74  See John Paul Tasker and Elise von Scheel, ‘Canada, U.S. have reached a NAFTA deal—now called the USMCA’ (CBC News, 30 September 2018) . 75  See USMCA, Art. 20.J.11, subsection 3(a). 76  See Canada meets the requirements of Annex 20-A to be exempt. 77  See USMCA, Art. 20.90, subsection 3(g). 78  See e.g. Colombia–United States Trade Promotion Agreement, Art. 16.11, subsection 29(b)(x). 79  cf. USMCA, Art. 20.J.11, subsection 4 with the TPP, Art. 18.82, subsection 4. 80  This time period is determined according to domestic laws or regulations.


TPP, this provision applied only if a state party already had a similar system for counter-notices in domestic law. Mexico has ratified the agreement, while Canada and the United States have yet to do so. Although ratification in the United States was for a time in a state of flux,81 the agreement, after a number of amendments, is set to be ratified, and Canada is expected to follow suit.82

4.  The Convenience of the DMCA Approach for Notice and Takedown in Latin America

Internet intermediary liability rules are supposed to reflect a balance among the interests of copyright holders, the users who upload content, and the public in general.83 The liability model established by the DMCA relies on a safe harbour system in which a copyright holder need only send a private notice of infringement to an intermediary in order to see content removed from the internet, so that the service provider can be exempted from secondary liability for copyright infringement on its network or systems. Enacting DMCA-like rules for such a secondary liability framework is inadvisable, not only in Latin American countries but also in the rest of the world. The first reason is the fundamental lack of evidence to support the convenience of this approach for Latin American countries, which have very different concerns, economies, and development goals from those of the United States.84 FTAs, as promoted by the United States (with their detailed language, well beyond the WIPO Internet Treaties framework), while advancing the interests of US industries, have effectively limited the potential of national policies in broad areas such as intellectual property. Moreover, the existing FTAs do not provide any guidance or promotion of a broader balance between the substantive provisions of copyright law and its enforcement mechanisms, and the legitimate interests of internet users that may be affected. In other
81  See Jacob Pramuk, ‘House Dems, Trump administration fail to reach deal on USMCA trade agreement’ (CNBC, 21 November 2019) ; Erica Werner and others, ‘Trump’s North American trade deal at risk of stalling in Congress’ (Washington Post, 29 March 2019) . 82  Erica Werner and Rachel Siegel, ‘Senate approves new North American trade deal with Canada and Mexico’ (Washington Post, 16 January 2020) . 83  See Cerda Silva (n. 43). 84  See Pedro Roffe and Maximiliano Santa Cruz, Intellectual Property Rights and Sustainable Development: A Survey of Major Issues (Economic Commission for Latin America and the Caribbean 2007).


188   Juan Carlos Lara GÁlvez and Alan M. Sears words, DMCA-like rules would come to establish rules for easy content removal, based on alleged copyright infringement, in countries where such infringement can happen through trivial acts not covered by balancing provisions such as copyright limitations and exceptions or the fair use doctrine, which are not equally promoted in FTAs.85 Enhancing exclusive rights and enforcement mechanisms, without counterweights in the public interest, has the effect of reducing the rights of users to pursue their own legitimate purposes such as expression or education. This criticism of the DMCA shows that it remains a controversial piece of legislation. As such, FTA provisions that follow its framework should not be implemented in Latin American countries that have agreements with the United States without a strong, public debate and an analysis of its adequacy to domestic law, including its constitutional and human rights principles. Otherwise, its transposition into very different legal and cultural landscapes would entail the implementation of a one-size-fits-all solution, without accounting for national context or input from local stakeholders.86 Participation in these negotiations and implementation processes remains a sore point. For all its controversy, the DMCA can still be said to be the product of Congressional debate in the United States, whereas the growing demands for certain provisions through FTAs leaves little room for input from local stakeholders or different interest groups, both in FTA closed-door negotiations and in their implementation by legislatures. The strong opposition from academia and civil society groups against the TPP, including the opposition to its intellectual property and ISP liability provisions, was partly based on the exclusion of the viewpoints of relevant stakeholders and public interest representatives,87 while interest groups from the US intellectual property industries had far more access to the negotiators.88 This lack of transparency not only creates problems for the substance of internet intermediary liability provisions, but for democratic debate itself as the source of the rules that govern societal relations. Finally, the importance of the need for balance cannot be downplayed. Beyond copyright protections and exceptions, content uploads, and removals, lie questions of balance between fundamental rights that are at the crux of criticism of the DMCA system. The exercise, via the internet, of fundamental rights recognized in international human rights law demands a higher level of participation, democratic debate, and risk-assessment analysis to allow for better informed public policy.

85  See e.g. Jonathan Band, ‘Evolution of the Copyright Exceptions and Limitations Provision in the Trans-Pacific Partnership Agreement’, Social Science Research Network Scholarly Paper no. 2689251 (2015) . In the TPP, it was only after a high level of public pressure that some degree of balance was introduced to the text of the agreement. However, its length and detail is minuscule compared to the provisions enhancing the rights of IPRs. 86  See e.g. Cecilia Lerman, Impact of Free Trade Agreements on Internet Policy, a Latin America Case Study (Internet Policy Observatory 2015) 25 . 87  See ‘More Than 400 Civil Society Groups Write Congress on TPP and Fast Track’ (Public Citizen, 4  March 2013) . 88 See Margot Kaminski, ‘The Capture of International Intellectual Property Law Through the U.S. Trade Regime’ (2014) 87 Southern California L. Rev. 977.



5. Conclusions

The DMCA model for internet intermediary liability has been proposed in other FTAs negotiated by the United States, including in the revision to NAFTA now known as the USMCA. However, it is questionable whether implementing the DMCA model is ideal for Latin American countries, and whether it should form part of future FTAs. First, the USTR’s appeals to combat ‘online piracy’ lack a proper evaluation of the supposed damage to the copyright industries in the countries where this rhetoric is promoted. In addition, there is no consensus as to whether the DMCA is the best model for balancing the rights of copyright holders and the general public. Furthermore, this debate raises questions regarding the legitimacy of the democratic processes that might be undermined by FTAs’ obligations and the fundamental rights that might be affected if the intermediary liability regime were implemented in the same manner as in the United States.


chapter 10

The Marco Civil da Internet and Digital Constitutionalism

Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes*

The laws on intermediary liability lie at the intersection of several complex legal debates. The way these laws are conceived, drafted, and interpreted has important implications in shaping the digital world, by empowering or limiting private and state actors, by promoting or harming free speech and privacy, and by promoting or hindering experimentation and innovation. By limiting online service providers’ (OSPs) liability for the conduct of their users, these rules can even shape business models, and wellcrafted intermediary liability rules can remove, under certain conditions, the internet platforms’ incentive to interfere with their users’ rights and activities. In Brazil, the landmark statute on the liability of internet intermediaries is Law 12.965/2014 which has been labelled the Civil Rights Framework1 for the Internet (Marco Civil da Internet, or MCI).2 The MCI was enacted by Congress in 2014, with enormous support from both civil society entities and internet companies, and it was *  We would like to thank Clara Iglesias Keller and the editor of this Handbook for helpful comments, and Renan Medeiros de Oliveira for excellent research assistance. Brazilian legislation can be found at  and case law at , , and . 1  ‘Framework’ has become the most frequent translation for ‘Marco’. Critics have pointed out, however, that it is not a precise description of the MCI—an internet-centric, ordinary federal law, that ‘establishes the principles, guarantees, rights and obligations for the use of Internet in Brazil’ (Francis Augusto Medeiros and Lee Bygrave, ‘Brazil’s Marco Civil da Internet: Does it live up to the hype?’ (2015) 31 Comp. L. & Sec. Rev. 1, 121). 2  See Marco Civil da Internet 2014 (BR) (hereafter MCI 2014).

© Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes 2020.


The Marco Civil Da Internet and Digital Constitutionalism   191 praised internationally as a model legislation for the protection of freedom of expression and other human rights.3 In substance, the MCI shifted the balance of power between private and state actors. It created safeguards against intrusion on users’ communications and privacy by both law enforcement authorities and internet access providers, while also creating the obligation for the latter to retain certain types of data that would allow the identification of users engaging in illegal activities. The MCI also created ‘safe harbours’—clauses that establish procedures and conditions that, if followed by OSPs, limit their civil and criminal liability for acts committed by their users. In doing so, the MCI reduced the incentives for OSPs to police the online activity of its users, for example by removing controversial content shared by its users so as to avoid liability. While it adopts specific rules on liability, the MCI also establishes broad principles and expansive guidelines, and expressly affirms the protection of freedom of expression and privacy in the digital sphere, against both public and private actors. These and other traits of the MCI (e.g. the high level of civil society mobilization around its drafting and enactment) have encouraged some observers to discuss the MCI as a ‘Constitution for the Internet’ or a ‘Digital Bill of Rights’.4 Indeed, as we will see later, the MCI does embody some of the elements highlighted by the growing literature on ‘digital constitutionalism’.5 In this chapter, we briefly review the history of the Brazilian Marco Civil da Internet, explaining how different business, government, and civil society actors pushed for and shaped this law. We will then engage with the substance of the MCI, focusing specifically on its main intermediary liability provisions and how they are being applied in practice, so far, by Brazilian courts. In exploring the contrast between the formal provisions and the ‘law in action’, we will briefly explore key aspects of Brazilian intermediary liability law, such as the conditions intermediaries must fulfil in order to benefit from the im­mun­ities the MCI grants them and the requirements for a valid court order targeting illegal content, depending on the kinds of user activities.

3  Paper’s newsroom, ‘Projeto brasileiro de Marco Civil da Internet é modelo internacional, diz relator da ONU’ (ONUBR, 9 April 2014) . 4  Dillon Mann, ‘Welcoming Brazil’s Marco Civil: A World First Digital Bill of Rights’ (World Wide Web Foundation, 26 March 2014) . On a more sceptical note, see Medeiros and Bygrave (n. 1) (pointing out that the MCI ‘does not contain any right that has not been enacted elsewhere in the world’, and that it ‘it fleshes out rights that already exist in Brazil (albeit in a latent or vague form), rather than creating entirely new rights’). 5  For a critical review of the main themes in this literature, see Edoardo Celeste, ‘Digital Constitutionalism: Mapping the Constitutional Response to Digital Technology's Challenges’, HIIG Discussion Paper Series no. 2018-02 (2018).



1.  The Civil Rights Framework for the Internet 1.1  The Path Towards the MCI’s Enactment Prior to 2014, Brazil had no specific legislation regulating the liability of internet intermediaries for their users’ activities. The absence of such rules had been criticized as a source of legal uncertainty that was harmful to internet platforms, as both companies and judges often adopted disproportionate measures to try to curb the spread of potentially illegal content online. An early example of such pre-MCI measures was the judicial decision on the Cicarelli case,6 which led to the blockage of the video-streaming platform YouTube for almost forty-eight hours. The famous model and TV host Daniela Cicarelli and her boyfriend sued YouTube and demanded the removal of an unauthorized video, as well as damages for the violation of privacy and publicity rights. The video depicted the couple engaged in intimate scenes on a beach near the city of Cadiz, in Spain. The plaintiffs requested a preliminary injunction to remove the videos from YouTube. Although the request was denied by the trial judge, the decision was later reversed by the Fourth Chamber of the Court of Appeals of the State of São Paulo, which required YouTube to block online access to the videos.7 Enforcing the blockage decision proved to be trickier than expected. Copies of the original video kept resurfacing on the platform. Considering the potential damages to the plaintiff ’s privacy and publicity, the trial judge ordered several internet access pro­ viders to block the YouTube platform itself. After the decision was implemented by two backbone companies, the Court of Appeals of São Paulo reversed the blocking and said that its original decision had been misinterpreted. The court clarified that the original blocking decision targeted only the specific content that infringed the plaintiff ’s rights, and therefore should not reach the video platform in its entirety. The case was dismissed at trial level in 2007. In ruling favourably for YouTube, the judge considered that no privacy rights had been violated, since the intimate scenes had been filmed in a public space. The Court of Appeals reversed that decision in June 2008, determining that YouTube should adopt all necessary measures to exclude the videos, including warning and ultimately punishing all users who re-uploaded the video. The court also set a daily fine in case of non-compliance with the decision. In 2015, the Superior Court of Justice (STJ), in a new appeal on that case, reviewed the amount of damages

6  For additional details and documents and an overview of the case in English, see ‘YouTube case: Non-compliance with judicial requests for content removal’ (Bloqueios.info, 9 January 2007) . 7  See Fernando Porfírio, ‘Justiça confirma veto ao vídeo de Cicarelli na internet’ (ConJur, 28 September 2006) .


The Marco Civil Da Internet and Digital Constitutionalism   193 awarded to the plaintiffs in the lower courts, and eventually decreased them from almost 100 million reais (in total) to 250,000 reais for each plaintiff.8 Although the blocking of YouTube played a decisive role in raising debates on intermediary liability in Brazil, other cases reached the STJ as early as 2010. These cases further illustrate the legal uncertainty around the liability of internet intermediaries in the pre-MCI era. On one of the first occasions that the STJ assessed the liability of internet intermediaries, the court ruled favourably for state prosecutors from the Brazilian state of Rondônia (RO), in a class action. The State Prosecutor’s Office sought a preliminary injunction ordering Google to remove and to prevent the re-uploading of defamatory content against minors posted by the users of the social network Orkut. Although the company successfully removed the illegal content identified, it did not create a filtering mechanism to prevent the re-uploading of similar content, leading the Rondônia State Court to impose a fine of 5,000 reais per day for failing to comply with the court’s order. Google appealed to the STJ, arguing that it had no technical means for pre-screening or monitoring all the content being uploaded by its users. Although Google promptly removed the content identified as illegal, the Second Chamber of the STJ ruled, in a March 2010 decision, that removing the illegal content was not enough, and that Google should be held jointly liable (together with its users) in case new, similar content was posted.9 The same court, however, pointed in a different direction in a December 2010 decision, also involving Orkut.10 In that case, the Third Chamber of the STJ developed an argument to set aside the strict liability rule required by the Consumer Protection Code. Although it recognized that Google should be subject to consumer protection rules, the court asserted that filtering content to prevent illegal activities was not an ‘intrinsic feature’ of the service Google provided. Therefore, the service should not be considered defective. The court also rejected the idea that, in providing its services, Google had exposed its consumers to a ‘risk’—a scenario in which Brazilian law would allow for the application of a strict liability rule. Instead, OSPs such as Google would be required simply to promptly remove the content on notice and to adopt measures to allow the identification of its users. Failing to do so, the court said, could make OSPs liable for their users’ illegal activities.11 The same rationale would then be used to decide one of the first cases against the Google search engine, but with two important twists. In a case dealing with the deindexation of defamatory content, the STJ moved beyond its previous decisions in affirming that, as the plaintiff had been able to identify the precise URL to be delisted, she could be expected to have been able to identify the original publisher of the harmful content. If that was the case, reasoned the court, then it would be more appropriate for the plaintiff to bring an action against the original publisher and not against the search 8  See Superior Tribunal de Justiça [Superior Court of Justice] (STJ) Renato Aufiero Malzoni Filho v Google and Youtube [2015] REsp 1492947 (Bra.). 9  See STJ Ministério Público do Estado de Rondônia v Google [2010] REsp 1.117.633/RO (Bra). 10  See STJ I P DA S B v Google [2010] REsp 1.193/764/SP (Bra.). 11 ibid.


194   Luiz Fernando Marrey Moncau and Diego Werneck Arguelhes engine, which would only index publicly available information. In other words, the decision indicated that search engines cannot be compelled to remove content from their indexes, even when that content is deemed illegal.12 In a case decided in December 2013, however, the court crafted an exception to that rule, affirming that the search engine could be sued even if the original content had been removed or altered in the original source, as long as it still appeared as a search result. Therefore, the decision indicated that a search engine can be compelled to update its cache memory to reflect the current, actual state of the original source.13 In 2011, another case involving Orkut was decided by the Fourth Chamber of the STJ. The plaintiff requested the platform to exclude all defamatory content against him, as well as the payment of damages. In a preliminary injunction, the trial judge ordered Orkut to remove, within forty-eight hours, all negative topics relating to the plaintiff. The trial court decision was confirmed at the Rio Grande do Sul State’s Appeals Court, and ultimately reached the Superior STJ after an appeal by Google. The STJ affirmed that the platform should be able to filter uncontroversially defamatory content. In its decision, the STJ said that, as Google had created this ‘untameable monster’, it should bear responsibility for the ‘disastrous consequences’ of lacking oversight of its users. The decision also stated: (1) that even if automated filtering was impossible in the case of controversial speech, the company should bear responsibility for assessing the content and keeping it online at its own risk; and (2) that a large internet company should be aware of the existence of a potentially offensive message as soon as it is posted, regardless of notification by the person offended. The decision thus upheld and expanded the trial judge’s decision that required Google to proactively monitor and remove the content from the Orkut social network.14 Intellectual property (IP) issues were also brought to the STJ before the MCI was enacted. In the Dafra case (decided in 2014), YouTube was found liable for copyright infringement due to a parody of a commercial of Dafra (a motorcycle manufacturer) that made negative comments about the company’s products. YouTube removed the video after a takedown request. However, Dafra filed a lawsuit requesting YouTube to actively prevent the exhibition of any video using the company’s name or trade mark, regardless of the exact title as the first video and regardless of which user had uploaded it. The plaintiff also sought to identify the user who posted the parody and asked YouTube to pay monetary damages due to its negligence. These claims were accepted by the court even though parodies are among the exceptions to copyright protection in Brazilian copyright law.15 Moreover, the decision affirmed that after being formally notified by the plaintiff, YouTube had an obligation to remove all existing videos with the same title, despite the 12  See STJ Maria da Graça Xuxa Meneghel v Google [2012] REsp 1.136.921-RJ (Bra.). 13  See STJ Segunda Turma Recursal dos Juizados Especiais Cíveis e Criminais do Estado do Acre v Google [2013] REcl 5072-AC (Bra.). 14  See STJ Tiago Valenti v Orkut [2011] REsp 1.175.675-RS (Bra.). 15  See Copyright Law, art. 47 (Bra.).


The Marco Civil Da Internet and Digital Constitutionalism   195 precise identification of the URL.16 The opinion of Judge Luis Felipe Salomão (in the STJ) considered that YouTube would be able to filter new versions of the video re-uploaded to the platform. However, the opinion also emphasized that, because this issue had not been raised by the plaintiff, it was beyond the scope of the appeal. The obligation to indicate the URL of the content to be removed was eventually settled in a different case by the Second Section of the STJ. In another appeal (REcl 5.072/AC), decided on December 2013, the Court stated that the specific URL is an essential requirement for a court order to determine the removal of content.17 Behind these three cases, and several others, we find a common rationale: internet platforms should bear the full risks of illegal activities by their users. As detailed in the next section, the MCI was drafted, among other reasons, to change and limit this state of affairs, bringing more legal certainty for internet users and intermediaries. However, even today, some important pre-MCI cases are still pending in the Brazilian courts. The most important of these relates to the applicability of the Consumer Protection Code’s (CDC) strict liability rules to Orkut.18 A high school teacher, Ms Aliandra Vieira, requested Orkut to exclude a community in which some of her students were posting defamatory comments. As Orkut refused to take the community down, Ms Vieira filed a lawsuit against the company before a small claims court in the state of Minas Gerais, requesting an injunction to take down the community, as well as damages from Google for hosting the content. The court awarded Ms Vieira 10,000 reais in damages. Google’s appeal to the Court of Appeals of Minas Gerais was repealed, and the court affirmed that the company should indeed be considered strictly liable for user-posted content. Google appealed to the Federal Supreme Court (STF), but the case is still pending in the court’s docket. The STF has so far only recognized its jurisdiction on the case by recognizing that the appeal has ‘general repercussion’—but this means that before a final decision is reached by the STF, all similar litigation on the topic throughout the country must be suspended. As these and other cases made their way through the Brazilian courts, the media, and the legal community, legislative debates in Congress were pushing for the enactment of a statute regulating the activities of internet intermediaries. Among many such legislative efforts, one that garnered significant public attention was a ‘cybercrime’ bill proposed by Senator Eduardo Azeredo.19 The bill was regarded by scholars and civil society organizations as a move to censor online speech and to strengthen the protection of IP rights. 16  See ‘Superior Court of Justice, Fourth Chamber, Google Brazil v. Dafra, Special Appeal No. 1306157/ SP’(WILMap,24March2014). 17 See Acre v Google (n. 13). The same conclusion was eventually reached by the Second Section of the STJ in a case concerning copyright infringement on the platform Orkut. See STJ Botelho Indústria e Distribuição Cinematográfica Ltda v Google [2015] REsp 1.512.647-MG (Bra.). 18  See ‘Supreme Court of Justice, Aliandra v Orkut, ARE no. 660861’ (WILMap, 9 April 2012) . 19  Eduardo Azeredo’s Cybercrime Bill was proposed as a substitute for proposal 84/99 in the Brazilian House of Representatives. For a full record of the procedures in the House, see .


It triggered reactions from different social sectors and led to a public campaign for the internet not to be regulated from a criminal standpoint. In that scenario, it became clear that innovation and protection of human rights online would depend on the enactment of a civil rights framework for the internet.

1.2  The MCI Legislative Process

In 2009, the Brazilian Internet Steering Committee enacted a resolution with a ‘Charter of Principles for the Governance and Use of the Internet’.20 Almost at the same time, the Brazilian Congress was facing mounting pressure from organized civil society groups against the ‘Cybercrime Bill’ presented by Senator Eduardo Azeredo. The Azeredo Bill aimed to create a national legal framework for preventing and fighting cybercrime. His proposal was largely inspired by the Budapest Convention, a controversial international treaty sponsored by the Council of Europe and approved in 2001, shortly after the 9/11 attacks in the United States.21 The Azeredo Bill raised serious concerns about privacy and surveillance among Brazilian internet activists. It established mandatory data retention, for a period of five years, for all internet service providers (ISPs) active in the country. Furthermore, public interest groups voiced concerns that the Bill could be employed to criminalize common and widely accepted behaviours by internet users, as well as to strengthen the enforcement of IP rights online.22 Reactions to the Cybercrime Bill eventually converged on an alternative proposal for regulating the internet—instead of establishing crimes and penalties for misuse, the idea would be to affirm principles, rights, duties, and responsibilities relating to the use of the internet. Presented under the label of a ‘Civil Rights Framework for the Internet’, the proposal was received with enthusiasm by government officials. In 2009, the Ministry of Justice partnered with the Center for Technology and Society of the Getulio Vargas Foundation (CTS/FGV), a non-profit research and higher education institution, to develop a blueprint for an online public consultation process as part of the drafting of the new legislation. The draft presented for consultation was officially named the ‘Marco Civil da Internet’. The process began with public debates on the general principles that should govern the internet (broadly inspired by the CGI Charter of Principles for the Governance and Use

20  See Comitê Gestor da Internet no. Brasil (CGI.br), Brazilian Internet Steering Committee Principles for the Governance and Use of the Internet . 21  See Budapest Convention . 22  See Luiz Moncau, Ronaldo Lemos, and Thiago Bottino, ‘Projeto de Lei de Cibercrimes: há outra alternativa para a internet brasileira?’ (2008) 249 Revista de Direito Administrativo 273–94 . See also Guilherme Varella, ‘PL Azeredo: a contramão dos direitos e liberdades na Internet’ (IDEC, 28 July 2011) .


of Internet).23 The outcome of those debates served as a foundation and orientation for the first draft of the Bill, which was then subject to a new round of public consultation. The level of public engagement with the consultations24 was high. During the process, individual users and governmental and non-governmental entities submitted more than 2,000 contributions, and the substance of that input was in many ways reflected in the final text of the Bill. One of the main changes adopted during the public consultations was the substitution of a ‘notice-and-takedown’ regime for all types of illegal online activity by a set of safe harbour rules, under which intermediaries would be liable only if they failed to comply with a direct court order requiring the removal of infringing content. The final draft was presented to Congress on 24 August 2011.25 It encompassed users’ rights and general principles and rules for the regulation of the internet, including on data retention, ISPs’ liability, and net neutrality. In spite of having garnered public support during the consultation process, the proposal was stopped in its tracks in Congress. Telecommunication companies fiercely attacked the net neutrality provisions; broadcasters lobbied against any change that could amount to making copyright more flexible; and public authorities lobbied for longer data retention periods and reduced safeguards and controls on their access to users’ data. It would take an external event to change this state of affairs. The Snowden leaks on surveillance by the US National Security Agency (NSA) changed the whole political landscape, re-energizing public support for increased control on how power can be exercised over individuals on the internet. On 11 September 2013, as a direct response to information revealed by the leaks, President Dilma Rousseff issued a message endowing the MCI Bill with ‘urgent’ status for the purposes of the legislative process in Congress.26 In practice, this meant that each of the two legislative chambers, the Chamber of Deputies and the Federal Senate, would have forty-five days to vote on the Bill. The urgency procedure hastened deliberations in the Chamber, but several points in the text had to be negotiated between political leaders before the Bill was finally approved by the Deputies on 25 March 2014. The fact that the NETmundial meeting was taking place in São Paulo probably made things faster in the Senate, where the Bill was quickly discussed and approved on 22 April and officially converted into law during the meeting on 23 April 2014.

23  See CGI.br, Principles for the Governance and Use of the Internet . 24  See Ministério da Cultura, Cultura Digital, Online Public Consultation for the Marco Civil da Internet . See also the public consultation by the Ministry of Justice for the decree issued to provide further details on the implementation of the MCI. See Ministério da Justiça, Pensando o Direito, Public Consultation on the MCI 2014 Decree . 25  For further information about the Bill and its amendments in Congress, see Câmara dos Deputados, PL 2126/2011 . 26  See Dilma Rousseff, Message 391/2013 .



2.  The ‘MCI on the Books’

In this section, we will review the main traits of the MCI’s original text. In substance, the MCI covers much more than intermediary liability. It addresses key topics of internet governance such as data retention, network neutrality, privacy, and jurisdiction and conflict of laws issues, as well as defining the rights and principles for the internet in Brazil.

2.1  General Provisions of the MCI Data retention. The Azeredo Bill would have established a period of five years of mandatory data retention for internet intermediaries. The MCI has more moderate provisions to that effect: the period of mandatory data retention for access logs for ISPs providing connectivity services is only one year.27 Due to lobbying and public pressure by the federal police, however, other internet services—application services, meaning all websites and over-the-top (OTT) services—are also subject to a regime of mandatory data retention (six months).28 The provision was criticized due to the risks it poses to privacy. However, it provided a clear answer to the question of the obligations of internet intermediaries to provide information to authorities investigating users engaging in illegal activities online. Moreover, it created an important privacy safeguard in establishing that the authorities can only access the data thus retained by means of a court order. Net neutrality. Before the MCI, debates in Congress around the Azeredo Bill con­ sidered requiring intermediaries to proactively and pre-emptively monitor their networks and platforms, as well as to secretly inform authorities of potentially illegal activities by their users. This was a highly demanding obligation for ISPs, and it expressed how cybercrime concerns had moved centre stage in the Brazilian legislative debates on internet regulation at that point. In contrast, the net neutrality provision adopted in the MCI expressly rejects such an obligation. Article 9, § 3º establishes that access providers cannot block, monitor, filter, or analyse the content of data packets flowing through their networks, with the only exception being made for technical measures necessary to provide the internet service itself.29 The rule may provide an important defence against judicial decisions that would require an access provider to block specific content for law enforcement purposes.

27  The MCI establishes that access providers must retain connection records/logs (‘the set of information pertaining to the date and time of the beginning and end of a connection to the internet, the duration thereof and the IP address used by the terminal to send and receive data packages’) and must not retain application logs (‘the set of information regarding the date and time of use of a particular internet application from a particular IP address’). See MCI 2014 (n. 2) art. 5, VI and VIII and arts. 13 and 14. 28  ibid. art. 15. 29  ibid. art. 9, s 3.


However, there are ongoing lobbying efforts by copyright holders to promote new rounds of legislation, in addition to the MCI, that would authorize the blocking of websites ‘dedicated to criminal activity’, including in cases of copyright infringement.30
Privacy. In Brazil, as elsewhere, the Snowden revelations brought privacy concerns to the fore, in addition to the freedom of expression concerns that had typically characterized a previous generation of internet governance debates.31 A small number of explicit privacy provisions were included in the MCI. They include a user’s right to be clearly informed about how their personal data is being collected and processed, and a right not to have one’s data processed for purposes different from those for which the data were originally collected.32 The MCI also requires that the collection of a user’s personal data be subject to their specific consent.33
Other rights and principles. Article 7, IV of the MCI also establishes a right of internet users to stay connected to the internet, ‘except if [the disconnection of the service] is due to a debt directly originating from its use’.34 This means, in practice, that the MCI forbids law enforcement policies or agreements between private actors that would allow for the disconnection of a user engaged in illegal activities (e.g. the ‘three-strike approach’ discussed in the early 2000s).35

2.2  Intermediary Liability Rules

The MCI established a broad immunity for internet access providers and a narrower, conditional immunity for other OSPs (‘application providers’, in the statute’s wording).36 Article 18 states that ‘[t]he provider of connection to internet shall not be liable for civil damages resulting from content generated by third parties.’37 Article 19 establishes the intermediary liability rules for application providers, stating that an application provider ‘can only be subject to civil liability for damages resulting from content generated by third parties if, after a specific court order, it does not take any steps to, within the framework of their service and within the time stated in the order, make unavailable the content that was identified as being unlawful’.38 The MCI, therefore, does not take into account whether the intermediary was aware of the existence of infringing content on its platform. Requiring a court order means


rejecting such awareness as a standard for holding OSPs liable. Moreover, it makes it clear that the OSP is not legally entitled to make a call on which content is legal. This perspective is further developed in the first paragraph of article 19, which states that the court order must provide a clear identification pointing to the specific infringing content, ‘allowing the unquestionable location of the material’.39 While the court order standard serves the purpose of ruling out any strict liability regime, as well as the proactive monitoring of content, the specific and clear identification of the infringing content prevents the imposition of ‘staydown’ obligations (i.e. the obligation to ensure that the removed content ‘stays down’ and does not reappear on the platform or service).40 An important exception, however, can be found in the second paragraph of article 19. Originally, the MCI would have adopted a ‘horizontal’ approach, by applying the rules equally to all types of illegal content. However, lobbying efforts from copyright holders and broadcasters led to the creation of this specific provision, which explicitly states that copyright infringement is an exception to the general rule of article 19.41 As the current Brazilian copyright law dates from 1998 and does not have any specific provisions relating to the internet, this clause has led, in practice, to the adoption of a notice-and-takedown regime for copyright infringement, following the model established in the United States by the Digital Millennium Copyright Act and in Europe by the e-Commerce Directive.42 During the MCI debates in Congress, members of parliament were concerned with the possibility that illegal and harmful content would stay online for long periods before courts could issue a decision determining its removal. In this sense, the third paragraph of article 19 was created to allow citizens to file lawsuits in special ‘small claims courts’, which were designed in the 1990s to adjudicate less complex, low-profile cases with increased speed, simpler procedural requirements, and in which parties could represent themselves (i.e. without having to hire a lawyer) under certain conditions.43 Paragraph four of article 19 also allows judges to issue preliminary injunctions that may pave the way for a partial or full takedown request in the future, ‘as long as the requisites of truthiness of the author’s claims, the reasonable concern of irreparable damage, or damage that is difficult to repair’ are met.44 The second important exception to the MCI’s horizontal approach is the non-consensual dissemination of intimate images, popularly known as ‘revenge porn’. The law explicitly affirms that the removal of this kind of content requires only the ‘receipt of notice by the participant or his/her legal representative’.45 Following the general rule of article 19, the notice ‘must contain sufficient elements that allow the specific identification of the

39  ibid. art. 19(1). 40  For the different types of notice and action mechanisms, see Chapter 27. 41  See MCI 2014 (n. 2) art. 19(2). 42  See the Digital Millennium Copyright Act of 1998, 17 USC § 512 (US); Directive (EC) 2000/31 of the European Parliament and the Council on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1. 43  See MCI 2014 (n. 2) art. 19(3). 44  ibid. art. 19(4). 45  ibid. art. 21.


material said to violate the right to privacy of the participant-user and the confirmation of the legitimacy of the party presenting the request’.46

2.3  Intermediary Liability Beyond the MCI While the copyright exception in the MCI is not included in copyright laws, other statutes—such as some electoral laws47 and child protection laws48—create specific rules of intermediary liability and address the question of online content removal. In the elect­or­al sphere, after public debates on whether removal of content that violates electoral law would require a court order, recent statutes have clearly created a judicial decision requirement, thus reconciling electoral laws with the general rule of the MCI. Brazilian electoral laws are updated before each election. Law 9.504 states that promoted content on the internet can only be removed after a specific court order.49 The Superior Electoral Court (TSE) Resolution no. 23.551/2017 repeats, in part, the wording of the MCI’s article 33, § 1º, determining that ‘to ensure freedom of expression and prevent censorship, court orders may determine the removal of content that infringes the electoral law’.50 Article 33 also establishes other details, such as that court orders must point to the URL of the infringing content; that such orders must not require the removal of content in less than twenty-four hours (except in ‘extraordinary’ cases); and that the effects of those court orders will cease after the end of the electoral period.51 The Child and Adolescent Statute (ECA) criminalizes the act of offering, trading, making available, distributing, or publishing by any means photographs, videos, or any records of sexual nature involving children or adolescents.52 The crime is punishable with imprisonment which can be applied to anyone who provides the means or services for access to that type of content. However, § 1º of article 241-A states that the provision is only applicable when the service provider, once notified, does not disable access to the infringing content.53 In practice, this means that service providers must respond to takedown notices to avoid criminal liability under the ECA.

46  ibid. art. 21(1). 47  Brazil holds elections every two years. During the election period, many requests to remove content are addressed by intermediaries. This can be clearly observed on the Google Transparency Report, ‘Solicitações governamentais de remoção de conteúdo’ . A broad account of lawsuits seeking content removal based on the electoral law can be found in the Ctrl-X project . 48  See lei no. 8.069 de 13 Julho 1990 [Child and Adolescent Statute (ECA) of 1990] (Bra.). 49  See lei no. 9.504 de 30 Setembro 1997, art. 57-B(4) (Bra.). 50  Superior Electoral Court (TSE) resolution no. 23.551/2017, art. 33(1) (Bra.). 51  ibid. art. 33 (3, 4, 6). 52  See MCI 2014 (n. 2) art. 241-A. 53  ibid. art. 241-A(1).



3.  The MCI in Practice

3.1  The Brazilian Justice System

The Brazilian federation has a dual judicial system, with both state and federal courts.54 The federal courts’ jurisdiction is not defined by subject matter, but rather ratione personae: all cases in which one of the parties is the federal government, one of its agencies, or one of its companies must be litigated before the federal courts.55 Moreover, the Brazilian justice system is further divided into specialized branches with separate labour courts, electoral courts, and military courts—each of which has its own specific court of last instance. Both state and federal courts are divided into trial and appellate levels (instâncias), and both include internal, specialized ‘small claims’ courts for simpler cases. At both levels, the small claims and the ordinary courts can, in principle, hear cases arising under federal law. In the case of the MCI, litigation typically takes place in the state court system, unless the federal government or one of its entities is a defendant or plaintiff. The state and federal systems connect, at the top, in one of the two apex or ‘high’ courts: the Supremo Tribunal Federal, which has jurisdiction over appeals and abstract review lawsuits in which a constitutional question is raised, and the Superior Tribunal de Justiça.56 Appeals in cases involving the MCI typically reach the STJ as the third and last instance. However, in Brazilian legal practice it is not particularly hard to make the case that a constitutional controversy is involved. In the case of the MCI, for example, it can be argued that the (mis)application of the MCI in a given case violates free speech or privacy, thus potentially triggering the STF’s jurisdiction. Still, in quantitative terms, decisions on MCI-related issues are expected to come more often from the STJ than from the STF. The relationship between high court rulings (‘jurisprudência’) and the behaviour of the lower courts is a complex issue in Brazilian law. Formally, there is no system of binding precedent. When a high court decides a concrete case, the conventional, traditional view is that such a ruling is binding only for the litigating parties; Brazilian judges have traditionally been trained to see high court interpretations of the law, and even of the constitution, as persuasive, non-binding authority. This scenario has changed considerably since the 1990s, and especially since the Judicial Reform of 2005 and the Civil Procedure

54  For general information on the Brazilian judicial system, see Keith Rosenn, ‘Judicial Review in Brazil: Developments under the 1988 Constitution’ (2000) 7 Sw. J. L. & Trade Am. 291. 55  See Constitution 1988, art. 109(I) (Bra.). 56  While all trial judges and the majority of appellate judges in both state and federal courts are career professionals, who enter the judiciary by means of a public examination at the trial level, high court judicial appointments are more political. STF judges are appointed by the President and confirmed by the Senate; STJ judges follow the same procedure but the appointees must be chosen from a list of names drafted by the Prosecutor’s Office (Ministério Público), state courts of appeals, and federal courts of appeals.


The Marco Civil Da Internet and Digital Constitutionalism   203 Code of 2015.57 These most recent rules create many mechanisms by which high courts can quickly dismiss appeals that are in tension with their established case law. In the specific case of the STF, statutory and constitutional interpretations adopted when deciding appeals can under certain procedural conditions be treated as binding—for most practical purposes, if not as a formal matter—by the lower courts.58 Regarding the timing of the decisions, judicial developments on the MCI are shaped by a couple features of the justice system. First, the Brazilian judiciary—especially at the appellate level and in the higher courts—is known for its very large docket and potentially very long proceedings. It is not rare for appeals at the STJ and STF to take several years before being decided. Secondly, and most importantly, at the STF and the STJ the reporting judge in the case and a panel’s presiding judge have complete discretion, in practice, to decide when a case is ready for judgment. There is no binding deadline or effective judicial custom in place constraining these judicial actors in that regard.59 This arrangement means that, in spite of the very large backlog, some cases are selected for a quick decision, while others will linger in the docket for years and sometimes decades. The combination of these features makes it very hard to predict when controversies involving new statutes, such as the MCI, will be settled by the STJ or perhaps the STF. Indeed, as we will see, some key lawsuits concerning the interpretation and implementation of the MCI have been lingering in these two courts’ dockets for years and there is no reliable way of estimating when they will be decided.

3.2  Relevant Decisions in High Courts In its pre-MCI decisions, as we saw earlier, the STJ had been pointing in conflicting directions. Depending on how they articulated legal ideas of strict liability, joint liability, and liability arising from ‘risks’ created by economic activity, STJ decisions could move either towards or away from the idea of safe harbours for internet intermediaries that promptly removed illegal content, when requested to do so, and that provided the means to identify users engaged in illegal activities.60 The MCI was drafted, among other ­reasons, precisely to eliminate some of these legal uncertainties. With the MCI, the Brazilian legislation expressly created a safe harbour provision as a general rule. The law became an important trench not only for protecting internet companies but also for freedom of expression. 57  See, generally, Maria Angela de Santa Cruz Oliveira and Nuno Garoupa, ‘Stare decisis and certiorari arrive to Brazil: A Comparative Law and Economics Approach’ (2012) 26(2) Emory Int’l L. Rev. 555, 555–98. 58  For an overview of these reforms, see Maria Angela de Santa Cruz Oliveira, ‘Reforming the Brazilian Supreme Federal Court: a Comparative Approach’ (2006) 5 Wash. U. Global Stud. L. Rev. 99. 59  See Diego Werneck Arguelhes and Ivar Hartmann, ‘Timing Control without Docket Control: How Individual Justices Shape the Brazilian Supreme Court’s Agenda’ (2017) 5(1) J. of Law and Courts 105–40 (including a discussion of (discretionary) timing control mechanisms at the level of the STF). 60  See e.g. STJ IP DA SB v Google [2010] REsp 1.193.764-SP (Bra.).


Currently, more than 100 decisions concerning the liability of internet intermediaries have been issued by the STJ. In a recent official compilation of its decisions on the topic, the court affirmed that its case law has repeatedly stated that internet intermediaries: (1) are not strictly liable for user-generated illegal content; (2) cannot be compelled to pre-emptively filter information uploaded by users; (3) must remove infringing content as soon as they are made aware of illegal content on their platforms, and have to pay damages if they fail to do so; and (4) must develop and maintain minimally effective mechanisms to enable the identification of their users.61 The STJ decisions also seem to assert, as a general rule, that those seeking the removal of infringing content on the internet must clearly specify that content’s specific URL. Indeed, the STJ takes the indication of the URL as a requirement for any court order determining content removal, in order both to protect application providers and to mitigate negative impacts on freedom of expression. The STJ’s case law also emphasizes that the use of URLs to clearly identify the infringing content to be removed is a safe, objective criterion for the assessment of OSPs’ compliance with court orders.62 Another relevant feature of the STJ’s case law concerns the liability of search engines for the indexing of content publicly available on the internet. As mentioned in Section 2.1, the pre-MCI decisions by the STJ had asserted that search engines would not be liable for the content indexed by their services. However, the court maintained the same position even after enactment of the MCI, in a case decided in December 2016. Following the pre-MCI rationale, the STJ’s Third Chamber decided that a search engine cannot be compelled to remove infringing content from its index.63 However, this position is not settled within the court. In a case decided in May 2018 involving a ‘right to be forgotten’ claim,64 the same Third Chamber decided, by a majority, that Google had to delist from its search engine’s page of results a number of links pointing to news content connecting the plaintiff to a possible fraud in a civil service public examination, whenever the search query was performed using the plaintiff’s name. Interestingly, the decision is not based on data protection rules. It was issued prior to the recently approved (and still not in force) Brazilian Data Protection Law, which created a legal
61  See ‘Provedores, redes sociais e conteúdos ofensivos: o papel do STJ na definição de responsabilidades’ (Superior Tribunal de Justiça Noticias, 17 September 2017) . 62  See e.g. STJ Cristiane Leal de Oliveira v Google [2018] REsp 1.698.647 (Bra.). The decision mentions that, without the specific URL, the intermediary would have to monitor all the content. As mentioned earlier, prior to the MCI different panels of the STJ (the third and fourth chambers, specifically) were issuing conflicting decisions on the necessity of identifying the URL of the infringing content. The requirement of URL identification eventually prevailed in the decision of the STJ’s Second Section (which gathers the third and fourth chambers of the court) in REcl 5.072/AC, decided in December 2013. The same conclusion was reached by the Second Section in a case concerning copyright infringement on the Orkut platform (REsp 1.512.647-MG, decided in May 2015).
63  In that case, the plaintiff was seeking to remove from Google’s index the links to search results that would violate her privacy. See STJ SMS v Google [2016] REsp 1.593.873-SP (Bra.). 64  See STJ Denise Pieri Nunes v Google [2018] REsp 1.660.168-RJ (Bra.).


framework in Brazil very similar to the current European laws on data protection.65 Therefore, many important issues were not clearly addressed in the decision, including (and most importantly) the legal grounds that would justify the delisting from search engines of content that could be considered perfectly legal.66

3.3  Cases Pending Before the Federal Supreme Court In addition to the pre-MCI Aliandra case (discussed in Section 2.1), three relevant cases are still pending before the STF. In the first case (Recurso Extraordinário no. 1.037.396), the STF has accepted a discussion of the constitutionality of the MCI's article 19.67 Two other cases (ADI 5527 and ADPF 403/2016)68 involve recent trial court decisions determining the nationwide suspension of the WhatsApp service for failing to provide information requested by authorities during the course of criminal investigations. (a) The constitutionality of the MCI's article 19. In a case originally brought before the small claims courts of São Paulo, the plaintiff requested the exclusion of a fake profile that used her name, the payment of monetary damages from Facebook, and the IP address of the computer used to create and manage the profile.69 The trial judge agreed with the plaintiff's claims, with the exception of the payment of monetary damages, as he considered that Facebook had acted promptly after the court ordered the removal of the profile. After the plaintiff's appeal, the 2nd Panel of the Piracicaba Special Appeals Court (Colégio Recursal) decided that Facebook was liable for not removing the fake profile and for not providing the proper tools to remove the content. The judges decided that article 19's limited intermediary liability was incompatible with consumer protection rules enshrined in the Brazilian Constitution and the Consumer Protection Code, and awarded the plaintiff 10,000 reais in damages.70 On appeal, Facebook requested the STF to affirm the constitutionality of article 19, dispelling the alleged conflict between the 65  Brazil's Data Protection Law will become enforceable in August 2020. See Regulation 2016/679/EU of the European Parliament and the Council on the General Data Protection Regulation (the GDPR) [2016] OJ L119/1 or Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31. 66  Other issues would include the public nature of the plaintiff (a public prosecutor, in that case), and the impact of the passage of time for the recognition of a 'right to be forgotten' in a given case. Most of these issues were completely ignored by the decision, which does very little to provide legal certainty of the existence and applicability of a right to be forgotten in Brazilian law. For a discussion of recent developments around the right to be forgotten in Brazil and in the region, see Diego Werneck Arguelhes and Luiz Fernando Marrey Moncau, 'Privacy Law in Latin America' in Conrado H. Mendes and Roberto Gargarella (eds), The Oxford Handbook of Constitutional Law in Latin America (OUP 2020). 67  For more information and documents, see Felipe Octaviano Delgado Busnello, 'Case n. 1037396' (WILMap, 4 April 2018). 68  See Luiz Fernando Marrey Moncau, 'Supreme Court, Arguição de Descumprimento de Preceito Fundamental 403/2016' (WILMap, 19 April 2016). 69  See Delgado Busnello (n. 67). 70 ibid.


MCI and the consumer protection rules. The STF has so far only determined that the constitutional controversy underlying the appeal has 'general repercussions' and has therefore agreed to decide the case.71 This is one of the most relevant pending cases in Brazil on the issue of intermediary liability, and the fate of the safe harbour regime adopted in article 19 of the MCI hinges on the STF's decision. (b) The 'WhatsApp blockage' cases in the STF. Since WhatsApp arrived in Brazil, trial judges have ordered the suspension of the messaging application (owned by Facebook) on four different occasions, three of which resulted in the complete blocking of the service nationwide. These injunctions were issued due to the alleged non-compliance of the company with previous judicial orders, in the course of criminal investigations, requesting data about investigated individuals. As some of the cases were not public, it is not entirely clear if they involved requests for data that the company would be required to provide under the MCI (e.g. application logs), or if they involved the actual content of WhatsApp conversations, which would be protected by end-to-end encryption. The blockages were extremely unpopular and heavily criticized, leading two political parties to file constitutional review lawsuits directly before the STF. In ADI 5527,72 filed by the Partido da República (PR), the main claim is that article 12, III and IV of the MCI should be considered unconstitutional. These are the legal provisions that allow for the suspension of services that fail to protect users' personal data and privacy. In ADPF 403,73 filed by the Partido Popular Socialista (PPS), the plaintiff asked the STF to declare the same provisions unconstitutional in order to protect freedom of expression (art. 5, IX of the Brazilian Constitution), and because suspending a service in that fashion would be a clearly disproportionate measure. At the time of writing, the cases are still pending before the STF.74

4.  The MCI and Digital Constitutionalism Digital constitutionalism (DC) has emerged as a term used to link 'a constellation of initiatives that sought to articulate a set of political rights, governance norms, and

71 ibid. 72  For a summary of the case, see Felipe Mansur, 'ADI 5527 and appblocks: a problem in the wording of the Law or in its Interpretation?' (Bloqueios.info, 18 November 2016). 73  See, for a summary of the case, Paula Pécora de Barros, 'ADPF 403 in STF: are whatsapp blocking constitutional?' (Bloqueios.info, 21 November 2016). 74  A timeline of all service blocks in Brazil and case materials is available at .


limitations on the exercise of power on the internet'.75 Instead of focusing on the technical architecture of the internet, the documents adopt a political perspective by committing to specific rights and legal principles that must shape online interactions between citizens, companies, and the state.76 While DC and related expressions have been employed in a variety of senses since the 1990s, they are arguably connected by the commitment to create 'a normative framework for the protection of fundamental rights and the balance of powers in the digital environment'.77 As a full review of these debates is beyond the scope of this chapter, in this section we briefly attempt to contextualize and position Brazil's MCI within the broader backdrop of this 'constellation of initiatives' and the normative commitments they express.78 In one influential attempt to map the state of DC initiatives globally, Redeker and others identified thirty-two documents embodying proposals to regulate the internet with the following shared traits: (1) substantively, they address 'fundamental political questions that have an inherently constitutional character', by articulating rights, structuring governance for both state and private actors online, and laying down limits to state power; (2) they 'speak to a particular and defined political community'; (3) they at least aspire to formal and legitimate political recognition in such community, including the appropriate enforcement mechanisms, while not necessarily being formally enacted as a statute or constitutional norm; (4) they are, at least on some level, comprehensive efforts, in the sense that they favour broad principles instead of focusing on specific policy issues; (5) they 'represent the views of some organization, coalition, state or other organized group of some kind', thus excluding documents that express the position of specific individuals.79 Evaluated against these criteria, the MCI arguably stands out as a particularly intense manifestation of DC. Out of the thirty-two documents compiled by Redeker and others, the MCI is the only one that currently has the status of a formal statute. Most examples of DC worldwide are non-binding charters, official statements, or bills that are still being debated by legislatures.80 Moreover, although in the authors' classification the MCI is coded as originating from the government, and while it is true that the executive branch 75  Redeker, Gill, and Gasser (n. 31) 303; Claudia Padovani and Mauro Santaniello, 'Digital constitutionalism: Fundamental rights and power limitation in the Internet ecosystem' (2018) 80 Int'l Communication Gazette 4, 296. 76  See Micheal Yilma, 'Digital privacy and virtues of multilateral digital constitutionalism—preliminary thoughts' (2017) 25 Int'l J. of L. and IT 2, 117; Padovani and Santaniello (n. 75) 296 ('these documents have a political rather than a technical primary scope . . . digital constitutionalists foster a normative framework embedded in, or informed by, international human rights law and domestic democratic constitutions'). 77  Edoardo Celeste, 'Digital Constitutionalism: Mapping the Constitutional Response to Digital Technology's Challenges', HIIG Discussion Paper Series No. 2018-02 (2018) 15 (also noting that there is no consensus among the analysed scholars on the aim of digital constitutionalism). 78  See Celeste ibid.
for a useful critical review of the literature, as well as a proposal to organize the debate in conceptual terms. 79  The emphasis is on ‘constitutionalism’ not on ‘constitutions’ because the aspirations and values that underlie those documents are, from that perspective, more important than their formal status in the legal system. See Redeker, Gill, and Gasser (n. 31) 303. 80  See Padovani and Santaniello (n. 75) 298.


played an important role in shaping and promoting the proposal, the MCI was enacted by the Brazilian Congress after a high level of participation and engagement by civil society groups, scholars, and business interests. The final text was hailed in the Brazilian political community as a landmark statement of the binding principles that would shape online life from that moment forward.81 This perspective on DC emphasizes the similarities and continuities between the documents and traditional constitutional concerns, arguments, ideas, and texts. It includes documents that, while not having constitutional or even statutory status, adopt constitutional language and aspirations in attempting to regulate the internet, focusing on broad principles aiming to protect rights and regulate how power can be exercised over individuals online. The exact scope of application of these aspirations, however, has been debated. In the most recent version of their study, Redeker and others acknowledge that, across the set of documents they map, 'power' might or might not include both public and private actors.82 In contrast, some authors use DC to refer specifically or primarily to attempts to translate constitutional aspirations into the realm of private companies' governance. In such perspectives, manifestations of DC are seen first and foremost as a reaction to the fact that, in contemporary societies, and especially on the internet, individual rights are put at risk by private actors as often as (if not more than) by state or quasi-state actors.83 Private power is typically exercised online by corporations that control, in many ways, a user's access to and enjoyment of rights and liberties online.84 After the 2000s, corporations increasingly became the central arena for the challenge of controlling power and protecting rights online.85 In this scenario, beyond noting descriptive regularities in such movements for protecting rights online, some commentators have adopted a normative dimension and proposed specific versions of constitutionalist ideology for the virtual world.86 In doing so, they make the claim that we should update and translate the traditional concerns of constitutionalism—protecting rights and limiting

81  The strong connection between national politics and the MCI might even be a problem for the ‘exportability’ of this proposal to other communities. See Guy Hoskins, ‘Draft Once; Deploy Everywhere? Contextualizing Digital Law and Brazil’s Marco Civil da Internet’ (2018) 19 Television & New Media 5 (arguing that the specific political conflicts that shaped the MCI’s legislative process, as well as Brazil’s recent dictatorial past, make it harder to see this document as a potential blueprint for affirming online rights in other contexts). 82  See Redeker, Gill, and Gasser (n. 31) 304 (‘[e]fforts towards digital constitutionalism may aim limit the power of both public authorities and private corporations through the recognition of rights’). 83  See Nicolas Suzor, ‘The Role of the Rule of Law in Virtual Communities’ (2010) 25 Berkeley Tech. L.J. 1818, 1818–86. See also Celeste (n. 77) 15 (arguing that ‘digital constitutionalism . . . is a concept which refers to a specific context, the digital environment, where private actors emerge beside nation states as potential infringers of fundamental rights’). 84  See e.g. Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (2018) Social Media + Society 1. 85  Redeker, Gill, and Gasser (n. 31) 314. 86  See e.g. Suzor (n. 83) (for a specific analysis/application of constitutional, rule of law values to the governance of online platforms).


power—into a realm where the enjoyment of rights is very much threatened by private companies that hold the resources and tools to shape our online lives.87 How do the MCI's provisions measure up against these aspirations? The MCI has been criticized for simply affirming, on the internet, the applicability of the same rights as those enjoyed outside the virtual sphere.88 But this, in itself, is a consequential legislative decision, with implications beyond the specific rights affirmed. Asserting the applicability of fundamental rights online provides a justification for stricter state regulation of the private governance of the internet. Moreover, in certain legal communities it might also pave the way, at least in principle, for the possibility of judicial enforcement against private actors, such as OSPs, of freedom of expression, access to information, privacy, and property, among other fundamental rights. The MCI includes general provisions that express a commitment to protecting fundamental rights against private parties, although the exact scope of this protection is not necessarily the same for all the rights included in the statute. Article 8 states that 'guaranteeing the rights to privacy and freedom of expression in communications is a precondition for the full exercise of the right to access to the internet'.89 Article 7 establishes that 'access to internet is essential to the exercise of citizenship'.90 These broad principles are then detailed in several clauses, which create specific rights that, in the MCI's terms, override contractual clauses that purport to regulate those matters differently.91 Consider the following examples. First, the privacy and communications provisions. Although falling short of articulating a complete data protection regime, the MCI mandates that OSPs clearly inform their users of how their personal data is collected and processed, and users have the right not to have such data employed for purposes other than that for which consent was originally given.92 Moreover, article 10 and its provisions establish that data concerning internet connection and application logs, as well as personal data and the content of private communications, must be stored and made available in compliance with the rights to 'intimacy, private life, personal honour, and
Sole Paragraph: Contractual clauses that are in breach of the provision [caput] above are void by operation of law, such as those that: I – cause an offense to the inviolability and secrecy of private communications over the internet; or II – in adhesion contracts, do not provide an alternative to the contracting party to adopt the Brazilian forum for resolution of disputes arising from services rendered in Brazil’). 92  See MCI 2014 (n. 2) art. 7(VII) (establishing that users have the right to ‘non-disclosure to third parties of users’ personal data, including connection records and records of access to internet applications, unless with express, free and informed consent or in accordance with the cases provided by law’).


the image of the parties directly or indirectly involved'. The MCI also states specifically that the content of communications can only be made available by means of a court order.93 Secondly, as already mentioned, article 7, IV of the MCI affirms the right of internet users not to be disconnected from the internet, except when the disconnection of the service is due to a debt directly originating from its use.94 Within the MCI framework, access to the internet is fundamentally connected to citizenship, and the specific right to stay connected to the internet is designed to be enforceable against some private parties (access providers). Article 7, V takes a further step and creates a right to the 'maintenance of the quality of the internet connection contracted with the provider'.95 The wording of these provisions and the legislative history of the MCI leave no doubt that this limited right to internet access is enforceable against private companies.96 Beyond judicial enforcement, however, such provisions can serve to enable a framework for regulation aimed at limiting violations, by private companies, of other dimensions of individual access to the internet. Thirdly, there is the issue of protecting freedom of expression against private parties. While the MCI states that freedom of expression will be protected online,97 it does not explicitly state that private parties must respect internet users' freedom of expression to the same extent that state actors are bound. However, in article 19 the MCI seems to at least hint in that direction. Before laying out the rules for (conditional) intermediary liability, article 19 states that such rules are created 'in order to ensure freedom of expression and prevent censorship'.98 Indeed, at least some trial and appellate court decisions have treated article 19 as a sufficient legal basis for enforcing users' freedom of expression against OSPs. In 2018, the Third Chamber of the Santa Catarina Court of Appeals (TJSC), for example, confirmed a trial court decision affirming that, in the absence of a specific court order, Google could not remove content posted by a user after receiving notification by a third party claiming that there was a copyright violation. In their reading of article 19, the judges stated that 'freedom of expression prevails until a court order says otherwise', and found Google liable for damages caused by its violation of the user's right.99 The majority opinion

93  Art. 10(2) (‘The content of private communications may only be made available by court order, in the cases and in the manner established by law, and in compliance with items II and III of art. 7’). 94  See MCI 2014 (n. 2) art. 7(IV). 95  ibid. art. 7(V). 96  There is a Constitutional Amendment Proposal (PEC 185/2015) in Congress that, if approved, would include the ‘right to internet access’ to art. 5 of the Brazilian Constitution as an explicit, specific fundamental right. The proposal can be found at . 97  See MCI 2014 (n. 2) art. 3(I) and 8. 98  ibid. art. 19. 99  See Luciano Pádua, ‘Google é condenado por tirar paródias das músicas 10% e Malandramente do YouTube’ (Jota, 9 April 2018) (a case involving the removal, by YouTube, of a parody of a copyrighted song that had been posted on the platform).


also found that YouTube's terms of service were in violation of the MCI in that regard, and therefore should not be enforced.100 Similar decisions can be found in other state courts, and some of them also point to procedural concerns in the way OSPs deal with removal requests.101 That is, even when the MCI authorizes content to be removed, it establishes a procedure for user participation and defence that must be followed. Still, neither of the two high courts (the STF and the STJ) has taken a clear position on the issue. It remains to be seen whether this is the direction that Brazilian judges will follow in implementing the MCI's basic commitment to protecting rights in a virtual world shaped and governed by private actors.102

5.  Concluding Remarks Brazil's MCI was the outcome of an open legislative process with high levels of public engagement from business, government, and civil society sectors. Against the previous scenario of legal uncertainty, it attempted to define a clear regime for intermediary liability, balancing freedom of expression, privacy, and due process concerns as well as affirming a connection between access to the internet and citizenship. The MCI includes several provisions affecting the activities of internet intermediaries, such as data retention, net neutrality, privacy, and consumer rights. With regard to intermediary liability, it established a broad immunity for internet access providers and a narrower, conditional immunity for other OSPs. As a general rule, application providers will not be liable unless they fail to take steps to remove infringing content after a court orders its removal. The only two exceptions crafted by the law are copyright violations and the non-consensual dissemination of intimate images (popularly known as 'revenge porn'). 100  ibid. Note that the MCI includes a provision stating that art. 19 claims that involve copyright violations will be covered by specific rules, to be enacted in the future. See MCI 2014 (n. 2) art. 19(2) (providing that '[t]he implementation of the provisions of this article for infringement of copyright or related rights is subject to a specific legal provision, which must respect freedom of speech and other guarantees provided for in art. 5 of the Federal Constitution'). 101  See e.g. Redação, 'Justiça determina que rede social restabeleça página cancelada' (Tribunal de Justiça do Estado de São Paulo, 3 May 2018); BEA, 'Turma mantém condenação do facebook por desativar página de deputado' (TJDFT, 30 July 2018). 102  The provisions mentioned earlier can still be considered limited, in their scope, if measured against a more ambitious normative vision of digital constitutionalism. E.g. the MCI contains very little in terms of procedural guarantees and decision-making rules that can shape how the platforms themselves are governed. It prohibits them from violating certain substantive rights of their users, but it does not establish specific procedures on how their norms of governance must be created, changed, and enforced. See Suzor (n. 83) for an outline of such a proposal.


By requiring such a court order, the MCI rules out strict liability regimes and proactive monitoring of content. Moreover, requiring specific and clear identification of the infringing content prevents the imposition of 'staydown' obligations (i.e. the obligation to ensure that the removed content 'stays down' and does not reappear on the platform or service).103 In practice, however, other laws may come into play in such cases, depending on the type of content disseminated on internet platforms. Brazilian electoral laws—which are typically updated before each election and play an important role in the removal of online content during electoral periods—currently reaffirm the MCI standards and clearly state that a URL must be indicated by any party seeking content removal. In doing so, they replicate judicial constructions of the MCI provision determining that a court order must provide a clear identification pointing to the specific infringing content, 'allowing the unquestionable location of the material.' Electoral laws, however, go beyond the MCI by providing that content removal orders must not require removal in under twenty-four hours (except in 'extraordinary' cases). The Child and Adolescent Statute (ECA) also established a specific framework for content removal, by defining crimes relating to distributing or publishing, by any means, photographs, videos, or any records of a sexual nature involving children or adolescents. In practice, this means that service providers must respond to takedown notices to avoid criminal liability under the ECA. That said, the MCI's rights and principles have only just begun to be litigated in the courts and three important cases are pending before the STF. The most important case will discuss the constitutionality of the MCI's article 19, the main provision on the liability of internet intermediaries for user-generated content. Two other cases involve recent trial court decisions determining a nationwide suspension of the WhatsApp service for failing to provide information requested by the authorities during the course of criminal investigations. As these cases illustrate, the MCI rights and principles will have to be interpreted by judges, companies, and regulators in the near future. In this process, many of the conflicts arising in the MCI's drafting process will resurface, potentially leading to both expansive and restrictive reinterpretations. In that process, the question of the formal status of the MCI will be a key feature of what the future holds for the regulation of the internet in Brazil. Formally, the MCI is simply an ordinary federal statute; it does not have constitutional status—or even the 'supra-legal, but infra-constitutional' status that the STF has conferred on international human rights norms.104 The MCI can thus be superseded by future legislation, and new rounds of legislation are expected on related issues such as copyright, trade mark, and publicity and image rights. Moreover, on these topics, it is still debatable whether even a more recent statute like the MCI has successfully displaced, in the judicial mindset and judicial decisions, older 103  See Chapter 27. 104  See Antonio Moreira Maués, 'Supra-legality of international human rights treaties and constitutional interpretation' (2013) 18 SUR Int'l J. on Human Rights.


statutes of the same rank, for example the Civil Code or the Consumer Protection Code. As we have seen, there are still high-profile cases on these statutory conflicts waiting to be decided by the STJ and even the STF. While it is unlikely that the STF will consider large parts of the MCI to be unconstitutional at that point, the court might still restrict its scope by giving greater weight to rules emanating from different legal regimes. That is not to say that the MCI is in a particularly vulnerable position. In Brazil, many high-profile, landmark statutes are in the same situation as the MCI: the Civil Code, the Criminal Code, the Code of Civil Procedure, and the Consumer Protection Code are all ordinary statutes that, considering how much thought, effort, and (in the case of the MCI) public participation has gone into their drafting process, are expected to last, as a coherent whole, for longer than a few legislatures. But this normative aspiration of scholars and practitioners has rarely carried the day in Brazil. Even the high-profile Civil, Criminal, and Procedure Codes are often tweaked and adjusted by specific, subsequent laws—and their relationship to one another, however important each might be in principle in its specific domain, is always a matter to be decided and shaped by judicial decisions. At least in its first half-decade of existence, however, and in spite of permanent lobbying efforts by business sectors, the MCI has so far resisted significant change, either through new rounds of legislation or through judicial interpretation. While this resilience remains to be tested and observed over a longer period, it might suggest that the MCI has gathered political and doctrinal relevance beyond its formal status as an ordinary federal law—a consequential outcome for global debates on digital constitutionalism.


Chapter 11

Intermediary Liability in Africa: Looking Back, Moving Forward?

Nicolo Zingales*

How far does the conventional discourse on the protection of intermediaries for third party content apply to law and policymaking in the African continent? The dynamics of intermediary liability differ significantly in the African context. This chapter reviews key developments in the region, documenting a trend of progressive diffusion of intermediary liability protections running in parallel to an increasing pressure on intermediaries to fulfil broad and open-ended public policy mandates. It is argued that the tension between these two forces undermines the value of intermediary protections, creating an uncertain terrain for a wide range of actors, and potentially affecting both technological progress and the creation of local content in one of the most underdeveloped regions of the world. This is seen against the backdrop of the great potential offered by the African Union in driving harmonization in the field, and the blueprint of the most sophisticated model of intermediary protections in the region: the South African Electronic Communications and Transactions Act.

1.  The Slow Rise of the Intermediary Liability Discourse in Africa Limitations of intermediary liability for third party content have not traditionally taken centre stage in discussions about the regulatory framework for information technology *  The author would like to acknowledge the support of the Fundação Getulio Vargas and the CyberBRICS project, in the context of which he was hosted at the FGV Law School in January and February 2019. See . © Nicolo Zingales 2020.


in the African region, typically focused on more immediate concerns such as internet access (or the lack thereof), the use of technology for developmental purposes,1 and cybersecurity. This may help to explain why liability limitations began to surface relatively late compared to other jurisdictions around the world. By way of illustration, a series of studies2 conducted by the Association for Progressive Communications (APC) between 2012 and 2013 on intermediary liability in Nigeria, Kenya, South Africa, and Uganda showed that only half of the surveyed countries provided for some sort of limitation, one of which was of fairly recent introduction (Uganda, in 2011). Note that this was more than ten years after the establishment of liability safe harbours in the e-Commerce Directive in Europe, and almost thirteen and fifteen years after the 'safe harbours' of the Digital Millennium Copyright Act (DMCA) and the Communications Decency Act (CDA) in the United States. One may naively question the reason for such a lag, especially considering that policymakers in Europe and the United States had been discussing intermediary liability across the Atlantic since the 1990s as an integral part of concepts such as the 'information superhighway'3 and the 'information society'.4 However, it is possible to offer at least one economic reason why intermediary liability has been far from a priority in the African continent: the major connectivity gap affecting the continent since the beginning of the internet hindered not only the developmental potential of African markets, but also the local hosting of African internet sites (mainly hosted in the United States and Europe)5 and the ability of intermediary services in the region to make substantial profits. This, along with a perception of less reliable enforcement of laws across the continent6 and a widespread use in international contracts of exclusive jurisdiction clauses electing forum outside the African region, may well have served as a sufficient deterrent to the unfolding of important intermediary liability battles. On the other hand, precisely in the light of the crucial role of internet service providers (ISPs) in delivering connectivity across the region, one could argue that failing to explicitly regulate the liability of technology providers in large infrastructure investments in the region amounts to a glaring omission from a policymaking perspective. Understanding the net value of intermediary protection legislation requires an appreciation of the realpolitik of liability in each jurisdiction, which goes beyond the scope of this chapter. However, an analysis 1  This is the field of research and policy called 'information technology for development', or 'ICT4D'. One notable initiative in this space is InfoDev, a global multi-donor programme in the World Bank Group that supports growth-oriented entrepreneurs through business incubators and innovation hubs and aims to foster the attainment of African development goals by building local capacity. 2  The entire collection of APC papers on this theme can be found at . 3  See US Vice-President Al Gore's remarks at the International Telecommunications Union on 21 March 1994. 4  European Commission, 'Europe's Way to the Information Society. An Action Plan' COM(94) 347 (1994). 5  See e.g. Mike Jensen, 'Africa: Internet Status, 10/29/00', University of Pennsylvania African Studies Centre (2000); and the periodic status reports at .
6  For concrete measurements, see the World Bank, ‘Strength of legal rights index’ .


of legal provisions addressing intermediary liability provides an interesting picture of this evolving framework.

2. First-generation Liability Limitations 2.1  South Africa Whatever the reason for the slow emergence of the concept of intermediary liability in the continent, it is apparent that South Africa was at the forefront of this discussion. As early as July 1999, just about the time the EU legislator was finalizing the drafting of the framework for intermediary liability in Europe, the South African Department of Communications issued a Discussion Paper on E-Commerce.7 The department called for laws regarding the liability of online intermediaries that ought to be 'clear about the role of each participant along the chain of on-line services and transactions',8 noting that such intermediaries seldom have direct responsibility for, or even knowledge of, the specific content of the websites that may be hosted on their facilities.9 The discussion teased out in the paper led to another document for public consultation, the Green Paper on Electronic Commerce for South Africa, which raised a number of fundamental questions about the nature of enforcement in an intermediated environment.10 Ultimately, the process culminated in the enactment in 2002 of the Electronic Communications and Transactions Act (ECTA), which constitutes to date the most articulate framework for dealing with intermediary liability in Africa.11 The present section illustrates that framework in detail. ECTA was above all legislation designed to enable the growth of e-commerce in South Africa. Among its stated goals, it aimed to remove and prevent barriers to electronic communications and transactions in the Republic; promote legal certainty and confidence 7  See Department of Communications Republic of South Africa, Discussion Paper on Electronic Commerce Policy (1999). 8  ibid. 29. 9 ibid. 10 See Department of Communications Republic of South Africa, A Green Paper on Electronic Commerce for South Africa (2000). (It included the following questions: '1. How should liability be determined in copyright infringements given intangibility and documents in transit? 2. What is the potential liability of end-users "reproducing" infringing copies (transient copies) of copyrighted works by the mere act of viewing them on their PC's, etc? 3. How do we strike a balance of enforcing and monitoring intellectual property rights with the need to promote use of e-commerce and cyberspace publishing? 4. Is framing (the incorporation of a website within a website) and hyperlinking (the creation of digital paths linking two or more websites) an infringement of copyright? 5. How are the expenses, efforts, duration and technical evidence demands for enforcing copyright protection in court going to be implemented?') 11  Electronic Communications and Transactions Act (ECTA) no. 25 of 30 August 2002 (S. Africa).


in respect of electronic communications and transactions; ensure that electronic transactions in the Republic conform to the highest international standards; and encourage investment and innovation in respect of electronic transactions in the Republic.12 In line with those aims, ECTA devoted an entire chapter (Chapter XI) to limitations of liability for service providers, defined as 'any person providing information system services', which includes 'the provision of connections, the operation of facilities for information systems, the provision of access to information systems, the transmission or routing of data messages between or among points specified by a user and the processing and storage of data, at the individual request of the recipient of the service'. This definition was broad enough to capture four different types of activity, mimicking the corresponding categories of intermediation enshrined in the US DMCA: mere conduit; caching; hosting; and information location tools. Much like in the DMCA and in the EU's e-Commerce Directive, the safe harbours were accompanied by the overarching principle that the law cannot be construed to require a service provider to monitor the data which it transmits or stores, or to actively seek facts or circumstances indicating an unlawful activity. However, this was made subject to the proviso that the Minister of Communications may, in accordance with section 14 of the Constitution (which enshrines the right to privacy), prescribe procedures for service providers to (a) inform the competent public authorities of alleged illegal activities undertaken or information provided by recipients of their service; and (b) communicate to the competent authorities, at their request, information enabling the identification of recipients of their service. Thus, it is important to clarify from the outset that the safe harbour provisions do leave some wiggle room for the government to require intermediaries to deviate from the 'no monitoring' principle for special categories of case, although this power has never been used to date. It is also worth noting that, as for the safe harbour for hosts in the DMCA, it is not crystal clear what happens if a notification does not follow the specified takedown procedure, specifically whether it may nevertheless be sufficient to trigger actual or constructive knowledge on the part of the provider. Conceivably, traditional common law standards apply. For example, the common law of defamation finds negligence sufficient to trigger liability of the media.13 Similarly, in the field of copyright South African courts have defended the idea that indirect liability is triggered when a copyright owner suffers an economic loss that was foreseeable by a defendant who was under a legal duty to prevent it;14 and have rejected defences grounded on pleaded ignorance over the illegality of the works being distributed through the services of the ISP, finding sufficient the notice of facts that would
See also the preamble to the Act which lists as its purpose ‘to provide for the facilitation and regulation of electronic communications and transactions; to provide for the development of a national e-strategy for the Republic; to promote universal access to electronic communications and transactions and the use of electronic transactions by SMMEs; to provide for human resource development in electronic transactions; to prevent abuse of information systems; to encourage the use of e-government services; and to provide for matters connected therewith’. 13  See s. 4(4). 14 See Arthur E. Abrahams & Gross v Cohen and others [1991] (2) SA 301 (S. Africa).


suggest to a reasonable person that a copyright infringement was being committed.15 What constitutes knowledge will thus vary depending on the type of content in question, a point of difference from the US (copyright-focused) framework which creates some problems in the overlap between ECTA and other regimes.16 An even more distinctive peculiarity of the South African model of intermediary protections is that it conditions the limitations of liability on the fulfilment of two additional requirements: (1) the intermediary's membership of an industry representative body (IRB); and (2) adoption and implementation of the corresponding code of conduct.17 Section 71 clarifies that recognition of such a body can only be obtained upon application to the minister, provided that (a) members of the representative body under examination are subject to a code of conduct; (b) membership is subject to adequate criteria; (c) the code requires continued adherence to adequate standards of conduct; and (d) the representative body is capable of monitoring and enforcing the code of conduct adequately. The proportionality of the first requirement has been questioned, particularly for small service providers,18 as the imposition of upfront costs in that regard may effectively chill conduct that would otherwise benefit from the liability limitations, including speech and business activity that would be protected in other legal regimes.19 The third requirement, concerning the 'code of conduct', was integrated by the Minister of Communications in 2006 with the issuance of the 'Guidelines for recognition of industry representative bodies of information system service providers'.20 The Guidelines largely confirm the 'hands off' approach chosen by the legislator by stating that 'the only monitoring or control done by the State . . . is to ensure that the IRB and its ISPs meet certain minimum requirements'. However, this does not take away the important point that through the Guidelines and the ministerial approval the government is able to subordinate liability limitations to the fulfilment of minimum standards of conduct, encourage the attainment of 'preferred' (i.e. non-compulsory) standards and promote respect of a number of overarching principles.21 A crucial task of the Guidelines is to direct the IRB to define a specific takedown procedure, published on the IRB's website and to which members must provide a link from their websites. The Guidelines point out that this procedure needs to be in line 15 See Gramophone Co. Ltd v Music Machine (Pty) Ltd and others [1973] (3) SA 188, 188 (S. Africa); Paramount Pictures Corp. v Video Parktown North (Pty) Ltd [1986] (2) SA 623 (T), 251 (S. Africa). 16  See s. 4(4). 17  See ECTA (n. 11) s. 72. 18  See Alex Comninos, 'Intermediary Liability in South Africa', APC Intermediary Liability in Africa Research Papers 3/2012 (2012). 19  Fees for membership of the Internet Service Provider Association (which has for a number of years been the only IRB) are not negligible, amounting to a minimum of a monthly R5,250.00 + R735.00 VAT (more than US$400 in total). Actual rates depend on the size of the applicant's business. See ISP, Membership . 20  Government Notice, Gazette 14 December 2006, no. 29474.
21  Minimum requirements, e.g., require: the adoption of professional conduct; the use of standard terms with a commitment not to spam or infringe intellectual property; the promise of reasonably feasible service levels; the protection of copyright, minors, and against spam and cybercrime; the availability of a complaint procedure, a disciplinary procedure, and one for monitoring compliance. Preferred requirements entail higher levels of commitment under those categories.


with the requirement set out by section 77(1) of ECTA, which specifies the particulars a complainant has to provide in order to notify a service provider or its designated agent of unlawful activity (e.g. location, nature of the infringing material, remedial action required, and contact details). The same section grants service providers immunity from liability for wrongful takedowns in response to a notification, but imposes such liability on any person who has made material misrepresentations leading to damages. However, the absence of detailed provisions in the Guidelines creates a situation where ISPs are free not to establish any 'counter-notice' or 'notice and putback' mechanism that would allow the user to respond to the allegations of infringement and request restoration of the allegedly infringing content. In fact, the Internet Service Provider Association (ISPA)—the only IRB to date—refrained from inserting such a safeguard mechanism in the takedown procedure.22 This came under the spotlight in 2012, when the Minister of Communications proposed, in the Electronic Communication Transaction Act Amendment Bill of 2012, the introduction of section 77A, entitled 'Right to remedy on receipt of a take-down notice'. The amendment aimed, according to the Explanatory Note accompanying the Bill, to allow for the right of reply in accordance with the principle of administrative justice and the audi alteram partem rule. However, the mechanism by which it endeavoured to do so was wholly inadequate, as it merely requested ISPs to respond to a 'first take-down notice' within ten business days (or less, if the complainant could demonstrate irreparable or substantial harm) instead of informing the user concerned and allowing him to intervene in the process by making representations before the ISP. Further, the proposed amendment did not foresee any kind of consequence for the ISP for failure to respond to such notice: it established liability only in case of failure to implement a 'final take-down notice', meaning a notice that a complainant is entitled to issue if: (1) after due consideration of the response by the ISP, he considers that the matter has not been resolved to his satisfaction; or (2) he has received no response from the ISP within the allotted time period. Therefore, following the proposed amendment, it would ultimately be up to the complainant to decide whether something should be removed by the ISP, much to the detriment of the principle of due process. An additional problem with the proposed amendment is that the genuineness of the adversarial process may be undermined by the misalignment between the interests of users and the incentives of ISPs: ISPs would not only be affected by the threat of liability for failure to remove potentially illegal content, but also further burdened by the costs involved in defending the user's interest. Unsurprisingly, given the aforementioned flaws, the proposal has never made it into law, with the result that the takedown process remains controversial—if not unconstitutional. While the South African model of liability limitations is far from perfect, it has nevertheless been crucial in setting a standard for the development of similar laws in the region. Although the majority of African countries still lack a dedicated framework for intermediary liability, three examples of jurisdictions that followed South Africa are provided below. 22  See ISPA, Take Down Procedure v 3.2 .



2.2 Ghana Ghana's model of liability protection was clearly inspired by the South African ECTA. Equivalent liability limitations were introduced in 2008 with the Electronic Transactions Act (ETA), which lists in its article 1 all the aforementioned ECTA objectives and follows a very similar structure. Liability limitations are covered in articles 90 to 93 of the Act, followed by a detailed takedown procedure (art. 94), the 'no monitoring principle' and the carve-outs for contractual liability and compliance with orders imposed by a court or a competent authority (art. 95).23 Additional carve-outs are then provided by article 97, referring inter alia to obligations imposed under licensing or other regulatory regimes or under any law. The latter allows the government to deviate from the established intermediary liability protections, even without being required to follow a specific procedure as prescribed in ECTA. There are two more important nuances which make ETA different from ECTA. The first concerns the liability of service providers for wrongful takedowns, which is specifically prescribed—rather than excluded—by ETA. This is a welcome modification, as it encourages providers to consider the merits of a claim before proceeding to content removal. It thus reduces the well-known chilling effects connected with the notice-and-takedown process,24 whereby providers respond to one-sided incentives to avoid liability for failure to remove. The second is that ETA takes a more hands-off approach on industry self-regulation, by envisaging the possibility of the establishment of a voluntary industry code to deal with any matter provided for in the Act;25 contrary to ECTA, however, it refrains from imposing membership of a representative body following a code of conduct as a condition for enjoying liability limitations. This, too, may be considered an improvement, particularly from the perspective of small to medium-sized enterprises, while on the other hand lessening the ability of regulators to nudge compliance with other public policies. 23  See Electronic Transactions Act (ETA) no. 772 of 2008 (Ghana). 24  See Wendy Seltzer, 'Free Speech Unmoored in Copyright's Safe Harbor: Chilling Effects of the DMCA on the First Amendment' (2010) 24 Harv. J. of L. & Tech. 171. 25  See ETA, s. 89 (Ghana).

2.3 Zambia As was the case for Ghana, the Zambian framework for intermediary liability largely follows the South African model. The influence of the latter extends even to the identical title of the implementing act, the Electronic Communications and Transactions Act of 2009. The objectives listed in the preamble to the Act differ only slightly from those of the South African ECTA.26 However, the most significant difference on intermediary liability concerns the liability for wrongful takedown. Perhaps due to the conflict in that 26  Most notably, to include the creation of a Central Monitoring and Coordination Centre for the interception of communications.


regard between the South African model and its first implementer (Ghana), the drafters of the Zambian ECTA decided to leave that matter outside the safe harbour, as is currently the case in many jurisdictions around the world. This means that service providers may in principle be subject to liability under common law with respect to any damages arising from wrongfully executed takedowns, although as a matter of fact it might not always be practical to establish the provider's responsibility where it is merely a consequence of wrongful third party notices. For that purpose, it may be relevant to examine the due diligence of a provider in affording aggrieved parties an opportunity to make representations and request a reinstatement of the content, as well as in considering applicable statutory defences.27

2.4 Uganda In 2011, it was Uganda's turn to introduce its own framework to deal with electronic communications and transactions, titled the Electronic Transaction Act28 and bearing a great resemblance to the South African ECTA. In the interest of brevity, let us skip over the objectives, which are by and large identical to those identified in ECTA, and zoom in on three notable differences between the two Acts. First, an important peculiarity exists in the wording of the conduit safe harbour here, which, unlike the South African model, explicitly provides shelter against both civil and criminal liability.29 It also differs from that model (and from international practice) as it does not require the access provider to remain passive as to the initiation of the transmission, the selection of the addressee, and the modification of the transmitted content. Although one could argue that 'merely providing access' presupposes a passive character of the intermediary activity, it can also be doubted that such phrasing constitutes a sufficiently clear delimitation of the activities for which the safe harbour may be invoked, especially insofar as the clarification in section 29(3) expands the notion of 'providing access' to include the automatic and temporary storage of the third party material for the purpose of providing access. Finally, under the heading of 'jurisdiction of the courts', the Act not only establishes the exclusive jurisdictional competence of courts presided over by a Chief Magistrate or Magistrate Grade 1 over all offences in the Act, but also provides that, notwithstanding anything to the contrary in any written law, such courts have the power to impose a 27  The existence of a duty to consider such defences was affirmed in the United States by the Ninth Circuit Court of Appeals in Lenz v Universal Music Corp. 801 F.3d 1126 (9th Cir. 2015) (US). 28  Electronic Transaction Act, Act Supplement no. 4 of 18 March 2011, Uganda Gazette no. 19 Vol. CIV of 18 March 2011 (Ug.). 29  ibid. s. 29(1) (according to which: 'A service provider shall not be subject to civil or criminal liability in respect of third-party material which is in the form of electronic records to which he or she merely provides access if the liability is founded on—(a) the making, publication, dissemination or distribution of the material or a statement made in the material; or (b) the infringement of any rights subsisting in or in relation to the material').


penalty or punishment in respect of any offence under the Act. This provision thus dangerously opens the door to the imposition of liability under the Act even in circumstances falling within the safe harbours.

3.  Interstate Cooperation in the African Region: Implications for Intermediary Liability Given the diverging state of intermediary liability across the African region, and the notable absence of intermediary liability protection in a number of African states, it is useful to examine what instruments and institutions may be leveraged to promote harmonization. The history of regional interstate cooperation in Africa began in 1963, when thirty-two signatories adopted the Charter establishing the Organization of African Unity (OAU). The purposes of the OAU included the promotion of the unity and solidarity of African states; the defence of their sovereignty, territorial integrity, and independence; and the eradication of all forms of colonialism from Africa.30 From the outset, there was an understanding that social and economic issues would be part of the purview of the OAU: the Assembly of Heads of State and Government, the OAU supreme organ, established a specialized ministerial commission to deal with social and economic issues. The OAU set as its goal the creation of stronger cooperation to intensify interstate trade and eliminate trade barriers, which led, for instance, to the Economic Community of West African States in 1975, the Preferential Trade Area for Eastern and Southern Africa in 1981,31 and the Treaty establishing the African Economic Community in 1991; while support for further coordination among African states has been given since 1958 by the United Nations Economic Commission for Africa. It was only in the 1980s that the scope of cooperation expanded to encompass human rights, with the signature of the African Charter on Human and Peoples' Rights, which entered into force in 1986. The Charter marked a significant departure from the principle of unfettered state sovereignty on which the OAU had been based,32 conferring oversight and interpretation of the Treaty on the African Commission on Human and Peoples' Rights. The role of the Commission was further reinforced by a Protocol on the Establishment of an African Court on Human and Peoples' Rights, which was adopted by members of the OAU in June 1998 and came into force on 25 January 2004.33 30  See OAU Charter of 25 May 1963, Art. 2. 31  See G.W. Uku, 'Organization of African Unity' (1994) 47(1) Pakistan Horizon 29, 32. 32  See Claude Welch, 'The Organisation of African Unity and the Promotion of Human Rights' (1991) 29(4) J. of Modern African Studies 535–55. 33  However, only eight states of the thirty that ratified the protocol have made the declaration recognizing the competence of the court to receive cases from NGOs and individuals.


The Charter provides a strong foundation for the protection of fundamental rights that might be affected by intermediary liability. However, unlike other regional instruments for the protection of human rights, it does not contain (with limited exceptions) a list of circumstances under which each right may be interfered with. Instead, it contains a general clause stating that 'The rights and freedoms of each individual shall be exercised with due regard to the rights of others, collective security, morality and common interest',34 and in one instance (the right to liberty) it entirely carves out from protection measures taken 'in accordance with national law'. This approach can be criticized for departing from the strict necessity test applicable under other human rights frameworks, which results in little external restraint on a government's power to create laws contrary to the spirit of the right.35 In an effort to overcome this deficiency, a specific reference to the necessity test commonly used in international human rights law was introduced by the Declaration of Principles on Freedom of Expression in Africa, providing that '[a]ny restrictions on freedom of expression shall be provided by law, serve a legitimate interest and be necessary and in a democratic society.'36 However, the Declaration merely serves as a recommendation, albeit one providing that 'State Parties to the African Charter on Human and Peoples' Rights should make every effort to give practical effect to these principles'.37 In 2001, the OAU was replaced by the African Union (AU), which entailed a stronger commitment to supranational cooperation and a broader set of institutions to develop AU policy: these include the Assembly, the Executive Council, the Pan-African Parliament, the Commission, the Permanent Representatives Committee, and, most importantly, the African Court of Justice to settle legal disputes among member states. The AU, which administers a number of treaties amongst its fifty-five member states, pursues a number of objectives which could arguably be used as a springboard for the development of intermediary liability principles: for instance, 'to accelerate the political and socio-economic integration of the continent', 'to promote sustainable development at the economic, cultural and social levels as well as the integration of African economies', 'to promote and protect human and people's rights in accordance with the African Charter of Human and Peoples' Rights and other human rights instruments', and 'to coordinate and harmonize the policies between the existing and future regional economic communities for the gradual attainment of the objectives of the Union'.38 To accomplish those goals, the AU does not need to go as far as adopting interstate treaties or agreements: the Pan-African Parliament can propose model laws for consideration and approval by the Assembly (composed of the heads of state).39

34  African Charter of Human and Peoples' Rights of 27 June 1981 (entered into force 21 October 1986), OAU Doc. CAB/LEG/67/3 rev. 5, 21 ILM 58 (1982), Art. 27. 35  See Emmanuel Bello, 'The Mandate of the African Commission on Human and Peoples' Rights' (1988) 1(1) African J. of Int'l L. 55; Richard Gittleman, 'The Banjul Charter on Human and Peoples' Rights: a legal analysis' in Claude Welch and Ronald Meltzer (eds), Human Rights and Development in Africa (Albany 1984) 152–76. 36  African Commission, Declaration of Principles on Freedom of Expression in Africa of 2002, Art. 2. 37  ibid. Art. 16. 38  Constitutive Act of the African Union of 11 June 2000, Art. 3(c), (h), (j), and (k).


Unfortunately, the only fully fledged initiative taken to date directly relating to electronic communications is the AU Convention on Cyber Security and Personal Data Protection. The Convention was adopted in 2014 following a joint effort with the UN Economic Commission for Africa (UNECA) to establish cyber legislation that is based on the continent's needs and adheres to the legal and regulatory requirements on electronic transactions, cybersecurity, and personal data protection. Among other provisions, the Convention requires states parties to take the necessary legislative and/or regulatory measures to criminalize a range of conduct that may be taken by an intermediary, for instance to 'produce, register, offer, manufacture, make available, disseminate and transmit an image or a representation of child pornography through a computer system', and to 'create, download, disseminate or make available in any form writings, messages, photographs, drawings and any other representation of ideas or theories of racist or xenophobic nature through a computer system'.40 More worryingly from an intermediary standpoint, the AU Convention contains aggravations rather than limitations of liability: it requires states parties to ensure that, in the case of offences committed through a digital communication medium, the judge can hand down additional sanctions;41 and it explicitly envisages that states parties ensure that legal persons can be held criminally responsible for the offences listed in the Convention, without excluding the liability of natural persons who are perpetrators or accomplices.42

4.  Second-generation Liability Limitations: the Rise of Hybrid Instruments

It is important to note that the AU Convention mentioned in the previous section currently has no binding value: it has only received ratification by three AU Member States, as opposed to the fifteen required to enter into force.43 Nevertheless, it would be disingenuous to ignore the awareness-raising effect triggered by the whole process that led to the adoption of the Convention, which has put cybersecurity and the protection of personal data on the agenda of several AU Member States. This has also provided states with the opportunity of introducing liability limitations into their legal frameworks, sometimes with an added nuance when compared to the first generation of liability protection examined in Section 2. The present section thus offers a snapshot of recent legislative initiatives in the field bearing notable implications for the liability of internet intermediaries.

39  ibid. Art. 8 (relating to the Pan-African Parliament). 40  AU Convention on Cyber Security and Personal Data Protection of 27 June 2014, Art. 29(a) and (e). 41  ibid. Art. 31(2)(a). 42  ibid. Arts 31(2)(a) and 30(2). 43  ibid. Art. 36.

4.1 Malawi

In 2016, Malawi introduced comprehensive legislation, the Electronic Transactions Act,44 with the twofold overarching objective of regulating electronic transactions and attending to cybersecurity concerns.45 Specifically, in line with the AU Convention, the Act establishes in its Part X a number of offences relating to the improper use of computer systems, namely child pornography, cyber harassment, offensive communications, cyber stalking, hacking, unlawfully disabling a computer system, spamming, and the broader categories of illegal trade and commerce and attempting, aiding, and abetting a crime. As with the AU Convention, some of those offences lend themselves to an interpretation that would create intermediary liabilities: for instance, the distribution and transmission of any pornographic material through an information system,46 knowingly permitting any electronic communications device to be used for cyber harassment,47 and aiding and abetting any other person to commit any of the offences under the Act.48 At the same time, the Act devotes its Part IV to the 'liability of online intermediaries and content editors and the protection of online users'. By way of a connecting thread between these different themes, Part IV begins with an affirmation of the freedom of communication, along with a closed list of exceptions.49 While these carve-outs may be used to justify obligations imposed on internet intermediaries in relation to the offences described above, perhaps most concerning is the open-endedness of the last exception: its reference to 'the enhancement of compliance with the requirements of any other written law' provides a hook to justify legislative action entailing further restrictions, ostensibly even where this may be in conflict with the liability protections enshrined in the Act. This virtually nullifies the value of the primacy clause in section 100, according to which 'in case of inconsistency between provisions of the Act and a provision of any other written law relating to the regulation of electronic transactions, the former prevails to the extent of the inconsistency'. The remainder of Part IV contains a rather conventional set of intermediary safe harbours, including immunity for wrongful takedowns, clarification of 'no monitoring obligations', and a saving clause for liability under contract, law, ministerial direction, and court order. Unlike other frameworks, however, Part IV includes in the safe harbours a general provision on the liability of an 'intermediary service provider' which exempts it from both civil and criminal liability where it has not initiated the message and has no actual or constructive knowledge,50 and for any act done 'in good faith' under the same section.51 How this general provision is supposed to interact with the more specific safe harbours remains unclear.

44  Electronic Transactions Act (ETA) no. 33 of 2016 (Mal.). 45  ibid. Preamble. 46  ibid. s. 85(2)(e) (emphasis added). 47  See ibid. s. 86(c). 48  See ibid. s. 93(2). 49  These include the prohibition of child pornography, incitement of racial hatred, xenophobia or violence, and justification for crimes against humanity; the promotion of human dignity and pluralism in expressions of thoughts and opinions; the protection of public order and national security; the facilitation of technical restrictions to conditional access to online communication; and the enhancement of compliance with the requirements of any other written law. 50  See ETA, s. 25(1) (Mal.). 51  ibid. s. 25(4).

4.2 Ethiopia

Also from 2016 is the Ethiopian Computer Crime Proclamation,52 the overarching objective of which was 'to provide for computer crime' (sic).53 Some of the crimes established by the Proclamation could conceivably be committed by an intermediary: for example, the distribution of a computer device or computer program designed or adapted exclusively for the purpose of causing damage to a computer system, computer data, or network;54 the distribution of any picture, poster, video, or image through a computer system that depicts a minor or a person appearing to be a minor engaged in sexually explicit conduct;55 the intimidation of a person or his family with serious danger or injury by disseminating any writing, video, audio, or any other image;56 or the dissemination of any writing, video, audio, or any other picture that incites violence, chaos, or conflict among people.57 However, unlike in other jurisdictions introducing cybercrimes, the prohibitions are crafted with explicit intentionality requirements, effectively ruling out intermediary liability in the absence of specific knowledge of the nature of the intermediated speech. At the same time, knowledge of the commission of any of the crimes stipulated in the Proclamation, or even of the dissemination of 'any illegal content data by third parties' through the administered computer systems, triggers a duty to report to the police and take appropriate measures,58 which can be particularly effective for turning intermediaries into 'cyber-police'59 when seen in conjunction with the blanket duty of retention of computer traffic data for one year.60 Yet the most peculiar part of the Proclamation is section 16, which establishes the criminal liability of a service provider under sections 12 to 14 of the Proclamation (and, thus, not all of the offences mentioned earlier) for any illegal computer content data disseminated through its computer systems by third parties if: (1) it was directly involved in the dissemination or editing of the content/data; or (2) upon obtaining actual knowledge that the content data was illegal, it failed to take any measure to remove or to disable access to the content data; or (3) it failed to take appropriate measures to remove or to disable access to the content data upon obtaining notice from the competent administrative authorities. In principle, this is the opposite of a safe harbour, as there is no guarantee in the Proclamation that the intermediary will be shielded from liability by engaging in the opposite of the proscribed conduct. As a matter of practice, however, one can expect that the specific intentionality requirements in the offences will be sufficient to minimize the circumstances in which secondary criminal liability can be imposed. Nevertheless, the flip side of having some sort of liability limitation (rectius, clarification) in that law is that its focus on criminal responsibility potentially leaves intermediaries wide open to other types of claim, including civil liability, administrative orders, and injunctions.

52  Ethiopian Computer Crime Proclamation no. 958 of 2016, Official Gazette, 22nd Year no. 83, 91904 (Eth.). 53  ibid. Preamble. 54  ibid. s. 7(2). 55  ibid. s. 12(1). 56  ibid. s. 13. 57  ibid. s. 14. 58  ibid. s. 27. 59  For more discussion on the dangers associated with turning intermediaries into 'cyber-police', see Luca Belli, Pedro Augusto Francisco, and Nicolo Zingales, 'Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police' in Luca Belli and Nicolo Zingales (eds), Platform Regulations. How Platforms are Regulated and How They Regulate Us (Fundação Getulio Vargas Press 2017). 60  Computer Crime Proclamation, s. 24 (Eth.).

4.3 Kenya

Kenya adopted its cybersecurity law, called the Computer Misuse and Cybercrimes Act, in 2018.61 Once again, from the preamble it is clear that the focus of the law is predominantly to provide for offences relating to computer systems, while 'connected purposes' is briefly mentioned at the end. In addition to more conventional cybercrimes, the Act captures some of the most recent concerns, such as the publication of false information with the intent that the data will be considered or acted on as authentic,62 its aggravated form when it is calculated to result, or does result, in panic, chaos, or violence among citizens or is likely to discredit the reputation of a person,63 cyber harassment,64 wrongful distribution of obscene or intimate images,65 and cyber-terrorism.66 As is the case with Ethiopia, these provisions are carefully subordinated to the existence of intentionality, which should in principle rule out the liability of service providers in the absence of knowledge. Nevertheless, the regime of liability protection is strengthened by section 56, which provides a tripartite framework of liability limitation:
• first, a service provider shall not be subject to any civil or criminal liability, unless it is established that it had actual notice, actual knowledge, or wilful and malicious intent, and not merely through omission or failure to act, and that it had thereby facilitated, aided, or abetted the use by any person of any computer system controlled or managed by a service provider in connection with a contravention of the Act or any other written law;
• secondly, a service provider shall not be liable under the Act or any other law for maintaining and making available the provision of their service;
• thirdly, a service provider shall not be liable under the Act or any other law for the disclosure of any data or other information that the service provider discloses only to the extent required under the Act or in compliance with the exercise of powers under Part IV ('Investigation Procedures').
It should be noted that the first type of limitation represents the opposite extreme to the Ethiopian provision: rather than parsing out the circumstances in which liability attaches, it parses out the circumstances under which the provider incurs no liability. Furthermore, all three limitations ostensibly reach beyond the scope of cybercrime, excluding all types of civil and criminal liability and, in the last two instances, even liability under any other law. This provides a useful suggestion on how overlapping liability could be coordinated between different regimes.67 At the same time, it is worth pointing out that the second limitation is extremely vague and unclear, and the last condition has nothing to do with the legality of third party content. The obvious explanation for such an expansive treatment of intermediary liability, matching equivalent provisions on data retention and interception in other cybercrime laws, is that the ultimate goal was the establishment of an infrastructure permitting the detection and investigation of crime, rather than a predictable environment for the business of content intermediation.

61  Law no. 5 of 2018, entered into force 30 May 2018 (Ken.). 62  ibid. s. 22. 63  ibid. s. 23. 64  ibid. s. 27. 65  ibid. s. 37. 66  ibid. s. 33.

4.4  South Africa

In 2017, South Africa introduced a Cybercrime Bill68 that aligns with the agenda set by the AU Convention. Among other things, it aims to: create offences and impose penalties which have a bearing on cybercrime; criminalize the distribution of 'harmful' data messages; and impose obligations on electronic communications service providers and financial institutions to assist in the investigation and reporting of cybercrimes.69 Prohibited acts in the Bill include the unlawful distribution via a computer system of a data message which incites damage to property or violence;70 the unlawful and intentional distribution of a data message containing an intimate image without consent;71 and the unlawful and intentional distribution of a harmful data message, which includes messages that could be reasonably regarded as harmful in that they threaten damage to property or violence, encourage self-harm or harm to others, or are inherently false in nature and aimed at causing mental, psychological, physical, or economic harm to a specific person or a group of persons.72 The Bill fails to clarify the significance of the characterization as 'unlawful' of the conduct proscribed in those offences, although the requirement of intentionality may effectively rule out the liability of intermediaries in the absence of knowledge. However, the Bill does not provide a specific mechanism whereby providers are deemed to have obtained such knowledge, which would create a state of greater uncertainty compared to the existing scenario under ECTA.

67  It is worth mentioning that Kenya's Copyright Board had consulted in 2016 on specific amendments to the Copyright Act to provide web-blocking measures in cases of copyright infringements online, which introduced a detailed system of safe harbours for copyright infringement. See . However, the amendment was never adopted into law. 68  See Cybercrime and Cybersecurity Bill B6-2017, published in Government Gazette no. 40487 of 9 December 2016 (S. Africa). 69  ibid. Preamble. 70  ibid. s. 16. 71  ibid. s. 18. 72  ibid. s. 17.


In contrast, the Bill envisages the creation of a specific mechanism, to be defined by the Cabinet members responsible for policing and for the administration of justice, for electronic communication service providers and financial institutions to report to the South African police without undue delay (and in any event within seventy-two hours of becoming aware) the fact that their system is involved in the commission of any category or class of offence identified by those Cabinet members.73 Similar to ECTA, the Bill contains the caveat that it cannot be interpreted to impose on electronic communication service providers and financial institutions any obligations to monitor the transmitted or stored data or actively to seek facts or circumstances indicating unlawful activity.74 However, the obligation to report and the parallel obligation to preserve any information that may be of assistance to law enforcement are backed up with an offence and the imposition of a fine for failure to comply (R50,000).75 Inevitably, entry of the Bill into law would impact the regulatory environment concerning internet intermediaries. Yet it is important to acknowledge that perhaps even more serious consequences for intermediaries stem from hate speech and anti-discrimination legislation, particularly in the light of the prominent role they play in South African society. In fact, section 1 of the South African Constitution affirms that dignity, equality, and the advancement of human rights and freedoms are essential values on which the democratic Republic of South Africa is founded. Such high importance is attributed to the rights to equality and human dignity, contained in sections 9 and 10, that they are both considered non-derogable even under the exceptional circumstances of a state of emergency identified in section 37. The state has given specific implementation to the equality principle through the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA) of 2000. The Act, which attributes jurisdiction to every magistrates' court and high court to serve as 'equality courts', prohibits race-, gender-, and disability-based discrimination, and provides further details on the more general notion of unfair discrimination.76 Furthermore, section 12 outlaws the publication and dissemination of any information that could be 'reasonably construed to demonstrate a clear intention to unfairly discriminate', subject to the proviso that this shall not prevent bona fide engagement in artistic creativity, academic and scientific inquiry, fair and accurate reporting in the public interest, or the exercise of freedom of expression in accordance with section 16 of the Constitution. The Act also addresses hate speech, by adopting a similar prohibition against the publication, propagation, advocacy, or communication of words based on one or more of the prohibited grounds, 'which could reasonably be construed to demonstrate a clear intention to (a) be hurtful; (b) be harmful or to incite harm; (c) promote or propagate hatred'. This definition goes beyond the narrow notion of hate speech carved out from the protection of free speech in the Constitution, consisting of 'advocacy of hatred that is based on race, ethnicity, gender or religion, and that constitutes incitement to cause harm'.77

73  ibid. s. 52(1) and (2). 74  ibid. s. 52(4). 75  ibid. s. 52(3). On 28 January 2019, that amount was equivalent to US$3,658. 76  See Promotion of Equality and Prevention of Unfair Discrimination Act no. 4 of 2000, s. 14 (S. Africa).


As written, the prohibition poses a potentially very significant challenge for content moderation by internet intermediaries, as a determination that a given form of expression is 'hurtful', or that it constitutes a bona fide engagement in one of those exempted acts, necessarily requires a careful appreciation of the context of a communication. The significance of this legislation is even greater considering that the anti-discrimination law does not only concern civil liability. According to section 10(2) of the Act, the equality courts retain discretion to forward the matter to the Director of Public Prosecutions for the institution of criminal proceedings. In addition, in 2000 the Ad Hoc Joint Committee on Promotion of Equality and Prevention of Unfair Discrimination Bill adopted a resolution recommending the adoption of legislation in line with the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) that would criminalize hate speech offences. The subsequent Draft Prohibition on Hate Speech Bill, proposed in 2004, was never approved; however, in 2013 the Department of Justice and Constitutional Development introduced a Draft Policy Framework on Combating Hate Crimes, Hate Speech and Unfair Discrimination which, in the words of Minister of Justice and Constitutional Development Jeff Radebe, 'refine[s] the concept of hate speech in a way that reflects South Africa's commitment to high standards of free expression at the same time as combatting hate speech'.78 The Framework resulted in the Prevention and Combating of Hate Crimes and Hate Speech Bill,79 which retains a prohibition identical to that contained in the Equality Act, but removes the reference to 'hurtful' and expands the list of exempted activities to include the bona fide interpretation and proselytizing or espousing of any religious tenet, belief, teaching, doctrine, or writings. The Bill also provides specific treatment for the mere propagation of hate speech by a person who 'intentionally distributes or makes available an electronic communication which that person knows constitutes hate speech through an electronic communications system which is accessible by any member of the public or by a specific person who can be considered a victim of hate speech'. While the subordination of the offence to the existence of knowledge of the hateful nature of the speech is certainly welcome, one can expect that courts will construe the provision to include both actual and constructive knowledge. An expansive reading of this type of legislation would pose a threat to legal certainty for internet intermediaries, and potentially undermine the effectiveness of the liability limitations. Unfortunately, the constitutional protection afforded to the right to freedom of expression appears insufficient to avert those consequences, as measures adopted in relation to the categories of unprotected speech listed in section 16(2) do not require the fulfilment of the stringent necessity test devised by section 36.80

77  ibid. s. 16. 78  Jeff Radebe, ‘Notes’ (Panel: Imagine a World Without Hate!, Maroela Room, Sandton Sun Hotel, 25 August 2013) . 79  Bill published in Government Gazette no. 41543 of 29 March 2018 (S. Africa).


In contrast, such a test would be applicable to other types of liability, for example for defamation that does not rise to the level of 'hate speech'. Defamation is governed in South Africa by the common law notion of actio injuriarum, which requires the existence of wilful or negligent conduct that violates a personality interest, such as one's good name and reputation.81 The common law initially relieved complainants of the need to prove intentionality (animus injuriandi) in the case of the media, thereby imposing strict liability on newspaper owners, printers, publishers, and editors.82 However, later judgments established that such strict liability had a significant chilling effect on the free flow of information, and thus, in line with section 36, a valid claim of defamation triggering a presumption of animus injuriandi would need to show at least negligence on the part of the media.83 To overcome the presumption, the defendant needs to prove the non-existence of animus injuriandi based on the circumstances of the case.84 The section 36 test also applies to the obligations imposed by the Film and Publication Act (FPA), a law passed in 1996 with the objective of regulating the creation, production, possession, and distribution of films, games, and certain publications in order to allow consumers to make informed viewing choices and protect children from exposure to disturbing and harmful material, as well as to make the use of children in, and the exposure of children to, pornography punishable.85 The FPA identifies a number of categories of harmful or potentially harmful material, and creates a regime of registration, classification, and authorization by the Film and Publication Board (FPB), which must be complied with by anyone intending to exhibit, distribute, publish, broadcast, or otherwise make available to the public a 'publication'.86

80  According to which: ‘The rights in the Bill of Rights may be limited only in terms of law of general application to the extent that the limitation is reasonable and justifiable in an open and democratic society based on human dignity, equality and freedom, taking into account all relevant factors, including: (a) the nature of the right; (b) the importance of the purpose of the limitation; (c) the nature and extent of the limitation; (d) the relation between the limitation and its purpose; and (e) less restrictive means to achieve the purpose.’ 81 See Van Zyl v Jonathan Ball Publishers (Pty) Ltd [1999] (4) SA 571 (W) (S. Africa); National Media Ltd v Bogoshi [1998] (4) SA 1196 (A) (S. Africa). 82  Pakendorf and Others v De Flamingh [1982] ZAENGTR 1 (31 March 1982) (S. Africa). 83  See e.g. National Media Ltd v Bogoshi [1998] 4 All SA 347 (A) (S. Africa). 84 See Khumalo & Others v Holomisa [2002] (5) SA 401 (CC) (S. Africa). The factors that courts will consider to determine whether the publication was reasonable are: (1) the nature and tone of the report; (2) the nature of information on which the allegations were based; (3) the steps taken to verify the information; (4) the steps taken to obtain the affected party’s response; (5) whether that response has been published; and (6) the need to publish. See also, most recently, Sayed v Editor, Cape Times [2004] (1) SA 58 (c) 73 (S. Africa). 85  See Film and Publication Act (FPA) (1996) s. 2 (S. Africa). 86  ibid. s. 1 (by the definition in the FPA, ‘publication’ refers to a wide range of material, including ‘any message or communication, including a visual presentation, placed on any distributed network’).


Section 36 provides solid ground for constitutional challenges to this legislation. Shortly after the introduction of the FPA, in De Reuck,87 the Constitutional Court initially upheld the authorization regime as applied to child pornography, in recognition of the importance of the accompanying exemptions. A few years later, however, in Print Media South Africa,88 the court reached the opposite conclusion with regard to the broader category of 'publications containing sexual conduct': the provision was deemed unconstitutional because the legislator could have imposed less severe restrictions on freedom of expression, or simply permitted publishers to obtain an advisory opinion from the FPB without being penalized for failure to do so. By contrast, it is striking that no constitutional challenge has so far been made against the requirement for all ISPs (a category which is defined by reference to 'the business of providing access to the Internet by any means', which to date includes cyber-cafes) to register with the Board and take reasonable steps to prevent the use of their services for hosting or distributing child pornography.89 This is despite the fact that failure to comply constitutes an offence punishable by a fine and/or imprisonment of up to six months and five years, respectively.90

4.4.1  Liability limitations when you are also 'cyber-police': a complex terrain

The framework described previously complicates the picture initially painted for the South African intermediary liability regime. It was noted in Section 2 that the great advantage of this model based on guidelines-driven codes of conduct is that it allows a government to demand that intermediaries comply with specific principles and standards in exchange for liability limitations. However, the main problem with the implementation of that idea is that the Guidelines developed by the Ministry of Communication left ample discretion to IRBs in the design of such a procedure, resulting in a lack of certainty over whether the codes of conduct effectively fulfil the requirements of ECTA: for example, they list requirements concerning the observance of the consumer protection and privacy provisions of ECTA merely as optional ('preferred requirements', ss. 6.5 and 6.6), which contrasts with the explicit letter of Chapters 7 and 8 of ECTA and the obligations imposed by Chapters 3, 4, and 5 of PAIA. Of course, section 79 of ECTA makes clear that violations of other laws would not be shielded under the liability limitations offered by Chapter 11 of ECTA, which do not affect 'any obligations founded on an agreement, licensing and regulatory obligations, and any court or legal obligations to remove, block or deny access to "data message"': this means that, for instance, liability will not be escaped for failure to remove, or wrongful takedown of, unreasonably discriminatory and indecent content. This provision suggests that, while it is possible for an IRB or an ISP to establish more demanding liability regimes, there is at the outset no coordination between ECTA and other laws.

87 See De Reuck v Director of Public Prosecutions [2003] (12) BCLR 1333 (S. Africa). 88 See Print Media South Africa v Minister of Home Affairs [2012] (6) SA 443 (CC) (S. Africa). 89  See FPA (n. 85) s. 27A(1) (S. Africa). 90  ibid. s. 27A(4).


Intermediary Liability in Africa   233 Another example of incoherence is the restatement in the Guidelines’ principles of the ‘no monitoring’ principle laid out in section 78 of ECTA. This is followed by an attempt to provide more clarity by specifying that such a principle is not applicable with respect to the requirements set out by the FPA on child pornography,91 which seem to require ISPs to conduct packet inspection92 to verify the content of the communications. However, the enhanced clarity is only apparent insofar as the guidance fails to refer to monitoring which is de facto required for ISPs to avoid liability under the Equality Act. In conclusion, the regime just described only appears to confer on ISPs substantial immunity from liability for the content produced by third parties. However, the regime unfortunately fails to provide the much-needed security, both because of its limited scope and the lack of clarity of some of its key features: first, unlike many other regimes around the globe, ISPs may be subject to injunctions, as well as liable under criminal law, for the conduct of their users. Secondly, immunity from liability does not apply horizontally across the board, but leaves intact liability under specific legislations such as the FPA or the Equality Act. And, thirdly, the immunity conferred is deficient as ISPs could still be found liable if knowledge gathered outside the notice-and-takedown procedure is deemed sufficient under common law. Finally, as mentioned earlier (Section 2), the legal framework adopted conflicts with the principle of due process, as it permits the establishment of a violation without ensuring full respect of the accused infringers’ right to be heard—both users and ISPs. Similar problems were detected in the observation of intermediary liability regimes in other jurisdictions, particularly emerging from the adoption of post-security cyber legislation. In Malawi, it was shown that the hybrid nature of the Cybercrime Act led to a loophole for the protection of freedom of expression enshrined in intermediary liability protections, in particular when a restriction would enhance compliance with the requirements of any other written law. In Ethiopia, the insertion of liability limitations in the context of a cybercrime law was myopically focused on the criminal responsibilities arising from the offences established in that law, without sufficient consideration of the potentially huge realm of civil and administrative liabilities. However, Kenya provided a useful example of how liability limitations can be asserted in a proactive manner, preventing the attachment of liability under any contrasting law. This ‘non-obstante’ clause is not a novelty in intermediary liability regimes, having been used, for instance (not without its shortcomings), in Malawi’s Electronic Transactions Act and India’s Information Technology Act;93 yet, if designed carefully, promises to be a valuable tool to prevent inconsistent frameworks both nationally and across the region. 91  It is also important to clarify that ECTA has wider coverage than the FPA, whose definition of an ISP as ‘any person whose business is to provide access to the Internet by any means’ is applicable only to internet access providers. 92  This could be a shallow packet inspection (e.g. packet fingerprinting) or on some occasions a deep packet inspection, the latter referring to a more invasive technique that allows monitoring at a lower layer. 93  See Information Technology Act no. 21 of 2000, s. 
79 (Mal.).



5.  Conclusion: The Role of the African Union and the Promise of the South African Model

This section concludes our brief overview of intermediary liability developments across the African continent. After a reflection on the peculiarities of the African context, Section 2 illustrated the success of South Africa in exporting its model throughout the continent in a first wave of diffusion of intermediary liability protections, lasting for about a decade from the early 2000s. Section 4 documented the emergence of a second wave (or second generation) of liability limitations in the wake of the AU Convention, characterized by an interweaving of protections and obligations for internet intermediaries. Sections 3 and 4 also explained that the AU has not yet played the significant role that it could play in promoting regional harmonization of cyber-legislation, although the efforts undertaken for the establishment of the AU Convention have proven influential in the region. It is therefore natural to suggest that similar efforts could be made to create a shared notion of intermediary liability limitations, and to develop a consistent answer to the all-important question of their relationship to obligations imposed on intermediaries in other statutes. As an intellectual experiment, then, it is interesting to consider what could be the foundations for the establishment of an African baseline model. There is little doubt that intermediary liability falls within the competences of the AU, insofar as it concerns human rights, economic development, and socio-economic integration. On what grounds should this project be pursued, then? One natural point of departure is the Joint Declaration on Freedom of Expression and the Internet adopted in 2011 by the Special Rapporteur on Freedom of Expression and Access to Information of the African Commission on Human and Peoples' Rights (ACHPR) together with three other special rapporteurs.94 The Declaration stresses the importance of limiting the liability of conduit, search, and caching providers in the absence of intervention in the content;95 and calls for minimum safeguards with regard to other intermediaries, including the no monitoring principle and takedown procedures offering sufficient protection for freedom of expression.96 Furthermore, article XI explicitly encourages self-regulation as an effective tool for redressing harmful speech.97 This can usefully be read in conjunction with section IX of the Declaration of Principles on Freedom of Expression in Africa, which, despite recommending the institution of an independent administrative body receiving complaints about violations of freedom of expression in the media, goes on to state that 'effective self-regulation is the best system for promoting high standards in the media'.

94  See OSCE, 'International Mechanism for Promoting Freedom of Expression: Joint Declaration on Freedom of Expression and the Internet by the United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples' Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information' (2011) . 95  Or if they refuse to obey a court order. See ibid. s. 2(a). 96  ibid. s. 2(b). 97  ibid. XI.


The bottom line in these declarations is not that everything should be left to self-regulation. Rather, the suggestion is that the effectiveness of self-regulation can be harnessed when it is constrained by a limiting framework: in essence, it is a call for 'co-regulation'.98 Co-regulation can be implemented in different forms, and the arrangement that has been in place since 2002 under South Africa's ECTA certainly falls within that realm. Through ECTA and its Guidelines, the state imposed on intermediaries respect for certain principles through a powerful incentive (the liability limitations), and yet left them a certain margin of manoeuvre in how respect for those principles is embedded in the design and enforcement of codes of conduct. Linking the safe harbours for intermediaries to respect for minimum standards of protection in different domains, including cybercrime, goes a long way towards preventing a sacrifice of important values in the pursuit of effectiveness. However, when such responsibilities are passed on to private entities it is crucial not only to maintain close supervision of enforcement, but also to craft the overarching framework in a way that is consistent with the obligations contained in other legislation and ensures respect for fundamental rights. These are two of the problems observed in the case of ECTA and its Guidelines: they fail to address coordination with other legislation and causes of action, and they neglect important fundamental rights (e.g. due process) that remain conspicuously absent from the code of conduct adopted by the only recognized IRB. While these problems could be tempered by a flourishing of parallel IRBs competing on the quality of their codes of conduct, the state ultimately remains responsible for creating an environment conducive to such competition or otherwise for ensuring fair and effective enforcement of digital rights. In that light, this chapter has argued that the AU holds all the cards to steer the discussion, through instruments including framework agreements and model laws, and thereby to promote human rights and economic development across the region.

98  For an in-depth illustration of this concept, see Chris Marsden, Internet Co-Regulation: European Law, Regulatory Convergence and Legitimacy in Cyberspace (CUP 2011).


Chapter 12

The Liability of Australian Online Intermediaries

Kylie Pappalardo and Nicolas Suzor*

1.  Liability: Active Intermediaries and Recalcitrant Wrongdoers

Online intermediary liability law in Australia is a mess. The basis on which third parties are liable for the actions of individuals online is confusing and, viewed as a whole, largely incoherent. Australian law does not express a coherent or consistent articulation of when, exactly, an online intermediary will be liable for the actions of third parties. There are conflicting authorities both within and between separate bodies of law that impose separate standards of responsibility on online intermediaries. Courts are struggling to adapt the law to apply to new technological contexts in a way that adequately balances competing interests from within the confines of existing doctrines. The result is a great deal of uncertainty for both plaintiffs and intermediaries. Different doctrines in Australian law employ a range of different tests for determining when an actor will be liable for the actions of a third party. So far, these primarily include cases brought under the laws of defamation, racial vilification, misleading and deceptive conduct, contempt of court, and copyright. These bodies of law have all developed largely independently of each other. They are conceptually different and derive from different historical contexts, and the courts have generally applied them in isolation. The particular fault elements on which liability is based are all different and cannot easily be compared at a detailed level. In some of these doctrines, like copyright, there is a separate head of liability for secondary liability as distinguished from the underlying wrongful act; in others, the actions of intermediaries operating on behalf of another are assessed under the same tests of primary liability.

*  An extended version of this chapter was originally published in the Sydney Law Review [2018] 40. Suzor is the recipient of an Australian Research Council DECRA Fellowship (project number DE160101542).

© Kylie Pappalardo and Nicolas Suzor 2020.


It is useful, however, to take a broad view, and look at the common ways that the courts are struggling to deal with very similar issues under the weight of very different doctrinal traditions. This process of abstraction necessarily involves 'throwing away detail, getting rid of particulars', but with the intent to 'produce the concepts we use to make explanatory generalizations, or that we analogize with across cases'.1 The easy cases are those most closely analogous to that of a mass media publisher who exercises editorial control over the content of communications. Where the intermediary moderates or selects the material to be published, courts have been able to draw a clear analogy with, for example, newspaper editors, and are able to find wrongdoing relatively easily. Under both defamation law and consumer protection law, for example, where the intermediary exercises some level of judgment and editorial control, courts have variously explained that the intermediary 'accepts responsibility'2 or 'consents to the publication'.3 This is a version of the 'Good Samaritan' problem, where intermediaries who voluntarily take on some responsibility to moderate have a greater legal risk of exposure than those who do not exercise any editorial control.4 The law is much more complicated where online intermediaries do not directly exercise a large degree of editorial control. Our common law has not yet developed a clear theory to determine when a service provider who creates a technology or system that enables wrongful behaviour will be liable. One of the basic organizing principles of our legal system is that there is usually no liability without fault. With few exceptions,5 the common law does not impose obligations on institutions or individuals to protect the rights of another against harm caused by third parties. This notion is most commonly expressed in the rule that there is no general duty to rescue.6 This general rule reflects a fundamental liberal commitment to autonomy:7 individuals are free to act as they choose so long as those actions do not harm others.8

1  Kieran Healy, 'Fuck Nuance' [2016] Sociological Theory 5–6. 2  Visscher v Maritime Union of Australia (No. 6) NSWSC 350 [29]–[30] (Beech-Jones J) (Aus.); Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd (No. 2) (2011) 192 FCR 34 [33] (Finkelstein J) (Aus.). 3  Trkulja v Google Inc., Google Australia Pty Ltd VSC 533 [31] (Beach J) (Aus.). 4  Part of the protections of the Communications Decency Act 1996, s. 230 (US) were specifically designed to overturn Stratton Oakmont Inc. v Prodigy Services Co., 1995 WL 323710 (NY Sup. Ct. 1995) (US), which held that Prodigy was liable for the defamatory bulletin board postings of its users because it exercised editorial control through moderation and keyword-filtering software. See further, Paul Ehrlich, 'Communications Decency Act § 230' (2002) 17(1) BTLJ 401. 5  The first main exception at common law is where there is a non-delegable duty and a special relationship of control between the defendant and the third party, e.g. that between parents and children, school authorities and pupils, and prison wardens and prisoners. See Smith v Leurs (1945) 70 CLR 256 (Aus.); Commonwealth v Introvigne (1982) 150 CLR 258 (Aus.); Home Office v Dorset Yacht Co. Ltd [1970] AC 1004 (UK). The second main exception applies where the defendant has had some role in creating the risk from which the plaintiff needs rescuing. See G.H.L. Fridman, 'Non-Vicarious Liability for the Acts of Others' [1997] TLR 102, 115, discussing Smith v Littlewoods Organisation Ltd [1987] AC 241 [279]–[281] (UK). 6  See Dorset Yacht (n. 5) [1027] (Lord Reid); Frank Denton, 'The Case Against a Duty to Rescue' (1991) 4(1) Can. J. of L. & Jur. 101, 101–4; Les Haberfield, 'Lowns v Wood and the Duty to Rescue' [1998] TLR 56, 59, 65; John Goldberg and Benjamin Zipursky, Torts (OUP 2010) 118–19.


Courts have consistently held that 'the common law of private obligations does not impose affirmative duties simply on the basis of one party's need and another's capacity to fulfil that need'.9 The common law emphasizes personal responsibility; to require a person to help another simply because they have the capacity to do so would unhinge the law from its underlying objectives of promoting personal responsibility for one's actions and deterring reckless or unreasonable behaviour.10 If the defendant is not personally responsible for causing the harm, then from this perspective, the threat of liability cannot act as an effective deterrent for wrongful behaviour.11 Thus, under the common law—and according to responsibility theory—a person will generally only be responsible for a harmful outcome where his or her actions caused the harm (causation) and where that person might have acted to avoid the harm but did not (fault).12 As Justice Mason has stated, the notion of fault within the law can act as a 'control device' to ensure that the burden to repair is proportional to the defendant's responsible role in the occurrence of harm.13 These general principles, however, conflict with another basic principle: that for every wrong, the law provides a remedy.14 In cases brought against intermediaries in Australia and overseas, courts are often presented with a meritorious claim without a clear remedy, and face the difficult task of determining whether to extend the existing law to require intermediaries to take action to protect plaintiffs' rights. In two cases in 2012 and 2013, respectively, Roadshow v iiNet15 and Google v ACCC,16 the High Court rejected the extension of existing doctrine to impose liability on large, general-purpose intermediaries.17 Despite these two High Court decisions, the overall state of Australian intermediary liability law is still one of confusion, both within and across doctrines.

7  See Ernest Weinrib, 'The Case for a Duty to Rescue' (1980) 90(2) Yale L.J. 247, 279; Jane Stapleton, 'Legal Cause: Cause-in-Fact and the Scope of Liability for Consequences' (2001) 54(3) Vand. L. Rev. 941, 949 fn. 17. 8  See John Locke, Two Treatises of Government (CUP 1988) 106–9; John Stuart Mill, On Liberty (Start Publishing LLC 2012) Ch. IV: Of the Limits to the Authority of Society Over the Individual. 9  Denton (n. 6) 109. 10  See Jane Stapleton, 'Duty of care: peripheral parties and alternative opportunities for deterrence' (1995) 111 (Apr.) LQR 301, 312, 317; Denton (n. 6) 124. 11  See Stapleton (n. 10) 305, 310–12, 317. 12  See Stephen Perry, 'The Moral Foundations of Tort Law' (1992) 77 Iowa L. Rev. 449, 513; John Goldberg and Benjamin Zipursky, 'Tort Law and Responsibility' in John Oberdiek (ed.), Philosophical Foundations of the Law of Torts (OUP 2014) 17, 20–1; Emmanuel Voyiakis, 'Rights, Social Justice and Responsibility in the Law of Tort' (2012) 35(2) UNSWLJ 449, 458; Denton (n. 6) 127; Peter Cane, 'Justice and Justifications for Tort Liability' (1982) 2 OJLS 30, 53–4. 13  Justice Keith Mason, 'Fault, causation and responsibility: Is tort law just an instrument of corrective justice?' (2000) 19 Aust. Bar Rev. 201, 207–8. 14  See the maxim 'ubi jus ibi remedium': Herbert Broom, Selection of Legal Maxims, Classified and Illustrated (William S. Hein & Co. 1845) [91]. 15  See Roadshow Films Pty Ltd v iiNet Ltd (2012) 248 CLR 42 (Aus.). 16  See Google Inc. v Australian Competition and Consumer Commission (2013) 249 CLR 435 (Aus.). 17  See Robert Burrell and Kimberlee Weatherall, 'Providing Services to Copyright Infringers: Roadshow Films Pty Ltd v iiNet Ltd' (2011) 33 SLR 801 [829]–[830] (Aus.).


Across different fact scenarios in consumer protection, defamation, racial vilification, contempt of court, and copyright cases, mere knowledge of the content can lead to an inference that a third party publisher adopts or endorses its continual publication and is responsible for the harm that results. Across each of these doctrines, plaintiffs have sought to link an intermediary's capacity to do something about wrongdoing with a normative proposition that they therefore ought to do it. We argue that, as a result, intermediary liability has expanded in a largely unprincipled way that has eroded the important connection between liability and responsibility.

1.1  Consumer Protection Law

In Google Inc. v ACCC, the High Court held that Google Inc. was not liable when it created a system to enable third parties to create advertisements that were reproduced on Google's search results pages. The High Court was clear in finding that Google did not 'endorse' advertisements submitted by third parties and published on its own web pages, on the basis that the content of the material was wholly determined by the advertiser and published automatically by Google.18 Google 'did not itself engage in misleading or deceptive conduct, or endorse or adopt the representations which it displayed on behalf of advertisers'.19 Liability for misleading and deceptive conduct is strict, but requires actual wrongful conduct on the part of the defendant that is likely to mislead or deceive—there is no separate secondary head of liability. Whether Google knew that the content was misleading was irrelevant.20 On its face, the High Court's ruling is quite strong: even though Google was likely to know that the material was likely to mislead or deceive,21 it was not responsible for advertisements created by others.22

18 See Google Inc. v ACCC (2013) 249 CLR 435 [68] (French CJ, Crennan and Kiefel JJ) (Aus.) (noting that ‘each relevant aspect of a sponsored link is determined by the advertiser. The automated response which the Google search engine makes to a user’s search request by displaying a sponsored link is wholly determined by the keywords and other content of the sponsored link which the advertiser has chosen. Google does not create, in any authorial sense, the sponsored links that it publishes or displays’). 19  ibid [73] (French CJ, Crennan and Kiefel JJ) (emphasis added); Hayne J agreeing but warning that there can be no general rule that would immunize publishers ‘unless they endorsed or adopted the content’: [115]. See also [162] (Heydon J): ‘if a person repeats what someone else has said accurately, and does not adopt it, there is nothing misleading in that person’s conduct’. 20 See Google (n. 16) [68] (French CJ, Crennan and Kiefel JJ). 21  At first instance, Nicholas J would have found that Google was likely to have actual or constructive knowledge that the advertisements in question were using competitors’ brands. See Australian Competition and Consumer Commission v Trading Post Australia Pty Ltd [2011] FCA 1086 [242] (Aus.). Intention, however, is not a relevant element of an action for misleading or deceptive conduct. See Google (n. 16) [97]–[98] (Hayne J). 22  Importantly, however, the High Court’s decision does not foreclose the possibility of intermediaries being liable as accessories, nor does it necessarily prevent future development of the terms ‘adopt’ and ‘endorse’ on the facts in intermediary cases in the future: see Radhika Withana, ‘Neither Adopt nor Endorse: Liability for Misleading and Deceptive Conduct for Publication of Statements by Intermediaries or Conduits’ (2013) 21 AJCCL 152.


240   Kylie Pappalardo and Nicolas Suzor The decision in Google v ACCC must be contrasted with the earlier Federal Court decision in Allergy Pathway (No. 2), where the respondents were found to have breached their undertaking not to engage in misleading and deceptive conduct23 when they failed to remove comments posted by third parties on their Facebook page.24 Allergy Pathway was liable for contempt of court on the basis that it knew about the comments and failed to remove them. The Federal Court relied specifically on defamation precedent25 in reaching the conclusion that Allergy had ‘accepted responsibility for the publications when it knew of the publications and decided not to remove them’.26 Importantly, Google v ACCC was pleaded in a narrow way that alleged Google itself had made the misleading representations—not that it had misled the public by publishing false claims. An alternative approach in similar circumstances could have seen the ACCC allege that Google’s conduct as a whole in developing its Adwords system and publishing third party content was likely to mislead or deceive consumers. This broader argument could conceivably justify the imposition of liability on Google ‘for the economic harms produced by its industrial activities, centred on devising and operating systems used for trading information’.27 It is an argument to which at least some members of the High Court were apparently sympathetic,28 and it is possible that a differently pleaded case on similar facts could well turn out differently in the future.29

1.2 Defamation In defamation, the word ‘publish’ extends liability to intermediaries who fail to remove defamatory material posted by others.30 For internet hosts that exercise some degree of control over the content they disseminate, they will be liable in the same way that newspaper publishers31 or broadcasters32 who carry content created by others are liable. For others with a less active role, like the operators of discussion fora who provide the fa­cil­ities for others to post comments, liability will accrue as a subordinate publisher 23  The undertaking was given in the context of litigation brought by the ACCC: Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd [2009] FCA 960 (27 August 2009) [5] (Aus.). 24  Note that the respondents were also found to have breached the undertakings through material that they themselves had posted: Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd (No. 2) (2011) 192 FCR 34 (Aus.). 25  ibid. [24]. 26  ibid. [32]–[33] (Finkelstein J). 27  Megan Richardson, ‘Why Policy Matters: Google Inc v Australian Competition and Consumer Commission’ (2012) 34 Sydney L. Rev. 587, 594. 28 See Google (n. 16) [117] (Hayne J); see further Seb Tonkin, ‘Google Inc. v. Australian Competition and Consumer Commission’ (2013) 34 Adelaide L. Rev. 203, 205–7. 29  See Amanda Scardamaglia, ‘Misleading and Deceptive Conduct and the Internet: Lessons and Loopholes in Google Inc v Australian Competition and Consumer Commission’ (2013) 35(11) EIPR 707, 713; see also Peter Leonard, ‘Internet Intermediary Liability: A Landmark Decision by the High Court of Australia in Google Inc v ACCC’ (2013) 15(9) Internet Law Bulletin 158. 30 See Byrne v Deane [1937] 1 KB 818, 837 (UK). 31 See Wheeler v Federal Capital Press of Australia Ltd [1984] Aust Torts Rep. para. 80-640 (Aus.). 32 See Thompson v Australian Capital Television Pty Ltd (1996) 186 CLR 574, 589–90, 596 (Aus.).


The Liability of Australian Online Intermediaries   241 once they know that the content they carry is likely to be defamatory.33 By contrast, it is generally understood that an internet service provider (ISP) who merely provides a telecommunications service over which others can publish and access defamatory material is not likely to be liable for ‘publishing’ that content.34 In recent cases, however, courts have had much greater difficulty applying these principles to intermediaries who are more removed from the primary act of publication but have more than a purely facilitative role in making material available. For search engines and others who link to defamatory material, the limits to liability in defamation can be drawn from the combination of two notionally distinct principles. The first is the threshold question that an intermediary could not properly be said to ‘publish’ the content, and the second is the defence of innocent dissemination, which applies where a secondary or subordinate publisher does not have actual or constructive knowledge of the content of defamatory material. Because the defence of innocent dissemination will not apply after the content is explicitly drawn to the attention of the intermediary by a complaint, in many cases the most crucial limiting factor is the question of whether an inter­medi­ary has actually published the content in the first place.35 The core issue in difficult suits brought against intermediaries turns on this elusive distinction between active publishing and passive facilitation. In one case, Yahoo!7 conceded that because at least one person had read an article, hosted on a third party website, by following a link presented through the Yahoo! search engine, Yahoo had ‘published’ that article.36 This case may be an outlier; there is emerging authority in the UK37 and Canada38 that suggests that more needs to be done to ‘publish’ a defamatory imputation.39 But where the line should be drawn is not clear. The established law is that what might otherwise be a purely passive role in facilitating publication becomes an act of publication by omission if the secondary actor has ‘consented to, or approved of, or adopted, or promoted, or in some way ratified, the continued presence of that 33  No Australian courts have specifically considered this point in detail, but see Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47 (HK). See also Godfrey v Demon Internet Ltd [2001] QB 201, 208–10 (UK). Under the defence of innocent dissemination, liability accrues from the point at which the intermediary acquires knowledge of defamatory material: see David Rolph, ‘Publication, Innocent Dissemination and the Internet after Dow Jones & Co Inc v Gutnick’ (2010) 33(2) UNSWLJ 562, 573–5. 34 See Bunt v Tilley [2007] 1 WRL 1243 [25]–[36] (UK). 35 See Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47 (HK). 36 See Trkulja v Yahoo! Inc. LLC [2012] VSC 88 [6]–[7] (Kaye J) (Aus.). 37  See eg Metropolitan International Schools Ltd v Designtechnica Corp. [2010] 3 All ER 528 (UK); Tamiz v Google Inc [2012] EWHC 449 (UK). 38 See Crookes v Newton [2011] 3 SCR 269 [42] (CA) (where a majority of the Canadian Supreme Court found that ‘[m]aking reference to the existence and/or location of content by hyperlink or otherwise, without more, is not publication of that content’). Note, however, that this was a narrow majority. 
Two members of the Court held that linking to defamatory content in a way that indicates agreement, adoption, or endorsement of that content may be sufficient to ground liability ([48] (McLachlin CJC and Fish J)), and Deschamps J would have found that linking directly to defamatory material would amount to publication: [111]–[112]. 39  See Kim Gould, ‘Hyperlinking and Defamatory Publication: A Question of “Trying to Fit a Square Archaic Peg into the Hexagonal Hole of Modernity”?’ (2012) 36(2) Aust. Bar Rev. 137.


242   Kylie Pappalardo and Nicolas Suzor statement . . . in other words . . . [if there is] an acceptance by the defendant of a responsibility for the continued publication of that statement’.40 So, for example, in a recent Australian case, the defamatory imputation was found to have been endorsed by the defendant when it used the words ‘read more’ to imply that the content it linked to was a true account.41 These cases become even more difficult when, as in the case of search engines, an intermediary presents a preview or ‘snippet’ of the content of third party sites. For ex­ample, in 2015, Google was found liable in the South Australian Supreme Court for publishing defamatory material when its search engine presented links accompanied by an extract of text that carried defamatory imputations.42 It was also liable in 2012 when its image search results arranged images from third party pages in a way that gave rise to a defamatory imputation.43 In a case brought more recently on very similar facts, the Victorian Court of Appeal would likely have found that Google’s search results amounted to a subordinate publication of potentially defamatory content, but the case was not pleaded in that way.44 The concept of publication is a relatively poor mechanism to delineate responsibility. The general principle in defamation law is that nearly everybody involved in the chain of publication is potentially responsible as a publisher. A conduit—an ISP, for example— that is ‘passive’ and ‘merely facilitates’ communications between users of its system is not likely to be liable for defamation. But the law on the distinction between ‘active’ ‘publishing’ and ‘conduct that amounts only to the merely passive facilitation of disseminating defamatory matter’ is still not well developed.45 Apart from ISPs, it is unclear what types of internet intermediaries may be beyond the scope of defamation law. Liability probably does not extend to people who help design or host website infrastructure but have no substantive involvement with the content.46 Some Australian and UK courts have doubted whether search engines can be liable for the outputs of automated systems designed to identify third party content that matches search terms entered by 40  Urbanchich v Drummoyne Municipal Council (1991) Aust. Torts Rep. 81–127 [7] (Hunt J) (Aus.). 41 See Visscher v Maritime Union of Australia (No. 6) NSWSC 350 [30] (Aus.) (Beech-Jones J found that the defendant’s description of the link ‘amounted to, at the very least, an adoption or promotion of the content’ of the linked article’). 42 See Duffy v Google Inc. [2015] SASC 170 (Aus.). Upheld on appeal in Google Inc. v Duffy [2017] SASCFC 130 (Aus.). 43 See Trkulja v Google Inc., Google Australia Pty Ltd [2012] VSC 533 (Aus.). 44 See Google Inc. v Trkulja [2016] VSCA 333 (2016) [349], [357] (Aus.). 45  Rolph (n. 33) 580; Joachim Dietrich, ‘Clarifying the Meaning of “Publication” of Defamatory Matter in the Age of the Internet’ (2013) 18(2) MALR 88. 46  A decision in the WA Supreme Court considered whether an individual was liable in defamation when she ‘assisted in creating the infrastructure which allowed this material to be displayed to the public’. See Douglas v McLernon [No. 3] [2016] WASC 319 (22 June 2016) [33] (Aus.). The case concerned a defendant that was very far removed from the wrongdoing, but was apparently put on notice of defamatory content on the site. 
In dismissing the action, Justice Martin turned to the established principles of tort to explain why someone so far removed ought not to be liable: ‘I remain to be persuaded [that a party may be liable for] providing assistance in tort (or the encouraging, counselling or facilitating of the tort) on the basis of involvement, simply because the person does not, as it is here contended, then act to “pull the plug” on a website, or act to terminate the capacity of someone to use an acquired website’: [41].


the user,47 but the decisions of the Victorian Court of Appeal48 and the South Australian Supreme Court49 referred to earlier explicitly reject this proposition at least from the time the search engine is put on notice of the defamatory content.

1.3 Vilification Like defamation, intermediaries who provide a forum for third party content can be li­able under the Racial Discrimination Act 1975 (Cth) when those comments amount to vilification. Section 18C makes it unlawful to ‘do an act’50 that is reasonably likely to ‘offend, insult, humiliate or intimidate’ a person or group where that act is motivated by ‘race, colour or national or ethnic origin’. In the two decisions that have considered the provision in the context of an online forum, courts have come to somewhat conflicting conclusions as to when a secondary actor will be liable for providing the facilities for another to make vilifying comments. The uncertainty lies primarily in the intentional element of the provision. As in defamation, courts agree that providing the facilities to enable others to post comments and failing to remove them is sufficient to constitute an ‘act’ of publication of the substance of those comments, at least once the operator has knowledge of the comments.51 The conflict is whether failure to remove comments is done ‘because of the race, colour or national or ethnic origin’ of the person or group. In Silberberg, Gyles J found that there was insufficient evidence to draw that conclusion; the respondent’s failure to remove the offensive comments was ‘just as easily explained by inattention or lack of diligence’.52 In Clarke v Nationwide News, by contrast, Barker J held that where the respondent ‘actively solicits and moderates contributions from readers’, the ‘offence will be given as much by the respondent in publishing the offensive comment as by the original author in writing it’.53 The court was able to infer that one of the reasons for the news website’s decision to publish the offensive comments was because of their racial connotations.54 Apart from emphasis placed on the act of moderation in Clarke, there is no easy way to reconcile these two authorities.

47  See e.g. Bleyer v Google Inc. LLC [2014] 311 ALR 529 (Aus.); Metropolitan International Schools Ltd v Designtechnica Corp. [2009] EWHC 1765 (QB) (unreported) (UK). 48  See Google (n. 44) [352] (Ashley JA, Ferguson JA, McLeish JA). 49  See Duffy v Google Inc. [2015] 125 SASR 437 [204]–[205] (Blue J) (Aus.); Google (n. 42) [140], [151], [178] (Kourakis CJ), [536] (Peek J). 50  The reference to ‘an act’, in this case, specifically includes an omission. See Racial Discrimination Act 1975 (Cth), s. 3(3) (‘refusing or failing to do an act shall be deemed to be the doing of an act and a reference to an act includes a reference to such a refusal or failure’). 51 See Silberberg v Builders Collective of Australia Inc. (2007) 164 FCR 475, 485 (Aus.); Clarke v Nationwide News Pty Ltd (2012) 289 ALR 345 [110] (Aus.). 52  Silberberg (n. 51) 486. 53  Clarke (n. 51) [110]. 54  ibid. [199] (‘The act of publishing a comment which is objectively offensive because of race in such circumstances will give offence because of race as much as the public circulation of such a comment by the original author might have done’).



1.4 Copyright Under copyright, secondary liability arises when an actor ‘authorizes’ the infringing conduct of another.55 Unfortunately, there is little clear guidance as to the limits of authorization liability. For internet intermediaries, the difficult question is whether the developer of software that facilitates infringement or the operator of a service that hosts or indexes internet content will be taken to have authorized any resulting infringements. The limiting principle was articulated in relation to mass media in Nationwide News v CAL that ‘a person does not authorise an infringement merely because he or she knows that another person might infringe the copyright and takes no step to prevent the infringement’.56 This principle has always been hard to apply in practice. The accepted legal meaning of ‘authorize’ is ‘sanction, approve, countenance’.57 The case law explains that ‘authorize’ is broader than ‘grant or purport to grant the right to do the infringing act’58 but narrower than the broadest dictionary definition of ‘countenance’.59 There is a broad range between those two points and, unsurprisingly, there is therefore considerable uncertainty in Australian copyright law as to the precise meaning of ‘authorization’.60 A central authority is UNSW v Moorhouse, where the university was liable when the photocopiers it provided in a library were used to infringe copyright. Different members of the High Court emphasized different reasons for this conclusion: UNSW was liable either on the basis that it had tacitly invited infringement61 or because it had some degree of control over the technology that facilitated infringement in addition to know­ledge that infringement was likely.62 The relatively few cases on authorization liability in the digital age do not clearly establish the bounds of the doctrine. In Cooper,63 the operator of a website was liable for creating a system that allowed users to post hyperlinks to other websites hosting in­frin­ ging MP3s for download. Justice Branson found that Cooper was liable in part because he could have chosen not to create and maintain the website.64 Cooper’s liability ultimately 55  See Copyright Act 1968 (Cth), ss. 36(1), 101(1) (Aus.). 56  Nationwide News Pty Ltd v Copyright Agency Ltd (1996) 65 FCR 399, 422 (Aus.). 57  University of New South Wales v Moorhouse and Angus & Robertson (Publishers) Pty Ltd (1975) 6 ALR 193, 200 (Gibbs J), 207 (Jacobs J) (with McTiernan ACJ concurring) (Aus.). 58  Roadshow Films Pty Ltd v iiNet Ltd [2012] HCA 16 [126]–[127] (Gummow and Hayne JJ) (Aus.). 59  ibid. [68] (French CJ, Crennan and Kiefel JJ), [125] (Gummow and Hayne JJ). 60  See Rebecca Giblin, ‘The Uncertainties, Baby: Hidden Perils of Australia’s Authorisation Law’ 20 AIPJ 148, 153; David Lindsay, ‘ISP Liability for End-User Copyright Infringements: The High Court Decision in Roadshow Films v iiNet’ (2012) 62(4) Telecomm. J. of Australia 53.1, 53.16, 53.18–53.19. 61  University of New South Wales v Moorhouse and Angus & Robertson (Publishers) Pty Ltd (1975) 133 CLR 1, 21 (Jacobs J) (Aus.). 62  ibid. 14 (Gibbs J), 20–1 (Jacobs J; McTiernan ACJ agreeing). 63  Cooper v Universal Music Australia Pty Ltd (2006) 237 ALR 714 (Aus.). 64  ibid. 723. A similar finding was reached in 2017 in the case of Pokémon Co. 
Int’l, where Redbubble was liable for authorizing copyright infringement (committed when users sold products via the Redbubble website that featured unlicensed images of Pokémon characters) in part because it had designed and operated the website that allowed those sales. See Pokémon Co. Int’l, Inc. v Redbubble Ltd [2017] FCA 1541 [58] (Pagone J) (Aus.).


The Liability of Australian Online Intermediaries   245 rested on the finding that he had ‘deliberately designed the website to facilitate infringing downloading’.65 Cooper’s ISP, which provided practically free hosting for Cooper’s website in exchange for advertising, was also liable for failing to take down Cooper’s website despite knowing that it was facilitating infringement.66 In Sharman, the operators of the Kazaa peer-to-peer file-sharing network had less control over the decisions of users to share infringing files.67 The control that it did have was the ability to design the software differently, including developing warnings for users and interfering with searches for content that was possibly infringing. Sharman was held liable for the infringements of their users on the basis that it knew that infringement was prevalent on the system,68 it took active steps to encourage infringement,69 and it failed to do anything to limit infringement.70 In 2012, in iiNet, the High Court refused to extend liability to an ISP who, it found, had no obligation to take action to restrict copyright infringement by its subscribers.71 Unlike Sharman and Cooper, iiNet did nothing to encourage infringement in the way that it provided general purpose internet access services to its subscribers.72 Neither did iiNet have any real advance control over what its users did online—iiNet did not control the BitTorrent system and could not monitor how it was used.73 Much of the High Court’s decision therefore focused on iiNet’s level of knowledge about infringements after the fact. The High Court ultimately found that the notices that alleged that iiNet’s users had infringed did not provide sufficiently specific knowledge of individual infringement to found liability.74 It is nonetheless possible that iiNet could have been liable if the quality of allegations made against its users by rightsholders was better. That is to say, the High Court left the way open for future cases to potentially base liability primarily on knowledge and some ability to mitigate the harm, even without the fault elements of encouragement or control.

2.  Limiting Devices and their Flaws There are few effective safe harbours for intermediaries in Australia. The copyright safe harbour, designed to mirror the US Digital Millennium Copyright Act, was drafted to 65  Cooper (n. 63) 720–1 (Branson J) (French J agreeing), 745 (Kenny J). 66  ibid. 725–6 (Branson J), 746 (Kenny J); Justice French concurred with both judgments. 67 See Universal Music Australia Pty Ltd v Sharman License Holdings Ltd [2005] 222 FCR 465 (Aus.); Pokémon (n. 64). 68  ibid. [404]: infringing file-sharing was ‘a major, even the predominant, use of the Kazaa system’. 69  ibid. [405]–[406]. 70  ibid. [411]. 71 See Roadshow (n. 58) [77] (French CJ, Crennan And Kiefel JJ), [143] (Gummow and Hayne JJ). 72  ibid. [112] (Gummow and Hayne JJ) (noting that ‘iiNet had no intention or desire to see any primary infringement of the appellants’ copyrights’). 73  ibid. [65] (French CJ, Crennan and Kiefel JJ). 74  The High Court emphasized that the notices provided by AFACT were insufficiently reliable to justify potential action by iiNet to suspend or ban subscribers. See ibid. [34], [74]–[75], [78] (French CJ, Crennan and Kiefel JJ) and [92], [96], [138], [146] (Gummow and Hayne JJ).


246   Kylie Pappalardo and Nicolas Suzor only apply to internet access providers (‘carriage service providers’), and not the search engines and content hosts that would most need to rely on the protection that safe harbours provide.75 An amendment in 2018 extended the safe harbour to cover libraries, cultural and educational institutions, archives, and organizations assisting people with disabilities, but not general internet intermediaries.76 There are another set of copyright exceptions designed to protect ‘mere conduits’ that has very little work to do, because as soon as a provider is alleged either to have taken a positive step or to have failed to act to restrain infringement, it is no longer ‘merely’ passive and loses protection.77 The High Court in Roadshow v iiNet held that these provisions offer protection ‘where none is required’.78 Meanwhile, the safe harbour for other types of liability under state law— found in Schedule 5, clause 91 of the Broadcasting Service Act—disappear once the service provider is put on notice, and therefore provide little protection for intermediaries who are uncertain about the lawfulness of user-generated content that they host.79 The lack of effective safe harbours means that the limiting factors within each body of law are critical for establishing liability and mitigating risk for service providers. Despite doctrinal differences, the liability of intermediaries often appears to turn on the degree to which the intermediary is seen by the court to be an active participant in the wrong. The courts adopt complex factual tests based on analogies with the historical application of each doctrine in the mass-media era. Because each doctrine evolved distinctly, each includes a different test on which liability is based. Each doctrine, however, requires some active behaviour on the part of the intermediary to found liability. In cases where an intermediary is found to be liable, it is invariably viewed as an active wrongdoer. The textual tests are often expressed as an overarching factor—often a single word, like ‘authorize’ or ‘publish’—to purportedly distinguish those intermediaries that actively participate in the wrong from those that merely provide the infrastructure that facilitates it. The Australian case law demonstrates that when courts attempt to distinguish between active and passive actors, they sometimes appear to be using intent, either actual or inferred, to determine whether the act of designing the system was morally wrongful. The result is that intermediaries that appear to be performing similar functions face quite disparate consequences. Where a court must choose to focus either on the initial positive act of designing a system or the later passive act of merely facilitating an isolated instance of harm, there is a great deal of uncertainty in the doctrine. This problem becomes worse when the evaluation of whether an intermediary was ‘passive’ or ‘active’ 75  See Robert Burrell and Kimberlee Weatherall, ‘Exporting Controversy—Reactions to the Copyright Provisions of the US-Australia Free Trade Agreement: Lessons for US Trade Policy’ (2008) U. of Illinois J. of L. Tech. and Policy 259. 76  See Copyright Amendment (Service Providers) Act 2018 (Cth) (Aus.). 77  Copyright Act 1968 (n. 55) ss. 39B and 112E. 78  Roadshow Films Pty Ltd v iiNet Ltd (2012) 248 CLR 42, 55–6 [26] (French CJ, Crennan and Kiefel JJ). 
79  See Peter Leonard, ‘Safe Harbors in Choppy Waters—Building a Sensible Approach to Liability of Internet Intermediaries in Australia’ (2010) 3 J. of Int’l Media & Ent. L. 221; Brian Fitzgerald and Cheryl Foong, ‘Suppression Orders after Fairfax v Ibrahim: Implications for Internet Communications’ (2013) 37 Aust. Bar Rev. 175.


The Liability of Australian Online Intermediaries   247 depends on moral culpability at the time of designing the system. Intent is not an element of most of the causes of action that apply to intermediaries; liability requires some volitional act, but not intent to cause the harm. The result can be to effectively read intention as an element of secondary liability where it does not otherwise exist. The other limiting device deployed by courts is closely linked to knowledge. A passive facilitator can be transformed into an active wrongdoer once they know of the harm but fail to respond appropriately. Liability in these types of cases comes in one of two ways. The first is through editorial control. Where an intermediary actively moderates content on its site, it is often taken to have assumed responsibility for content, and is accordingly liable. This is the most straightforward application of intermediary liability; unless there is some statutory immunity, it is usually the case that principles of liability applied to broadcast and print media transfer relatively easily to internet intermediaries who exercise direct editorial control over (and therefore assume responsibility for) posts made by others. The second way that intermediaries gain knowledge that triggers liability is when the existence of content is drawn to their attention—usually by the plaintiff. The danger with making knowledge central to liability is that the ambiguity that exists within the traditional fault elements of each doctrine may sometimes effectively be replaced with the simpler proposition that knowledge of unlawful content or behaviour, coupled with some ability to limit its impact, is sufficient to found liability. The big challenge, across all of these cases, is that knowledge, without a clearly defined concept of fault, actually does little to ground liability.80 An intermediary’s knowledge of wrong­ doing has only a minimal relationship to the question of whether its technology, service, or actions actually cause or contribute to the wrong.81 Actual or constructive knowledge is being used to try to separate out ‘bad actors’, but this does not easily fit within the doctrinal history of each of the causes of action.82 Nor does it fit generally with the overarching assumption of the Australian legal system that liability only follows fault.83 Certainly, but for the intermediary’s actions, no harm would be suffered. This is true in the broad sense of the ‘but for’ test in common law, which throws up all the relevant conditions that can be said to be ‘causes in fact’ of the harm.84 Thus, it is possible to argue 80  See Joachim Dietrich, ‘Authorisation as Accessorial Liability: The Overlooked Role of Knowledge’ (2014) 24 AIPJ 146. 81  See Rebecca Giblin-Chen, ‘On Sony, Streamcast, and Smoking Guns’ (2007) 29(6) EIPR 215, 224; Peter Cane, ‘Mens Rea in Tort Law’ (2000) 20 OJLS 533; Avihay Dorfman and Assaf Jacob, ‘Copyright as Tort’ (2011) 12(1) Theoretical Inquiries in Law 59. 82  See e.g. Kylie Pappalardo, ‘Duty and Control in Intermediary Copyright Liability: An Australian Perspective’ (2014) 4(1) IP Theory 9. 83  See Stephen Perry, ‘The Moral Foundations of Tort Law’ (1992) 77 Iowa L. Rev. 449, 513; Goldberg and Zipursky (n. 12) 21; Cane (n. 12) 53–4; Bernard Weiner, Judgements of Responsibility: A Foundation for a Theory of Social Conduct (Guilford Press 1995) 7–8. 84 See Chappel v Hart (1998) 195 CLR 232, 283–4 (Jayne J) (Aus.); March v E. & M.H. 
Stramare Pty Ltd (1991) 171 CLR 506, 532 (Deane J) (Aus.); Richard Wright, ‘The Grounds and Extent of Legal Responsibility’ (2003) 40 San Diego L. Rev. 1425, 1494; Jane Stapleton, ‘Choosing what we mean by “Causation” in the Law’ (2008) 73 Missouri L. Rev. 434, 471–4; H.L.A. Hart and Tony Honoré, Causation in the Law (OUP 1985) 106, 114; David Hamer, ‘“Factual causation” and “scope of liability”: What’s the difference?’ (2014) 77(2) Modern L. Rev. 155, 170–1; Stapleton, ‘Legal Cause: Cause-in-Fact and the Scope of Liability for Consequences’ (n. 7) 961; Richard Epstein, ‘A Theory of Strict Liability’ (1973) 2(1) J. of Legal Studies 151, 190–1.


248   Kylie Pappalardo and Nicolas Suzor that without access to the internet, service, or platform, the user would not have been able to post the content that has infringed copyright, defamed another, or otherwise caused harm. But merely providing the facilities is said not to be enough to ground li­abil­ity. Something more is always required, but the way in which the case law is developing makes it very difficult to identify what exactly that means. What the courts seem to be doing in these cases is confusing or converging the assessment of whether an intermediary ought to act in response to the risk of harm with the standard of care that might be expected of the intermediary once that duty to act is established. Under general tort law principles, after a duty is established, courts ask: ‘What is the standard of care that a reasonable person in the defendant’s position would exercise in the circumstances?’ This question sets a benchmark against which to determine whether the defendant’s conduct falls short. The standard of care exhibited by a reasonable person will take into account any special skills or knowledge that a person in the defendant’s position would have.85 Across a range of intermediary liability cases, by contrast, instead of treating knowledge as a factor that informs the standard of care in these cases, the courts are treating knowledge (or allegations of harm) as a factor that informs the imposition of a duty, like reasonable foreseeability. This is a lot of work for the concept of knowledge to do, and creates a great deal of uncertainty around the extent of liability when intermediaries create systems that enable others to post content or communicate. Knowledge is a poor limiting device in intermediary liability law. Whether an inter­ medi­ary has sufficient knowledge of potential harms to be liable is often very difficult to ascertain. Where knowledge is imputed at the design stage on the basis that the system is likely to be used to cause harm, the court must come to some determination of what degree of harm, or likelihood of harm, is sufficient.86 Meanwhile, when courts infer knowledge from editorial control, they actively discourage moderation in a way that encourages, rather than limits, risky behaviour. When knowledge is provided on notice, it is often imputed on the plaintiff ’s assertion of wrongdoing and nothing more.87 ‘Knowledge’, in a substantive sense, requires more than mere awareness of potentially problematic content—it requires intermediaries to make a judgment about whether the material falls within the ambit of the relevant law. At the time that an intermediary is put on notice, it is usually only through an allegation of harm, and it is sometimes difficult for an intermediary to evaluate whether a claim is likely to be made out. In defamation, for example, this may require an evaluation of whether evidence of the truth of an 85 See Imbree v McNeilly (2008) 236 CLR 510 [69] (Aus.); Heydon v NRMA Ltd (2000) 51 NSWLR 1, 117 (Aus.); Roe v Minister of Health [1954] 2 QB 66 (UK); H v Royal Alexandra Hospital for Children (1990) Aust. Torts Rep. 81-000 (Aus.); Amanda Stickley, Australian Torts Law (Lexis Nexis Butterworths 2013) 226, 228–9; Donal Nolan, ‘Varying the Standard of Care in Negligence’ (2013) 72 CLJ 651, 656. 86  See e.g. 
Universal Music Australia Pty Ltd v Cooper (2005) 150 FCR 1 [84] (Aus.); Universal Music Australia Pty Ltd v Cooper (2006) 156 FCR 380 [149] (Aus.); Universal Music Australia Pty Ltd v Sharman License Holdings Ltd (2005) 22 FCR 465 [181]–[184] (Aus.). 87  See e.g. Pokémon (n. 64) [54] (Pagone J).


imputation can be gathered; in copyright, the existence of a fair dealing defence or a licence88 can be a difficult question of fact and law. If the notice relates to the transitory communications of users, an intermediary may have no ability to evaluate whether the past conduct is actually wrongful.89

3. Conclusions This chapter has explored the principles by which online intermediaries are held liable for third party actions across a range of legal areas in Australia: defamation, vilification, copyright, and content regulation. Modern intermediary liability law is not simply an admonishment against consciously helping others to commit legal wrongs; it is an expectation that in appropriate circumstances intermediaries will proactively prevent wrongdoing by others, sometimes by designing systems that seek to prevent wrongful behaviour. In many of the legal areas canvassed in this chapter, courts and legislators ask intermediaries such as ISPs, search engines, website hosts, and technology developers to take some responsibility for the acts of users that occur over their networks and services. These questions are fundamentally about responsibility. However, the ways in which legal rules and principles have developed to ascribe responsibility to online intermediaries have not always been clear or coherent. In many ways, the push for greater online enforcement and intermediary regulation has not been based on responsibility at all, but has been about capacity—the capacity to do something when faced with knowledge that harm may otherwise result. We argue that much of the uncertainty at the heart of inter­ medi­ary liability law stems from the merger and confusion of concepts of capacity and responsibility. Australia’s current laws lack clear mechanisms for disentangling these concepts and distinguishing those intermediaries that are closely involved in their users’ wrongful acts from those that are not. Many of the areas of intermediary liability covered here have their origins in tort law. Responsibility theory in tort law tells us that a person will be responsible for a harmful outcome where his or her actions caused or contributed to the harm (causation) and where harm was the foreseeable result of those actions such that the person might have acted to avoid the harm but did not (fault).90 This is more than liability based on know­ledge of wrongdoing and a failure to act. It requires more active involvement than 88  See e.g. Viacom Int’l Inc. v YouTube Inc, 676 F.3d 19 (2d Cir. 2012) (US), where some of the allegations of infringement turned out to be authorized; see further Zahavah Levine, ‘Broadcast Yourself ’ . 89  In some cases, e.g. transitory communications, an intermediary has no way of evaluating an allegation of infringement: see Nicolas Suzor and Brian Fitzgerald, ‘The Legitimacy of Graduated Response Schemes in Copyright Law’ (2011) 34(1) UNSWLJ 1. 90  See Perry (n. 12) 513; Goldberg and Zipursky (n. 12) 20–21; Voyiakis (n. 12) 458; Denton (n. 6) 127; Cane (n. 12) 53–4.


that—normally, a clear and direct contribution to the resulting wrong. Our review of Australian intermediary liability law across different doctrines reveals that often, in asking whether intermediaries are liable, courts have been asking what intermediaries can do to prevent harm. But, in most cases, courts have not been closely examining the intermediary’s causal role in the wrong to determine whether the intermediary indeed ought to be held responsible. The result is that Australian law has sometimes ascribed liability without first establishing fault.


Chapter 13

From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia

Kyung-Sin Park

Germany’s social media law sounds innocent to the extent that it obligates social media companies to take down only ‘unlawful content’ and holds them liable for failure to do so.1 However, the lessons from Asia where liability was imposed on failure to take down notified unlawful content speak otherwise. Such an innocent-sounding rule has profound repercussions. It is no wonder that Germany’s impact on other countries was discussed in a 2018 case on ‘whether online speech regulation laws in democratic countries could lead to a damaging precedent for illiberal states to also regulate online speech in more drastic manners’.2 Unless we want to paralyse the freedom of unapproved uploading and viewing and therefore the power of the internet, an intermediary that cannot possibly know who posts what content, should not be held responsible for defamation or copyright infringements committed via third party content hosted on its services. If intermediaries 1 See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in ­sozialen Netzwerken, NetzDG) (Ger.). 2  ‘Beyond Intermediary Liability: The Future of Information Platforms’, Yale Law School Information Society Project (13 February 2018) .

© Kyung-Sin Park 2020.


252   Kyung-Sin Park are held liable for this unknown content, the intermediaries will have to protect themselves either by constantly monitoring what is posted on their service (i.e. general monitoring obligations) or by approving all content prior to being uploaded. If that occurs, it can be said that when a posting remains online, it remains online with the acknowledgement and tacit approval of the intermediary that was aware of the posting and yet did not block it. The power of the internet—the unapproved freedom to post and download—will be lost. The United States made headway by stipulating that no ‘interactive computer service’ should be considered the speaker or publisher of that content.3 Some think that went too far because it shielded liability for content clearly known to be unlawful even for the intermediaries themselves. In response, a ‘safe harbour’ regime could have been set up to exempt from liability only content not known about. The EU did just that, although adding a requirement to act on such knowledge in order to obtain immunity,4 while the US Digital Millennium Copyright Act (DMCA)—perhaps heeding calls from Hollywood, the biggest rightholder group in the world—went further by limiting immunity only to when intermediaries take down all content on notification regardless of whether the content is illegal or intermediaries are aware of that fact.5 The DMCA, incentivizing intermediaries into automatic takedowns, is often criticized6 whereas the EU model makes available immunity to intermediaries who receive notification but are not convinced of its substance. In any event, notice-and-takedown safe harbours have spread.7 In this chapter, we will compare against the EU- or US-style approach of the selfproclaimed ‘safe harbours’8 of six major countries in Asia: China, India, Japan, South Korea, Indonesia, and Malaysia—selected for their religious diversity from Islamism to Confucianism, a combined population covering 3.4 billion out of the total 4.5 billion on the continent,9 and extremely high internet-penetration rates.10 Confusion abounds. 3  See the Communication Decency Act [1996] 47 USC § 230 (US). 4  See Council Directive 2000/31/EC of the European Parliament and the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (e-Commerce Directive) [2000] OJ L178/1, Art. 14. 5  See the Digital Millennium Copyright Act of 1998, 17 USC § 512(c) (US). Importantly, the noticeand-takedown safe harbour is not applicable to content where intermediaries have actual knowledge of its illegality even before and without notice being given by a rightholder or any other person. 6  See Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act: Summary Report’, Electronic Frontiers Foundation, Takedown Hall of Shame ; Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24 Harv. J of L. & Tech. 171. 7  See Daniel Seung, Comparative Analysis of National Approaches of the Liability of the Internet Intermediaries (WIPO, 2010). 8  ibid. (noting e.g. that ‘the [DMCA] safe harbor provisions served as the template for the enactment of similar defenses in the European Union, the People’s Republic of China, and India’). 9 See World Population Review, Asia . 10  See Internet World Stats .


Contrary to the term ‘safe harbour’, some civil society organizations have characterized China’s system as ‘strict liability’.11 Maybe the truth lies in between as we shall discover.

1. China 1.1  Basic Laws and Regulations As in other countries, the default pre-safe-harbour position of Chinese law is that inter­ medi­ar­ies who have the requisite knowledge of infringing activity on their services will be held jointly and severally liable. In addition to that general rule, Article 36 of the Law of Tort Liabilities (LTL) controls intermediary liability in China.12 Article 36 paragraph 3 of the LTL restates the above default position, and paragraph 2 stipulates that when a person whose rights are being infringed sends notification of that fact to the intermediary, which then fails expeditiously to take the necessary measures on receiving that notification, the intermediary is jointly and severally liable with the direct infringer to the extent that any further damages have been caused by its inaction.13 These provisions are considered by Chinese commentators to have been ‘borrowed’14 from Regulation on the Protection of the Right of Communication Through Information Network of the People’s Republic of China,15 which was in turn a Chinese attempt to adapt the DMCA safe harbour.16 11  See Article 19, ‘Internet Intermediaries: Dilemma of Liability’ (2019); Center for Democracy and Technology, ‘Shielding the Messengers: Protecting Platforms of Expression and Innovation’ (December 2012). 12  Promulgated by the National People’s Congress Standing Committee (NPCSC) on 26 December 2009, effective 1 July 2010, in 2010 Standing Comm Nat’l People’s Cong. Gaz. 4 (Ch.) . 13  ibid. Art. 36 (stating: . . . Paragraph 2. Where a user engages in online infringing activity, a right holder so harmed has a right to notify the corresponding service provider, requesting the latter to take necessary measures, such as deleting, screening, removing, references or links to the online infringing material or activity. Where the service provider upon receipt of the notification fails to take prompt measures, the service provider shall be jointly liable for the harm resulted from this failure. Paragraph 3. When the service provider knows that a user injuries other person’s rights and interests and does not take necessary measures, the service providers shall be jointly liable with the user). 14 See Huaiwe He, ‘Online Intermediary Liability for Defamation under Chinese Laws’ (2013) 7 . 15  See Regulation on the Protection of the Right of Communication Through Information Network of the People’s Republic of China, promulgated by the State Council on 10 May 2006, effective 1 July 2006 in St Council Gaz. no. 468, amended 16 January 2013, effective 1 March 2013 in St Council Gaz. no. 634 (Ch.) (hereafter RCIN Regulation). Regulations issued by the executive branch State Council of the PRC are lower in standing than laws passed by the National People’s Congress (NPC) or its Standing Committee, the National People’s Congress Standing Committee (NPCSC). What is unique is that the Regulations are binding on all courts, provided that courts are able to disregard them in the event that they find the Regulations contrary to higher laws. 16  cf. RCIN Regulation (n. 15) Art. 14 (‘Where a right owner believes that a work, performance, or sound or video recording involved in the service of a network service provider who provides information


254   Kyung-Sin Park However, what is glaringly absent from the text of LTL 36 is that there is no clause that states that intermediaries ‘shall be exempt’ from liability in certain circumstances, which is the central predicate of section 512 of the DMCA. This may come as a surprise to Chinese commentators who typically reason as follows: Upon receiving notification, if the ISP expeditiously takes measures to disable the allegedly infringing online material, it may be exempt from liability. For instance, in Gengxin Chen et al v Baidu (Beijing) Network Tech Co Ltd,17 after receiving a notification from the plaintiff, Baidu removed the defaming hyperlink from search results. While finding that the plaintiff ’s reputation was damaged, the court held that the damage was not caused by Baidu and thus Baidu was not liable.18

Notice the phrase ‘may be exempt’. Although the entire paragraph seems to be in conformity with a normal description of DMCA-like notice and takedown, the truth is that there is no phrase ‘may be exempt’ in the relevant statute, nor a phrase ‘shall be’. There is not even an attempt to create an elective safe harbour, let alone a mandatory one. In contrast, the only exemption stated in the statute works in the opposite direction. According to the Supreme Court’s Judicial Interpretation of LTL, the intermediary ‘shall be’ exempt from liability for removing the content on notification to the user who posted the content.19 On the request of the poster, the intermediary is required to provide the content of the notification so that the poster is able to bring a lawsuit against the complainant for damages to restore the removed material or seek damages for removal.20 However, there is no obligation to inform the poster that the posting has been removed. Although RCIN itself is properly phrased after the liability-exempting language of DMCA down to emulating the rule whereby the exemption from liability to the poster is bartered with restoration on the poster’s notification,21 RCIN 14-17 is often interpreted storage space or provides searching or linking service has infringed on the right owner’s right of communication through information network, or that the right owner’s electronic rights management information attached to such work, performance, or sound or video recording has been removed or altered, the right owner may deliver a written notification to the network service provider, requesting it to remove the work, performance, or sound or video recording, or disconnect the link to such work, performance, or sound or video recording. The written notification shall contain the following particulars: (1) the name, contact means and address of the right owner; (2) the title and network address of the infringing work, performance, or sound or video recording which is requested to be removed or to which the link is requested to be disconnected; and (3) the material constituting preliminary proof of infringement. The right owner shall be responsible for the authenticity of the written notification’). 17  See Jiangsu Province Nanjing City Interim Ct Gengxin Chen et al. v Baidu (Beijing) Network Tech Co. Ltd (2014) (Ch.) (emphasis added). 18  He (n. 14) 9. 19  Judiciary Interpretation for Statutory Application in Adjudicating Torts to Personal Rights Through Information Networks promulgated by the Supreme People’s Court on 23 June 2014, effective 10 October 2014, in Sup. People’s Ct Gaz. no. 12, Art. 7(1) (Ch.) (Judiciary Interpretation for Online Torts). 20  ibid. Arts 7(2) and 8. 21  See RCIN Regulation (n. 15) Art. 16 (a network service is not liable for damages by reason of storage of works, performance, sound recordings/video recordings (collectively called ‘material’) at the direction of a user who makes the material available to the public through the information network, if the service


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   255 as a liability-imposing rule (or at least so that a failure to obtain a safe harbour directly translates into a liability) as you will see in cases such as Fanya and Pomoho. This only adds to the deep divergence between section 512 of the DMCA and LTL 36.

1.2  Liability-Imposing v Liability-Exempting Overall, the belief of some commentators that the Chinese LTL resembles the DMCA’s notice and takedown is misplaced although it is less so for RCIN. The DMCA was not enacted to specify when intermediaries would be held liable for third party content; it was enacted to specify when intermediaries would not be held liable. Therefore, the fact that an intermediary does not take corrective action under the DMCA is in itself not a cause for liability.22 Its failure to take corrective action under the DMCA only starts a substantive analysis of joint and several liability under general tort law. Intermediaries are not required to take corrective action under the DMCA but, if and when they choose to do so, they receive the legal benefit of exemption from joint and several liability for the infringing content. In contrast, LTL 36 specifies when intermediaries will be held liable when it states ‘where the service provider on receipt of the notification fails to take prompt measures, the service provider shall be jointly liable for the harm resulting from this failure’. Such a transformation of the liability-exempting regime into a liability-imposing regime has tremendous consequences for intermediaries’ liability exposure and therefore their behaviour. First, once a liability-imposing regime has been created—depending on how broadly intermediaries’ obligations are interpreted to respond under the notice-and-takedown regime—intermediaries can be held liable for unknown content. The Supreme People’s Court issued the Judiciary Interpretation for Statutory Application in Adjudicating Torts to Personal Rights Through Information Networks of which Article 5 requires an provider meets the following conditions: (1) clearly represents itself as a provider of storage services and posts its name, the contact person, and its web address; (2) does not modify the stored material; (3) does not know and has no reasonable ground to know that the stored material is infringing; (4) does not obtain financial benefit directly from the user who make available to the public the stored material; and (5) upon receiving notification from a copyright owner, deletes the stored material according to the Regulation); ibid. Art. 17 (‘[u]pon receiving a written explanatory statement delivered by a service recipient, a network service provider shall promptly replace the removed work, performance, or sound or video recording, or may replace the disconnected link to such work, performance, or sound or video recording and, at the same time, transfer the written explanatory statement delivered by the service recipient to the right owner. The right owner shall not notify the network service provider anew to remove the work, performance, or sound or video recording, or to disconnect the link to such work, performance, or sound or video recording’). Translations excerpted from He (n. 14). Some commentators believe that Chinese judicial practice as well as these provisions on copyright have lived up to their promise of delivering a liability-exempting regime. See Jie Wang, ‘Regulating Hosting ISPs’ Responsibilities for Copyright Infringement: The Freedom to Operate in the US, EU and China’, Ph.D. Dissertation at Maastricht University, October 2016 . 22  See DMCA, s. 512(l).


256   Kyung-Sin Park effective notification to include the following: (1) the identity of the complainant; (2) the web address which is the target of remedial measures or the information necessary to identify the infringing material; and (3) the grounds for removing the target material.23 One company’s complaint stating that ‘we learned that there are comments posted on the Yudao Search Engine stating that our water is of poor quality’ without specifying the comments was not considered sufficient to trigger the notice-and-takedown ­obligations.24 However, when Baidu refused to respond to a copyright holder’s (Fanya) request to address illegal copies of ‘about 700 songs’ without any specified URL, the Supreme Court still held Baidu responsible for not requesting Fanya to produce the web ad­dresses.25 Although Baidu v Fanya was a RCIN 16 case, intermediaries’ obligations are considered no more lenient for personal rights than for copyright and it can be easily inferred that an intermediary receiving an LTL 36 notice is under a similar obligation to affirmatively require the complainant to supplement its first defective notice. Indeed, intermediaries’ obligations under LTL 36 may be even more austere than under RCIN 16. In one instance, even when an intermediary, on receiving an incomplete notice, did request a URL,26 the court said that the intermediary should have taken earlier proactive measures to remove the insulting comments even without the web address having been notified. The impact of Fanya-like cases is clear: once intermediaries are held liable for content not effectively notified, intermediaries will start to take down all notified content, regardless of the substance or procedural efficacy of that notice. When intermediary li­abil­ity rules are liability-exempting, intermediaries’ compliance with the intermediary liability regime will give certainty that they will not be held liable under general tort law. In contrast, a liability-imposing regime only adds another layer of liability in addition to general tort liability. Intermediaries not liable under general tort law (e.g. lack of previous knowledge) may still be liable for not acting on notification and, conversely, compliance with notice and takedown does not guarantee that an intermediary will not be held liable. Under this regime, intermediaries are bound to over-enforce. Secondly, LTL 36 imposes liability not only in cases of non-response but also in cases of knowledge, and does so explicitly: ‘when the service provider knows that a user injures another person’s rights and interests and does not take the necessary measures, the service provider shall be jointly liable with the user’. Although section 512 of the DMCA also has a red flag component whereby exemption for infringing content is not recognized when the infringement is ‘apparent’ to the intermediary, such knowledge only disarms the exemption. It does not, in itself, become a basis for liability as in the

23  See Judiciary Interpretation for Online Torts (n. 19) Art. 5. 24  See Shanghai No. 2 Interim People’s Ct Michun (Shanghai) Beverage & Food Co. Ltd v Qihu (Beijing) Tech. Co. Ltd et al. [2015] [pkulaw.cn] CLI.C.4275913 (Ch.). 25  See Sup. People’s Ct Fanya e-Commerce Co. Ltd v Baidu (Beijing) Network Tech. Co. Ltd [2009] [pkulaw.cn] CLI.C.1766439 (Ch.). 26  See Nanjing City Gulou Dist Ct Chen Tangfa v Blog Information Tech. (Hangzhou) Co. Ltd [2006] [pkulaw.cn] CLI.C.1436674 (Ch.).


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   257 case of LTL 36. It is in this sense that LTL 36 is a liability rule while section 512 of the DMCA is an exemption rule. The liability rule may be a restatement of general tort liability but, as such, the end result brings us back to face the perils of the pre-safe-harbour days: no matter how cautious courts claim to be in finding constructive knowledge,27 counter-examples abound. For instance, a video platform was held liable under RCIN for not proactively looking out for the release of a famous film28 and Baidu was held liable under LTL for not pro­active­ly disabling nude photographs of a popular celebrity alleged to be having an affair,29 all without any notification from the rightholders. It is a very serious threat if liability can be incurred for infringing someone’s right even if no one cries foul, especially where so much information is automatically de­livered to so many people without anyone’s knowledge. Intermediaries will be tempted into general monitoring or pre-approval. It is difficult to maintain a happy medium since once some level of editing begins, it is deemed that there is more capacity for editing, and eventually liability will be incurred for third party content. Article 9 of the Judiciary Interpretation of Online Torts instructs courts deciding on knowledge issues to consider ‘the technological capabilities of the internet service provider [ISP] to take measures to prevent infringement and what the ISP in fact did’.30 The more measures are taken to prevent unlawful content, the more content an ISP will be held responsible for. Hence a liability trap. Thirdly, even if LTL 36 clearly states that intermediaries will be held responsible only for unlawful content and even if the requirements of knowledge and notification are carefully guarded, it is very difficult for intermediaries to know in advance whether certain content is unlawful, and they will seek shelter by taking down even lawful content. 27  See Zhejiang Province Shaoxing City Dist. Ct Ma Weiying v Weimeng Kechuang (Beijing) Network Tech. Co. Ltd [2012] [pkulaw.cn] CLI.C.2680424 (Ch.) (finding that knowledge on the part of the ISP must be assessed according to the cognizant capability of a normal reasonable person. In the present case, the disputed blog did not include any obvious ‘vilifying, insulting, denigrating or tarnishing words . . . the defendant could not learn that the disputed blog infringed any private rights’). Translations excerpted from He (n. 14). 28 See Beijing no. 2 Interim People’s Ct Huayi Bros Media Group v Pomoho [2009] [pkulaw.cn] CLI.C.188931 (Ch.). 29  See Shanghai no. 2 Interm People’s Ct Yin Hong v Baidu (Beijing) Network Information Co. Ltd [2009] [pkulaw.cn] CLI.C.1765615 (Ch.). 30  See Judiciary Interpretation of Right of Communication through Information Networks, Art. 9 (Ch.) 
(‘Courts should consider the following factors in assessing whether an ISP should know an Internet user was using its service to infringing other’s right of communication through information networks: (1) The nature of the Internet service involved, its probabilities of being used to infringe the right, and the capabilities of the ISP to police the service; (2) The nature and publicity of the disputed work of authorship, performance or video/audio recording, and whether they can be easily characterized as infringing materials; (3) Whether the ISP selected, edited, modified or recommended the disputed work of authorship, performance or video/audio recording; (4) Whether the ISP takes reasonable measures to prevent infringement; (5) Whether the ISP designated convenient procedures to receive notifications of infringement and to take expeditious measures to check infringement upon notification; (6) Whether the ISP took reasonable measures towards repeated infringing activities by the same Internet user’). Translations excerpted from He (n. 14).


Now, one may wonder about the practical consequences of this result. Even under the DMCA, a great deal of lawful content is taken down because intermediaries have chosen to benefit from the exemption provision. Under the LTL, they are also incentivized to take down lawful content because they do not know in advance whether the content is lawful or unlawful. In both scenarios, the intermediaries are incentivized to take down: the key difference being that they choose to do so under the DMCA while they are effectively coerced into doing so to avoid liability under the LTL. The difference manifests itself in put-back. If the author of content attempts to prove legitimacy, intermediaries under an exemption rule will have the leeway to reinstate the content because losing the exemption is not directly equivalent to being liable. However, intermediaries under a liability rule will not have that leeway because being wrong about the lawfulness of the content directly translates into liability.
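The structural contrast drawn in this section can be summarized schematically. In the simplified sketch below (which abstracts from each statute's further conditions and is offered for illustration only), N stands for receipt of an effective notification, T for the intermediary's takedown of the notified content, and L for the intermediary's liability for that content:

\[
\text{exemption rule (DMCA-style):}\quad (N \land T) \rightarrow \lnot L
\]
\[
\text{liability rule (LTL 36-style):}\quad (N \land \lnot T) \rightarrow L
\]

Under the first formula, an intermediary that declines to act merely forgoes a benefit and its liability falls to be decided under general tort law; under the second, inaction after notice (or a wrong guess about lawfulness) is itself a source of liability, which is why intermediaries under such a rule remove whenever in doubt.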

1.3 Conclusions

The Chinese intermediary liability regime for defamation is neither strict liability nor a true safe harbour. It is limited liability in the sense that intermediaries are held liable only for (i) unlawful content that (ii) they are aware of. However, instead of formulating an exemption rule (e.g. 'shall not be liable for unknown or lawful content'), China created a liability rule that imposes liability (a) for known unlawful content or (b) for unlawful content notified by the rightholder. Such a liability-imposing rule suffers from judicial interpretations gravitating towards broad conceptions of knowledge and effective notification, unfairly holding intermediaries liable and naturally incentivizing them into proactive censorship. Also, even if knowledge and effective notification are strictly interpreted, intermediaries are likely to err on the safe side and delete notified content even when it is lawful.

2. India

2.1  Basic Laws and Regulations

India's intermediary safe harbour provisions are in the Information Technology (IT) Act as amended in 2008. Under section 79 of the IT Act, intermediaries can avail themselves of a safe harbour as long as they did not have 'actual knowledge' of the third party content and have complied with the various due diligence requirements promulgated by the government in 2011 (hereafter Intermediary Guidelines 2011).31 Intermediaries are required to take down such content within thirty-six hours of receiving 'actual knowledge'
31  See IT Act, s. 79 (India) (the amendment in 2008 provided immunity for intermediaries which did not initiate or interfere with the transmission, stipulated that the government would provide guidelines


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   259 (including a complaint from a user). In the landmark judgment of Shreya Singhal, the Supreme Court interpreted the requirement of ‘actual knowledge’ to mean receipt of a court order by the intermediary.32 The court clearly stated that intermediaries should not be made to take down content until they receive a judicial order requiring them to do so.33 It is an anomaly to require knowledge to be established by a court order. We shall see how this transpired in the following. Some commentators seem to think that immunity and liability are co-extensive (e.g. the minimum requirements for obtaining immunity are identical to the minimum requirements for avoiding liability), and therefore losing immunity is equivalent to establishing liability. Hence, ‘[w]hether intermediaries might be exposed to secondary liability in the context of online defamation depends on whether or not they qualify for immunity under the IT Act.’34 However, a close reading suggests otherwise. Exemption from liability of intermediary in certain cases—(1) Notwithstanding ­anything contained in any law for the time being in force but subject to the provisions of sub-section (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him. (2) The provisions of sub-section (1) shall apply if—(a) the function of the intermediary is limited to providing access to a communication system over which information made available by third parties is transmitted or temporarily stored or hosted; or (b) the intermediary does not—(i) initiate the transmission, (ii) select the receiver of the transmission, and (iii) select or modify the information contained in the transmission; (c) the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf. (3) The provisions of sub-section (1) shall not apply if—(a) the intermediary has conspired or abetted or aided or induced, whether by threats or promise or otherwise in the commission of the unlawful act; (b) upon receiving actual knowledge, or on being notified by the appropriate Government or its agency that any information, data or communication link residing in or connected to a computer resource controlled by the intermediary is being used to commit the unlawful act, the intermediary fails to expeditiously remove or disable access to that material on that resource without vitiating the evidence in any manner.35

As can be seen, the text shows that the safe harbour is liability-exempting not ­li­abil­ity-imposing. It does not set out when intermediaries will be held liable but when inter­medi­ar­ies will not be held liable. Therefore, it is not the fact that an intermediary’s liability for third party content depends on whether it has obtained immunity under that would apply to intermediaries as well as requiring intermediaries to expeditiously take down content once they received ‘actual knowledge’). 32  Shreya Singhal (decided May 2015) 12 SCC 73, s. 117 (Ind.). 33  Sunita Tripathy, Vasudev Devadasan, and Bani Brar, ‘Understanding the Recent Developments in the Law Regulating Cyber Defamation and the Role of the Online Intermediary in India’ (undated) . 34 ibid. 35  IT Act, s. 79 (Ind.).


section 79 of the IT Act, because there can be situations where the intermediary may fail to obtain immunity and yet is not held liable.

2.2  Liability-Imposing v Liability-Exempting

The problem arises from interpretation of the Intermediary Guidelines 2011, which state in paragraph 3 as follows:

Due diligence to be observed by intermediary—The intermediary shall observe following due diligence while discharging his duties, namely: . . . (4) The intermediary, on whose computer system the information is stored or hosted or published, upon obtaining knowledge by itself or been brought to actual knowledge by an affected person in writing or through email signed with electronic signature about any such information as mentioned in sub-rule (2) above, shall act within thirty six hours and where applicable, work with user or owner of such information to disable such information that is in contravention of sub-rule (2). Further the intermediary shall preserve such information and associated records for at least ninety days for investigation purposes.

Now, what makes this provision confusing is that section 79 of the IT Act merely seems to state the requirement of immunity, not of liability. One such requirement is observance of due diligence. Thus far, due diligence is not an obligation but a choice which is relevant only in the event that intermediaries wish to apply for immunity. However, the Intermediary Guidelines 2011 may have their own binding force. One civil society commentator seems to think so when describing the Intermediary Guidelines 2011 as ‘the rule that mandates the intermediary to disable the content’ and as ‘endowing an adjudicating role to the intermediary in deciding questions of fact and law, which can only be done by a competent court’.36 It is noticeable that the applicability of immunity does not decide whether inter­medi­ ar­ies will be held liable. The inapplicability of immunity does not in itself mean that intermediaries will be held liable since it has yet to go through the pre-safe-harbour regu­lar torts analysis. The facts that disqualify the said intermediary from immunity may also help to prove its liability. For instance, failure to take action after receiving notification will be a factor contributing to its liability. However, failure to take action will be only one of the factors deciding the liability question. Also, the language ‘endowing an adjudicating role’ will make sense only if intermediaries are held liable for ­getting it wrong (e.g. retaining unlawful content and mistaking it for being lawful). The Intermediary Guidelines 2011 do not impose a blanket requirement that intermediaries must remove all unlawful content on knowledge of the same and must do so correctly; it simply grants the immunity of section 79A of the IT Act only to intermediaries who 36 Software Freedom Law Center, ‘Intermediaries, users and the law—Analysing intermediary li­abil­ity and the IT Rules’ (2012) .


remove all unlawful content once they have knowledge. Intermediaries getting it wrong are not immediately held liable for getting it wrong. In a sense, the prevailing view of the Intermediary Guidelines 2011 is that they are liability-imposing, not liability-exempting.

2.3  Dialectical Turn of Singhal

In an unexpected turn, the court in Singhal accepted the liability view of section 79 of the IT Act and the Intermediary Guidelines 2011 and dissolved what it believed to be a problem in the only relevant paragraph in the case as follows:

Section 79(3)(b) has to be read down to mean that the intermediary upon receiving actual knowledge that a court order has been passed asking it to expeditiously remove or disable access to certain material must then fail to expeditiously remove or disable access to that material. This is for the reason that otherwise it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not. We have been informed that in other countries worldwide this view has gained acceptance, Argentina being in the forefront.37 Also, the Court order and/or the notification by the appropriate Government or its agency must strictly conform to the subject matters laid down in Article 19(2). Unlawful acts beyond what is laid down in Article 19(2) obviously cannot form any part of Section 79. With these two caveats, we refrain from striking down Section 79(3).38

Section 512 of the DMCA does not require takedown notices to come from courts. The whole idea of the DMCA safe harbour is that intermediaries do not have to correctly decide on the substantive legitimacy of takedown requests. As long as they comply with them (and comply with restoration requests with the same level of automation), they will not be held responsible either for the unlawfulness of any third party content or for its removal. Section 79 of the IT Act is also an exemption rule on its surface: as long as intermediaries comply with the due diligence rule in the Intermediary Guidelines 2011 (i.e. take down on notification), they will not be held liable for third party content. If we take this exemption view of section 79 of the IT Act, the Supreme Court’s holding does not make sense. Under a true safe harbour like section 512 of the DMCA, Facebook and Google are not even supposed to try to decide whether the notified content is lawful. They are expected to claim exemption for all content simply by complying with takedown requests (and with restoration requests). Likewise, intermediaries under section 79 do not have to decide whether notified content is unlawful: they can claim 37 See Argentinean Supreme Court Rodriguez  M.  Belen v Google y Otro s/daños y perjuicios (29  October 2014) R.522.XLIX (Arg.). One can only think that this case is what the Indian Supreme Court had in mind but the Argentinean decision requires a court order as a prerequisite for imposing liability, not for exempting it. 38  Shreya Singhal (n. 32) 117.


262   Kyung-Sin Park exemption for all content simply by removing content they come to know to be unlawful. They do not have to make any effort to try to be accurate because being inaccurate does not mean that they will be held liable. The Intermediary Guidelines 2011 state that intermediaries ‘shall act within 36 hours’ but only if they wish to claim exemption—that is, if the exemption-rule view of section 79 correct. The Singhal court may have taken a liability-rule view of section 79: that this provision holds intermediaries liable if they do not comply with due diligence rules under the Intermediary Guidelines 2011. If so, they may have been right to be worried about private, non-judicial notifications pulling intermediaries into liability. Of course, such a liability-rule view is, however, contrary to section 512 of the DMCA which explicitly states that failure of immunity does not mean liability39 and whose optional character has been established by the legislators.40 There are obviously differences between section 512 of the DMCA and section 79 of the IT Act. One such difference concerning the Singhal court could be that section 79 provides an incentive only in one direction: towards taking down as much as they can, while section 512 of the DMCA provides incentives for reinstating content. The Singhal court may have been worried that too much content would be taken down if inter­ medi­ar­ies operating under the unidirectional incentive faced difficulty in deciding on the legitimacy of content. In the end what is important is that the Singhal court tried to undo what it believed to be a problem and created one of the strongest safe harbours whereby intermediaries are shielded from liability even if they refuse to take any action on all non-judicial takedown requests. Indeed, the stories of platforms refusing to take down and not being held liable until they are ordered to take down by the courts abound in relation to section 230 of the US Communications Decency Act (CDA).

2.4 Conclusions

India's section 79 'safe harbour' is properly configured as an exemption rule. However, the enforcement rule of section 79 and the Intermediary Guidelines 2011 impose
39  Falling outside the safe harbours does not incur liability for infringement. See 17 USC § 512(l) ('[t]he failure of a service provider's conduct to qualify for limitation of liability under this section shall not bear adversely upon the consideration of a defense by the service provider that the service provider's conduct is not infringing under this title or any other defense'). 40  See Senate Report 105-190—The Digital Millennium Copyright Act of 1998, 55 ('[n]ew section 512 does not define what is actionable copyright infringement in the online environment, and does not create any new exceptions to the exclusive rights under copyright law. . . . Even if a service provider's activities fall outside the limitations on liability specified in the bill, the service provider is not necessarily an infringer; liability in these circumstances would be adjudicated based on the doctrines of direct, vicarious or contributory liability for infringement as they are articulated in the Copyright Act and in the court decisions interpreting and applying that statute, which are unchanged by section 512. In the event that a service provider does not qualify for the limitation on liability, it still may claim all of the defenses available to it under current law').


affirmative obligations to take down prohibited content on notification. The Singhal court may have been worried about the censorship-incentivizing effects of section 79 or may have taken the liability-rule view of the Intermediary Guidelines 2011, and decided to restrict the action-triggering notifications to court decisions only. The end result is one of the 'safest' harbours in the world where intermediaries do not have to take down anything unless the courts find the content to be unlawful. This is similar to section 230 of the CDA where intermediaries do not have to take down anything unless ordered by a court to do so.

3. Japan

3.1  Basic Laws and Regulations

The Diet passed the Provider Liability Law in 2001, which covers both copyright and non-copyright law.41 According to section 3(1), when someone's rights are infringed by a flow of information, intermediaries 'shall not be held liable for the damage caused unless it is technically feasible to take measure to prevent the transmission of infringing information to unspecified persons' and either: (1) the intermediary had known the fact of the infringement; or (2) the intermediary knew the existence of the relevant information and there were 'reasonable grounds' for the intermediary to know the fact of the infringement. This is similar to the EU's e-Commerce Directive, where the intermediary is exempt for unknown content even if it is unlawful, as long as the intermediary has neither actual knowledge nor apparent knowledge and, when the intermediary does have that knowledge, it takes down the content.42 (The comparison to the e-Commerce Directive is especially appropriate since both laws are horizontal; that is, they cover all fields of

41 See Act on the Limitation of Liability for Damages of Specified Telecommunications Service Providers and the Right to Demand Disclosure of Identification Information of the Senders, Act no. 137 of 2001, amended by Act no. 10 of 2013 (Jap.) (English translation) (‘Provider Liability Limitation Act’ or PLL). 42  To ensure that it is appropriate to compare an EU transnational law to a Japanese national law, it may be advisable to look at how Member State laws are shaped by the e-Commerce Directive in the EU which resulted in national laws such as in Germany, which amended the Act on Utilization of Teleservices as follows: ‘Section 8 General Principles . . . (2) Providers as defined under Sections 9 to 11 shall not be obliged to supervise information they have transmitted or stored or to research to determine circumstances that indicate an illegal activity. Obligations to remove or block the use of information under binding law shall remain unaffected even if the provider is not responsible pursuant to Sections 9 to 11. . . . Section 11 Storage of Information (Hosting). Providers shall not be responsible for third-party information that they store for a user if (1) they have no actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent, or (2) acts expeditiously to remove or to disable access to the information as soon as they become aware of such circumstances’).


264   Kyung-Sin Park li­abil­ity, although we are comparing all safe harbours across national and sectoral lines anyway.) Japanese commentators are not satisfied: [t]he ISP is faced with a delicate and hard choice . . . When the ISP received a complaint that some statements posted on its bulletin board was defamatory, however, it had no means to know whether the defendant was negligent or whether the defendant could assert immunity. As a result, for the fear that the ISP might be held liable if it refuses to remove defamatory statements, it is likely to remove them even if the defendant could not be held liable. Such an attitude of the ISP would wipe out many protected speech from the Internet. The Provider Liability Law is, I believe, unconstitutional so long as it subjects the ISP to liability when there is a reasonable ground to believe that it should have known that the information it was carrying invaded someone’s rights . . . I therefore believe that if someone wants the ISP to remove the defamatory statements from its bulletin boards, he or she should go to the court first seeking order to remove them and the court must review the claim and decide whether the defendant would be likely to be held liable before issuing an order to remove them.43

However, it is not the case that the Provider Liability Limitation Act 'subjects [any] ISP to liability' on the basis of the existence of 'reasonable ground' for an intermediary's constructive knowledge. The Act merely states that an intermediary will not be held liable if it had neither actual knowledge nor constructive knowledge. The statement 'if no notice, no liability' is different from the statement 'if notice, then liability'. The enforcement regulations confirm this: '[c]onsequently, liability for damage shall not always be incurred if the response is not made in accordance with these Guidelines. Conversely, the provider, etc. shall not be absolved from liability for damages even if the provider, etc. responds in accordance with these Guidelines.'44
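The distinction can also be put formally. Using K for the intermediary's actual or constructive knowledge and L for liability (a simplified sketch of the statutory structure rather than the statutory text), the Act asserts only that

\[
\lnot K \rightarrow \lnot L, \quad\text{which is equivalent to}\quad L \rightarrow K,
\]

that is, knowledge is stated as a necessary condition of liability, not a sufficient one. The converse, \(K \rightarrow L\), is nowhere asserted; whether a knowing intermediary is in fact liable remains a question of general tort principles.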

3.2  Comparison With Other Safe Harbours

There may be concern that even a liability-exempting safe harbour could unduly favour takedown over retention,45 but such a critique takes on the entire class of safe harbour regimes, including Article 14 of the EU's e-Commerce Directive and section 512 of the DMCA in the United States, albeit to differing degrees. The idea behind a safe harbour is to create a balance away from section 230 of the CDA in order to make intermediaries do something about known unlawful content, or at least to make


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   265 ­inter­medi­ar­ies do something without worrying that their actions will be used as ­evidence against them (e.g. of their ability to take down content which will justify penalizing their inactions). Any such system is bound to invite some criticism because it is designed to increase the censorship activities of intermediaries beyond doing nothing. The three safe harbours (Art. 14 of the e-Commerce Directive, s. 512 of the DMCA, and s. 3 of Japan’s Provider Liability Limitation Act (PLL)) vary in their degree of incentivizing intermediaries into censorial activities. Section 512 of the DMCA conditionally46 requires taking down all notified content expeditiously and Article 14 of the e-Commerce Directive conditionally requires intermediaries to take down only known content expeditiously. On its surface, section 3 of the PLL seems similar: the ‘technical feasibility’ of removal also disqualifies intermediaries which probably means that inter­medi­ar­ ies should remove unlawful content as soon as they have knowledge (or constructive knowledge). However, on closer inspection, the Japanese safe harbour seems narrower than the EU’s in the following hair-splitting sense: broad constructive knowledge (e.g. ‘reasonable grounds’ for know­ledge of illegality) is sufficient to disqualify an intermediary from the safe harbour while in the EU only actual knowledge of illegality or apparent illegality is sufficient for disqualification. What is interesting is that the intermediary is also given exemption for removing unknown content. At the same time, according to section 3(2) the intermediary shall not be held liable to the originator of the information for taking measures to prevent the circulation of infringing information if: (1) such measures are reasonably necessary; or (2) the intermediary received notification of infringement, then relayed it to the originator, and did not receive any notice of non-consent within seven days of doing so. The reality is that it is unusual for intermediaries to be held liable for removing content in any event, with or without the PLL, so that there is little incentive for giving the originator notice of the proposed takedown. This is unlike the Canadian notice-and-notice where notification to the author of a post is mandatory.47

3.3 Conclusions

Japan has a proper exemption-type safe harbour but it is narrower in scope than the EU e-Commerce Directive because even 'reasonable grounds' short of apparent illegality become a basis for disqualifying exemption. However, it is broader in scope than the DMCA's safe harbour in the sense that failure to act on notification does not in itself
46  i.e. if intermediaries want exemption. 47  See Copyright Act, s. 31.1 (Can.) (providing that OSPs are 'exempt from liability when they act strictly as intermediaries in communication, caching and hosting activities'). Canada has already established section 230-type exemption for intermediaries in the area of copyright. As well as and independent of this safe harbour, OSPs must comply with the notice-and-notice system (s. 41.25–41.26) whereby, on receipt of any notice of infringement, the intermediary service provider does not have to remove the alleged infringing content, but must forward the notice to the alleged infringer (s. 41.26(1)(a)) on penalty of an administrative fine.


disqualify intermediaries from exemption; only failure to act on knowledge does so, although knowledge itself is interpreted broadly to include 'constructive knowledge'.

4. Indonesia

On 30 December 2016, the Ministry of Communication and Informatics released Circular Letter 5 of 2016 on the Limitations and Responsibilities of Trade Platform Providers and Merchants through Electronic Commerce Systems in the Form of User-Generated Content, its first ever attempt at an intermediary safe harbour in any field of law.48 The ministry dubbed the circular Indonesia's safe harbour policy for e-commerce platforms; it therefore does not apply to all intermediaries, only to those mediating the sale of goods and services. The ministry intends to follow this circular with the Ministerial Regulation on Safe Harbor Policy for User Generated Content, which is expected to contain more detail than the 2016 circular. However, the Regulation has not yet been released. Prior to the Regulation's release, the Ministry of Communications and Informatics has indicated that it will hold public consultations in order to collect input. The circular on safe harbours aims to clearly distinguish the roles and responsibilities of e-commerce platforms, users, and all other parties in the e-commerce ecosystem. It aims to establish a safety and reporting protocol for e-commerce platforms, as well as defining restricted content for both users and platforms, which includes (but is not limited to): negative content (e.g. pornography, gambling, violence, and goods/services deemed illegal by other legislation); intimidating content (e.g. goods/services depicting gore, blood, horrific accidents, and torture); violation of intellectual property rights; hacking and illegal access to electronic systems; provision and/or access to drugs, addictive substances, and hallucinogenic substances; illegal weapons; trafficking of people and organs; and protected flora and fauna.49 The circular on safe harbour obligates e-commerce platforms to include a mechanism which allows users to report discovered illegal goods and services. When a platform is alerted to illegal goods and services, it is mandated to take them down within one, seven, or fourteen days depending on the severity of the content. Content deemed harmful to national security and human health must be taken down within one day, pornography within seven days, and goods/services that infringe intellectual property rights within fourteen days.50

48  See Indonesian Ministry of Communication, Circular Letter 5 of 2016 on the Limitations and Responsibilities of Trade Platform Providers and Merchants through Electronic Commerce Systems in the Form of User-Generated Content (2016) (Indonesian only) (hereafter Circular Letter). 49 ibid. 50 ibid.


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   267 E-commerce platform providers are also obligated under the circular to provide clear information on which items are illegal and forbidden from being traded on their systems.51 Users and merchants must enter an agreement (agreeing on terms and conditions) with the platform provider prior to the use of its services. Concerns about overuse or criminalization of the Circular Letter’s definition of illegal content appear to be somewhat limited, considering that it does not set any sanctions or penalties for non-compliance, and e-commerce platform providers are also not obligated to report illegal activity to law enforcers. However, it is very likely that illegal activities by merchants and/or users will be dealt with through criminal sanctions detailed in the Information and Electronic Transactions Law.52 One anomaly is that intermediaries are obliged to actively evaluate and monitor merchants’ activities on their platforms. It is not clear whether the obligation to ‘actively evaluate and monitor merchants’ activities in their platforms’ will require providers to  establish an advanced content-filtering system in their platforms.53 That said, the Indonesian safe harbour also suffers from confusion over the distinction between a li­abil­ity rule and an exemption rule. It is unclear what is meant by the Circular Letter: Section V, titled ‘Limitation and responsibility of Platform Providers or Electronic System Providers and Merchants in Trade Through Electronic Systems (Electronic Commerce) in the Form of User Generated Content’, of the Circular states: C.  Obligations and Responsibilities of UGC [User-Generated Content] Platform Providers 1. The obligations of the UGC Platform Provider include: a. Presenting the terms and conditions for using the UGC Platform which at least contains the following: 1) obligations and rights of Merchants or Users to use the UGC Platform services; 2) the obligations and rights of the Platform Provider in carrying out the UGC Platform business activities; 3) Provisions regarding accountability for uploaded content. b. Providing Reporting Facilities that can be used to submit complaints regarding Prohibited Content on the UGC Platform it manages, to obtain the least information including: 1) specific links leading to Prohibited Content; 2) reasons/ basis for reports of Prohibited Content; 3) supporting evidence of the report, such as screenshots, statements, brand certificates, power of attorney. c. Acting on complaints or reporting on content, including: 1) conduct the examination of the truth of the report and ask the reporter to complete the requirements and/or include other additional information related to the complaint and/or reporting in the event that is needed; 2) take action to remove and/or block the prohibited content; 3) give notification to the Merchant that the content uploaded is Prohibited Content; 4) provide a means for Merchants (Merchants) to argue that the content they upload is not Prohibited content; 5) reject complaints and/or reporting if the reported content is not prohibited content. d. Pay attention to the period of removal and/or blocking of 51 ibid. 52  See Law no. 11 of 2008, Information and Electronic Transactions Law (Indon.). 53  See Kristo Molina, ‘Indonesia Implements a Safe Harbor Policy for E-Commerce (Marketplace) Platforms’ (White and Case blog,13 March 2017) .


268   Kyung-Sin Park reporting Prohibited Content: 1) For Prohibited Content that is urgent is no later than 1 (one) calendar day from the report received by the UGC Platform Provider. Prohibited Content is urgent including, but not limited to: i) Products of goods or services that are harmful to health; ii) Products/services that threaten state security; iii) human trafficking and/or human organs; iv) terrorism; and/or v) other content determined by the laws and regulations. 2) Prohibited Content as stated in Roman Letter V Letter B other than urgent Prohibited Content is not later than 7 (seven) calendar days from the report received by the UGC Platform Provider; 3) Prohibited Content as mentioned in Roman Letter V Letter B number 1 letter e, that is, content related to goods and/or services that contain content that violates intellectual property rights is not later than 14 (fourteen) calendar days since the complaint and/or reporting is received by the UGC Platform Provider with supporting evidence required. e. Evaluate and/or actively monitor the activities of the Merchants in the UGC platform. f. Comply with other obligations established under the provisions of the legislation. 2. Responsibilities of the UGC Platform Provider include: a. being responsible for implementing electronic systems and content in a platform that is reliable, safe and responsible. b. The provisions of letter (a) above do not apply if there is an error and/or negligence of the merchant or platform user.54

The impact of paragraph V.C.2 of the Circular Letter is unclear. Paragraph V.C.1 states the 'obligations' of the platform provider while paragraph V.C.2 states the 'responsibilities', which seem to refer to the technical reliability of the platform. If so, the notice-and-takedown regime discussed in paragraph V.C.1, standing alone, reads as though it is liability-imposing, meaning that platform operators must comply with the notice-and-takedown process and their failure to remove the prohibited content on notification will directly translate into liability. This is evinced by the requirement that the platform 'conduct examination of the truth of the complaint . . . and reject the complaint if it finds the content not prohibited'.55 It thus requires the platform provider to 'get it right'. Again, the risks of a liability-imposing regime discussed earlier in relation to China also apply to the Indonesian legal framework.

5. Malaysia

The Copyright Act of 1987, as amended in 2012, Part VIB 'Limitation of Liabilities of the Service Provider', sections 43B to 43I, provides a safe harbour for internet intermediary liability for copyright infringement.56 The provisions are similar to the DMCA in the United States, whereby ISPs and content aggregators are provided with immunity from liability for copyright infringement if they protect copyright owners by removing or disabling access to infringing content.


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   269 43E. Storage and information location tools (1) The service provider shall not be held liable for storing infringing work at the direction of a user of its primary network; or linking a user via a hyperlink, directory or search engine to an online location which makes an infringing work available. Under the condition that the service provider: (i) does not have actual knowledge that the electronic copy of the work or activity is in­frin­ging; is not aware of the facts or circumstances from which the infringing activity is apparent, which is the red flag requirement (contributory liability); (ii) that the service provider does not receive any financial benefit directly attributable to the infringement of the copyright and that the service provider does not have the right and ability to control the infringing activity (vicarious liability); (iii) that upon receipt of a notification of any infringement, the service provider responds within the time specified to remove or disable access to the material. If no notification is given the service provider shall not be held liable. (2) A test is given for vicarious liability, which includes taking into account any industry practice in relation to charging of services by a service provider.57

Malaysian commentators seem content:58 in relation to the exemption for removal (s.  43F), the penalty for bad faith (s. 43I), and the counter-notification procedure (s. 43H(3)–(5)), they are confident that the Malaysian safe harbour effectively implements section 512 of the DMCA. Notification by copyright owner and its effect (1) The copyright holder can notify the service provider to remove or disable any access to copyright infringing content, provided that the copyright owner shall compensate the service provider or any other person against any damages, loss or liability arising from the compliance by the service provider of such notification. (2) The service provider shall remove or disable any access to work that is infringing copyright within 48 hours after he received the notification.59

However, if the provision above (s. 43H) is read carefully, it implements a liability-imposing framework. It does not conditionally require removing the 'infringing work' but absolutely requires it. This reads like an obligation separate from the one under section 43(I)(1)(iii) to respond to a 'notification of infringement', since there is a substantive difference between an 'infringement' and a 'notification of infringement'.60 Section 43H is therefore able to create all the problems that the Chinese liability-imposing system has generated. In the area of defamation, there is no separate liability safe harbour, but the case of Kho Whai Phiaw v Chong Chieng Jen [2009] 4 MLJ 103 discussed the liability of bloggers for third party content, and ruled that bloggers are not to be considered publishers of the

57  ibid. Part VIB s. 43E. 58  See Ida Madieha Abdul Ghani Azmi, Suzi Fadhilah Ismail, and Mahyuddin Daud, ‘Internet Service Providers Liability for Third Party Content: Freedom to Operate?’, Conference Paper DOI: 10.1109/ CITSM.2017.8089226 (August 2017). 59  Copyright Act of 2012 (n. 56) Part VIB s. 43H. 60  ibid. s. 43(I)(1)(iii).


defamatory comments where there is no specific knowledge of the comments, even if the bloggers retain the ability to edit and control access to the blog space. However, the new section 114A of the Malaysian Evidence Act 1950 came into force in July 2012 and provides that all persons who act as owners, hosts, administrators, editors, or sub-editors, or who facilitate the publication or republication of any publication, are presumed to be publishers under the law unless otherwise stated. The potential impact of this law on intermediary liability is tremendous but has not yet materialized in real cases.

6.  South Korea

6.1  Introduction: Basic Laws and Regulations

We began with China's limited liability rule, contrasted it with the true, albeit misunderstood, safe harbours of India and Japan, and then pointed to the risks of the Indonesian and Malaysian legal frameworks, which, despite claiming to implement 'safe harbours', are in fact liability-imposing regimes by nature. In this section, we will look at South Korea, where a liability-imposing rule was established earlier, and we will examine its impact on intermediary behaviour. The theory to test runs as follows: pre-safe-harbour, intermediaries are liable for content that they (1) know to exist and (2) know to be unlawful. The liability-imposing rule applies to a subset of these knowledge-laden cases where the intermediary has received a notice of infringement and therefore has knowledge of the existence of the unlawful material. Technically, when it receives notification all it has is knowledge of the existence of some controversial material, which is not equal to knowledge of unlawful material. However, the liability-imposing rule holds the intermediary immediately liable if it wrongly decides on the illegality. Cornered by fear of liability, intermediaries tend to take down clearly lawful content. Is this also the case in South Korea? Article 44-2 (Request to Delete Information) of the Act Regarding Promotion of Use of Information Communication Networks and Protection of Information reads:

Anyone whose rights have been violated through invasion of privacy, defamation, etc., by information offered for disclosure to the general public through an information communication network may request the information communication service provider handling that information to delete the information or publish rebuttal thereto by certifying the fact of the violations. Paragraph 2. The information communication service provider, upon receiving the request set forth in Section 1 shall immediately delete or temporarily blind, or take other necessary measures on, the information and immediately inform the author of the information and the applicant for deleting that information. The service provider shall inform the users of the fact of having taken the necessary measures by posting on the related bulletin board. [omitted] Paragraph 4. In spite of the request set forth in Section 1, if the service provider finds


it difficult to decide whether the rights have been violated or anticipates a dispute among the interested parties, the service provider may take a measure temporarily blocking access to the information ('temporary measure', hereinafter), which may last up to 30 days [omitted] Paragraph 6. The service provider may reduce or exempt the damage liability by taking necessary actions set forth in Paragraph 2.61

As is immediately apparent, the provision is structured not with phrases such as ‘the ­service provider shall not be liable when it removes . . .’ but starts out with the phrase ‘the service provider shall remove . . .’ Paragraph 6, referring to the ‘exemption from or reduction of liability in event of compliance with the aforesaid duties’, makes a feeble attempt to turn the provision into an exemption provision but the exemption here is not mandatory. This means that intermediaries will accept paragraph 2 obligations as mandatory, not conditional. Historically, the predecessors of Article 44-2 simply required the service provider to take down content on the request of a party injured by that content and did not provide any exemption.62 The law was amended in 2007 in Article 44-2 to create a ‘temporary (blind) measure’ for ‘border-line’ content, which the service provider can now resort to in fulfilling its responsibility under the previous law.63 The central idea that continued to remain was that the intermediary must take action (temporary or not) on infringing content on notification. Again, the general idea of holding intermediaries liable for identified infringing content seems innocuous but the South Korean cases are compelling for why it should be abandoned.64 As you will see later, intermediaries respond by removing even lawful content, and courts impose liability when the illegality is apparent only in hindsight reinforcing the censorial tendencies of intermediaries. Now, this failure to set up a liability-exempting regime does not mean that the law directly incentivizes inter­medi­ar­ies into a general monitoring obligation or prior approval for uploading. It may just mean maintaining the pre-safe-harbour status quo based on general torts joint li­abil­ity. However, the reality in Korea shows that such a half-baked attempt may worsen the situation.

6.2  Proof: Intermediary Behaviour and Courts' Expansionist Interpretation

Politicians and government officials often make takedown requests for clearly lawful postings critical of their policy decisions, such as postings critical of a Seoul City

61  Act Regarding Promotion of Use of Information Communication Networks and Protection of Information, Art. 44-2 para. 1 (Kor.). South Korean legislation can be found at . 62  See Law no. 6360 of 16 July 2001, Network Act, Art. 44(1)–(2) (Kor.). 63  See Law no. 8289 of 27 July 2007 (Kor.). 64  See Woo Ji-Suk, ‘A Critical Analysis of the Practical Applicability and Implication of the Korean Supreme Court Case on the ISP Liability of Defamation’ (2009) 5(4) Law & Tech. 78, 78–98.


272   Kyung-Sin Park mayor’s ban on assemblies in Seoul Square;65 a posting critical of a legislator’s drinking habits;66 clips of a television news report on the Seoul police chief ’s brother who al­leged­ly ran an illegal brothel hotel;67 a posting critical of politicians’ pejorative remarks on the recent deaths of squatters and police officers in a redevelopment dispute;68 a posting calling for immunity from criminal prosecution and civil damage suits on labour strikes;69 and a posting by an opposition party legislator questioning a conservative media executive’s involvement in a sex exploitation scandal relating to an actress and her suicide.70 Some of these requests were accepted by the intermediaries and the requests submitted to an independent progressive intermediary were not accepted, and others were restored after public scrutiny intensified. MP Choi Moon-soon obtained the relevant data from the top three content host intermediaries though the Korea Communications Commission and revealed that they took down 60,098 postings in 2008, 96,503 in 2009, and 100,000 estimated for 2010 in November of that year.71 MP Nam Kyung-pil obtained similar data on the top two content hosts Naver and Kakao taking down 209,610 postings in 2011 and MP Shin Yong-Hyun reports 1,643,528 takedowns by Naver and 442,330 takedowns by Kakao between January 2012 and June 2017.72 Notice a possibility that the announced availability of legal recourse may have encouraged more people to submit takedown requests, many of which intermediaries blindly comply with just to be safe. The courts have worsened the situation by taken expansionist approaches in the same  way as the Chinese courts, which might be expected since both China and South Korea are liability-imposing regimes. In a crushing judgment in 2009,73 the South Korean Supreme Court held Naver, Daum, SK Communications, and Yahoo Korea liable for the defamation of a plaintiff when user postings on those sites accused him of deserting his girlfriend during her second pregnancy after he had talked her into aborting the first, after which the girlfriend committed suicide. The court upheld judgments of 10 million won, 7 million won, 8 million won, and 5 million won, respectively, against those services, stating that: [i]ntermediary shall be liable for illegal contents . . . when (1) the illegality of the content is clear; (2) the provider was aware of the content; and (3) it is technically 65  See (Korean only). 66  The original posting now taken down was at (Korean only). 67  See (Korean only). 68  See (Korean only). 69  See (Korean only). 70  The original posting now taken down was at (Korean only). See the following news report on the takedowns: ‘NHN-Daum took down 298 postings related to Chang Ja-Yeon’ (ZD Net, 15 April 2009) 71 See (Korean only). 72 See , (Korean only). 73  See Supreme Court 2008Da53812 (16 April 2009) (Kor.).


Lessons From China, India, Japan, Korea, Indonesia, and Malaysia   273 and financially possible to control the contents . . . The Court will find the provider’s requisite awareness under (2) above: a) when the victim has requested specifically and individually for the takedown of the content b) when, even without such request, the provider was concretely aware of how and why the content was posted OR c) when, even without request, it was apparently clear that the provider could have been aware of that content.74
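The quoted holding may be restated schematically (a simplified sketch, not the court's own formulation). Writing L for liability, C for clear illegality of the content, A for the provider's requisite awareness, and F for the technical and financial ability to control the content, with R, W, and P denoting the three routes to awareness identified by the court (a specific and individual takedown request, concrete awareness of how and why the content was posted, and apparent discoverability even without any request):

\[
L \Leftarrow C \land A \land F, \qquad A \Leftarrow R \lor W \lor P.
\]

It is the third route, P, that makes the test so demanding, since liability can then attach to content of which the provider was never actually notified.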

The intermediaries here could not investigate the plaintiff ’s affairs with his girlfriend and yet were held liable for not removing them upon notifications. Only the courts, armed with hindsight, could comfortably find the comments defamatory, which means that intermediaries will be forced to err on the side of deleting. On top of that, the most renowned part of the judgment concerned what inter­medi­ ar­ies must do with content for which no notification is given at all. The conclusion of the court was that the intermediary will be absolutely liable for a posting later found to be ‘clearly’ defamatory if ‘it was apparently clear that the provider could have been aware of that content’ even if the victim did not notify the intermediary of the existence of the content. This sets up probably one of the strictest intermediary liability regime because it imposes liability for ‘unknown but could-have-known’ content. Note the parallel that can be drawn with the Chinese Fanya case discussed earlier. Anupam Chander plainly describes this ruling as stating that a web service ‘must delete slanderous posts or block searches of offending posts, even if not requested to do so by the victim’.75 Of course, the DMCA’s notice-and-takedown immunity76 also does not apply to content where the OSP had ‘actual knowledge’ of its infringing nature or an ‘awareness of facts or circumstances from which infringing activity [was] apparent’. However, the DMCA is a safe harbour provision. It merely says that the safe harbour will not apply in the event of ‘actual knowledge’ or ‘awareness’. It does not say that the OSP will be held liable in the case of such knowledge or awareness. Furthermore, in 2012 the Constitutional Court even interpreted Article 44-2 of the Network Act as requiring the takedown of unlawful content as well as lawful content. The court stated: ‘if the pre­requis­ites are met, the service provider must without hesitation take the temporary measure’.77 Now, the prerequisites do not include the illegality of the content as shown in this paragraph asserting that requiring intermediaries to blind lawful content is constitutional: [t]he instant provisions are purported to prevent indiscriminate circulation of the information defaming or infringing privacy and other rights of another . . . Temporary blocking of the circulation or diffusion of the information that has the possibility of such infringement is an appropriate means to accomplish the purpose.78

74 ibid. 75  Anupam Chander, ‘How Law Made Silicon Valley’ (2014) 63 Emory L.J. 639. 76  See DMCA (n. 5) ss. 512(c)(1)(A)(ii) and 512(d)(1)(A). 77  Constitutional Court 31 May 2012 Decision 2010 Hun-ma 88 (Kor.). 78 ibid.


6.3  Origins: Syntactical Error in Adopting Section 512 of the DMCA?

Interestingly, this gratuitous and possibly unconstitutional censorship obligation on lawful content is often justified by reference to section 512 of the DMCA. Table 13.1 shows a cross-jurisdictional comparison of the relevant provisions relating to third party postings against which a takedown notice was sent to the hosting intermediary (hereinafter 'noticed posting'). As can be seen from Table 13.1, it is clear that South Korea tried to adopt a provision approximating the safe harbour provisions of the EU, the United States, and Japan, through the Copyright Act, Article 102. The problem is the existence of Article 103 of the Copyright Act. Other countries' laws consist of just one provision corresponding to South Korea's Article 102 but South Korea adds the superfluous Article 103, which might

Table 13.1  Takedown notices for third party postings

Europe: e-Commerce Dir., Art. 14(1)
  Trigger: on obtaining knowledge or awareness of the infringing information
  Liability exemption for 'noticed posting': on condition that the provider acts expeditiously to take down the content, the service provider is not liable for the information stored
  Liability: N/A

United States: DMCA, s. 512(c)
  Trigger: on obtaining knowledge or awareness of facts and circumstances of infringing material or activity; or on notification of claimed infringement
  Liability exemption for 'noticed posting': if the service acts expeditiously to take down the material claimed to be infringing, the service provider shall not be liable
  Liability: N/A

Japan: Provider Liability Law, Art. 3(1)
  Trigger: when the relevant service provider knew or there were reasonable grounds for the provider to know
  Liability exemption for 'noticed posting': unless it is technically possible to take down the infringing content, the service provider shall not be liable for any loss incurred from such infringement
  Liability: N/A

South Korea: Copyright Act, Arts 102 and 103
  Trigger: if an OSP actually knows of or has received a takedown notice and thereby learned of the fact or circumstances of an infringement
  Liability exemption for 'noticed posting': if an OSP immediately takes down the noticed posting, the intermediary shall not be liable for any infringement
  Liability: in the event of a takedown request, the OSP must immediately take down (Art. 103(2)), in order to limit its liability (Art. 103(5))


The legislative intent was clearly not that, as can be seen in Article 103(5), which states that any OSP engaging in notice and takedown may reduce its liability. The intent was to replicate section 512 of the DMCA, but in doing so the South Korean legislators broke down the single sentence ‘If notice is given and intermediaries take down, the intermediaries will be exempt from liability’ into two:

(1) When notice is given, intermediaries must take down.
(2) If intermediaries do so, they will be exempt from liability.

Standing alone, Article 103(2) reads as if intermediaries have an independent mandatory obligation to take down lawful content, whereas the DMCA, e-Commerce Directive, and PLL obligations are all conditional (i.e. applicable only if intermediaries seek exemption). The fact that the exemption-giving language of Article 103(5) is non-committal (i.e. ‘may reduce’) does not help to prevent the misreading. See Table 13.2. More problematic is that many in South Korea overlooked Article 103 and believed the Copyright Act to be a sound adaptation of other international norms, especially section 512 of the DMCA, and a similar mandatory notice-and-takedown system was replicated in other areas such as defamation and privacy infringement, hence Article 44-2 of the Network Act. This is very relevant to this discussion because the Malaysian Copyright Act 43H and the Indonesian Safe Harbor Circular V.C.1.c carry the extraordinary risk of reading like stand-alone obligations of on-demand takedown and resemble the very strict liability regime in South Korea. These provisions textually apply only to unlawful content, but so does South Korea’s INCA 44-2.

Table 13.2  Takedown obligations

Rule 1: when notice is given, intermediaries shall not be liable if they expeditiously take down.
Rule 2: when notice is given, intermediaries must take down; if intermediaries do so, they may reduce their liability.

Europe (all claims): Rule 1 applicable; Rule 2 N/A
United States (copyright): Rule 1 applicable; Rule 2 N/A
Japan (all): Rule 1 applicable; Rule 2 N/A
South Korea (copyright): Rule 1 applicable; Rule 2 applicable
South Korea (defamation): Rule 1 N/A; Rule 2 applicable



6.4 Conclusions

As with its Chinese counterpart, the South Korean safe harbour for defamation is not a true safe harbour but a limited liability regime which tends to suffer from expansionist judicial interpretations of ‘knowledge’ and ‘notification’. In turn, this leads to intermediaries’ strategic behaviour of erring on the side of takedown.

7. Epilogue

Chinese and South Korean ‘safe harbour’ provisions for defamation, the Malaysian copyright ‘safe harbour’, and Indonesia’s all-purpose ‘safe harbour’ all have the structure of a liability-imposing regime despite their claims to create a safe harbour. Although they do not explicitly impose intermediary liability for unknown content (the main evil that safe harbour efforts seek to prevent in relation to general monitoring obligations or prior approval), the problem of content known to exist but not yet known to be illegal remains, and it incentivizes intermediaries to delete lawful content through fear of liability. This state of affairs is in a sense worse than the pre-safe-harbour general torts liability because the existence of a procedure invites more people to submit takedown requests. In the light of this, the operation of the true safe harbours built into section 512 of the US DMCA, Article 14 of the EU e-Commerce Directive, and section 3 of the Japanese PLL (and s. 79 of the Indian IT Act, which has grown stronger than its original mandate) should be carefully examined in order to avoid aggravating intermediaries’ liability through inaccurate processes of legal cross-pollination. This will be beneficial not only for informing the discussion in Asia but also for discussions in Europe, such as on Germany’s new Network Enforcement Act, which is clearly based on the belief that intermediaries should be held liable for not removing unlawful content when they are notified of its existence. Korea’s case shows how intermediaries may end up responding: with almost automatic on-demand takedowns. Alternatively, intermediaries may respond by investing in human resources to review all notifications and trying to make decisions as accurate as possible,79 but such a response becoming standard is not satisfactory, because platforms not afforded such resources will struggle under the legal liability forming around that new standard, entrenching the dominance of the current global giants and eroding the promise of the Internet in another abysmal way.

79  Facebook has ‘150,000 content moderators reviewing notifications around the world, taking down more than 10 million postings each year’. Remark by a Facebook representative at a workshop ‘Addressing Terrorist and Violent Extremist Content’ at 14th Internet Governance Forum (Berlin), November 2019.


Chapter 14

China’s IP Regulation and Omniscient Intermediaries: Oscillating from Safe Harbour to Liability

Danny Friedmann

There are opposing forces that influence intermediary liability (IL) regulation in the People’s Republic of China (China). China’s E-Commerce Law draft1 raises the standard for knowledge before infringing information might be removed, while the many laws and regulations involved in China’s Great Firewall to regulate the internet domestically exclude the possibility of ignorance.2 Moreover, developments in big data3 and artificial intelligence4 have caught up with discussions about the desirability of safe harbours5 and the degree of filtering requirements.6

1  On 26 December 2016, the Standing Committee of the National People’s Congress (NPC) issued the first draft of China’s first E-Commerce Law. The public consultation period ended on 26 January 2017. No drastic changes from the first draft are expected. See ‘Legislation to regulate the market order is imminent’ (NPC, 27 December 2016).
2  China’s Great Firewall effectively censors information the government deems unfit for a harmonious society. The regulations are partly overlapping and fragmented in relation to the organizational layers of the internet. Examples include Art. 57 of the Telecommunications Regulation and Art. 15 of the Measures on the Administration of Internet Information Services. Subsequently, online service providers will receive lists of words that need to be immediately censored. To be able to abide by the comprehensive censorship requirements, online platforms have put human review systems in place. Qian Tao, ‘The knowledge standard for the Internet Intermediary Liability in China’ (2011) 20(1) IJLIT 11.
3  See Tom Brennan, ‘Alibaba Launches “Big Data Anti-Counterfeiting Alliance” ’ (Alizila, 16 January 2017).
4  See Cade Metz, ‘Google’s AI Wins Fifth And Final Game Against Go Genius Lee Sedol’ (Wired, 15 March 2016). See also Sarah Zhang, ‘China’s Artificial-Intelligence Boom’ (The Atlantic, 16 February 2016).

© Danny Friedmann 2020.


On the other hand, there is case law, codified in 2016 in guidelines for Beijing courts,7 which reinforces the duties of care. This chapter applies a holistic approach by analysing the individual forces to assess their influence on IL case law.

Since a growing part of copyrighted works and trade-marked goods in China is consumed via network service providers,8 the regulation of IL has become ever more relevant. In July 2016, it was estimated that about 52 per cent of the population in China (721 million people) had access to the internet.9 Goods are traded between businesses and consumers, between consumers, and between businesses at enormous online market platforms, such as those of the Alibaba Group.10 To give an indication of the magnitude of online sales in China, on 11 November 2016, ‘Singles’ Day’, the Alibaba Group sold US$18 billion in one day.11

China took notice of the introduction of the Digital Millennium Copyright Act (DMCA)12 in the United States and the e-Commerce Directive13 in the EU, in 1998 and 1999 respectively. China’s IL regulation has evolved from broad and granular to more specific and sophisticated. The regulation of IL for copyright infringement led the way, followed by case law in regard to trade mark infringement, to which it was applied by analogy. The big leap forward that refined and codified China’s experience with IL for copyright infringement was the promulgation of the Regulations for the Protection of the Right of Communication through the Information Network in 2006 (Regulations 2006).14

5  See Danny Friedmann, ‘Sinking the Safe Harbour with the Legal Certainty of Strict Liability in Sight’ (2014) 9(2) JIPLP 148–55.
6  Online platforms use different kinds of digital fingerprinting systems to help content creators to manage and enforce their copyrighted works; and provide systems to submit complaints of copyright and trade mark infringement. Untitled (AliProtect, undated).
7  See Beijing High People’s Court Guidelines on Trial of IP Cases involving Networks (13 April 2016) (Ch.) (hereafter Beijing Guidelines).
8  Most western intermediaries for user-generated content are blocked in China. Therefore, national champions have developed in China which, after first having imitated their western counterparts, went on to emulate some of them by combining different functionalities. Danny Friedmann, ‘Rise and demise of U.S. social media in China: A touchstone of WTO and BIT regulations’ in Paolo Farah and Elina Cima (eds), China’s Influence on Non-Trade Concerns in International Economic Law (Routledge 2016).
9  Internet Live Stats; elaboration of data by the International Telecommunication Union, World Bank, and United Nations Population Division.
10  Taobao Marketplace is an online marketplace for consumer-to-consumer business; Taobao Mall (TMall) for business-to-consumer business; and Alibaba for business-to-business. They are all part of the Alibaba Group in Hangzhou, Zhejiang.
11  ‘Singles Day: Alibaba breaks record sales total’ (BBC News, 11 November 2016).
12  17 USC § 512, 112 Stat. 2860 Pub. L. 105-304, 28 October 1998 (US).
13  Council Directive 2000/31/EC of the European Parliament and the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L178/1.
14  Regulations for the Protection of the Right of Communication through the Information Network, promulgated by decree of the State Council no. 468, adopted at the 135th Executive Meeting of the State Council on 10 May 2006 and coming into effect on 1 July 2006 (Ch.) (hereafter Regulations 2006).


The Regulations 2006 clarify which kinds of network service providers are eligible for safe harbours, and make clear when immunities are lifted. In the case of trade mark law, China, for the first time, issued an E-Commerce Law in 2017.15

Copyright law,16 which touches on the expression of ideas, has always been politicized in China. With filter technology and monitoring obligations, Chinese authorities have focused on unwelcome expressions that were deemed to endanger social harmony.17 Trade mark law,18 instead, was considered apolitical and the filtering of counterfeit products more or less a private affair.19 However, convergence between IL regulation for copyright and trade mark infringement has been taking place. With the emergence of the comprehensive and extremely ambitious Social Credit system, which rewards moral and punishes immoral conduct from a Chinese socialist perspective, the realm of trade mark law also becomes political.20 The success of the Social Credit system, and the ability to monitor online conduct, depends on real-name registrations. Although the Chinese authorities have tried real-name registrations since 2012, with little success,21 overcoming this challenge would have far-reaching consequences for identifying direct infringers. This redirection of liability away from intermediaries will probably be coupled with increased monitoring obligations for intermediaries and foreshadows the end of safe harbours.22

Although the doctrine of internet sovereignty rules supreme in China, the country does not operate in a vacuum. China acceded to the 1967 Stockholm Act of the Paris Convention in 1984, and the 1971 Paris Act of the Berne Convention in 1992.23 These World Intellectual Property Organization (WIPO) conventions were negotiated before the digital revolution, and remain silent about the internet and the relationship between network service providers, rightholders, and internet users.

15  See E-Commerce Law (n. 1).
16  See Copyright Law amended up to the Decision of 26 February 2010 by the Standing Committee of the National People’s Congress on Amending the Copyright Law (Ch.).
17  See Danny Friedmann, ‘Paradoxes, Google and China’ in Aurelio Lopez-Tarruella (ed.), Google and the Law: IT and the Law (TMC Asser 2012) 15.
18  See Trademark Law (as amended up to Decision of 30 August 2013, of the Standing Committee of National People’s Congress on Amendments to the Trademark Law).
19  Counterfeit products that threaten health and safety can, however, cause serious social upheaval. An example: ‘China “fake milk” scandal deepens’ (BBC News, 22 April 2004).
20  See State Council Guiding Opinions concerning Establishing and Perfecting Incentives for Promise-Keeping and Joint Punishment Systems for Trust-Breaking, and Accelerating the Construction of Social Sincerity, State Law no. (2016)33 of 30 May 2016 (Ch.), original and translation at China Copyright and Media. See also Catherine Lai, ‘China announces details of social credit system plan, deemed “Orwellian” by critics’ (Hong Kong Free Press, 3 January 2016).
21  See Catherine Shu, ‘China attempts to reinforce real-name registration for Internet users’ (TechCrunch, 1 June 2016).
22  See Friedmann (n. 5).
23  See Berne Convention for the Protection of Literary and Artistic Works of 1886, last amended in 1979.


In 2001, with the digital revolution in full swing, China became a member of the World Trade Organization (WTO). It thereby accepted the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs),24 which largely incorporated the Paris and Berne Conventions, without specifically taking the internet into account. However, in 1996, WIPO filled the gap by introducing the WIPO Internet Treaties (the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty). Section 512 of the Digital Millennium Copyright Act (DMCA) in the United States in 1998, and Article 14 of the e-Commerce Directive in the EU, which implemented the WIPO Internet Treaties in 1999, have heavily influenced China’s policy towards IL for copyright and trade mark infringement. In 2006, the State Council of China promulgated the Regulations 2006,25 which prepared China to become a member of the WIPO Internet Treaties in 2007.

Although IL regulation for trade mark infringement in China followed the copyright regulation, this chapter will discuss trade marks first, since IL regulation for trade mark infringement provides an indicator of the convergence of IL regulation in China. In doing so, Section 1 deals first with the E-Commerce Law 2017 and then covers the historical development of IL for trade mark infringement. Section 2 deals with IL legislation and case law for copyright infringement; Section 3 provides the conclusions. Due to limited space this chapter will not deal with the obligations for intermediaries to disclose the identity of direct infringers, and the consequences if they do not do so, such as withdrawal of their immunity.

1.  Intermediary Liability for Trade Mark Infringement

Counterfeiters in China make extensive use of online market platforms, infringing intellectual property rights (IPRs)26 and endangering the safety and health of the public with their supply of fake and substandard goods. By sending only one or two packages at a time, counterfeiters remain under the criminal thresholds,27 which makes the enforcement of online trade mark infringement particularly challenging.

24  See Marrakesh Agreement Establishing the World Trade Organization, Annex 1C: Agreement on Trade-Related Aspects of Intellectual Property Rights (15 April 1994) 1869 UNTS 299, 33 ILM 1197.
25  See Regulations 2006 (n. 14).
26  IL in the case of patent infringement is another important issue. See e.g. Zaigle v TMall, where the Zhejiang High People’s Court held TMall jointly liable for patent infringement in 2015. See also ‘SBZL Awarded a Top-Ten China IP Case of 2015’ (SBZL Intellectual Property, undated).
27  The threshold for criminal liability is RMB 50,000, pursuant to Art. 140 of the Criminal Law, adopted by the Second Session of the Fifth National People’s Congress on 1 July 1979 and amended by the Fifth Session of the Eighth National People’s Congress on 14 March 1997 (Ch.).


The Alibaba Group, which includes Taobao, TMall, and Alibaba, was identified in the United States Trade Representative’s (USTR) Special 301 Reports of 2008, 2009, 2010, and 2011 for facilitating the sale of counterfeit goods to consumers and businesses. Assurances and measures taken against counterfeit products were the reason that Taobao was removed from the Lists of Notorious Markets in 2012, 2013, 2014, and 2015. However, in 2016 Taobao returned to the list. Not only did the USTR complain that Taobao was taking insufficient measures against counterfeit products; the State Administration for Industry and Commerce (SAIC), China’s authority responsible for trade mark registration and administration nationwide, also made complaints in a 2015 White Paper.28 Subsequently, in 2016, the head of the SAIC, Zhang Mao, did so again in a TV interview, saying that the executive chairman of the Alibaba Group, Jack Ma, is not outside the law and should take responsibility in regard to counterfeit products being traded on online market platforms.29

On 13 April 2016, Alibaba became the first online market platform to become a member of the International Anti-Counterfeiting Coalition (IACC). However, a month later the IACC suspended Alibaba’s membership.30 Alibaba vowed ‘to keep fighting fakes’,31 but a month later Ma was harshly criticized after he asserted that the fakes offered online are often of better quality and better priced than the genuine products.32 During the ‘Two Sessions’ of the National People’s Congress and China People’s Political Consultative Conference in March 2017, Ma urged legislators in an open letter to increase the punishments for counterfeiters.33

Besides deflecting attention away from online market platforms facilitating counterfeiting and towards direct infringers,34 online trading platforms were also successful in lobbying to increase the knowledge standard in China’s first E-Commerce Law draft. In this E-Commerce Law draft,35 the emphasis seems to be more on facilitating e-commerce than on regulating it.

28  The White Paper was retracted on 30 January 2015, two days after being issued. See Megha Rajagopalan, John Ruwitch, and Edwin Chan, ‘Alibaba meets with China regulator, controversial report retracted’ (Reuters, 30 January 2015).
29  See Scott Cendrowski, ‘Chinese Regulator Again Calls Out Alibaba for Counterfeit Goods’ (Fortune, 11 August 2016).
30  Several brands had complained of its non-compliance; another reason was that there was a conflict of interest for the IACC president, who did not disclose that he had stock in the Alibaba Group. See Rishiki Sadam, ‘Anti-Counterfeiting group suspends Alibaba’s membership’ (Reuters, 13 May 2016).
31  See Haze Fan, ‘Alibaba vows to keep fighting fakes despite IACC snub’ (CNBC, 16 May 2016).
32  See David Ramli and Lulu Yilun Chen, ‘Alibaba’s Jack Ma: Better-Than-Ever Fakes Worsen Piracy War’ (Bloomberg Technology, 14 June 2016).
33  See Meng Jing, ‘Alibaba’s Jack Ma calls for laws on counterfeiting to be “as tough as those on drunk driving” ’ (SCMP, 7 March 2017).
34  However, Art. 54 of the E-Commerce Law draft protects vendors against abuse of IP rights, via a system of counter-notifications and the imposition of liability on the rightholder if he causes losses to a vendor who was not selling counterfeit goods. See n. 1.
35  See Mark Cohen, ‘E-Commerce Law Up for Public Comment’ (China IPR, 30 December 2016).


The draft can be perceived as a way of trying to ‘dilute duties of care already applicable to online trade platforms’.36 Any obligations to take preventive measures have been removed. The draft also leaves uncertainty about the role of administrative enforcement authorities such as the SAIC.37 Article 88 of the E-Commerce Law draft provides that ‘[i]f a third-party platform for e-commerce violates the provisions of Article 5338 and clearly knows or has actual knowledge that the operator of the platform does not take the necessary measures for infringement of IPRs, the relevant departments of the people’s governments at various levels shall order it to make corrections within a prescribed time limit.’39 By contrast, other Chinese laws adopt the lower standard of ‘knowledge’.40 Judicial decisions also make reference to a standard of deemed knowledge (‘knew or should have known’) that creates a duty for intermediaries to intervene against infringements. This proactive measures standard was already accepted in Article 27 of the Beijing Guidelines.41 Although formally this opinion guides only Beijing courts, it can be seen as the accumulation and codification of authoritative case law, and therefore has persuasive power. The Guidelines work horizontally and are relevant to IL for both trade mark and copyright infringement. They provide eight factors that indicate when a platform service provider knows that a network vendor is infringing IPRs:42

(1) if the alleged infringing information is located on the front page of the website, on the front page of a section, or in other obvious visible locations;
(2) if the platform service provider initiated the editing, selection, sorting, ranking, recommendation, or modification of the alleged infringing information;
(3) if there is notification by the rightholder enabling the platform service provider to know that the alleged infringing information or transaction is transmitted or implemented through its network service;
(4) if the platform service provider did not take appropriate reasonable measures even though the same network vendors repeated the infringement;
(5) if there exists information that the network vendor acknowledged that it infringed rights;
(6) if the sale or offering of goods or services is at a price that is clearly unreasonable;
(7) if the platform service provider directly obtains an economic benefit from the spread or transaction of the accused infringing information;
(8) if the platform service provider knows of the existence of an alleged infringement from infringing behaviour on other trade mark rights.

36  SIPS Asia, ‘China: Trade Marks, Draft Ecommerce Law issue’ (Managing Intellectual Property Magazine, March 2017).
37  They have legal authority ex officio and/or inter partes and can request evidence held by platforms and other parties, e.g. payment services and transport companies. See ‘Draft Ecommerce Law Issued for Public Comment’ (SIPS Asia, 28 February 2017).
38  See E-Commerce Law draft (n. 1) Art. 53 (providing that ‘[i]f the e-commerce operator infringes the IP rights within the platform, the online trade platform shall take the necessary measures such as deleting, shielding, breaking the link, terminating the transaction and service according to the law’). Whether the scope of the provision includes social media or search engines or domestic and international purchasers is not yet known.
39  ibid. Art. 88.
40  See Tort Liability Law, adopted at the 12th Meeting of the Standing Committee of the Eleventh National People’s Congress on 26 December 2009, in effect as of 1 July 2010, Art. 36 (Ch.); Trademark Law (n. 18) Art. 52.
41  See Beijing Guidelines (n. 7) Art. 27.
42  ibid.


A ninth factor, which can be found in the case law but not in the Guidelines, although it is probably implied, is that network service providers have to develop and implement methods for notification and takedown.

Before the E-Commerce Law draft, there was no particular nationwide regulation for IL in the case of trade mark infringement. In its absence, the horizontal43 General Principles of Civil Law 198644 and the Tort Liability Law 200945 could be used as the basis for an action against online trade mark infringement. Article 118 of the General Principles of the Civil Law 1986 states that IPR holders have the right to demand that infringement by plagiarism, alteration, or imitation be stopped, that its ill effects be eliminated, and that damages be compensated.46 This Article, together with Article 130 of the same law, which imposes joint liability on two or more persons if they jointly infringe another person’s rights and cause that person damage, served as a basis for IL in the case of trade mark infringement.47

The legislation and case law regarding IL for online copyright infringement were applied by analogy in the case of online trade mark infringement. This was especially the case with the notice-and-takedown and safe harbour provisions of Regulations 200648 and the Tort Liability Law 2009. Article 36(1) of the Tort Liability Law 2009 imposes tort liability on a network user or network service provider who infringes the civil rights or interests of another person through the network.49 Therefore, if the network service provider directly infringes a trade mark, it falls within the provision. Article 36(2) of the Tort Liability Law 2009 provides that the network service provider will be jointly and severally liable if it fails to take the necessary measures in a timely manner, such as deletion, blocking, or disconnection, after being notified.50 The following case law answers the question of what the courts consider ‘necessary’ and what they understand to be ‘timely’.

The Provisions on Relevant Issues Related to the Trial of Civil Cases Involving Disputes over Infringement of the Right of Dissemination through Information Networks of 2012, a judicial interpretation by the Supreme People’s Court,51 sets out a range of factors for determining the liability of online service providers.

43  cf. the e-Commerce Directive, which works across all sorts of IPRs.
44  See General Principles of Civil Law, adopted at the Fourth Session of the Sixth National People’s Congress on 12 April 1986 and promulgated by Order no. 37 of the President on 12 April 1986 (Ch.).
45  See Tort Liability Law (n. 40).
46  See General Principles of Civil Law (n. 44) Art. 118.
47  See Du Ying, ‘Secondary Liability for Trademark Infringement Online: Legislation and Judicial Decisions in China’ (2014) 37 Colum. J. of L. & Arts 541.
48  In Section 2, the Regulations 2006 will be dealt with more comprehensively.
49  See Tort Liability Law (n. 40) Art. 36(2).
50  ibid.


Although these are intended to address online copyright infringements, judges of the Supreme People’s Court have indicated that those principles may also be applied to online infringements of other IPRs, including counterfeiting.52

1.1  Establish Internal Procedures and Enforce Accordingly

Chinese courts have defined criteria for enforcement obligations by online platforms in multiple cases of trade mark infringement brought to their attention. In turn, this has led platforms to establish related internal procedures to comply with courts’ requests.

In Aktieselskabet AF v eBay Shanghai in 2006, the establishment of a procedure to notify the online market platform of trade mark infringement by a vendor was sufficient, because, according to the judge, not all advertised goods can be checked.53 In 2010, E-Land Fashion sued Taobao for contributory liability on the ground that it had complained five times about Xu’s offer for sale and sale on Taobao of goods bearing the E-LAND trade mark, licensed to the plaintiff.54 However, the court held that Taobao had fulfilled its reasonable duty of care as a network service provider.55 In 2011, E-Land Fashion sued Taobao again, this time with more success. The Shanghai First Intermediate People’s Court held an online platform jointly liable with the counterfeiter for the first time in China.56 The court decided that Taobao knew or should have known that its user was selling counterfeit goods but nevertheless had not adopted effective measures to end the illegal activity. The Shanghai Pudong New Area People’s Court held that although Taobao might qualify for immunity from liability—because it had deleted the infringing links—it was aware of Du Guofa’s acts of infringement after reviewing relevant complaints and notice-and-takedown requests from the plaintiff. Despite this knowledge, the defendant did not adhere to its own rules.57

51  On 17 December 2012 the Supreme People’s Court promulgated the Provisions on Relevant Issues Related to the Trial of Civil Cases involving Disputes over Infringement of the Right of Dissemination through Information Networks.
52  See Joe Simone, ‘Comments on the PRC Trademark Law Amendments’ (China IPR, 27 January 2013).
53  See Shanghai First Intermediate People’s Court, Aktieselskabet AF v eBay Network Info. Services (Shanghai) Co. [21 August 2006] 2005 no. 371 (Ch.) (deciding that it would be sufficient for defendants to establish a reporting system to stop online IP violations. The court argued that online market platforms cannot investigate every piece of merchandise sold and even if the defendants had checked the products advertised online, which might be genuine, then fake products could still be delivered offline).
54  See Shanghai Huangpu District People’s Court E-land Fashion (Shanghai) Trade Co. v Taobao Network Co. & Xu [10 September 2010] (Ch.).
55  ibid.
56  See Shanghai First Intermediate People’s Court E-Land Fashion (Shanghai) Trade Co. v Taobao Network Co. & Du Guofa [25 April 2011] 2011 no. 40 (Ch.) affirming Shanghai Pudong New Area People’s Court [17 January 2011] 2010 no. 426 (Ch.).



1.2  Appropriate Reasonable Measures in the Case of Repeat Infringement

In 2010, after seven complaints, E-Land Fashion sued Du Guofa and Taobao for trade mark infringement at the Shanghai Pudong New Area People’s Court,58 even though Taobao had removed the infringing links. The defendants, however, did not take any further measures. The case was decided in favour of E-Land Fashion because Taobao had knowledge of further infringements and intentionally continued to provide services to the infringer, thus becoming guilty of contributory infringement according to Article 50(2) of the Implementing Rules of the Trademark Law 2001. Taobao appealed to the Shanghai First Intermediate People’s Court. The court considered that E-Land Fashion and its licensee, Yinian, had filed 131,261 notice-and-takedown requests (with the relevant explanations and justifications) to Taobao concerning links on its online platform that offered counterfeit goods from September to November 2009. Taobao deleted 117,861 infringing links, and from February to April 2010 E-Land Fashion filed another 153,277 complaints, which resulted in 124,742 deletions. The sheer number of complaints and deletions, after investigation by Taobao, demonstrated that Taobao was aware of the high frequency of infringements; therefore the defendants should have put in place appropriate reasonable measures against repeat infringement.59

In 2012, in Descente v Today Beijing City Info Tech, the Beijing Higher People’s Court held that because the defendant was running a website for group purchasing it should apply not only reactive but also proactive measures.60 In other words, depending on its involvement in the services offered and the process used, its duty of care would increase or decrease.

57  See ‘Rules of Taobao.com Governing the Management of User Behaviors (non-mall)’ (adopted by Taobao on 15 September 2009) (providing that if a Taobao user committed any act of infringement, Taobao could prevent the accused vendor from releasing the goods, withdraw the disputed goods, make public announcements about the penalty, deduct points from the accused vendor, freeze the accused’s account on Taobao.com, cancel the account concerned, etc.). 58  See Shanghai Pudong New Area People’s Court E-Land Fashion v Taobao and Du Guofa [19 July 2010] (Ch.) (where E-Land Fashion claimed compensation for its losses and a public apology in newspapers from the defendants). 59  ibid. It certainly did not help Du Guofa’s defence that he had explicitly stated on his Taobao vendor’s site that some of his goods were high-quality counterfeits. Moreover, Du Guofa did not react with any counter-notifications to Taobao’s notifications that they would remove infringing links. In the eyes of the court, that was another indication that Du Guofa was an infringer. 60  See Beijing Second Intermediate People’s Court Descente Ltd v Today Beijing City Info. Tech. Co. [25 April 2012] 2011 no. 11699 (Ch.) affirming Beijing Higher People’s Court [19 December 2012] 2012 no. 3969 (Ch.).



1.3 Inferences

In 2005, Taobao was sued by the Swiss watch brands Longines, Omega, and Rado. The Swiss watch brands argued that Taobao should have known of counterfeits because of the large price differences between genuine and counterfeit watches offered on its online platform. The parties settled.61

In 2013, the Trademark Law was amended. Article 57(6) of the Trademark Law 2013 became: ‘Providing, intentionally, convenience for such acts that infringe upon others’ exclusive right of trademark use, to facilitate others to commit infringement on the exclusive right of trademark use.’62 Compare this with the previous provision in Article 52(5) of the Trademark Law 2001, which did not include the condition ‘intentionally’ for infringement.63 Prior to 2013, language that included the intentional requirement only appeared in Article 50(2) of the Regulations Implementing the Trademark Law 2002.64 The inclusion of the intentionality language in the Trademark Law 2013 demonstrates the importance that the National People’s Congress’ drafters attached to the issue. Trade mark holders expressed concern that the term ‘intentionally’ excluded constructive knowledge and suggested use of the phrase ‘knew or should have known’. The concern was picked up by the legislators, since Article 75 of the Implementing Regulations of the Trademark Law 2014 does not include the word ‘intentionally’.65

2.  Intermediary Liability in the Case of Copyright Infringement

Before the e-Commerce Directive and the DMCA were able to inspire China’s IL regulation, Article 130 of the General Principles of the Civil Law 1987 could be applied to network service providers which, together with a direct infringer, infringed on a copyright holder’s rights and caused that party damage, in which case they should bear joint liability.66

61  ‘Taobao sued for selling fakes’ (Global Times, 22 July 2011).
62  Trademark Law (n. 18) Art. 57(6).
63  See Trademark Law 2001 (n. 18) Art. 52(5) (stating that ‘impairing by other means another person’s exclusive right to the use of its registered trademark shall constitute an infringement on the exclusive rights to the use of a registered trademark’).
64  See Regulations for the Implementation of the Trademark Law, promulgated by State Council Decree no. 358 of 3 August 2002, and effective as of 15 September 2002, Art. 50(2) (Ch.) (defining infringement as ‘intentionally providing facilities such as storage, transport, mailing, concealing, etc. for the purpose of infringing another person’s exclusive right to use a registered trademark’).
65  See Regulations for the Implementation of the Trademark Law, promulgated by State Council Decree no. 358 of 3 August 2002, revised and promulgated by State Council Decree no. 651 of 29 April 2014, and effective as of 1 May 2014, Art. 75 (Ch.) (‘[a]n act of providing such facilities as storage, transport, mailing, printing, concealing, business premises, or an online goods trading platform for infringing upon another person’s exclusive right to use a registered trademark constitutes an act of providing convenience prescribed in subparagraph (6) of Article 57 of the Trademark Law’).


This fault-based liability doctrine was applied in Music Copyright Society of China v Netease & Mobile Communications in 2002.67 In that case, a distinction emerged between primary and secondary liability. Allegedly, Netease would be primarily liable for direct infringement of the right of communication to the public of the works, in that instance ringtone files. Mobile Communications, instead, would be secondarily liable for alleged negligence in its duty of care to examine the works it was disseminating or, after being informed by the copyright holder, to stop the transmission of the infringed works. However, the Beijing Second Intermediate People’s Court accepted Mobile Communications’ defence that it was merely providing a technical and passive service of network dissemination. It only received ringtones from Netease and forwarded them to its subscribers. According to the court, Mobile Communications was unable to select, examine, or delete the infringing ringtone files it transmitted, and was not at fault. This doctrine is also called the passive conduit or transmission doctrine.68 Article 20 of Regulations 2006 would later codify this conduit or transmission defence.69

2.1 Non-Interference

The same non-interference principle can be observed in Go East Entertainment v Beijing Century Technology in 2004.70 Here, the Beijing High People’s Court held that ChinaMP3.com, by selecting and organizing various links to infringing third party sources, demonstrated that it could discriminate between licensed and unlicensed recordings; this showed that the defendant was negligent in its own duties and had intentionally participated in the illegal dissemination of unlicensed recordings. This made ChinaMP3.com jointly liable with the third party websites under Article 130 of the General Principles of the Civil Law 1987.71 According to the court, it was irrelevant that the plaintiff had not sent any notice-and-takedown requests to enable the defendant to take the necessary measures to remove the infringing links.

66  See Opinions of the Supreme People’s Court on Several Issues concerning the Implementation of the General Principles of Civil Law, Art. 148 (stating that ‘a person who instigates or assists others to perform a tortuous act is joint tortfeasor, and shall bear civil liabilities jointly’).
67  See Beijing Second Intermediate People’s Court Music Copyright Society of China v Netease.com Inc. & Mobile Communications Corp. [20 September 2002] 2002 no. 3119 (Ch.).
68  cf. Transitory Digital Network Communications in 17 USC § 512(a) (US).
69  See Regulations 2006 (n. 14) (under the conditions that the network service provider should not choose or alter the transmitted works, and that the transmitted works be offered only to its subscribers).
70  See Beijing High People’s Court, Go East Entertainment Co. Ltd (HK) v Beijing Century Technology Co. Ltd [2 December 2004] no. 713 (Ch.).
71  Note that Art. 4 of the Interpretation of the Supreme People’s Court on Several Issues Concerning the Application of Law in the Trial of Cases Involving Copyright Disputes over Computer Network 2003 was the basis for the case. The 2006 version of the Interpretation supersedes the previous one and the provision can be found in Art. 3.

66  See Opinions of the Supreme People’s Court on Several Issues concerning the Implementation of the General Principles of Civil Law, Art. 148 (stating that ‘a person who instigates or assists others to perform a tortuous act is joint tortfeasor, and shall bear civil liabilities jointly’). 67  See Beijing Second Intermediate People’s Court Music Copyright Society of China v Netease.com Inc. & Mobile Communications Corp. [20 September 2002] 2002 no. 3119 (Ch.). 68  cf. Transitory Digital Network Communications in 17 USC § 512(a) (US). 69  See Regulations 2006 (n. 14) (under the conditions that the network service provider should not choose or alter the transmitted works, and that the transmitted works be offered only to its subscribers). 70  See Beijing High People’s Court, Go East Entertainment Co. Ltd (HK) v Beijing Century Technology Co. Ltd [2 December 2004] no. 713 (Ch.). 71  Note that Art. 4 of the Interpretation of the Supreme People’s Court on Several Issues Concerning the Application of Law in the Trial of Cases Involving Copyright Disputes over Computer Network 2003 was the basis for the case. The 2006 version of the Interpretation supersedes the previous one and the provision can be found in Art. 3.


More specific regulation in regard to IL for online infringements was promulgated in 2000. This included the Regulation on Internet Information Services, which deals with providing information services to online subscribers.72 Article 15 of the Regulation prohibits the production, copying, publication, or distribution of information containing content forbidden by laws and regulations.73 Article 16 requires network service providers to immediately terminate a transmission, keep a record of it, and report it to the relevant authorities when they find unlawful information.74 The Interpretation of Several Issues Relating to Adjudication of and Application to Cases of Copyright Disputes on a Computer Network was enacted by the Supreme People’s Court in 2000 to clarify IL for online copyright infringement.75 Rule 4 imposes joint liability on the direct infringer and the intermediary for aiding and abetting copyright infringement. Rule 5 also imposes joint liability if the intermediary obtained clear knowledge or was warned by a rightholder based on solid evidence of copyright infringement.

After the above-mentioned Regulation on Internet Information Services, the copyright statutes were updated in 2001. However, the copyright law reform of 2001 only prepared China’s accession to TRIPs and did not deal with IL.76 Then, in 2006, the most important specific piece of legislation for IL for copyright infringement was promulgated: Regulations 2006.77 These Regulations distinguish between the following categories of network service providers: (1) those who provide automatic access to their subscribers;78 (2) those who provide automatic storage of works, performances, and audiovisual recordings from other network service providers to their subscribers;79 (3) those providing subscribers with storage space to make works, performances, and audiovisual recordings available to the public;80 and (4) those providing searching or linking services to their subscribers.81

72  Regulation on Internet Information Services, promulgated by State Council Decree no. 292 of 20 September 2000, Art. 2 (Ch.).
73  ibid.
74  ibid.
75  See Interpretation of Several Issues Relating to Adjudication of and Application to Cases of Copyright Disputes on Computer Network, adopted at the 1144th Meeting of the Adjudication Commission of the Supreme People’s Court on 21 December 2000 and in effect on 21 December 2000.
76  See Yong Wan, ‘Safe Harbors from Copyright Infringement Liability in China’ (2012–13) 60 J. of Copyright Soc’y USA 635.
77  See Regulations 2006 (n. 14). The Regulations were amended on 30 January 2013, by Order No. 634 of the State Council. However, for IL the changes are insignificant. Although the most important laws in China are promulgated by the National People’s Congress or sometimes the Standing Committee of the National People’s Congress, Regulations 2006 and its 2013 amendment were promulgated by the State Council.
78  See Regulations 2006 (n. 14) Art. 20. Cf. 17 USC § 512(a): Transitory Digital Network Communications.
79  See Regulations 2006 (n. 14) Art. 21 (providing that a network service provider caching works, performances, and audiovisual recordings from another network service provider ‘for the purpose of elevating the efficiency of network transmission’ will not be held liable for compensating the rightholder in damages, if it did not alter any of the automatically cached materials, did not affect the originating network service provider’s ability to obtain information about use of the cached materials, and automatically revises, deletes, or disables access to the materials where the originating network service provider does the same). Caching links fall outside the scope of this safe harbour, as Flyasia v Baidu makes clear. See Beijing High People’s Court Zhejiang Flyasia E-Business Co. Ltd v Baidu Inc. [2007] no. 1201 (Ch.) (where Baidu provided on its own initiative access to an archived copy and modified that copy). Cf. 17 USC § 512(b): System Caching.


Network service providers in the first two categories cannot be held liable if they did not interfere with their automatic access or storage processes. For the latter two categories, network service providers have to comply with the following rules to benefit from the protection of the safe harbour.

2.2  Removal after a Notice-and-Takedown Request

Article 15 of Regulations 2006 clarifies that network service providers that provide storage space or searching or linking services for their subscribers should, after they receive a notice-and-takedown request82 from a rightholder, promptly remove or disconnect the link to the work, performance, or audiovisual recording that is thought to be infringing.83 The case law is inconsistent on whether one notice can concern multiple works. The court in Guangdong Mengtong Culture Development v Baidu in 2007 confirmed that this was possible.84 Other courts, however, have demanded that each work should receive its own notification.85

Article 14 of Regulations 200686 states that when rightholders file a notice-and-takedown request they must detail their contact information, information regarding the work, and how it can be located, as well as preliminary evidence of the infringement. After notification, the storage-space provider should promptly87 remove the allegedly infringing work, performance, or audiovisual recording; otherwise it will be liable.88

80  See Regulations 2006 (n. 14) Art. 22. Cf. 17 USC § 512(c): Information Residing on Systems or Networks At Direction of Users.
81  See Regulations 2006 (n. 14) Art. 23. In 2006, the Beijing High People’s Court held that search engines are prima facie not liable for referring to infringing resources because their indexing engines cannot predict, distinguish, or control the content of unrestricted websites they search. See Beijing High People’s Court EMI Group Hong Kong Ltd v Beijing Baidu Network Technology Co. Ltd [17 November 2006] no. 593 (Ch.). However, the safe harbour based on the referrer’s defence is lifted where search engines fail to remove, or insufficiently remove, the infringing links after notification by the copyright holder. This was decided by the Beijing High People’s Court in 2007, holding the search engine Alibaba liable for removing only fifteen out of twenty-six allegedly infringing recordings, pursuant to takedown notices. After receiving from the copyright holder a request to remove twenty-six links, the court held that Alibaba should have known that its search engine contained infringing links to those recordings, and was thus negligent in not terminating all links pointed out. See Beijing High People’s Court Go East Entertainment Co. Ltd (HK) v Beijing Alibaba Technology Co. Ltd [20 December 2007] no. 02627 (Ch.). Compare this with 17 USC § 512(d): Information Location Tools.
82  See Regulations 2006 (n. 14) Art. 22(1) (stating that the network service provider that provides storage space to subscribers should make this clear and indicate its name, contact person, and network address; otherwise it will be held liable).
83  See Regulations 2006 (n. 14) Art. 15.
84  See Guangdong Mengtong Culture Dev. Co. Ltd v Baidu Inc. [2007] no. 17776 (Ch.).
85  See e.g. Warner Music Hong Kong Ltd v Alibaba Info. Tech. Co. Ltd [2007] no. 02630 (Ch.).
86  Regulations 2006 (n. 14) Art. 14.
87  Art. 14 of the Draft of the Regulations of September 2005 defined ‘promptly’ as within five days. However, in the final Regulations 2006 draft, that definition was removed.


If the network service provider removes or disconnects a link to a work, performance, or audiovisual recording that turns out to be non-infringing, the rightholder who requested the removal will be held liable.89 Network service providers will replace or restore a link to the work, performance, or audiovisual recording90 on receipt of a counter-notification by the subscriber in the case of evidence of non-infringement.91

2.3  Necessary Measures

Article 36(2) of the Tort Liability Law 200992 imposes obligations on the network service provider to take necessary measures, such as deletion, blocking, or disconnection, after notification by a rightholder or if it knows that a network user is infringing the civil rights or interests of another person through its network services.93 If the network service provider fails to take the necessary measures in a timely manner, it will be jointly and severally liable for any additional harm along with the network user. Qian Tao has highlighted the debate about the exact scope of the knowledge standard: in the first and second drafts of the Tort Liability Law the term ‘actually knew’ was used, followed by ‘knew’ in the third draft, ‘knew or should have known’ in the fourth draft, and back to ‘knew’ in the fifth and final version.94 The Tort Liability Law makes a distinction between direct tort liability95 and indirect liability.96

2.4  No Financial Benefits

Article 22(4) of Regulations 200697 states that the safe harbour is not available to network service providers providing storage space that receive financial benefits from copyright infringements. However, unlike section 512(c) of the DMCA, it does not mention ‘without the ability to control’, which is the other condition for the US safe harbour. The logic is that a network service provider that has no control over the infringement should not be held liable. However, in China, in principle, a network service provider without control can still be held liable. In practice, the specificity of the connection between the infringed work and the financial benefit for the network service provider determines whether it will lose the immunity provided by the safe harbour.98

88  See Regulations 2006 (n. 14) Art. 22(5).
89  ibid. Art. 24.
90  ibid. Art. 17.
91  ibid. Art. 16.
92  See Tort Liability Law (n. 40) Art. 36(2).
93  ibid. Art. 36(3) (noting that the infringement of the civil rights or interests of another person could concern copyright, trade mark, or patent infringement).
94  Qian Tao, ‘The Knowledge Standard for the Internet Intermediary Liability in China’ (2012) 20 IJLIT 3.
95  See Tort Liability Law (n. 40) Art. 36(1).
96  ibid. Art. 36(2).
97  See Regulations 2006 (n. 14) Art. 22(4).



2.5  Inducement and Contributory Liability

Tortious liability consists of inducement or contributory liability, which are based on encouragement of or aid to the wrongdoer.99 Tortious liability is created where the network service provider has or should have knowledge of the illegal nature of the subscriber’s activity that it has instigated or assisted.100 Article 22(3) of Regulations 2006 states that a network service provider that provides storage space is not liable if it does not know or has no reasonable grounds to know that the works, performances, or audiovisual recordings provided by a subscriber infringe another person’s rights.101 In other words, the network service provider will be held liable if it has actual knowledge—knowledge of the infringement itself—or should have deduced it from the circumstances, as any reasonable person would do.102 This standard of knowledge is noticeably lower than apparent knowledge, where a network service provider deliberately proceeds despite ‘red flags’ to that effect,103 that is, is wilfully blind to the infringement. Therefore, a network service provider becomes liable if it has actual, constructive, or apparent knowledge.104

98  Guiding Opinions of Beijing Higher People’s Court of 2010, s. 25 (noting that ‘[g]enerally, the advertising fees charged by an ISP for the information storage space service provided shall not be determined as the directly gained economic interests; the advertisements added by an ISP to specific works, performances or sound or video recordings may be taken into account in the determination of the fault of the ISP as the case may be’).
99  Before Metro-Goldwyn-Mayer Studios Inc. v Grokster Ltd, 545 US 913 (2005) (US), contributory liability could be used (defendant has knowledge of infringement by another and materially contributed to the infringement) as well as vicarious liability (defendant had control over another’s infringement and had a direct financial interest in it) as actions in the case of copyright infringement. Grokster added intentional inducement (defendant acts with the object of promoting infringement by others) to the possible actions. See Alfred Yen, ‘Torts and the Construction of Inducement and Contributory Liability in Amazon and Visa’ (2009) 32 Colum. J. of L. & Arts 513.
100  Qian Tao (n. 94) 5.
101  See Regulations 2006 (n. 14) Art. 22(3).
102  See Yong Wan, ‘Safe Harbors from Copyright Infringement Liability in China’ (2012) 60 J. of Copyright Soc’y USA 646.
103  Shanghai First People’s Court Zhongkai Co. v Poco.com [2008] (Ch.) (noting ‘[a]s a professional video website, in its daily operation of the website, it must have known, or at least should have known that the movie concerned was unauthorized by viewing the poster and the introduction of the movies by the users’). See also Shanghai First Intermediate People’s Court Sohu v Tudou [2010] (Ch.).
104  Guiding Opinions of Beijing Higher People’s Court of 2010, s. 19(1)–(4) (providing that current content on a prominent page; infringing content on prominent page; infringing content recommended or ranked (apparent knowledge); any selection, organization, or classification of the alleged infringing material uploaded by service receivers is conducted (constructive knowledge)).



2.6 Non-Interference

A few norms provide general principles of non-interference. Article 22(2) of Regulations 2006 asserts that works, performances, or audiovisual recordings may not be altered by a network service provider that provides storage space; otherwise liability will be incurred.105

On 17 December 2012, the Supreme People’s Court promulgated the Provisions on Relevant Issues Related to the Trial of Civil Cases involving Disputes over Infringement of the Right of Dissemination through Information Networks.106 Article 10 of these Provisions107 clarifies that if a network service provider providing web services recommends the latest film and television programmes that can be downloaded, browsed, or otherwise accessed by the public on its web pages, by establishing charts, catalogues, indexes, descriptive paragraphs, or brief introductions or in other ways, that is an indication to the court that the network service provider should have known that its users were infringing.108 Article 12 of the same Provisions states that the court may determine that the network service provider should know that uploaded content is infringing if popular films or TV shows are posted on the homepage or other prominent web pages, or the hosting service provider actively selected, edited, arranged, or recommended the topic of the work, or produced specific ‘popularity lists’.109 The court will assume that the infringing nature of the uploaded work could not have escaped the attention of the hosting service provider, which nevertheless took no reasonable measures to stop the infringement.

3. Conclusions

In a complex interplay of successive laws, regulations, case law, judicial interpretations, and guidelines, a convergence has evolved in the case law of China in regard to IL for copyright and trade mark infringement.

105  See Regulations 2006 (n. 14) Art. 22(2).
106  See Provisions on Several Issues Concerning Application of Law in Civil Dispute Cases of Infringing the Right of Dissemination via Information Network, Judicial Interpretation 2012 no. 20 (passed on 26 November 2012 at the Trial Committee of Supreme People’s Court Conference no. 1561) (original Chinese text; English translation) (hereafter Provisions 2012). See also Gabriela Kennedy and Zhen Feng, ‘China’s highest court clarifies copyright liability of network service providers’ (Hogan Lovells, 5 February 2013).
107  See Provisions 2012 (n. 106) Art. 10.
108  The Supreme People’s Court issued five model cases decided by lower courts on 23 June 2014, which included a civil case concerning copyright infringement: CCTV International v Shanghai TuDou Network Tech. Co. Ltd. See Susan Finder, ‘A model copyright infringement case: “A Bite of China”’ (Supreme People’s Court Monitor, 26 June 2014).
109  See Provisions 2012 (n. 106) Art. 12.


As a result, the Beijing Guidelines 2016110 have successfully codified many trends in the IL case law. These Guidelines have mandatory authority only over Beijing courts; however, their persuasive authority can also reach other courts. Eight factors are indicative for the determination of knowledge by the platform service provider: