Collisions in the Digital Paradigm: Law and Rule-making in the Internet Age 9781509906529, 9781509906536, 9781509906505



English, 425 pages, 2017





Table of contents:
Table of Cases
The Analytical Framework
I. Introduction
II. Elizabeth Eisenstein and the Qualities of Print
III. Digital Information
IV. Conclusion
The Transition to the Digital Paradigm-Analogies and Functional Equivalence
I. Introduction
II. A Historical Perspective
III. Digital Writing
IV. Change and Communication in the Digital Paradigm
V. The Law's Approach to Equating the Old with the New
VI. Functional Equivalence
VII. The Problem of Analogies
VIII. Conclusion
Aspects of Internet Governance
I. Introduction
II. The Internet Governance Forum
III. Technical Governance
IV. Models of Internet Governance
V. Conclusion
The Property Problem
I. Introduction
II. Information as Property-The Debate in the Digital Paradigm
III. The British Commonwealth Approach
IV. The United States' Position
V. Property or Cyberproperty
VI. Conclusion
Recorded Law-The Twilight of Precedent in the Digital Age
I. Introduction
II. Law and Precedent in the Print and Digital Paradigms
III. The Twilight of Precedent?
Digital Information-The Nature of the Document and E-discovery
I. Introduction
II. The Development of E-discovery Rules
III. Common Themes in the Development of E-discovery in Asia-Pacific Jurisdictions
IV. The Rules and Utilisation of Technology
V. Conclusion
Evidence, Trials, Courts and Technology
I. Introduction
II. Orality and Physical Presence of Witnesses
III. Facing Up to Change
IV. Technology in Court
V. The Next Phase
VI. Using Technology to Change Process Models
VII. Conclusion
Social Media
I. Introduction
II. What is Social Media?
III. Social Media Meets the Law
IV. The Googling Juror
V. Lost in Translation-Interpreting Social Media Messages
VI. Other Aspects of Social Media
VII. Conclusion
Information Persistence, Privacy and the Right to be Forgotten
I. Introduction
II. Privacy Themes
III. Privacy Taxonomies
IV. Obscurity of Information-Practical and Partial Obscurity
V. Judicial Approaches
VI. The Internet and Privacy
VII. Search Engines and Information Retrievability
VIII. The Right to be Forgotten
IX. A Right to Update?
X. Conclusion
Reputational Harms
I. Introduction
II. The Publication Issue
III. Google and Defamation
IV. Linking and Publication
V. Reputational Harms-Where Defamation Does Not Tread
VI. Triaging Reputation
VII. Conclusion
I. The Qualities of Digital Information
II. Governance of a Distributed, Dynamic, Changing Environment?
III. Behavioural Change and Values
IV. Old Rules in New Bottles-Seeking Consistency
V. Volume, Dissemination and Availability of Information
VI. Participation, Interactivity and the Message
VII. Who Am I Online?
VIII. The Message is the Medium-What the Law must Recognise
Journal Articles and Web References
Essays and Chapters in Collections


COLLISIONS IN THE DIGITAL PARADIGM

It has been said that the only asset that a lawyer has is time. But the reality is that a lawyer's greatest asset is information. The practice and the business of law are all about information exchange. The flow of information travels in a number of different directions during the life of a case. A client communicates certain facts to a lawyer. The lawyer assimilates those facts and seeks out specialised information (legal information) which may be applicable to those facts. In the course of a generation there has been a technological revolution which represents a paradigm shift in the flow of information and communication. This book is about how the law deals with digital information technologies and some of the problems that arise when the law has to deal with issues arising in a new paradigm.


Collisions in the Digital Paradigm Law and Rule-making in the Internet Age

David Harvey


Hart Publishing
An imprint of Bloomsbury Publishing Plc
Hart Publishing Ltd, Kemp House, Chawley Park, Cumnor Hill, Oxford OX2 9PH, UK

Bloomsbury Publishing Plc, 50 Bedford Square, London WC1B 3DP, UK

Published in North America (US and Canada) by Hart Publishing, c/o International Specialized Book Services, 920 NE 58th Avenue, Suite 300, Portland, OR 97213-3786, USA

HART PUBLISHING, the Hart/Stag logo, BLOOMSBURY and the Diana logo are trademarks of Bloomsbury Publishing Plc

First published 2017

© David Harvey 2017

David Harvey has asserted his right under the Copyright, Designs and Patents Act 1988 to be identified as Author of this work.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

While every care has been taken to ensure the accuracy of this work, no responsibility for loss or damage occasioned to any person acting or refraining from action as a result of any statement in it can be accepted by the authors, editors or publishers.

All UK Government legislation and other public sector information used in the work is Crown Copyright ©. All House of Lords and House of Commons information used in the work is Parliamentary Copyright ©. This information is reused under the terms of the Open Government Licence v3.0 except where otherwise stated. All Eur-lex material used in the work is © European Union, 1998–2017.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: HB: 978-1-50990-652-9
ePDF: 978-1-50990-650-5
ePub: 978-1-50990-651-2

Library of Congress Cataloging-in-Publication Data
Names: Harvey, David, 1946 December 31- author.
Title: Collisions in the digital paradigm : law and rule-making in the internet age / David Harvey.
Description: Oxford ; Portland, Oregon : Hart Publishing, an imprint of Bloomsbury Publishing Plc, 2017. | Includes bibliographical references and index.
Identifiers: LCCN 2017000491 (print) | LCCN 2017000641 (ebook) | ISBN 9781509906529 (hardback : alk. paper) | ISBN 9781509906512 (Epub)
Subjects: LCSH: Digital media—Law and legislation. | Internet—Law and legislation.
Classification: LCC K4240 .H36 2017 (print) | LCC K4240 (ebook) | DDC 347.00285/4678—dc23
LC record available at

Typeset by Compuscript Ltd, Shannon

To find out more about our authors and books visit our website. Here you will find extracts, author information, details of forthcoming events and the option to sign up for our newsletters.


This book is the culmination of many years' involvement in the area of Information Technology and Law. My father introduced me to a Model 1 TRS-80 computer in 1979, suggesting that I should become adept in the use of the machine. His view was that if I did not, technological illiteracy would follow. I took up the challenge and was deeply involved in the introduction of computerisation into the law firm of which I was a partner, first in the form of word processing systems and then in the implementation of trust accounting and other office management processes. In addition I used a personal computer—first a TRS-80 Model 4 and latterly an IBM PC—in my litigation practice, developing reference databases and using spreadsheets to assist in clarifying and illustrating the advantages of settling civil and matrimonial property cases. This was all before the Internet went commercial. Upon appointment to the District Court Bench I became involved in a project the aim of which was to put computers on judicial desks and benches. This included the development of a suite of case management tools for Judges. Then the Internet became available. This was revolutionary, improving and increasing access to case law and legal information to an extent never before imaginable. Sadly, in New Zealand, the development of a paperless Court and the full digitisation of the Courts system suffered a number of setbacks and a loss of will on the part of successive Governments and Ministries. Even now steps towards digitisation and the use of digital systems tend to be imitative of the Print Paradigm rather than grasping the innovative opportunities that the Digital Paradigm offers. Over the years I have been fascinated by the challenges that digital systems pose to the law and legal processes. Some of these challenges have demonstrated problems that established systems of legislation and the development of case law have in keeping up with the onward rush of disruptive change.
Other challenges lie in the way in which digital systems can be used to improve legal and court practice, especially in the area of courtroom and evidence/information presentation. After having spent more years on the Bench than I did in practice, many of the observations in this book are influenced by that experience, and having been involved in the litigation process all my professional life, many of my perceptions are seen through that lens. Much of the thinking behind this book developed over the years and aspects of some of the topics formed the basis for presentations and seminars in which I was



involved, not to mention the undergraduate Law and IT class which I taught (and still teach) at Auckland University. I first coined the term ‘Collisions in the Digital Paradigm’ as the main title of a presentation that I did in Canberra, Australia, for the Australian Digital Alliance in March 2013 about some of the collisions between established copyright principles and digital systems. The topic was not new but it was the first time that I discussed my properties-based theory (then in embryo form) that appears in Chapter 2 of this book. Subsequently I used the ‘Collisions’ trope as a starting point for discussions on other aspects of problems between established legal rules and digital technologies. Some of the fruits of those discussions appear in this book. It had long been a desire of mine to see if there was a way to develop a ‘Unified Theory’ that could be applied by lawmakers and judges when confronting problems, paradoxes or collisions between existing law and technological reality, as well as providing a formula that could be applied when considering the development of new rules for new information technologies. The answer to that—if posed as a research question—is no, although that outcome is, in reality, a little more nuanced. What I would like to think is that this book could provide some signposts for rule makers. Perhaps some indication of traps to avoid. Perhaps an alternative approach based on an understanding of the technology rather than trying to uncomfortably squeeze a technological reality into an ill-fitting corset of existing legal principle. Perhaps a recognition that there may have to be an exceptionalist approach to the development of some of the rules that are applicable to the digital space. But if this book causes the reader to pause, think and recognise that there are collisions and that some of them may be avoidable, then I consider that I have been successful. No work of this nature is the product of a single person.
There are many to whom I am indebted and to whom I owe thanks. By not including some people by name I in no way wish to belittle the extraordinary assistance that I have received from so many quarters. I must acknowledge the continued support of the good people at InternetNZ, especially Dr Ellen Strickland, CEO Jordan Carter, Assistant CEO Andrew Cushen, Issues Adviser James Ting-Edwards, President Jamie Baddeley and Vice-President Joy Liddicoat—dedicated to the Open Internet and engaged with developing proper, pragmatic, workable regulatory structures; Nat Torkington and Russell Brown and the Unconference community of KiwiFoo who have provided me with a private forum to put some ideas out ‘in the wild’; Rick Shera for his continued assistance and insights into the law; Sacha Judd for her innovative ideas and unquenchable enthusiasm; Susan Chalmers for her frequent and useful insights into aspects of Internet Governance and Net Neutrality; Clive Elliott QC and his willingness to publicly debate issues—I value the ‘agora’ principle of the development and honing of ideas; all of those who labour in the digital forensics area and especially Professor Lech Janczewski who annually reminds me that I must do a presentation to the New Zealand Information Security Forum which requires me



to delve into new areas. I acknowledge, too, the support of Barry Brailey, Chair of the New Zealand Internet Task Force, and his Vice Chair Mike Seddon. I must also acknowledge the assistance that has been provided over the years by Netsafe and their wonderful team, Martin Cocker, Sean Lyons and Lee Chisolm. They now embark upon the role of Approved Agency under the Harmful Digital Communications Act and I look forward to continuing to work with them and their legal team as they chart new waters of the law. An enormous debt of gratitude is owed to my judicial colleagues at all levels of the Court system in New Zealand. I acknowledge the pioneering efforts of former Justices Robert Fisher and David Tompkins and my colleagues in the District Court who have had to tolerate the occasional outburst of ‘geek-speak’. They have borne my proclivities in this area with patience, forbearance and, from time to time, amusement. I have derived an enormous amount of assistance from an international cohort—judicial, legal and technical. From England I acknowledge the continued friendship of former County Court Judges John Tanzer and Simon Brown QC, of Paul Sachs of Netmaster Solutions, Chris Dale, Stephen Mason and Sir Henry Brooke, former Lord Justice of Appeal, who continues to astound and educate us per medium of Twitter and his Blog. In Australia the long association that I have had with Professor Graham Greenleaf of the University of New South Wales has kept my mind sharp on how digital systems can improve and enhance access to legal information.
From the United States I want to thank Judge Alex Kozinski of the Ninth Circuit Court of Appeals, variously at Pasadena, San Francisco or his beautiful home at Palos Verdes, California, where he dispenses legendary salads, stunning mojitos, great conversation and fine hospitality; Judge Larry Smuckler of New Hampshire, who loves Tolkien and photography as well as the Law and technology and the way in which technology can improve and facilitate access to Court records; and, north of the border, John de P. Wright, Richard Mosely and Jack Watson from Canada. Also grateful thanks to Professor Fred Lederer of the Centre for Court Technology at William and Mary School of Law for his continued innovative work in the field of technology use in Court, and Jim McMillan of the National Center for State Courts in the lovely town of Williamsburg, Virginia. There is a little-known but highly active e-mail List that was established in the 1990s to which I have had the privilege of belonging. The contacts that I have made through this list have been invaluable and the discussions on law and technology (and everything else) always interesting. I extend a special thanks to Bob Franson and Bryan Davis who set up the list and all the members who contribute from time to time. The chance to connect with an international community that shares the values of law and justice and recognises the advantages of technology has been and continues to be a privilege. Thanks to you all. I owe a great debt to the students who have attended my Law and IT class at the Law Faculty of Auckland University. As time passes I have had the pleasure of sharing a stage with some of them at a seminar or have had them before me in



Court. Wonderful to see them fly. I also must acknowledge the support given over the years by Deans of the Law Faculty at Auckland, Julie Maxton, Bruce Harris, Paul Rishworth QC and latterly Andrew Stockley. My colleagues at the Law Faculty at Auckland have been welcoming and supportive and they all have my thanks, especially Rosemary Tobin, whose knowledge and understanding of Media Law has been of considerable assistance. My family has had to tolerate absences during days and evenings as this book slowly came together and without their understanding, support and patience it is doubtful that it would have seen the light of day. Fern, Giles, Rebekah—as always I owe eternal gratitude.

Auckland
November 2016


Contents

Preface v
Table of Cases xvii

1. Introduction 1
2. The Analytical Framework 16
 I. Introduction 16
 II. Elizabeth Eisenstein and the Qualities of Print 19
  A. Eisenstein's Qualities 20
  B. Eisenstein, McLuhan and Media 21
 III. Digital Information 22
  A. Identifying Digital Qualities 23
   i. Environmental Qualities 23
    a. Continuing Disruptive Change 24
    b. Permissionless Innovation 25
   ii. Technical Qualities 25
    a. Delinearisation of Information 25
    b. Information Persistence or Endurance 28
    c. Dynamic Information 29
    d. Volume and Capacity 30
    e. Exponential Dissemination 30
    f. The 'Non-coherence' of Digital Information 31
    g. Format Obsolescence 33
   iii. User Associated Qualities 35
    a. Availability, Searchability and Retrievability of Information 35
    b. Participation and Interactivity 37
  B. Some Observations 38
   i. Online Disinhibition 38
  C. The Internet and How We Think 41
 IV. Conclusion 44
3. The Transition to the Digital Paradigm—Analogies and Functional Equivalence 46
 I. Introduction 46
 II. A Historical Perspective 46


 III. Digital Writing 49
 IV. Change and Communication in the Digital Paradigm 50
 V. The Law's Approach to Equating the Old with the New 54
 VI. Functional Equivalence 55
  A. Functional Equivalence Problems 58
  B. Functional Equivalence and Links 59
  C. Functional Equivalence—Final Thoughts 63
 VII. The Problem of Analogies 63
  A. The Nature of a Computer and the Scope of a Search 64
  B. Defamation, Publication Analogies and the Digital Paradigm 69
   i. The English Approach 69
   ii. Looking at the Technology—Canada and Australia 74
    a. Crookes v Newton 74
   iii. New Zealand—Wishart v Murray 76
    a. Wishart v Murray on Appeal 78
    b. The Use of Analogy 79
    c. Problems with the 'Ought to Know' Test 81
    d. The Deeper Aspects of the Case 82
 VIII. Conclusion 83

4. Aspects of Internet Governance 85
 I. Introduction 85
 II. The Internet Governance Forum 87
  A. What does the IGF do? 89
  B. IGF Development 90
  C. State Actors and Stakeholders—The Creation of the IGF 91
  D. Nation-state Initiatives 94
 III. Technical Governance 99
  A. ICANN—The Internet Corporation for Assigned Names and Numbers 101
 IV. Models of Internet Governance 102
  A. Net Neutrality and Regulation at the Ends 103
  B. The Five Models 106
   i. Cyber-libertarian Theory 107
   ii. Transnational Theory 108
   iii. Code is Law—Code Writers and Engineers 109
    a. The Layer Theory 110
   iv. National Governments 110
   v. Market Regulation and Economics 112
  C. Internet Exceptionalism 113
 V. Conclusion 116



5. The Property Problem 120
 I. Introduction 120
 II. Information as Property—The Debate in the Digital Paradigm 121
 III. The British Commonwealth Approach 122
  A. Information as Property for the Purposes of Theft 125
  B. The Dixon Case 127
   i. Dixon in the Supreme Court 131
  C. Information or Data in the Digital Space 135
 IV. The United States' Position 138
  A. Intangibles and Conversion—A United States Approach 140
 V. Property or Cyberproperty 142
  A. Virtual Property 143
  B. 'Digital Assets' 148
 VI. Conclusion 149
6. Recorded Law—The Twilight of Precedent in the Digital Age 152
 I. Introduction 152
 II. Law and Precedent in the Print and Digital Paradigms 153
  A. Print and Precedent as a Brake on Change 158
  B. The Digital Revolution and the Legal Process 164
  C. Hyper-regulation, the Internet and Too Much Law 164
  D. Informing Judicial Decisions 166
 III. The Twilight of Precedent? 169
7. Digital Information—The Nature of the Document and E-discovery 172
 I. Introduction 172
 II. The Development of E-discovery Rules 174
  A. Introduction 174
  B. The Sedona Conference 174
  C. The United States' Experience—The Federal Rules of Civil Procedure 177
  D. The English Experience 182
 III. Common Themes in the Development of E-discovery in Asia-Pacific Jurisdictions 188
  A. Introductory 188
  B. Engagement Threshold 188
  C. Court and Judicial Management 189
  D. Consult, Confer, Cooperate 190
  E. Reasonableness and Proportionality Approaches 190
  F. The Checklist Approach 191
  G. Early Case Assessment 192
 IV. The Rules and Utilisation of Technology 193
  A. The Use of Technical Expertise 193


  B. Technology Solutions—Keywords and TAR 194
   i. Keyword Searching 195
   ii. Technology Assisted Review (TAR) 198
   iii. TAR in England 204
 V. Conclusion 207

8. Evidence, Trials, Courts and Technology 209
 I. Introduction 209
 II. Orality and Physical Presence of Witnesses 210
  A. The Problems of Presence and the Confrontation Right 212
 III. Facing Up to Change 215
 IV. Technology in Court 218
  A. Spatial Technologies 218
  B. Temporal Technologies 219
  C. The Translation Problem 220
  D. The Court Environment 220
  E. The Ability to Recount 220
  F. Intellectual Ability and Suggestibility 221
  G. Dealing with the Translation Problem 221
  H. Dealing with Environment and the Ability to Recount 222
  I. What of the 'Confrontation Right'? 222
  J. Presentational Technologies 225
  K. Documents—Digitisation, Searchability and Analysis 225
  L. 3D Use 226
   i. 3D Projection 227
   ii. 3D Printing 227
  M. Other Possibilities 229
 V. The Next Phase 230
 VI. Using Technology to Change Process Models 231
  A. Online Dispute Resolution 233
  B. From Online ADR to Online Court 236
 VII. Conclusion 241
9. Social Media 243
 I. Introduction 243
 II. What is Social Media? 246
  A. The Social Media Taxonomy 250
 III. Social Media Meets the Law 254
  A. Social Media and the Courts 255
 IV. The Googling Juror 260
  A. Professor Thomas' Research 260
  B. Information Flows 262
  C. Dealing with Juror Misconduct 264
  D. The Nuanced Approach 265



 V. Lost in Translation—Interpreting Social Media Messages 266
 VI. Other Aspects of Social Media 271
 VII. Conclusion 272
10. Information Persistence, Privacy and the Right to be Forgotten 274
 I. Introduction 274
 II. Privacy Themes 275
 III. Privacy Taxonomies 277
 IV. Obscurity of Information—Practical and Partial Obscurity 279
 V. Judicial Approaches 280
 VI. The Internet and Privacy 281
  A. Social Networking and Privacy 283
  B. Why Compromise Privacy? 284
  C. Social Networking and the Future of Privacy 286
 VII. Search Engines and Information Retrievability 287
 VIII. The Right to be Forgotten 287
  A. Viktor Mayer-Schönberger and Delete 288
   i. Digital Information or Digital Memory 290
   ii. Digital Systems as a Communication Tool 292
   iii. Fact and Truth 293
   iv. Mayer-Schönberger's Solution 295
  B. Privacy and the Right to be Forgotten 296
  C. Google Spain and the Right to Be Forgotten 298
   i. Matters Arising 300
   ii. Freedom of Expression Issues 302
   iii. The Return of Partial and Practical Obscurity 302
   iv. Emasculating the Internet? 302
   v. Google Spain—The Aftermath 303
   vi. The General Data Protection Regulation 304
 IX. A Right to Update? 305
 X. Conclusion 307
11. Reputational Harms 309
 I. Introduction 309
 II. The Publication Issue 312
  A. Internet Platforms 312
  B. Platform Defamation 314
 III. Google and Defamation 315
  A. Metropolitan International Schools Ltd v Designtechnica Corporation 316



      B. Tamiz v Google ... 318
      C. New Zealand, Hong Kong and Australia ... 320
         i. A v Google ... 320
         ii. Hong Kong ... 321
         iii. Australia ... 322
            a. Trkulja and Duffy ... 323
            b. Bleyer ... 325
   IV. Linking and Publication ... 326
   V. Reputational Harms—Where Defamation Does Not Tread ... 330
      A. Harassment ... 330
      B. Revenge Porn ... 334
      C. Harmful Digital Communications ... 336
         i. Cyberbullying's Unintended Consequences ... 339
         ii. The Safe Harbour and Censorship ... 340
         iii. Comparative Enactments (Australia and Nova Scotia) ... 341
   VI. Triaging Reputation ... 343
   VII. Conclusion ... 345

12. Conclusion ... 347
   I. The Qualities of Digital Information ... 348
   II. Governance of a Distributed, Dynamic, Changing Environment? ... 349
   III. Behavioural Change and Values ... 351
   IV. Old Rules in New Bottles—Seeking Consistency ... 353
      A. Functional Equivalence ... 353
      B. The Dangers of Analogy ... 354
      C. Possessing the Intangible—Digital Property ... 355
      D. Information Dynamic and Non-coherence ... 356
      E. Competing Approaches ... 356
   V. Volume, Dissemination and Availability of Information ... 357
      A. A Future for Precedent ... 358
      B. E-discovery—A Digital Success Story ... 359
   VI. Participation, Interactivity and the Message ... 362
      A. IT in Court ... 362
   VII. Who Am I Online? ... 364
      A. Information Persistence, Privacy and Social Media ... 364
      B. Memory and Forgetting ... 365
      C. Mixed Messages—Juries and Jokes ... 366
      D. Reputational Harms and Reputational Management ... 368
         i. Reputational Harms and Defamation ... 368
         ii. Other Remedies for Reputational Harms ... 370



   VIII. The Message is the Medium—What the Law must Recognise ... 372

Bibliography ... 375
Index ... 397



Table of Cases

A v Google [2012] NZHC 2352 ... 320–21, 369
Abdula v R [2012] 1 NZLR 534 ... 220
Aiono v R [2013] NZCA 280 ... 265
Anastasoff v US 223 F 3d 898 (8th Cir 2000) ... 168
Attorney-General v Dallas [2012] EWHC 156 (Admin), [2012] 1 WLR 991 ... 246, 262
Attorney-General v Fraill [2011] EWCA Crim 1570, [2011] 2 Cr App R 21 ... 246, 264
Auckland Waterfront Development Agency Ltd v Mobil Oil New Zealand Ltd [2015] NZHC 470 ... 206
Austin, Nichols & Co Inc v Stichting Lodestar [2008] 2 NZLR 141 ... 211
Barrick Gold v Lopehandia (2004) 71 OR (3d) 416 (ONCA) ... 74
Bleyer v Google [2014] NSWSC 897 ... 323, 325, 369
Board of Trade Chicago v Christie Grain and Stock Co 198 US 236 (1905) ... 138
Boardman v Phipps [1967] 2 AC 46 ... 124
Bodil Lindqvist v Åklagarkammaren i Jönköping ECJ Case C-101/01 ... 335
Bragg v Linden Research Inc 487 F Supp 2d 593 (ED Pa 2007) ... 146
Brand v Berki [2014] EWHC 2979 (QB) ... 332
Brown v BCA Trading [2016] EWHC 1464 (Ch) ... 206, 361
Brown v Murdoch & Or (No 2) [2014] FamCA 618 ... 211–12
Budu v British Broadcasting Corp [2010] EWHC 616 (QB) ... 320
Bunt v Tilley [2007] 1 WLR 1243 ... 72, 77, 317–19, 323, 325
Byrne v Deane [1937] 2 All ER 204; [1937] 1 KB 818 (CA) ... 72–73, 77, 79–80, 319, 321, 354
Cairns v Modi [2010] EWHC 2859 (QB) ... 313
Carpenter v US 484 US 19 (1987) ... 138
Catalano v Managing Australia Destinations Pty Ltd (No 2) [2013] FCA 672 ... 211
Chambers v DPP [2012] EWHC 2157 (QB) ... 246, 266, 272, 367
Chan Wing-Siu v R [1985] AC 168 ... 163
Chief Executive, Ministry of Fisheries v United Fisheries [2011] NZAR 54; [2010] NZCA 356 ... 64
Compagnie Financière et Commerciale du Pacifique v Peruvian Guano Co (1882) 11 QBD 55 ... 8, 173
Consolidated Aluminum Corp v Alcoa, Inc No 03-1055-C-M2, 2006 WL 2583308, *5 (MD La July 19, 2006) ... 176
Contostavlos v Mendahun [2012] EWHC 850 (QB) ... 335
Coy v Iowa (1988) 487 US 1012 ... 214
Crookes v Newton [2011] 3 SCR 269, [2011] SCC 47 ... 59–60, 62, 74, 76, 321, 327–29, 355, 370
Crouch v Snell 2015 NSSC 340 ... 343
CSR Ltd v Della Maddalena (2006) 224 ALR 1 ... 211
Cubby v CompuServe 776 F Supp 135 (SDNY 1991) ... 57
Da Silva Moore v Publicis Groupe 11-civ-1279 (ALC) (AJP), US Dist LEXIS 23350 (SDNY Feb 24, 2012) ... 200–03, 205
Davies v Police [2008] 1 NZLR 638 (HC) ... 128
Davison v Habeeb [2011] EWHC 3031 (QB) ... 72–73, 318
Derby and Co Limited v Weldon (No 9) [1991] 2 All ER 901 ... 183
Deutsche Finance NZ Ltd v CIR (2007) 18 PRNZ 710 ... 219
Digicel (St Lucia) Ltd et al v Cable & Wireless PLC et al [2008] EWHC 2522 (Ch); [2009] 2 All ER 1094 ... 185, 187, 206
Dirks v SEC 463 US 646, 653 (1983) ... 138
Dixon v R [2014] NZCA 329, [2014] 3 NZLR 504 (CA); [2015] NZSC 147 (SC) ... 120, 127–32, 134–35, 137–38, 142, 150, 356
Doe v D 2016 ONSC 541 ... 276
Dow Jones & Co Inc v Gutnick [2002] HCA 56, (2002) 210 CLR 575, 194 ALR 433 ... 74, 118
Dr Yeung Sau Shing Albert v Google Inc [2014] HKCFI 1404, [2014] 4 HKLRD 493 ... 322
Duffy v Google [2015] SASC 170 ... 74, 311, 322–26, 346
E*Trade Securities LLC v Deutsche Bank AG 230 FRD 582 (D Minn 2005) ... 176
Emmens v Pottle (1885) 16 QBD 354 ... 79, 317
Entick v Carrington [1765] EWHC KB J98; 19 Howell's State Trials 1029 (1765) ... 157
Erris Promotions Limited v CIR [2004] 1 NZLR 811 (HC) ... 134
Faisaltex Ltd v Preston Crown Court [2009] 1 Cr App R 37, [2008] EWHC 2832 (Admin) ... 64
Farah Constructions Pty Ltd v Say-Dee Pty Ltd [2007] HCA 22 ... 124
Federal Housing Finance Agency v HSBC North America Holdings Inc, et al 2014 WL 584300 ... 200–01
Ferrier Hodgson v Siemer (HC, Auckland CIV-2005-404-001808, 5 May 2005, Ellen France J) ... 76
Firm of Solicitors v The District Court Auckland [2004] 3 NZLR 748 ... 32
Florida v Jardines 133 S Ct 1409 (2013) ... 68
Fox v Percy (2003) 214 CLR 118 ... 211
Galloway v William Frederick Frazer, Google t/a YouTube and others (High Court of Northern Ireland, HOR9793, 27 January 2016, Horner J) ... 333
Global Aerospace Inc et al v Landow Aviation, LP dba Dulles Jet Center, et al No CL 61040, 2012 WL 1431215 (Va Cir Ct Apr 23, 2012) ... 202
Godfrey v Demon Internet [2001] QB 201, [1999] 4 All ER 342 (QB) ... 70, 72, 313, 317–18
Goodale v Ministry of Justice [2009] EWHC B41 (QB) ... 186–87, 205, 361
Google Spain SL, Google Inc v Agencia Española de Protección de Datos (AEPD), Mario Costeja González (European Court of Justice, 13 May 2014) C-131/12 ... 13, 37, 274, 298, 326, 365
Hedley Byrne & Co Ltd v Heller & Partners Ltd [1964] AC 465 ... 167
Heller v Bianco 111 Cal App 2d 424 (1952) ... 71, 80
Hird v Wood (1894) 38 SJ 234 (CA) ... 327
Hosking v Runting [2005] 1 NZLR 1 ... 276, 296
Hunt v A [2007] NZCA 332; [2008] 1 NZLR 368 ... 124
In re Biomet M2A Magnum Hip Implant Prods Liability Litigation No 3:12-MD-2391, 2013 WL 1729682 & 2013 WL 6405156 (ND Ind Apr 18 & Aug 21, 2013) ... 203
Intel v Hamidi 1 Cal Rptr 3d 32 (Cal 2003) ... 143
Intercity Group (NZ) Limited v Nakedbus NZ Limited [2014] NZHC 124 ... 82
International News Service v Associated Press 248 US 215 (1918) ... 139
International Telephone Link Pty Ltd v IDG Communications Ltd (HC, Auckland CP 344/97, 20 February 1998) ... 321, 326, 329
Irish Bank Resolution Corporation Ltd v Sean Quinn & Ors [2015] IEHC 175 ... 203–05
Jennings v Buchanan [2005] 2 NZLR 577 ... 76
Jones v Tsige (2012) OR (3d) 241, 2012 ONCA 32 ... 276
Karam v Fairfax [2012] NZHC 1331 ... 76, 329, 370
Karam v Parker [2014] NZHC 2097 ... 314
Kennon v Spry [2008] HCA 56, [2008] CLR 366 ... 132
Kent Pharmaceuticals Limited v Director of Serious Fraud Office and Others [2002] EWHC 3023 (Admin) ... 64
King v Sunday Newspapers Ltd [2011] NICA 8 ... 332
King v Taylor (DC, Auckland CIV 2014-4-122, 24 April 2014, Judge David Wilson QC) ... 334
Kleen Products LLC v Packaging Corporation of America 10 C 5711, 2012 WL 4498465 (ND Ill Sept 28, 2012) ... 202–03
Korda Mentha v Siemer (HC, Auckland CIV-2005-404-1808, 23 December 2008, Cooper J) ... 76
Kremen v Cohen 337 F 3d 1024 (9th Cir 2003) ... 140–42
Kyllo v US 533 US 27 (2001) ... 68
Law Society v Kordowski [2011] EWHC 3185 (QB) ... 332
Lawrence v Newberry (1891) 64 LT 797 ... 327
Lee v Illinois (1986) 476 US 530 ... 214
LICRA and UEJF v Yahoo! Inc and Yahoo! France (Order in Summary Proceedings, Superior Court of Paris, 22 May 2000, First Deputy Chief Justice Judge Jean-Jacques Gomez) ... 112, 371
Maryland v Craig (1990) 497 US 836 ... 214
McAlpine v Bercow [2013] EWHC 1342 (QB) ... 313
McLeod v St Aubyn [1899] AC 549 ... 317
Merlin v Cave [2014] EWHC 3036 (QB) ... 333–34
Metropolitan International Schools Ltd v Designtechnica Corporation, Google Ltd and Google Inc [2009] EWHC 1765 (QB) ... 316, 325
Money Managers Limited v Foxbridge Trading (unreported, HC Hamilton CP 67/93, 15 December 1993, Hammond J) ... 124
Murray v Wishart [2014] NZCA 461 ... 76, 83, 254, 322, 325
Nichia Corp v Argos Ltd [2007] EWCA Civ 741 ... 206
NZ Post v Leng [1999] 3 NZLR 219 ... 326
OBG v Allan [2007] UKHL 21 ... 122–24
Oriental Press Group Limited v Fevaworks Solutions Ltd (2013) 16 HKCFAR 366 ... 321
Oxford v Moss (1979) 68 Cr App R 183 ... 125–26, 129, 139
Peck v United Kingdom [2003] ECHR 44647/98; [2003] EMLR 287 ... 296
People v Dolbeer (1963) 214 Cal App 2d 619, 29 Cal Rptr 573 ... 139–40
People v Novack 41 Misc 3d 733, 736, 971 NYS 2d 197 (2013) ... 212
People v Parker (1963) 217 Cal App 2d 422, 31 Cal Rptr 716 ... 139–40
Perfect 10 v Google 487 F 3d 701 (9th Cir 2007) ... 61–62
Petros v Chaudhari [2004] EWCA Civ 458; EWHC 3185 (QB) ... 332
PJS v News Group Newspapers Ltd [2016] UKSC 26 ... 244
PJS v Newsgroup Newspapers [2016] EWCA Civ 100; [2016] UKSC 26 ... 330
Police v Joseph [2013] DCR 482 ... 268–70, 367
Police v Slater [2011] DCR 6 ... 245, 258
ProCD, Inc v Zeidenberg 86 F 3d 1447, 1449 (7th Cir 1996) ... 56
Pyrrho Investments Ltd v MWB Property Ltd [2016] EWHC 256 (Ch) ... 204, 206–07, 361
R v Atkinson (DC Auckland CRI-2010-004-014676, 30 October 2012) ... 265
R v Debnath [2005] EWCA Crim 3472 ... 332
R v Durham [2011] NZCA 69 ... 58
R v Hayes (2006) 23 CRNZ 547 ... 58, 353
R v Hayes [2008] 2 NZLR 321 (NZSC) ... 130
R v Jogee and Ruddock v R [2016] UKSC 8 ... 163
R v Matenga [2009] 3 NZLR 18 ... 211
R v Misic [2001] 3 NZLR 1 ... 65
R v Munro [2008] 2 NZLR 87 ... 211
R v Patrick 2009 SCC 17, [2009] SCJ No 17 ... 280
R v Powell and R v English [1999] 1 AC 1 ... 163
R v R (Rape: Marital Exception) [1992] 1 AC 599 ... 167
R v Spencer 2014 SCC 43, [2014] 2 SCR 212, [2014] SCJ No 43 ... 280
R v Tessling 2004 SCC 67, [2004] 3 SCR 432 ... 280
R v Ward 2012 ONCA 660, [2012] OJ No 4587 ... 280
R v Wilkie, R v Burroughs, R v Mainprize [2005] NSWSC 794 ... 211
R v Wilkinson [1999] 1 NZLR 403 ... 126, 129
R v Wong [1990] 3 SCR 36, 60 CCC (3d) 460, [1990] SCJ No 118 ... 280
R(H) v Commissioners of Inland Revenue [2002] EWHC 2164 (Admin) ... 64
Re ML (Use of Skype Technology) [2013] EWHC 2091 (Fam) ... 212
Reno v ACLU 521 US 844 (1997) ... 113
Riley v California 134 S Ct 2473 (2014) ... 66, 354
Rio Tinto plc v Vale SA et al No 14 Civ 3042 (SDNY 2015) ... 200
Rio Tinto PLC v Vale SA et al 14 Civ 3042 (RMB) (AJP) 3 March 2015 ... 203
Ruckelshaus v Monsanto Co 467 US 986, 1001–1004 (1984) ... 138
Runescape LJN: BG0939 Rechtbank Leeuwarden, 17/676123-07 VEV ... 146
S (Relocation: Parental Responsibility) [2013] 2 FLR 1453; [2013] EWHC 1295 (Fam) ... 212
Scott v Harris (2007) 550 US 372 ... 216, 219
Siemer v Stiassny [2011] NZCA 106 ... 76
South Central Bell Telephone Company v Barthelemy 643 So 2d 1240 (La 1994) ... 134
Southern Storm (2007) Limited v The Chief Executive, Ministry of Fisheries [2013] NZHC 117 ... 64
Stewart v R [1988] 1 SCR 963 ... 125, 129
Stratton Oakmont v Prodigy (1995) NY Misc Lexis 229; 23 Media LR 1794 ... 57
Sunstate Airlines (Qld) Pty Ltd v First Chicago Australia Securities Ltd (unreported, 11 March 1997) ... 211
Svensson v Retriever Sverige AB CJEU C-466/12 ... 60
Tacket v General Motors 836 F 2d 1042 (7th Cir 1987) ... 71, 80
Tamiz v Google [2012] EWHC 449 (QB), [2012] EMLR 24; CA [2013] EWCA Civ 68; [2013] 1 WLR 2151, [2013] EMLR 14, [2013] WLR(D) 65 ... 64, 69, 71–73, 318–20, 322–23, 325, 346, 354, 369
Taxation Review Authority 25 [1997] TRNZ 129 ... 124
Thomas v News Group Newspapers Ltd [2001] EWCA Civ 1233 ... 332
Thompson v Department of Housing & Urban Development 199 FRD 168, 171 (D Md 2001) ... 178
Torkington v Magee [1902] 2 KB 427 ... 123
Trimingham v Associated Newspapers Ltd [2012] EWHC 1296 (QB) ... 332
Trkulja v Google Inc LLC (No 5) [2012] VSC 533 ... 74, 322–26, 346, 369
TS and B Retail Systems v Three Fold Resources (No 3) [2007] FCA 151 ... 124
Tucker v News Media Ownership Ltd [1986] 2 NZLR 716 ... 277
United States v O'Keefe 537 F Supp 2d 14 at 24 (DDC 2008) ... 197
United States v Fumo 639 F Supp 2d 544 (ED Pa 2009) ... 263
Universal City Studios v Reimerdes & Corley 111 F Supp 2d 294 (SDNY 2000) ... 5, 59, 62–63, 83
Urbanchich v Drummoyne Municipal Council (1991) Aust Torts Reports 69 ... 77, 325
US Department of Justice v Reporters Committee for Freedom of the Press 489 US 749 (1989) ... 279
US v Comprehensive Drug Testing 621 F 3d 1162 (9th Cir 2010) ... 66, 354
US v Sklyarov and Elcomsoft (US District Court, Northern District of California, Case No 5-01-257, 7 July 2001) ... 118
US v Thomas 74 F 3d 701 (6th Cir 1996) ... 118
Victor Chandler International Ltd v Customs and Excise Commissioners and another [2000] 2 All ER 315 ... 184
Victor Stanley Inc v Creative Pipe Inc 250 FRD 251 (D Md 2008) ... 196–98
Von Hannover v Germany [2004] EMLR 379 ... 296
Watchorn v R [2014] NZCA 493 ... 120, 131, 150
William A Gross Construction Associates Inc v American Manufacturers Mutual Insurance Co 256 FRD 134 at 134, 136 (SDNY 2009) ... 195–96, 198
Williams v Superior Court (1978) 81 Cal App 3d 330, 146 Cal Rptr 311 ... 139–40
Wishart v Murray [2013] NZHC 540 (HC), [2013] 3 NZLR 246 (HC), [2014] 3 NZLR 722 (CA), [2014] NZCA 461 ... 76–78, 314, 355
WXY v Gewanter [2012] EWHC 496 (QB) ... 332
Your Response Limited v Data Team Business Media Limited [2014] EWCA Civ 281 ... 121, 123–24, 138, 151
Zubulake v UBS Warburg LLC 220 FRD 280 (2003) ... 176, 360

1. Introduction

Shane Greenstein suggests that Internet Exceptionalism provided an alternative basis for determining value for investors in new technologies during the 'Dot Com' bubble of the late 1990s. There were two approaches. The first held that the technology was central and that economic factors had only a secondary impact. The second held that the Internet had its own set of economic rules that had little in common with earlier historical models. Whichever point of view was preferred, Internet Exceptionalism foresaw the replacement of existing non-Internet related activities with new entrepreneurial activities, and presumed that revenue would appear as activities moved into the realm of the new and improved Internet-based economy.1

Greenstein characterises Internet Exceptionalism as a myth or an ideology that overstresses the unique features of the technology in commercial events, relegates proper economic analysis to a secondary status, and underemphasises or overlooks the influence of markets in fostering or discouraging innovation. Greenstein makes no bones about his position: he considers Internet Exceptionalism to be 'just plain wrong'.2

This book argues a different form of exceptionalism. While Greenstein speaks from the position of an economist, I look at the phenomenon of the Internet and its underlying digital technologies from the perspective of a lawyer. I am not dismissive of the difference between earlier technologies and those of the digital revolution. In this book I argue, however, that many of our rules and laws are based upon a pre-existing technology. I argue that in some cases the technologies of the Digital Paradigm are so different from those preceding them that the validity of those rules in the context of the new technologies must be re-examined. In some cases there are collisions—where the old rule and the new are in conflict.
In other cases there are paradoxes—where new digital technologies produce perverse or confusing outcomes. But I am not arguing that there should be special laws for the Internet or for digital technologies. I am suggesting that when we examine the validity or applicability of some of our rules, especially those based on pre-digital technologies, we should pay careful attention to the particular qualities of those technologies, especially in the cross-over from the pre-digital to the digital paradigm.

1 Shane Greenstein, How the Internet Went Commercial—Innovation, Privatization and the Birth of a New Network (Princeton, Princeton University Press, 2015) 7–8 and 342–43.
2 ibid, 8.



When lawyers consider the communication of information they tend to consider primarily the content of the communication—what is being communicated. It is not often that they will consider the ‘how’ of communication. There may be the odd exception such as whether or not a contract for the sale and purchase of land was in writing, but predominantly lawyers’ arguments will be about the interpretation of what is written. The technology of communication rarely features unless the dispute is about a patent or has a focus upon similar or dissimilar designs. This book suggests that although in many transactions the content of communication will be important, when one is considering the foundation and basis for a rule about communication one needs to look below the surface, below the content layer, and consider the attributes of the technology—how it operates, what it does, what its particular qualities are—and in this consideration what I consider to be paradigmatical differences between digital and pre-digital technologies become apparent. In chapter two I consider the analytical framework for the examination of the differences between digital and pre-digital rules based upon the qualities of different technologies. The discussion commences with an overview of an analytical framework developed by the historian Elizabeth Eisenstein in her seminal work The Printing Press as an Agent of Change.3 In that work Eisenstein identified a number of qualities present in print technology that differentiated the communication of information in print from that communicated in manuscript. These qualities were not the obvious ones of machine-based creation of content but focused upon the way in which printed material was going to and did impact upon the intellectual activities of educated elites in early-modern Europe. 
These qualities were beneath the content layer; not immediately apparent but vital in considering the way in which readers dealt with and related to information, and ultimately had an impact upon their expectations of information and how, in turn, they themselves used print to communicate. It is also important to realise that many of the writings of the Canadian media specialist Marshall McLuhan have become particularly relevant in a consideration of the communication of information within both what may be called the ‘print paradigm’ and the ‘digital paradigm’. McLuhan’s famous albeit opaque aphorism ‘the medium is the message’ tells us that we must look beyond the content layer in examining the impact that a new communications technology may have. Using McLuhan’s suggestion and developing the way in which Eisenstein identified her underlying qualities of print technology, I move to consider the qualities of digital communications technologies. I identify 13 different qualities,4 some of which overlap and some of which are complementary. However, rather than merely identify these qualities I have 3  Elizabeth Eisenstein, The Printing Press as an Agent of Change (Cambridge, Cambridge University Press, 1979) 2 vols. Reference will be made to the one-volume 1980 edition; Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe (Cambridge, Cambridge University Press (Canto), 1993). 4  Eisenstein identified six for print.



developed a form of taxonomy or classes of qualities which are occupied by specific exemplars. For example, I have identified what I call environmental qualities. They arise from the context within which digital technologies develop and are descriptive of the nature of change within that context, and some of the underlying factors which drive that change. Because digital technologies primarily involve the development of software tools which operate on relatively standard computing equipment, the capital investment in hardware and manufacturing infrastructure is not present in the development of digital tools, although it certainly is in the development of the hardware that those tools require. Thus the development of digital software can take place in any one of a number of informal locations where the only requirements are a power supply, a computer and a programmer or programmers. This lack of infrastructural requirements enables the development of software tools which can be deployed via the non-regulated environment of the Internet, giving rise to the qualities of permissionless innovation and continuing disruptive change which are discussed in detail. A second set of qualities I have identified as technical qualities. These are so classified because they underlie some of the technical aspects of the new digital technologies. Some of these qualities are present in a different form in the print paradigm. Eisenstein identified dissemination of content as a quality of print that was not present within the scribal paradigm. I have identified exponential dissemination as an example of a technical quality—the way in which the technology enables not only the spread of content as was enabled by print, but dissemination at a significantly accelerated rate with a greater reach than was enabled by physical dissemination. 
Another of the qualities that I identify as a technical one is that of information persistence, summed up in the phrase ‘the document that does not die’. Once information has been released on to Internet platforms the author or original disseminator loses control of that content. Because digital information travels through a multitude of servers, copies are made en route, meaning that the information is potentially retrievable even though it may have been removed from its original source. Other examples of ‘technical qualities’ include the way in which linear progress through information is challenged by navigation via hypertext link in what I call the delinearisation of information; the dynamic nature of information and its malleability in digital format; the way in which seemingly limitless capacity allows for storage of a greater amount of information than was previously considered possible; the apparent non-coherence of digital information and the need for the intermediation of hardware and software to render it intelligible, and the problem of obsolescence of information caused by loss arising not from deterioration of the medium but as a result of the unwillingness of software companies to support earlier iterations of software which enabled the creation of an earlier and now inaccessible version of the content. All are aspects of technical qualities that underpin the content of digital information.
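The ‘document that does not die’ can be illustrated with a short sketch. The following Python fragment is a toy model invented for this discussion, not a description of any real network protocol: it shows how, once en-route copies have been made, deletion at the original source leaves the replicas untouched and the content retrievable.

```python
# A toy illustration of 'information persistence': intermediate
# nodes hold their own copies of published content, so deleting
# it at the source does not delete it everywhere. All names here
# are invented for the example.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}          # url -> content held by this node

def publish(origin, caches, url, content):
    """Place content on the origin and replicate it en route."""
    origin.store[url] = content
    for cache in caches:         # copies made in transit, as with real servers
        cache.store[url] = content

def delete_at_source(origin, url):
    """Removal affects only the node the author controls."""
    origin.store.pop(url, None)

origin = Node("origin")
caches = [Node("cache-a"), Node("cache-b")]
publish(origin, caches, "/post/1", "an ill-advised remark")
delete_at_source(origin, "/post/1")

print(origin.store.get("/post/1"))     # None - gone at the source
print(caches[0].store.get("/post/1"))  # still retrievable elsewhere
```

The point of the sketch is only that control is lost at the moment of replication: the author commands one node, while the copies live on nodes the author does not command.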



The third category of qualities comprises what I call user associated qualities—qualities that arise in the behaviour of users in response to digital information technologies. Among these user associated qualities is the searchability of digital information and its associated availability and retrievability arising from the development of ever more sophisticated search algorithms and platforms, and the ability of users to participate in the creation of and use of content as a result of the interactive nature of digital technologies, in particular social media. All these qualities, cumulatively, have an impact upon our ‘relationship’ with and expectations of information and have an influence on behaviour. One form of behaviour is what may be called the online disinhibition effect which is discussed in detail. This inevitably leads to a consideration of the contentious issue of the effect that new technologies have upon the way that we think. It is suggested that the issue is not so much one of the neuroplasticity advanced by Susan Greenfield5 or the ‘dumbing down’ of attention spans suggested by Nicholas Carr6 but a slightly more nuanced view of the way that the medium and the various delivery systems redefine the use of information which informs the decisions that we make.7 Paradigmatically different ways of information communication and acquisition are going to change the way in which we use and respond to information. And because the law and legal practice involve, at all levels, information exchange, be it by way of client instructions, counsel’s opinion, a statement of claim or a judicial decision, this paradigmatic change in information communication is necessarily going to have an impact on law and legal processes. So how does the law deal with paradigmatic change? 
Chapter three addresses part of this issue by considering the development of rules and principles within an earlier system of recorded information and provides some examples of how even written evidence was held up to question. The complex mix of oral and written material within the scribal culture gave way to the printing revolution and the development and acceptance of the printed word as authoritative. But there is a yawning gulf between the creation of the written word on a piece of paper, be it by way of handwriting or the mechanical means of the now obsolete typewriter, and the creation of information by the use of digital technology. The discussion moves to consider the various ways by which this may be accomplished and the differing ‘states’ through which information must pass before it can be rendered on a screen or printed on paper.

5  Susan Greenfield, ‘Modern Technology is Changing the Way our Brains Work, Says Neuroscientist’ Mail Online, Science and Technology 15 May 2010, Modern-technology-changing-way-brains-work-says-neuroscientist.html. 6  Nicholas Carr, The Shallows: How the Internet is Changing the Way we Think, Read and Remember (London, Atlantic Books, 2010); Nicholas Carr, ‘Is Google Making Us Stupid: What the Internet is Doing to our Brains’ Atlantic July/August 2008 Online edition archive/2008/07/is-google-making-us-stupid/306868/. 7  For a counter argument to that advanced by Greenfield and Carr see Aleks Krotoski, Untangling the Web: What the Internet is doing to you (London, Faber, 2013) especially at 35–36. For a deeper discussion see ch 2 under the heading ‘The Internet and How we Think’.



The reason for this discussion is that it defines the paradigmatic nature of the changes in information creation and delivery with which the law must deal and presents a challenge to two established analytical tools used by lawyers. The first area of examination is that of functional equivalence, which originally arose as a way of equating digital processes with paper-based requirements for commercial transactions. The nature of functional equivalence and its development and use by UNCITRAL in the Model Law on Electronic Commerce is offered as an example. The argument is that functional equivalence is a very useful analytical tool when used properly and carefully, with due attention to how the function provided by the new technology is truly equivalent to the old. This involves a consideration not only of the product (or the content) but also of the way that it is achieved. In that respect there is a discussion of the American case of Universal City Studios v Reimerdes & Corley where the nature of hypertext links was considered.8 This case provides an example of how the term ‘functional equivalence’ can be misused or misapplied. It is my argument that the Judge used the term as a convenient way to sheet liability home against the defendants where, on a strict analysis of the technology, that liability was doubtful if not non-existent. To understand why, I discuss the nature of hypertext links and how they operate. Only after this examination can the various levels of functionality and the existence of equivalency be understood. The second area of analysis involves a discussion about the use of analogy. Once again analogies must be used with considerable care in ensuring that the comparators—the source and the target—are alike. One of the problems for lawyers is that in the past they have put the medium (usually paper) to one side and concentrated on the message of the content in question. 
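The point about hypertext links made above can be illustrated briefly. A link in a web page is only an address embedded in markup; the content at the destination is never copied into the linking page. The following Python sketch (the page markup and URL are invented for the example) uses the standard library’s HTML parser to show that what a linking page actually contains is a reference, not the linked content itself.

```python
# A hyperlink is an address, not the thing it points to. Parsing
# a page yields only the href strings; retrieving the destination
# content would be a separate act. Page and URL invented here.
from html.parser import HTMLParser

page = '<p>See <a href="https://example.com/decss">this page</a>.</p>'

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkExtractor()
parser.feed(page)
print(parser.links)   # ['https://example.com/decss'] - an address, not content
```

This is why the ‘levels of functionality’ matter in the analysis: publishing an address and publishing the content found at that address are technically distinct acts, and an equivalence argument must say which of them is being equated with what.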
The discussion of analogies will look at the nature of the computer as a means of storage of information and the way in which cases have attempted to compare a computer with a filing cabinet. The context of the discussion will be within that of a search authorised by a warrant and the associated issues arising from the seizure of information beyond the scope of the warrant, irrelevant to the investigation or protected by privilege. The discussion will demonstrate how in fact the filing cabinet analogy collapses. Other problems involving analogies—at as high a level as the United States Supreme Court—will be discussed and considered. This does not mean that reasoning by analogy should be abandoned, nor that it is inapplicable in trying to achieve rule consistency between the pre-digital and digital paradigms. Rather the emphasis must be upon the proper and careful use of analogy and upon ensuring that like is compared with like. The discussion on the issue of publication in the digital paradigm for the purposes of defamation will be considered in the light of a number of cases from Australia, Canada, England and New Zealand. The discussion will also demonstrate that in considering the issue of


8  111 F Supp 2d 294 (Dist Court, SD New York, 2000).



publication, the different platforms deployed on the backbone that is the Internet differ markedly one from another. Thus publication via a Google search may be different from that on a blog. And the responsibility of the host of a Facebook page for comments posted by a third party may depend upon the level of control and the awareness of the host of the presence of third party content. Thus the ‘how’ of the arrival of the content becomes a matter of examination. But most importantly, the discussion will reveal that the reality of the technology, the way that it operates and its fundamental properties, must be considered and understood before we can determine the applicable rule. Analogy and functional equivalence are available but, as reasoning methods, must be deployed with care. The fourth chapter deals with a discrete substantive issue. How does one control a distributed technology? What, if any, governance methods or models are applicable to the Internet? The issue of Internet Governance is one that is worth a book in itself and indeed a number have been written. This chapter looks at the issue from the perspective of some of the tensions that have arisen between what could be called traditional models of governance, usually involving nation states, on the one hand, and, on the other, the way in which a world-wide communications technology distributed among a number of nation states may be regulated from a perspective that recognises it as a technological phenomenon. The starting point for the discussion is an examination of the Internet Governance Forum, how it came to be and the tension that exists between that organisation and others in the global communications space. But where does the organisation and control of a technological phenomenon really lie? 
There can be no doubt that network engineers play an important role through organisations such as the Internet Society (ISOC), the Internet Engineering Task Force (IETF) and the Internet Corporation for Assigned Names and Numbers (ICANN). Part of this discussion will be the vexed question of Net Neutrality—the argument that there should be no discrimination between the various types of Internet content. Data is data and should be treated alike irrespective of how it is disassembled and reassembled at the points of distribution and reception—an aspect of what is called the ‘end-to-end’ design of the Internet. There have been a number of theorists who have written on the subject of Internet Governance and their contribution cannot be overlooked. Indeed within the various theories of Internet Governance there are collisions, differences and contentious issues. A discussion of Internet Governance would not be complete without a brief consideration of the views of these theorists. But underlying some of these theories is the issue of Internet Exceptionalism to which I have already made reference. Is there a case for Internet-specific rules and is there a case for an international specific governance model that recognises the unique character of the new technology? Or is the current ‘hands-off’ approach favoured by Western democracies to prevail over the more regulated approach favoured by nations such as Brazil, Russia, China and India? The reality is that the Internet challenges existing models of governance and control. The qualities of digital technologies such



as permissionless innovation and the associated aspect of continued disruptive change demonstrate that the Internet is a moving target, difficult to control and constantly changing. The nature of ‘digital property’ presents a real collision in the digital paradigm. One would have thought that most of the problems in this area would have been solved by the principles underlying the rules about intellectual property. But it must be remembered that the whole concept of intellectual property in general and copyright in particular was technology-specific. The Statute of Anne 1710 that created the author’s right to control the copying of his or her created content applied only to control of printing and not manuscript copying. Although the continuing development of copyright law has been in response primarily to the introduction of new communications technologies, copyright is a statutorily created special property right with a number of limited exceptions that would allow for the fair and reasonable use of the author’s material. But chapter five is not about copyright, but about whether or not there is a property right in a digital file or files. This is an important issue as more and more businesses migrate their information from paper-based records to digital storage either in office-based servers, remote locations or the Cloud. Intertwined with the issue of a digital file is the issue of what property right there may be in pure information. The discussion starts with a consideration of this issue and of some of the commentary and cases that have surrounded it. There are essentially two approaches. One is that of Courts in the British Commonwealth which, by and large, have subscribed to an orthodox view that there is no property in pure information. 
That orthodox approach has recently been rejected by the New Zealand Supreme Court, which has held that computer files may be property for the purposes of a specific statutory provision, but that decision, apart from being technologically incorrect, serves only to muddy the waters around the issue. In New Zealand for the moment, with the exception of the Supreme Court holding, the orthodox view prevails. In contrast an entirely different theory underlies aspects of property in information and in computer files as distinct from any intellectual property considerations. This theory is based upon the premise that there should be an element of property underlying something that is of value to someone. The American position is illustrated by a number of cases dealing with the concept of cyberproperty and the reduction in functionality as a result of some form of intrusive interference with a computer system. But the discussion does not end there. The digital paradigm has enabled the acquisition of what may be termed ‘virtual property’—items that have been ‘obtained’ or are associated with online activities such as games or social media virtual worlds. There is no doubt that these items have a value, for they are traded between players, often with payment by way of Bitcoin, which may be exchanged for hard currency, or by other forms of currency trading. The issue of ‘virtual property’ acquires a sharper focus in the case of the disposal of ‘digital assets’ such as social media accounts, the contents of file lockers along with access



to Cloud and other storage facilities.9 These challenge existing concepts of property, are being addressed by those practising in the cyberlaw field and provide further examples of collisions in the digital paradigm. Chapter six demonstrates a possible future for law. I describe the way in which pre-digital technologies have enabled the achievement of the conditions necessary for the development of stare decisis and the doctrine of precedent. Those conditions have developed primarily as a result of the underlying properties of printed information. The chapter then goes on to consider how digital information systems challenge the technological underpinnings that have made the doctrine of precedent possible, and advances an alternative future for precedent. To be sure there is a place for precedent in the Anglo-American common law system, but the approach to precedent in the digital paradigm will be one which is quite different and will focus upon criteria other than the issue/principle approach that has characterised precedent until recently. The chapter suggests that this model of precedent is looking towards a twilight, and a new dawn of precedent based legal analysis possibly employing artificial intelligence and data analytics is coming. Chapters seven and eight deal broadly with evidence in the digital paradigm and the use of digital technologies in court. Chapter seven specifically addresses the issue of e-discovery or e-disclosure as it is known in England. E-discovery is a child of the digital paradigm and arises as a result of information being retained in digital storage devices rather than in a paper file. 
The qualities of digital information, in particular its replication and dissemination, mean that the volume of information that has to be examined has exponentially increased and is often located over a number of digital devices that form part of an organisation’s communications network such as desktop computers, on-site servers, Cloud storage devices and portable devices such as smartphones, laptops, tablets and USB sticks. The collision that this increased volume of information presents with the pre-digital paradigm is that the rules relating to discovery were developed in the ‘paper paradigm’. Even then the wide line of inquiry that was allowed by the Peruvian Guano10 test could present its own problems where the paper trails were considerable. In some jurisdictions the Peruvian Guano test still applies. In others, such as New Zealand, it has been abandoned and has been replaced with a ‘relevance to issue’ test. The problems that have been created by the digital paradigm are capable of solution using software search and analytical tools that progressively reduce volume and increase relevance. Some of these tools are quite mundane, such as keyword searching. Others are highly sophisticated, such as predictive coding or Technology Assisted Review (TAR). The utility and applicability of these tools is still under consideration by the Courts, but given some important principles that underpin e-discovery, such as the duty of counsel to confer and consult (and if
9  See ‘What You Need to Know About Death and the Internet’ InternetNZ July 2016.
10  Compagnie Financière et Commerciale du Pacifique v Peruvian Guano Co (1882) 11 QBD 55.



possible agree) on e-discovery solutions, and the principle that the scope of discovery should be reasonable and proportionate to the matters in dispute and the quantum of the claim, a relatively robust judicial approach has developed to the use of software discovery tools, especially in the United States and more recently in England. Chapter seven will trace the development of e-discovery rules and consider particularly the influence of an organisation known as the Sedona Conference on that development. The focus will then shift to a consideration of some of the common themes that underpin e-discovery rules and the way in which these themes have been realised in common law jurisdictions such as Australia, Singapore, Hong Kong and New Zealand. The discussion will then shift to a consideration of the way in which the various rules deal with technological solutions and conclude with a discussion on the issue of predictive coding and technology assisted review (TAR) as opposed to keyword searching and the issues surrounding that methodology. The issue of e-discovery is an interesting one within a greater consideration of how the law responds to a new technology. Technology in and of itself has not been the prime motivator for change. Rather, change has been driven by a recognition that the phenomenal increase in the costs of litigation arising as a result of paper-based rules for a digital phenomenon could not be sustained and exposed deeper issues such as access to justice and the viability and integrity of the court system as a place for the resolution of legal disputes. It is most interesting to note that the change in approach to discovery and the development of e-discovery has been driven by the courts and by those responsible for the rules of procedure rather than through legislative change. It was a problem for the courts and the courts have taken steps to address this particular collision arising from the digital paradigm. 
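The volume-reduction point made above can be sketched in miniature. The following Python fragment is a deliberately simplified illustration of keyword culling, the most mundane of the tools mentioned; the documents and search terms are invented, and real review platforms add stemming, metadata filtering and predictive coding on top of anything like this.

```python
# A simplified sketch of keyword culling in e-discovery: keep only
# documents containing at least one search term, reducing volume
# while increasing the proportion of potentially relevant material.
# Documents and terms below are invented for the example.

documents = {
    "doc1": "Board minutes discussing the supply agreement.",
    "doc2": "Lunch menu for the staff cafeteria.",
    "doc3": "Email chain about breach of the supply agreement.",
    "doc4": "Holiday roster for the warehouse team.",
}
keywords = {"supply", "agreement", "breach"}

def keyword_cull(docs, terms):
    """Return the subset of docs whose text contains any search term."""
    return {
        name: text for name, text in docs.items()
        if terms & set(text.lower().replace(".", "").split())
    }

responsive = keyword_cull(documents, keywords)
print(sorted(responsive))   # ['doc1', 'doc3'] - volume reduced
```

Even this crude filter halves the review set, which is the whole economic argument for such tools; the judicial debate described in chapter seven is about how far the more sophisticated versions of this idea can be trusted to find what a human reviewer would have found.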
The way in which technology can be deployed in the court environment, or indeed the way in which technology can be the court itself, is considered in chapter eight. The major premise behind chapter eight is that the court process, like the practice of law itself, is an exercise in information exchange and processing with the objective being a decision informed by evidence and information communicated via argument. One of the problems about trials and particularly criminal trials has been the requirement of the physical presence of the participants and the first issue that is addressed is the nature of the confrontation right and whether in fact this requires physical presence or whether the elements that underpin confrontation can be met by the use of technology. The arguments favouring the ‘physical presence’ model such as openness of proceedings, the underlying requirements of the adversarial trial, the suggestion that presence discourages falsehood and the associated fiction that demeanour provides vital signals for truth-telling are considered. Other arguments such as the weight of history and other symbolic elements that underpin the adversarial trial are addressed. The reality is that the citizens of the twenty-first century, growing up with technology and unencumbered by the ‘presence



paradigm’ as an aspect of communication, will view the ‘physical presence’ model as a quaint archaism with a corresponding loss of confidence in the justice system. One aspect of the ‘physical presence’ oral evidence model is that it relies to a considerable extent upon memory. It is well known, and has been for centuries, that memory is unreliable, and forms a poor basis for the proper evaluation of facts. Furthermore, recent research has shown that memory can reinterpret information to the point that what is recalled is not objective fact, but a very subjective reconstruction of events that bears little resemblance to reality.11 There are other problems that are present within the ‘physical presence’ orality model that are associated with the ability to articulate, aspects of intellectual ability, problems with language and translation and the threatening and intimidating environment of the courtroom itself. It is suggested that technology may well provide answers to these issues that will potentially result in a much more reliable recounting of information. I shall also discuss some examples of new presentational technologies that are becoming available. Although lifelike 3D holograms are not yet with us, 3D rendering is available and is already being used in certain circumstances such as crime scene walkthroughs. This part of the chapter will explore other opportunities for the use of 3D technology and also the creation of evidential items by way of 3D printing. Other possibilities such as the use of electronic bundles of documents, closed-circuit TV and surveillance systems along with issues surrounding facial recognition software will be considered. The chapter closes with a discussion of the ‘technology as the court’ and considers some of the issues arising from the proposals of Professor Richard Susskind for an online court. 
Current calls for a more affordable justice system have driven an alternative model for court hearings, but this involves an innovative and transformative use of technology. The online court proposals are not simply the use of technology to imitate the ‘physical presence’ model by deploying tele- or videoconferencing but to use technology to shift the emphasis of the process away from a Court hearing and towards an early resolution of the dispute. The development of technologies is not going to stop. In some respects chapter eight provides a snapshot of the way in which available technologies may be deployed to achieve the result of better informed decision making. Other possibilities such as a crowd-sourced jury, the use of blockchain to ensure process integrity and a collaborative reimagining of the court system may well be available in the future. The chapter suggests that there are significant future possibilities offered by the Digital Paradigm that will be disruptive, transformative and innovative. It is a path we are still travelling. Chapters nine, ten and eleven have a common theme running through them and it is that of social media and the challenges that collaborative, sharing and distributed technologies bring to the Court system, and the way in which social media content


11  See Julia Shaw, The Memory Illusion (London, Penguin, 2016).



may be misinterpreted resulting in potential criminal liability for ‘off-the-cuff’ comments. There are challenges posed by social media especially to reputation and to concepts of privacy, giving rise to the theory of the ‘right to be forgotten’ or ‘the right to be deindexed’. In chapter nine I examine the nature of social media and suggest a possible taxonomy, although it must be recognised that social media platforms rise and fall in popularity. In that respect it is probably unwise to single out any particular platform as representative. Rather I have attempted to classify the various platforms in terms of the services that they offer and common characteristics that they have. One feature common to all social media platforms is the emphasis on sharing information. Indeed that element underpins all social media activity and the nature of the relationships created on social media platforms depends very much upon the nature of the information and the extent to which it is shared. Some social media platforms, like Instagram or Pinterest, have a primary focus on sharing photographs. Others, like Facebook, allow the sharing of a wide variety of information types. Social media are having an impact upon the courts. Many courts are adopting social media platforms as a means of communication with court users. The ability to ‘live stream’ court proceedings has been adopted in some jurisdictions12 and lawyers and law firms use social media for promotional purposes—a practice that is not without risk and concerning which professional organisations have suggested caution. Social media use within the courtroom poses some unique problems especially when that involves the use of social media by jurors during the course of a trial. 
The phenomenon of what I term the ‘Googling Juror’ is considered along with the steps that courts can take to reduce the risk of juror contamination and the strategies that may be adopted where social media use may have an impact upon fair trial processes. In this respect it is suggested that a nuanced approach be taken, going beyond the fact of communication and considering the nature of the communication and whether it involved the passage of potentially prejudicial information into the jury room, or the communication of an opinion or point of view from the jury room to the ‘outside world’. There are also circumstances where the use of social media may fall within the realm of prohibited conduct and two cases are considered which provide examples of the care with which prosecuting authorities should approach potentially harmful messages, taking into account nuance and context but recognising that often simple text is deprived of inflection and tone. The two cases are examples of how easy it is to ‘get it wrong’ and for prosecuting authorities to convey the impression that they are unaware of the subtleties of new communications technologies. 12  A very recent example is a decision approving the live streaming of an appeal against an extradition decision—the first live streaming of a case in New Zealand. The application to live stream was opposed. See Ortmann, Dotcom and Others v United States of America [2016] NZHC 2043 www.nzlii.org/nz/cases/NZHC/2016/2043.html.



The quality of ‘information persistence’ presents some very real challenges to privacy expectations in the Digital Paradigm. Mark Zuckerberg, the developer of Facebook, suggested that privacy was no longer a social norm,13 but he is not the only person to do so. Scott McNealy, chief executive of Sun Microsystems, suggested that ‘You have zero privacy anyway … get over it’ as far back as 199914 and Eric Schmidt, Chairman of Google, suggested ‘If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place’15 as an answer to privacy concerns. This chapter focuses upon the collision that occurs between the nature of information persistence and the searchability and retrievability of information on the one hand, and the concept of privacy on the other. It will consider the development of privacy theory against the background of technological developments, as well as the tension that exists between the state and individuals over the acquisition of data by the state in the new Paradigm. I then move on to consider the classification of privacy by developing different classes or expectations of privacy, and consider the effect of what is known as partial and practical obscurity—terms that describe some of the difficulties attendant upon the recovery of information in the pre-digital paradigm—and how these are challenged by digital systems. I then pass to consider the particular issues surrounding privacy expectations and the Internet, especially within the context of social media platforms, the development of relationships by using social media and the way in which social networking sites present their own challenges to privacy, drawing upon observations about Facebook by James Grimmelmann. 
I give consideration to what it is about social networking that seems to drive people to disclosure of matters that would normally be expected to remain within the ‘private sphere’, and make some observations about the impact of social networking and the future of privacy. The second major theme of the chapter deals with the impact of search engines upon privacy against the backdrop of the qualities of information persistence and the searchability and retrievability of that information. Viktor Mayer-Schönberger in his book Delete: The Virtue of Forgetting in the Digital Age16 challenges some of the technical realities of the Digital Paradigm, arguing that forgetting is as important a part of the human condition as is remembering and that a power imbalance has developed whereby those who seek information about another are empowered to do so, while the individual about whom information is sought is unable to control this process. Furthermore, memory and forgetfulness are themselves vital qualities, allowing us to generalise rather than focus upon the detail of past events. 13  Bobbie Johnson, ‘Privacy no longer a social norm, says Facebook founder’ The Guardian 11 January 2010. 14  Polly Sprenger, ‘Sun on Privacy: Get Over It’ Wired 26 January 1999 politics/law/news/1999/01/17538. 15  ‘Google CEO on Privacy’ The Huffington Post 18 March 2010 www.huffingtonpost.com/2009/12/07/google-ceo-on-privacy-if_n_383105.html. 16  Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton, Princeton University Press, 2011).



Whilst digital systems mean that information preservation is the norm, Mayer-Schönberger suggests that there should be a digital equivalent to human forgetting, based on information privacy principles and the use of technology to imitate forgetting by placing a ‘use by’ or expiry date upon digital data. This is a controversial argument and one which I challenge. I suggest that it is a misconception to characterise information as ‘memory’. In addition, although information persistence is a characteristic of digital systems, it must not be forgotten that they are not just about memory but about communication. Then there is the problem of truth. Should digital forgetting suppress the truth? This leads into a discussion about the development of the so-called right to be forgotten which, within the context of the decision in Google Spain SL, Google Inc v Agencia Española de Protección de Datos (AEPD), Mario Costeja González,17 did not go so far as actually deleting truthful information but rather offered the solution of de-indexing, whereby a particular search would exclude certain information. The Google Spain case dealt with true information, but information that was no longer relevant to the realities of the life changes that had taken place in the life of Mr González, the complainant in the case. The decision was set against a backdrop of European Union data storage rules and must be viewed in that light, but the concept of the relevance of old truths to changed circumstances is an interesting one within the context of a discussion about privacy. As an alternative to Mayer-Schönberger’s right to delete or Google Spain’s right to be de-indexed, a further possibility is advanced that recognises the importance of truth and the technical realities of ‘the document that does not die’ but allows an individual to reassert control over the area of personal identity. Could the solution lie in a right to update or a right to reply? This chapter offers that as a possibility. 
But the real issue is whether or not there is to be a restraint upon technology and its use and effectiveness by the stifling of, or restrictions upon, the ability to locate information from the resources of storage platforms and servers that are connected to the Internet. Or will privacy theory stifle the extent to which information—information which is true and which has been available in the past—will be available in the future? The final substantive chapter deals with reputational harms and continues some of the themes of social media and information searchability via Google. It does so within the context, first, of defamation and how the Digital Paradigm poses certain challenges for established defamation theory and, secondly, of how other associated harms caused by online activity are addressed by legislated remedies that are examples of Internet Exceptionalism—remedies which also have, as a possibly unintended consequence, the ability to provide more immediate relief for reputational harms than an expensive action in defamation.

17  Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González (2014) European Court of Justice C-131/12 jsf?num=C-131/12&language=EN.



I argue that the term ‘Internet defamation’ as a class of defamation publication is inaccurate and wrong. The term ‘Internet Platform defamation’ is a much more accurate one, because different platforms operate in different ways and use a variety of means to make content available. I suggest that this is another demonstration of the need for careful consideration of the particular technology or platform at issue, a need already discussed in chapter three in relation to the nature of publication on Facebook, and one which is further developed in this chapter in a consideration of the differing and conflicting approaches to defamation on Google platforms. Can a search result be defamatory? What position do hypertext links occupy as ‘publishers’ of potentially defamatory content? Can a return of information located by algorithm amount to publication? I discuss what I call ‘The Google Cases’ as examples of the importance of understanding the technology, and of the fact that the divergent approaches on the part of the courts in various jurisdictions arise from a misunderstanding of the fact that rules which developed in a different form of distributive paradigm need not necessarily apply, even mutatis mutandis, in a new one. I close this discussion with some observations on whether there is a place for a ‘strict liability’ tort like defamation in the Internet age, especially in light of the necessity to prove ‘serious harm’ under recent changes to English legislation. But remedies for reputational harms are not restricted to defamation actions. If Señor González had been in England, defamation would have provided no remedy for the publication of an article in an online version of a newspaper, because truth is a defence. The Google Spain decision relied more on the current relevance of past truthful information in concluding whether or not Señor González was entitled to a remedy. 
The decision to de-index had the result of obscuring the information from access and view. Other sorts of reputational harms can arise within the context of failed relationships or unwanted attention, where social media platforms are deployed to annoy, to harass and to construct an incorrect online persona for an individual. Remedies for harassment and for revenge porn are two specific examples of what may be available, and I then pass to consider legislation recently introduced in New Zealand—the Harmful Digital Communications Act 2015—designed to address cyberbullying but with the potential to be used to address reputational harms and have content removed from Internet platforms—an example of Mayer-Schönberger’s right to delete rather than Señor González’s right to be merely de-indexed. Similar legislation that is clearly an example of Internet Exceptionalism has been enacted in Australia. The online environment also presents an opportunity for ‘self-help’ when faced with reputational harms, and there are opportunities for individuals to use technology to deal with a problem raised by the technology. I discuss this as another option that the new paradigm presents, although it must be acknowledged that remedies available from third party providers are often expensive. And as far as traditional remedies are concerned, the story is a continuing one.



This is a book that provides examples of paradoxes and collisions that arise within the law when it is confronted with a new and paradigmatically different set of technologies. In some cases the established institutions, especially the courts, have stepped up to the challenge and tried to resolve the problem. Some of these efforts have been successful—the development of e-discovery protocols is one outstanding example. In other cases confusion reigns, as demonstrated by the approaches of English and Commonwealth Courts to the issue of Google platform defamation. There are some issues that may never be satisfactorily resolved. My view is that some homogeneous form of Internet Governance is one of them. The very concept of a distributed network that was developed by a community of benevolent engineers responsible to the concepts of engineering efficiency and the public interest rather than to a political creed presents an immediate challenge to an establishment which, however democratic, relies on centralised systems. These, and the resolution or perpetuation of some of the other collisions described and exemplified in the following chapters, will be collected and discussed in the conclusion, wherein some suggestions for future direction will be made.

2  The Analytical Framework

When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We see the world through a rear-view mirror. We march backwards into the future.1

I. Introduction

Marshall McLuhan articulated two aphorisms that aptly encapsulate certain realities about the impact of the media of information communications. ‘The medium is the message’2—perhaps his most famous and yet opaque statement—emphasises the importance of understanding the way in which information is communicated. According to McLuhan, we focus upon the message or the content that a medium delivers whilst ignoring the delivery system and its impact. In most cases our expectation of content delivery is shaped by earlier media. We tend to look at new delivery systems through a rear-view mirror and will often seek analogies, metaphors or concepts of functional equivalence to explain the new medium that do not truly reflect how it operates and the underlying impact that it might have. ‘We become what we behold. We shape our tools and thereafter our tools shape us’3 is the second aphorism that summarises the impact that new media may have. Having developed the delivery system, we find that our behaviours and activities change. Over time it may be that certain newly developed behaviours become acceptable, and thus the underlying values that validate those behaviours change. In the case of information delivery tools, our relationships with, expectations of and use of information may change. The point of McLuhan’s first aphorism is that content alone does not cause these modifications. My suggestion is that it is the medium of delivery that governs new information expectations, uses and relationships. How does this happen? One has to properly understand the tool—or in the case of information communication, the


1  Marshall McLuhan, Understanding Media: The Extensions of Man (London, Sphere Books, 1967).

2  ibid.

3  ibid, xxi.



medium—to understand the way in which it impacts upon informational behaviours, use and expectations. In this chapter I shall consider the underlying ‘qualities’ or ‘properties’ of new information media. The starting point is to consider the approach of Elizabeth Eisenstein in her study The Printing Press as an Agent of Change.4 I shall then consider the development of a similar approach to digital communications systems, and particularly the Information Technologies of computers and the Internet. I will argue that the properties of digital communications technologies present changes in communication methods that are paradigmatically different from what has gone before. This is not to say that earlier modalities of information communication are defunct, but rather that digital systems add methods of communication. The content of communication has not changed. It is in the means of the communication of that content that we find a revolution has taken place. I have chosen to adopt the term ‘The Digital Paradigm’ to describe this new information communication ecosystem. The use of such a term necessarily suggests that there has been a ‘paradigm shift’—a frequently used and often debated term. RL Trask suggests that one should refrain from using the term and exercise caution when encountering it.5 Indeed, it has been suggested that the term has become so overused as to be meaningless.6 Despite this, the term has been used in contexts other than science to suggest a major change in thought patterns or behaviours—a radical change in personal beliefs, complex systems or organisations, replacing the former way of thinking or organising with a radically different one. The concept of the ‘paradigm shift’ was developed by Thomas Kuhn in his influential book The Structure of Scientific Revolutions.7 He argued that from time to time there was a change in the basic or fundamental assumptions, or paradigms, within the ruling theory of science. 
A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has been made up until then. The paradigm, in Kuhn’s view, is not simply the current theory, but the entire worldview in which it exists, and all of the implications which come with it. This is based on the features of the landscape of knowledge that scientists can identify around them. Kuhn was of the view that the existence of a single reigning paradigm was characteristic of the sciences. Philosophy and many of the social sciences were characterised by a tradition of claims, counterclaims and debates over fundamentals, 4  Elizabeth Eisenstein, The Printing Press as an Agent of Change (Cambridge, Cambridge University Press, 1979) 2 vols. Reference will be made to the one-volume 1980 edition; Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe (Cambridge, Cambridge University Press (Canto), 1993). 5  RL Trask, Mind the Gaffe: The Penguin Guide to Common Errors in English (London, Penguin, 2001). 6  Robert Fulford, ‘The Word “Paradigm”’ Globe and Mail 5 June 1999 Paradigm.html. 7  Thomas Kuhn, The Structure of Scientific Revolutions (Chicago, University of Chicago Press, 1962).



although the concept of the paradigm shift has been applied to the social sciences. For example, ML Handa developed a concept of a paradigm within the social sciences and, in developing his approach to a paradigm shift, focused upon the social circumstances that precipitated such a shift and how that shift affects social institutions.8 Thus social scientists have adopted the Kuhnian phrase ‘paradigm shift’ to denote a change in how a given society goes about organising and understanding reality. A ‘dominant paradigm’ refers to the values, or system of thought, in a society that are most standard and widely held at a given time. Dominant paradigms are shaped both by the community’s cultural background and by the context of the historical moment. Thus it could be said that in the broader sense a paradigm is a certain understanding of reality. It is a school of thought or framework which forms a worldview—a shared way of perceiving aspects of reality. For example, there can be economic paradigms, scientific paradigms and philosophical paradigms. A paradigm is formed when there is general consensus that the worldview is good enough for the collective to gather around and from which it may progress. Hence the paradigm becomes more than an agreed theory of understanding; it becomes a much wider societal worldview.9 The shift to digital systems as a means of communication and the transporting of information and digital products represents such a worldview. I suggest that the Digital Paradigm and an understanding of the new media for communicating legal information present some fundamental challenges to our assumptions about law and may well revolutionise some established legal institutions and doctrines. In the following chapter I challenge the often advanced and convenient escape route that suggests that what the Digital Paradigm offers is merely content in a different delivery system which may be ‘functionally equivalent’ to that which has gone before. 
I argue that this escape route is now closed off in light of the fundamentally different manner by which content is delivered in the Digital Space. In the same way that printing provided a paradigmatically different system of recording information from that present in the scribal culture, which resulted in the development of a culture surrounding print and printed material and fundamentally changed the mentality of early-modern readers and scholars with repercussions that transformed Western society, so the advent of digital communications systems is having a similar effect. The fact of the paradigm shift from what could be broadly defined as pre-digital communication systems—which largely reflect print processes and culture—may be observed from an examination not of the content communicated—the message—but of the underlying qualities of the manner in which the message is delivered—what McLuhan would refer to as ‘the medium’.

8  ML Handa, ‘Peace Paradigm: Transcending Liberal and Marxian Paradigms’, paper presented at the International Symposium on Science, Technology and Development, New Delhi, India, 20–25 March 1987. 9  Giles Hutchins, ‘A Paradigm Shift is in Our Midst’ The Nature of Business (12 April 2014) http://



As we consider the qualities of the new information communication ecosystem it will become clear that this is a complex and at times contradictory process, for within the Digital Paradigm there are a number of tensions that result in nuanced conclusions rather than absolutes. As a basis for developing an analysis of these qualities, I propose to consider the way in which Eisenstein developed her contention that the underlying properties of print significantly differentiated it from the scribal means of information communication.

II.  Elizabeth Eisenstein and the Qualities of Print

Eisenstein’s theory holds that the capacity of printing to preserve knowledge and to allow the accumulation of information fundamentally changed the mentality of early-modern readers, with repercussions that transformed Western society.10 Eisenstein identified six qualities of print that were the enablers underpinning the distribution of content which enhanced the developing Renaissance, that spread Luther’s 95 theses around Germany in the space of two weeks from the day that they were nailed to the church door at Wittenberg, and that allowed for the wide communication of scientific information which enabled experiment, comment, development and what we now know as the Scientific Revolution. Within 300 years of the introduction of the printing press by Gutenberg, the oral-memorial, custom-based, ever-changing law had to be recorded in a book for it to exist. It would be fair to observe that Eisenstein’s approach was and still is contentious.11 But what is important is her identification of the paradigmatic differences between the scribal and print cultures based upon the properties or qualities of the new technology. These qualities were responsible for the shift in the way that intellectuals and scholars approached information.

10 Eisenstein, The Printing Press as an Agent of Change above n 4, 159. Elizabeth Eisenstein’s theories were the subject of considerable discussion and analysis in my earlier book The Law Emprynted and Englysshed: The Printing Press as an Agent of Change in Law and Legal Culture 1475–1642 (Oxford, Hart Publishing, 2015). 11 See for example the debate between Adrian Johns and Elizabeth Eisenstein—Adrian Johns, ‘How to Acknowledge a Revolution’ (2002) American Historical Review 106; Elizabeth Eisenstein, ‘An Unacknowledged Revolution Revisited’ (2002) American Historical Review 87; Elizabeth Eisenstein, ‘A Reply—AHR Forum’ (2002) American Historical Review 126. See also Adrian Johns, The Nature of the Book (Chicago, University of Chicago Press, 1998); David McKitterick, Print, Manuscript and the Search for Order 1450–1830 (Cambridge, Cambridge University Press, 2003); Eric J Leed, ‘Elizabeth Eisenstein’s The Printing Press as an Agent of Change and the Structure of Communications Revolutions’ (Review) (1982) 88 American Journal of Sociology 413; Diederick Raven, ‘Elizabeth Eisenstein and the Impact of Printing’ (1999) 6 European Review of History 223; Richard Teichgraeber, ‘Print Culture’ (1984) 5 History of European Ideas 323; William J Bouwsma, ‘The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe’ (Review) (1979)



A.  Eisenstein’s Qualities

Eisenstein identified six features or qualities of print that significantly differentiated the new technology from scribal texts.12 Dissemination of information was increased by printed texts not solely by volume but by way of availability, dispersal to different locations and cost. The impact upon the accessibility of knowledge was enhanced by the greater availability of texts and, in time, by the development of clearer and more accessible typefaces.13 Standardisation of texts meant that every text from a print run had an identical or standardised content.14 Every copy had identical pagination and layout, along with identical information about the publisher and the date of publication. Standardised content allowed for a standardised discourse.15 Print allowed greater flexibility in the organisation and reorganisation of material and its presentation. Innovations such as tables, catalogues, indices and cross-referencing material within the text were characteristics of print.16 Print provided an ability to access improved or updated editions with greater ease than in the scribal milieu by the collection, exchange and circulation of data among users, along with error trapping. This is not to say that print contained fewer errors than manuscripts.17 Print accelerated the error-making process that was present in the scribal culture. At the same time dissemination made the errors

84 American Historical Review 1356; Jack Censer, ‘Publishing in Early Modern Europe’ (2001) Journal of Social History 629; Charles B Schmitt, ‘The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe’ (1980) 52 Journal of Modern History 110; Carolyn Marvin, ‘The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe’ (Review) (1979) 20 Technology and Culture 793 together with a recent collection of essays examining Eisenstein’s theory and its impact: Sabrina Alcorn Baron, Eric N Lindquist and Eleanor F Shevlin (eds), Agent of Change: Print Culture Studies after Elizabeth L Eisenstein (Amherst, University of Massachusetts Press, 2007). 12 Eisenstein, The Printing Press above n 4, generally 43 et seq; especially 71 et seq. The Printing Revolution above n 4, 42 et seq. 13  Early print fonts, especially in legal works, imitated scribal forms but later gave way, in the seventeenth century, to the more legible roman font. Some of these early styles were difficult to read, even for the highly literate. This was the case with early legal texts in England which were printed in ‘black letter’. The use of the Roman font did not become common until the early seventeenth century. For an example of ‘black letter’ see Tottel’s published Year Books or William Fulbecke Directive or Preparative to the Study of the Lawe (London, Thomas Wight, 1600). The title page and introduction are in Roman. The text is in black letter. The difference is immediately apparent. However, Michael Dalton, The Countrey Justice (London, Society of Stationers, 1613) is printed in Roman and is more legible than the black letter font. 14  Note, however, the use by Tottel of mixed sheets from different printings. JH Baker, ‘The Books of the Common Law’ in Lotte Hellinga and JB Trapp (eds), The Cambridge History of the Book in Britain (Vol 3) (Cambridge, Cambridge University Press, 1999) 427 et seq. 
Thus some of his printings were compilations. This suggests that the economics of printing may have contributed to circumstances that might have challenged the uniformity that standardisation required. 15 Eisenstein, The Printing Press above n 4, 80 et seq. For a discussion of ‘standardised errors’ resulting from print, the use of errata and the problems of the provenance of text see Harvey, above n 10, 5–6. 16 Eisenstein, The Printing Press above n 4, 103. 17  The issue of error was a matter which had an impact upon the reliability of printed material.

Elizabeth Eisenstein and the Qualities of Print


more obvious as they were observed by more readers. Print created networks of correspondents and solicited criticism of each edition.18 The ability to set up a system of error-trapping, albeit informal, along with corrections in subsequent editions, was a significant advantage attributed to print by the philosopher David Hume, who commented that ‘The Power which Printing gives us of continually improving and correcting our Works in successive editions appears to me the chief advantage of that art’.19 Fixity and preservation are connected with standardisation. Fixity sets a text in place and time. Preservation, especially as a result of large volumes, allows the subsequent availability of that information to a wide audience. Any written record does this, but the volume of material available and the ability to disseminate enhanced the existing properties of the written record.

B.  Eisenstein, McLuhan and Media

Eisenstein’s identification of qualities builds upon McLuhan’s aphorisms. The identification of the qualities of the medium (print) goes below content, examines factors inherent within it and identifies fundamental differences between the new medium (print) and the old (scribal production of texts). But importantly, the identification of the qualities goes further than merely differentiating a new medium of communication from an old one. It identifies factors inherent in the medium that change attitudes towards information, the way it can be used and the user’s expectations of information. Fixity and standardisation allowed for the development and acceptance of text in print as authoritative, which would be a significant factor for the development of law and legal doctrines—an example of McLuhan’s second aphorism. Thus, Eisenstein’s theory recognises that media work on two levels. The first is that a medium is a technology that enables communication; the tools that we have to access media content are the associated delivery technologies, and these possess certain qualities. The second level is that a medium has an associated set of protocols or social and cultural practices—including the values associated with information—that have grown up around the technology. Thus the delivery system is the technology containing certain qualities that give rise to the second level, which generates and dictates behaviour.20

18  Eisenstein, The Printing Press above n 4, 107 et seq.
19  Cited by JA Cochrane, Dr Johnson’s Printer: The Life of William Strahan (London, Routledge and K Paul, 1964) 19 at n 2. See also Eisenstein, The Printing Press above n 4, 112.
20  Lisa Gitelman, Always Already New: Media, History and the Data of Culture (Cambridge, MIT Press, 2008) 7.



we are considering what shapes and forms the basis for the changes in behaviour and in social and cultural practices. The qualities of a paradigmatically different information technology fundamentally change the way that we approach and deal with information. In many cases the change will be slow and imperceptible. Adaptation is usually a gradual process. Sometimes the changes in the way in which we approach information will alter our intellectual habits—often subconsciously or incrementally. For example, textual analysis had been an intellectual activity ever since information was first recorded in textual form. I contend that the development of principles of statutory interpretation, a specialised form of textual analysis, followed Thomas Cromwell’s dissemination and promulgation of the Reformation statutes, complete with preambles, in print.21 There can be no doubt that print ushered in a paradigmatically different means of communication based upon its qualities. My suggestion is that the developing reliance of lawyers upon printed sources is in fact informed by the underlying qualities of print rather than by the content itself. Indeed, it could be suggested that the information that is the law is dependent upon the qualities of print for both its reliability and its authoritativeness.

III.  Digital Information

The advent of the Digital Paradigm, and the recording of legal information in digital format, challenges some of the essential qualities of print—the qualities that lawyers, judges and law academics have confidently and unquestioningly relied upon for the authority of the law. In an article that traced the use of ‘non-legal’ information in judgments, the following observation was made:

Now, however, the world of information readily available to lawyers and judges is vastly larger. Even apart from the on-line catalogs that make full university collections far more available than ever before to a person physically standing in the law library, there has been a dramatic change in what is available to the typical LEXIS or Westlaw subscriber in a law firm, court, or government agency; and Internet access multiplies the phenomenon even further. There is now a dramatically accelerating increase in the availability of nonlegal sources accessible through on-line information methods… One of the most important features of law’s traditional differentiation has been its informational autonomy. In many respects legal decision making is highly information

21  I acknowledge that this is a very bold assertion. The argument is a little more nuanced and involves a consideration of the use of the printing press by Cromwell, the significant increase in legislative activity during the course of the English Reformation, the political and legal purpose of statutory preambles, the advantages of an authoritative source of law in printed form for governing authorities, all facilitated by underpinning qualities of print such as standardisation, fixity and dissemination. For a detailed discussion see Harvey above n 10, 196 et seq.

Digital Information


dependent and was traditionally dependent on a comparatively small universe of legal information, a universe whose boundaries were effectively established, widely understood, and efficiently patrolled.22

One of the arguments advanced by those commenting on the Digital Paradigm and its effects upon law and legal scholarship is an attempt to locate a form of functional equivalence between the Print and the Digital Paradigms. The reason for this lies in some of the unique qualities of the digital space itself. Digital information goes beyond the content layer, and in some respects the content layer is superficial. Digital information has elements affecting the presence of content and its stability that challenge the certainties that accompanied information in print. I shall now turn to the identification of the qualities of digital communications systems—qualities that make them paradigmatically different from what has gone before.

A.  Identifying Digital Qualities

Using Eisenstein’s approach, certain qualities present in the digital communications paradigm and which distinguish it from the pre-digital era can be identified, although in so saying, the inventory that I have compiled is by no means complete. Nevertheless, these qualities all underlie the way in which content is communicated. They are not necessarily unique to the Digital Paradigm because, in some respects, some of them at least were present in some form in the pre-digital era. For example, some of the qualities identified by Eisenstein that were unique to print, such as dissemination, data collection, information reorganisation, amplification and reinforcement have a continued manifestation in information communication technologies since the advent of the printing press. As new technologies have arrived, some of these qualities have been enhanced. Certainly the quality of dissemination has undergone a quantum leap in the Digital Paradigm from the Print Paradigm.23 My tentative inventory comprises 13 qualities which dramatically, paradigmatically, differentiate digital technologies from those that have gone before. The taxonomy for these qualities suggests three major classifications based upon the nature of the qualities. These classifications I have described as ‘Environmental’, ‘Technical’ and ‘User Associated’.

i.  Environmental Qualities

These qualities arise from the context within which digital technologies develop. They relate to change and the drivers for change. One of the problems with digital

22  Frederick Schauer and Virginia J Wise, ‘Nonlegal Information and the Delegalization of Law’ (2000) 29 Journal of Legal Studies 495 at 512, 514.
23  Hence I have redefined it—see below in the discussion on Exponential Dissemination.


The Analytical Framework

technologies is their disruptive nature. They challenge thinking about the nature of information, our relationship with it and how we communicate it. This disruptive quality is neither good nor bad although ultimately it may be transformative. But it does mean that settled methods of doing things, and more importantly thinking, are constantly being challenged. From the perspective of a lawyer this quality creates a problem because lawyers value certainty, predictability and stability. However, continuing disruptive change, and its compatriot permissionless innovation, are realities of the Digital Paradigm with which lawyers must come to terms.

a.  Continuing Disruptive Change

In the past as new communication technologies have become available there has been a period where the new technology has an opportunity to ‘bed in’ before the next significant change takes place. For example, the advent of the printing press in 1450 was followed by its spread through Europe, but, apart from improvements in the technology, no new communications technology was present until the development of the electrical telegraph system by Samuel Morse, Joseph Henry and Alfred Vail in 1836. Effectively, there had been a period of almost 400 years for the printing press to become accepted as a new means of communication. The telegraph system addressed the tyranny of distance and was followed by Marconi’s long distance radio transmission in the last decade of the nineteenth century. That was followed by developments in radio and within a short time thereafter, the development of television. It can be seen from this very brief overview that the time between new technological developments in communications has shortened. Nevertheless, there has been a ‘breathing space’ of increasing brevity between each one. The advent of digital technologies and particularly the rise of the Internet has meant effectively that breathing space has gone and continuing disruptive change is a reality.
The nature of this change has been described in another context as ‘The Long Blur’24 and as new information systems have driven change many earlier business practices have changed or, in some cases, become obsolete. Work habits and attitudes have changed, and the concept of a secure lifetime job has vanished along with associated concepts of loyalty to an employer and recognition of the loyal employee. Although many new high paying jobs requiring exceptional skills and intelligence exist, many business models are now effectively service industries of which, in some respects, the law may be considered one.25 In essence the law is an ‘information exchange’ system with a number of ‘information flows’ throughout it. And why should lawyers and judges avoid changes


24  Jim Dator, ‘Judicial Governance of the Long Blur’ (2000) 33 Futures 181.
25  For an examination of the changes in the legal profession and a possible future see Richard Susskind, The End of Lawyers: Rethinking Legal Services (Oxford, Oxford University Press, 2008) and Tomorrow’s Lawyers (Oxford, Oxford University Press, 2013).

in the way in which information is delivered or the impact of the qualities that underlie new communications technologies? After all, both simply process information in a particular manner.

b.  Permissionless Innovation

Associated with continuing disruptive change is the quality of permissionless innovation, particularly in so far as the Internet is concerned. In some respects, these two qualities are interlinked and indeed it could be argued that permissionless innovation is what drives continuing disruptive change. Permissionless innovation is the quality that allows entrepreneurs, developers and programmers to develop protocols using standards that are available and that have been provided by Internet developers to ‘bolt-on’ a new utility to the Internet. Thus we see the rise of Tim Berners-Lee’s World Wide Web which, in the minds of many, represents the Internet as a whole. Permissionless innovation enabled Shawn Fanning to develop Napster; Larry Page and Sergey Brin to develop Google; Mark Zuckerberg to develop Facebook and Jack Dorsey, Evan Williams, Biz Stone and Noah Glass to develop Twitter along with dozens of other utilities and business models that proliferate on the Internet. There is no need to seek permission to develop these utilities. Using the theory ‘if you build it, they will come’,26 new means of communicating information are made available on the Internet. Some succeed but many fail.27 No regulatory criteria need to be met other than that the particular utility complies with basic Internet standards. What permissionless innovation does allow is a constantly developing system of communication tools that change in sophistication and the various levels of utility that they enable. It is also important to recognise that permissionless innovation underlies changing means of content delivery.

ii.  Technical Qualities

I have classified a number of qualities as technical primarily because they are underlying aspects of the way in which digital communications technologies, and especially the Internet, work. Some of these qualities have developed from early communications systems and have been enhanced within the digital environment. Others, such as the non-coherence of information, are unique.

a.  Delinearisation of Information

This quality recognises the effect of hypertext linking although the idea behind it is not new, nor does it originate with Tim Berners-Lee and the development of the

26  In fact a misquote that has fallen into common usage from the movie Field of Dreams (Director and Screenplay by Phil Alden Robinson, 1989). The correct quote is ‘If you build it he will come’ (my emphasis).
27  See for example Andrew Keen, The Internet is Not the Answer (London, Atlantic Books, 2015).


World Wide Web, for it was propounded as early as 1945 by Vannevar Bush.28 The reality of information delinearisation in many respects subtly changes approaches to intellectual activity that have their foundation in the printing press. The organisation of information in print influenced certain approaches to and habits surrounding intellectual activity. I am not for one moment suggesting that the linear approach to human thought and analysis had its origins with the printing press, for certainly it did not. But what print did was to enhance and solidify linear thinking.29 Where there is standardisation and fixity of information within print, it is possible to develop associated information location devices such as indices and tables of content. Although tables of contents were available within the scribal culture, they were relevant only to the particular volume, given that it was difficult, although not impossible, to achieve identical volumes. The mass production of identical copies enabled by print meant that accurate, consistent indices and tables of content could be provided. This meant that the linear approach to information was enhanced and emphasised and although footnotes could take the reader to different references, to follow such a line would require a departure from the primary text as the reader first located the referenced text and then sought the information within. In some respects the utilisation of footnotes was a form of ‘proto-delinearisation’, if I can put it that way, but it was not until the development of the World Wide Web and the centralisation of information resources within the context of the Internet that full delinearisation became a reality. The Internet brings the information to the reader and I shall discuss shortly the ways in which the Internet enables that through other qualities.
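Delinearisation is mechanically visible in the markup of a hypertext document itself: every anchor element is a point at which the reader may leave the main line of argument. The following sketch, using only Python’s standard library, pulls every outbound link from an HTML fragment; the sample ‘judgment’ markup and the class name are invented for illustration and are not drawn from any real decision.

```python
# Collect every hyperlink (each a potential departure from the linear text)
# from an HTML fragment, using only the standard library.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Each <a href="..."> anchor is recorded as it is encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

judgment = ('<p>See <a href="https://example.org/statute">the statute</a> and '
            '<a href="https://example.org/case">an earlier case</a>.</p>')
collector = LinkCollector()
collector.feed(judgment)
print(collector.links)
```

A reader of the rendered page sees one linear paragraph; the collected list shows the branching structure that sits beneath it.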
The print paradigm essentially meant the reader or scholar had to seek texts or other forms of information in a library or various other locations. In essence the Internet brings the library to the scholar and it is by virtue of that that the delinearisation becomes significant. By the use of hypertext links, the scholar or reader is immediately able to seek out the other information. This means that the reading, or information acquisition process, which is essentially linear in following a particular line of argument in the principal text, is interrupted as the scholar or reader follows the hypertext link to the other source of information. This can be done instantaneously. The

28  Vannevar Bush, ‘As We May Think’ The Atlantic (1 July 1945) archive/1945/07/as-we-may-think/303881/?single_page=true.
29  For further discussion, especially in the context of reading, see Maryanne Wolf, Proust and the Squid: The Story and Science of the Reading Brain (New York, Harper Collins, 2007); Neil Postman, The Disappearance of Childhood (New York, Vintage/Random House, 1994); Walter Ong, Orality and Literacy: The Technologising of the Word (Oxford, Routledge, 2002); Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Showbusiness (New York, Penguin Books, 1986); Sven Birkerts, The Gutenberg Elegies: The Fate of Reading in an Electronic Age (Winchester MA, Faber, 1994); Sven Birkerts, ‘Resisting the Kindle’ The Atlantic (March 2009); and my discussion in ‘Why Do Jurors Go OnLine’ The IT Countrey Justice (27 July 2012) why-do-jurors-go-on-line/.

equivalent within the context of the print paradigm would mean that the scholar or reader would stop at a particular footnote, go to the source of the information, say at a library, locate it, read it, consider it, and then return to the principal text. This would mean that the reading of the principal text would be prolonged considerably. However, the potential within delinearisation is that the primary text no longer need be considered the principal source of information, in that the gathering together of information by following hypertext links may result in a totally different approach to analysis than there was before. Not only would the principal text be subject to scrutiny and critique but it could be considered within the context of a vast range of additional information made possible by hypertext linking. Delinearisation may well mean that a change could take place in the way in which an argument is developed, in that the manner of analysis may change. Linear structure is present throughout the law. For example, current judgment writing style follows a pre-ordained linear structure involving an introduction, identification of issues—both factual and legal—identification of evidence or information relevant to the issues, discussion and analysis of the evidence, matching them with the issues and the law, and a conclusion. New digital technologies already mean that some judgments utilise hypertext links. There are difficulties in this regard with the problem of ‘link rot’.30 Link rot occurs where a given URL no longer links to the material which is referenced, making a citation to that material worthless. Neacsu observes that: There are many reasons why websites disappear or change. They may no longer be supported by the author or organization which created them. The text may be a fluid one as far as the parent organization is concerned, so the information as used by the article’s author in writing may not be what a reader may find a year later.
Some nongovernmental organizations change their stance on different political issues quite often and do not archive the documents that would memorialize those changes. Electronic text may also be subject to malicious change, either from the outside or from internal mischief. Irrespective of these reasons, when a law journal article references a text that does not exist or has been altered, and thus supplies ‘unreliable’ footnotes, its scholarly value becomes questionable…31

30  See for example Dana Neacsu, ‘Google, Legal Citations, and Electronic Fickleness: Legal Scholarship in the Digital Environment’ (June 2007), available at SSRN; Mary Rumsey, ‘Runaway Train: Problems of Permanence, Accessibility and Stability in the Use of Web Resources in Law Review Citations’ (2002) 94 Law Library Journal 27; Susan Lyons, ‘Persistent Identification of Electronic Documents and the Future of Footnotes’ (2005) 97 Law Library Journal 681; Jonathan Zittrain and Kendra Albert, ‘Perma: Scoping and Addressing the Problem of Link and Reference Rot in Legal Citations’ (21 September 2013), available at SSRN; Adam Liptak, ‘In Supreme Court Opinions, Web Links to Nowhere’ New York Times (23 September 2013) www.nytimes.com/2013/09/24/us/politics/in-supreme-court-opinions-clicks-that-lead-nowhere.html?smid=reshare&_r=1&. See also the discussion below under the headings ‘Information Persistence’, ‘Format Obsolescence’ and the ‘Non-coherence of Information’.
31  Neacsu, ibid 7–8.


This is particularly the case with the ability to access cited material, exacerbated by the phenomenon of link rot. Neacsu makes a number of suggestions whereby this problem may be met:
a) One is the use of Persistent Uniform Resource Locators (PURLs), which provide a persistent way of locating and identifying electronic documents using the redirect feature built in to the HTTP protocol.
b) Archiving, including systems like the Internet Archive, whilst recognising that the scope and the running of this archive does not answer the needs of legal scholarship in that there is no reliable coverage of sources cited in law review articles and no reliable institutional overview.
c) Law library preservation systems maintaining a digital archive that includes copies of all documents cited in all journals, including student edited journals.
d) The ‘Legal URL Archive’, which would involve the creation of a mirror site in which duplicates of cited documents would be created with the objective of maintaining the document as it was when it was cited (addressing the issue of stability) and keeping the information publicly available (addressing the issue of accessibility).32
As well as the ability to link to other material there is the ability to embed other forms of content—video, audio, animations, diagrams and the like—within a judgment.33 This may well mean in the future that the strict linear form of analysis which I have described may become a little less simplistic and a little more ‘information rich’, involving a wider opportunity for the reader to explore some of the underlying support for a judgment and the analysis within it that is more immediate than was the case in the pre-digital paradigm, given the difficulties that one might have in tracking down sources.
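Both link rot and the PURL remedy turn on ordinary HTTP behaviour: a rotted citation typically answers with a 404 status, while a persistent identifier answers with a 3xx redirect to the document’s current home. The sketch below is my own illustration, not a tool described in the text; the function names are invented, and it uses only Python’s standard library to classify a cited URL accordingly.

```python
# Classify a cited URL as live, redirected (as a PURL resolver would answer),
# or rotted, based on the HTTP outcome of a HEAD request.
import urllib.request
import urllib.error

def classify(status: int, requested: str, final: str) -> str:
    """Map an HTTP outcome onto the link-rot categories discussed above."""
    if status == 404:
        return "rotted"
    if status >= 400:
        return f"error ({status})"
    # urllib follows redirects automatically, so a changed final URL
    # signals that a PURL-style redirect occurred along the way.
    return "redirected -> " + final if final != requested else "live"

def probe(url: str) -> str:
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify(resp.status, url, resp.geturl())
    except urllib.error.HTTPError as e:
        return classify(e.code, url, url)
    except urllib.error.URLError as e:
        return f"unreachable ({e.reason})"
```

A citation archive of the kind Perma proposes is, in effect, a guarantee that such a probe will always reach the ‘live’ or ‘redirected’ branch rather than the ‘rotted’ one.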
This is not to say that linear analysis is dead, but what it does mean is that approaches to analysis may change in subtle ways as the result of the ability to bring all of the information that supports an argument into the one place at the one time.

b.  Information Persistence or Endurance

It is recognised that once information reaches the Internet it is very difficult to remove, because it may spread through the vast network of computers that comprise the Internet and may be retained on any one of them by virtue of the quality of exponential dissemination discussed below, despite the phenomenon of ‘link rot’.34

32  ibid, 12–15.
33  See David Harvey, ‘Using Digital Tools to Give Reasons’ (2011) Paper presented at the Courts Technology Conference, Long Beach, California, The IT Countrey Justice (8 June 2012) https:// com/doc/96370049/The-Digital-Decision-Paper.
34  For discussion see above s III.A.ii.a. Link rot may arise for any one of a number of reasons, many of which involve relocation of information rather than its total removal. For a counter-point to link rot see the discussion on Exponential Dissemination below.

It has been summed up in another way by the phrase ‘the document that does not die’. Although on occasions it may be difficult to locate information, the quality of information persistence means that it will be on the Internet somewhere. This emphasises the quality of permanence of recorded information that has been a characteristic of that form of information ever since people started putting chisel to stone, wedge to clay or pen to papyrus. Information persistence means that the information is there but, if it has become difficult to locate, retrieving it may resemble the digital equivalent of an archaeological expedition, although the spade and trowel are replaced by the search engine. The fact that information is persistent means that it is capable of location.

c.  Dynamic Information

In some respects the dynamic nature of information challenges the concept of information persistence because digital content may change. It could be argued that this seems to be more about the nature of content, but the technology itself underpins and facilitates this quality as it does with many others. An example of dynamic information may be found in the on-line newspaper which may break a story at 10am, receive further information on the topic by midday and by 1pm on the same day have modified the original story. The static nature of print and the newspaper business model that it enabled meant that the news cycle ran from edition to edition. The dynamic quality of information in the Digital Paradigm means that the news cycle potentially may run on a 24 hour basis, with updates every five minutes. Similarly, the ability that digital technologies provide for dialogue on any topic, enabled in many communication protocols primarily as a result of Web 2.0, means that an initial statement may undergo a considerable amount of debate, discussion and dispute, resulting ultimately in change.
This dynamic nature of information challenges the permanence that one may expect from persistence and it is acknowledged immediately that there is a significant tension between the dynamic nature of digital information and the concept of the ‘document that does not die’.35 Part of the dynamic of the digital environment is that information is copied when it is transmitted to a user’s computer. Thus there is the potential for information to be other than static. If I receive a digital copy I can make another copy of it or, alternatively, alter it and communicate the new version. Reliance upon the print medium has been based upon the fact that every copy of a particular edition is identical until the next edition. In the digital paradigm authors and publishers can control content from minute to minute. In the digital environment individual users may modify information at a computer terminal to meet whatever need may be required. In this respect the digital reader becomes something akin to a glossator of the scribal culture, the difference 35 

35  Although for the other side of this coin see the quality of format obsolescence.


being that the original text vanishes and is replaced with the amended copy. Thus one may, with reason, doubt the validity or authenticity of information as it is transmitted.

d.  Volume and Capacity

The data storage capabilities of digital systems enable the retention and storage of large quantities of information. Whereas books and other forms of recording information were limited by the number of pages or the length of a tape, the potential storage capabilities inherent in digital systems, while neither limitless nor infinite, are, compared with other media, significantly greater. This phenomenal increase in the amount of information available has a corresponding downside in that information location can, by virtue of volume alone, become difficult. This is resolved by other qualities of the technology. However, information volume is an issue that has an impact upon certain understandings that apply to law and legal principles and which I shall address at a later point in chapter 6.

e.  Exponential Dissemination

Dissemination was one of the leading qualities of print identified by Eisenstein, and it has been a characteristic of all information technologies since. What the Internet and digital technologies enable is a form of dissemination that has two elements. One element is the appearance that information is transmitted instantaneously to both an active (on-line recipient) and a passive (potentially on-line but awaiting) audience. Consider the example of an email. The speed of transmission of emails seems to be instantaneous (in fact it is not) but that enhances our expectations of a prompt response and concern when there is not one. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender.
Instant messaging is so-called because it is instant, and a complex piece of information may be made available via a link on Twitter to a group of followers which may then be retweeted to an exponentially larger audience. The second element deals with what may be called the democratisation of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and the communication media that have gone before. In the past information dissemination has been an expensive business. Publishing, broadcast, record and CD production and the like are capital intensive businesses. It used to (and still does) cost a large amount of money and require a significant infrastructure to be involved in information gathering and dissemination. There were a few exceptions, such as very small scale publishing using duplicators, carbon paper and samizdats, but in these cases dissemination was very small.36 Another aspect of early information communication

36  See William Bernstein, Masters of the Word (London, Atlantic Books, 2013), especially ch 8, ‘The Comrades Who Couldn’t Broadcast Straight’, 263 and following.
technologies is that they involved a monolithic centralised communication to a distributed audience. The model essentially was one of ‘one to many’ communication or information flow.37 The Internet turns that model on its head. The Internet enables a ‘many to many’ communication or information flow with the added ability on the part of recipients of information to ‘republish’ or ‘rebroadcast’. It has been recognised that the Internet allows everyone to become a publisher. No longer is information dissemination centralised and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users. Indeed, news organisations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by Citizen Journalists.38 Once the information has been communicated it can ‘go viral’: a term used to describe the phenomenon of exponential dissemination as Internet users share information via email, social networking sites or other Internet information sharing protocols. This in turn exacerbates the earlier quality of information persistence or ‘the document that does not die’ in that once information has been subjected to Exponential Dissemination it is almost impossible to retrieve it or eliminate it.39

f.  The ‘Non-coherence’ of Digital Information

If we consider a document—information written upon a piece of paper—it is quite easy for a reader to obtain access to that information long after it was created. The only things necessary are good eyesight and an understanding of the language in which the document is written.

37  Information flows in communication are important in analysing the impact of information. One of the most common errors in descriptions of the Internet is the suggestion that one should ‘go to’ a certain address. The reality is that the enquirer goes nowhere. The information in fact flows in a direction opposite to that suggested, in that the website is downloaded to the enquirer’s computer. This is perhaps an obvious and particularly egregious example of a misunderstanding of information flows, especially within the technological sense. The practice of law is entirely about information flows, and this is an important element in determining, for example, the culpability of a juror for extra-curial communication. See David Harvey, ‘The Googling Juror: The Fate of the Jury Trial in the Digital Paradigm’ (2014) New Zealand Law Review 203. The ‘information flows’ approach was developed by Professor Ian Cram. See Ian Cram, ‘Twitt(er)ing Open Justice? (Or Threats to Fair Trials in 140 Characters)—A Comparative Perspective and A Common Problem’ (unpublished paper delivered at Justice Wide Open Conference, City University London, 29 February 2012) see centre-for-law-justice-and-journalism/projects/open-justice-in-the-digital-era. I am indebted to Professor Cram for providing me with the paper that he presented at the City of London Conference and for his analysis of information flows. The full paper may be found at Justice-Wide-Open-Ian-Cram-Twitt-er-ing-Open-Justice.
38  The disclosure of Mayor of Auckland Len Brown’s affair with Bevan Chuang broke not via newspapers, radio or television but via the Whaleoil blog— Similarly the disclosure in October 2012 that Ministry of Social Development information kiosks were insecure was researched and exposed by Keith Ng on the Public Address blog
39  This demonstrates that many of the qualities that underlie information in the Digital Paradigm are interrelated.


Data in electronic format is dependent upon hardware and software. The data contained upon a medium such as a hard drive requires an interpreter to render it into human readable format. The interpreter is a combination of hardware and software. Unlike the paper document, the reader cannot create or manipulate electronic data into readable form without the proper hardware in the form of computers.40 Schafer and Mason warn of the danger of thinking of an electronic document as an object ‘somewhere there’ on a computer in the same way as a hard copy book is in a library. They consider that the ‘e-document’ is better understood as a process by which otherwise unintelligible pieces of data are distributed over a storage medium, are assembled, processed and rendered legible for a human user. Schafer and Mason observe that in this respect the document as a single entity is in fact nowhere. It does not exist independently from the process that recreates it every time a user opens it on a screen.41 Computers are useless unless the associated software is loaded onto the hardware. Both hardware and software produce additional information that includes, but is not limited to, metadata and computer logs that may be relevant to any given file or document in electronic format. This involvement of technology and machinery makes electronic documents paradigmatically different from ‘traditional documents’. It is this mediation of a set of technologies that enables data in electronic format—at its simplest, positive and negative electromagnetic impulses recorded upon a medium—to be rendered into human readable form. This gives rise to other differentiation issues such as whether or not there is a definitive representation of a particular source digital object. Much will depend, for example, upon the word processing program or internet browser used. 
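The dependence of stored data upon an interpreter can be shown in a few lines. The same five bytes on a storage medium become two quite different ‘documents’ depending on which decoding convention the software applies; the byte string below is an invented example of mine, not one drawn from the text.

```python
# Five bytes on the "medium". Read through a UTF-8 interpreter they form one
# word; read through a Latin-1 interpreter they form quite different text.
raw = bytes([0xC3, 0xA9, 0x74, 0xC3, 0xA9])

as_utf8 = raw.decode("utf-8")      # one interpreter's rendering
as_latin1 = raw.decode("latin-1")  # another interpreter's rendering

print(as_utf8)    # été
print(as_latin1)  # Ã©tÃ©
```

Neither rendering is the bytes themselves; each is a process of interpretation, which is precisely the point Schafer and Mason make about the e-document.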
The necessity for this form of mediation for information acquisition and communication explains the apparent fascination that people have with devices such as smart phones and tablets. These devices are necessary to ‘decode’ information and allow for its comprehension and communication. I made reference in the introduction to this chapter to the issue of ‘functional equivalence’ and perhaps the only way in which an electronic document may be seen as ‘functionally equivalent’ to a paper based document may be in the presentation of information in readable form. In the case of a Firm of Solicitors v The District Court Auckland,42 Heath J noted that section 198A of the Summary Proceedings Act 1957 (New Zealand) was designed to deal with a paper based environment but that now more often than not, information is stored primarily in

40  Burkhard Schafer and Stephen Mason, ‘The Characteristics of Electronic Evidence in Digital Format’ in Stephen Mason (gen ed), Electronic Evidence 3rd edn (London, LexisNexis Butterworths, 2012) 2.05.
41  ibid, 2.06.
42  Firm of Solicitors v The District Court Auckland [2004] 3 NZLR 748 at [110].

electronic form. He adopted a ‘functional equivalence’ approach to the issue of the execution a search warrant. With respect I consider that ‘functional equivalence’ is an unhelpful concept, although to make the statute work in 2004, it was probably the only option available to Heath J. Functional equivalence can relate only to the end product and not to the inherent properties that underlie the way in which the material or information is created, stored, manipulated, re-presented and represented. Furthermore, a careful analytical process involving an examination of the deeper function of a pre-digital system is required to determine whether or not the digital system in fact provides identical functionality.43 In the context of the New Zealand Search and Surveillance Act 2012 it is interesting that the complexity of electronic information is something that is capable of being searched for or ‘seized’ yet is described as an ‘intangible’ thing. The ultimate fruit of the search will be the representation of the information in comprehensible format, but what is seized is something paradigmatically different from mere information, the properties of which involve layers of information. It is clear that the legislation contemplates the end product—the content contained in the electronic data—yet the search also involves a number of aspects of the medium as well. In the ‘hardcopy’ paradigm the medium is capable of yielding information such as fingerprints or trace materials, but not to the same degree of complexity as its digital equivalent. Similarly, the complexities surrounding e-discovery demonstrate that an entirely different approach is required from the traditional means of discovery.44 Although Marshall McLuhan intended an entirely different interpretation of the phrase, ‘the medium is the message’,45 it is a truth of information in digital format. g.  
g.  Format Obsolescence

In the print and scribal paradigms, information was preserved as long as the medium remained stable. The Dead Sea Scrolls and early incunabula from the print paradigm provide examples. But, as I have observed above, no intermediate technology was required to comprehend the content. The quality of continuing disruptive change means that not only are digital technologies and communications protocols in a state of change, but within many of the programs that are commonly used, new versions become available with enhancements and often new formats. This is further complicated by the unwillingness of software developers and distributors to continue support for products

43  For a full discussion of the concept of functional equivalence and the care that is needed in using the term see ch 3.
44  E-discovery demonstrates the interrelationships of Digital Paradigm qualities in that one of the basic problems within the discovery process is not only the ‘non-coherence’ of information, but the volume of information spread across a vast array of platforms.
45  McLuhan, above n 1.


The Analytical Framework

that have been replaced by new versions. The problem of content access is further exacerbated when earlier formats for content or data storage are replaced, and therefore the information stored in those earlier formats cannot be accessed. For example, Microsoft Word uses the file extension .doc for its native file format. However, the reality is that the .doc extension encompasses four distinct file formats:

1. Word for DOS;
2. Word for Windows 1 and 2; Word 4 and 5 for Mac;
3. Word 6 and Word 95 for Windows; Word 6 for Mac;
4. Word 97 and later for Windows; Word 98 and later for Mac.

Most current versions of Word recognise the fourth iteration of the .doc format, which is a Binary File Format implementing OLE (Object Linking and Embedding) structured storage to manage the structure of the file format. OLE behaves rather like a hard drive system and is made up of a number of important key components. Each Word document is composed of ‘big blocks’ which are almost always (but do not have to be) 512 byte chunks; thus a Word document’s file size will be a multiple of 512. ‘Storages’ are analogues of the directory on a disk drive, and point to other storages or ‘streams’ which are similar to files on a disk. The text in a Word document is always contained in the ‘WordDocument’ stream. The first big block in a Word document, known as the ‘header’ block, provides important information as to the location of the major data structures in the document. ‘Property storages’ provide metadata about the storages and streams in a doc file, such as where it begins, its name and so forth. The ‘File information block’ contains information about where the text in a Word document starts and ends, what version of Word created the document, and other attributes.

Microsoft has published specifications for the Word 97-2003 Binary File Format but it is no longer the default, having been replaced by the Office Open XML standard, indicated by the newer .docx extension. The problem is that if one wishes to open a Word document created by a version preceding Word 97 in a recent iteration of Word (say Word 2010) it will be blocked. There is a work-around but that involves a level of complexity that may discourage average users.46 Although there may be converters available for older formats, again this adds an additional layer of complexity for those who are not adept at computer use.47

46  File | Options | Trust Center | Trust Center Settings | File Block Settings allows the user to change the settings to enable opening files in earlier formats.
Note that, if the user wants completely unrestricted access to these files, he or she needs to clear the check boxes for them so that whichever of the radio buttons is selected at the bottom does not apply to them. Alternatively, the user can leave them checked and opt to allow opening them in Protected View, with or without allowing editing.
47  Some converters no longer work in new operating system environments. For example, converters designed for the Windows 32-bit system may be rendered obsolete by virtue of the fact that Windows 7 64-bit uses a different file path structure and registry keys.
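The 512-byte ‘big block’ structure described above can be illustrated briefly. The following Python sketch (my illustration, not drawn from the Word specification text itself; `inspect_doc` is a hypothetical helper) checks a file for the standard OLE compound-file signature and reads the sector-size field from the header block:

```python
import struct

# The published magic bytes that open every OLE compound file
OLE_SIGNATURE = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

def inspect_doc(path):
    """Report whether a file looks like a legacy binary .doc (OLE) container."""
    with open(path, "rb") as f:
        header = f.read(512)   # the compound-file header occupies the first 512 bytes
        f.seek(0, 2)
        size = f.tell()
    if len(header) < 512 or not header.startswith(OLE_SIGNATURE):
        return {"is_ole": False}
    # The sector shift is stored little-endian at offset 30; a value of 9
    # yields the usual 512-byte 'big blocks' mentioned in the text.
    sector_shift, = struct.unpack_from("<H", header, 30)
    return {
        "is_ole": True,
        "sector_size": 1 << sector_shift,
        "size_is_multiple": size % (1 << sector_shift) == 0,
    }
```

A forensic examiner or e-discovery tool performs essentially this kind of check before deciding how (or whether) a legacy document can be rendered.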



An additional layer of difficulty arises where there is a lack of interoperability with proprietary file formats from other data or text storage programs, and where those programs have been discontinued and are no longer available. This is a problem encountered particularly in e-discovery when historic documents are sought. In addition, there may be hardware difficulties where data may be stored on old media such as old floppy disks. Modern computers no longer include a floppy disk drive—indeed data is rarely if ever stored on such low capacity media—and only allow USB storage devices to be used. Because of the way in which information is encoded in the digital environment, the information exists, but is not available. Thus, the document is still ‘alive’ but is in a state of suspended animation until its content can be accessed. Format obsolescence does not challenge the concept of information persistence or endurance because aspects of the Internet enable that. It is, however, a subset of the necessity for technological mediation between stored digital data and its rendering in a comprehensible form.
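One practical response to format obsolescence is to triage stored files by their leading ‘magic’ bytes rather than by file extension, since the extension may be missing or misleading. The sketch below is purely illustrative (the signature table and the `identify` helper are my own, not drawn from any particular e-discovery tool); it distinguishes the legacy OLE container from the newer ZIP-based Office Open XML container:

```python
# Map well-known leading byte signatures to container formats.
SIGNATURES = {
    b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "OLE compound file (legacy .doc)",
    b"PK\x03\x04": "ZIP container (.docx and other Office Open XML)",
}

def identify(path):
    """Guess a file's container format from its first eight bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, label in SIGNATURES.items():
        if head.startswith(magic):
            return label
    return "unknown format (possibly obsolete or proprietary)"
```

Files falling into the ‘unknown’ bucket are precisely those most at risk of being ‘alive’ but inaccessible in the sense described above.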

iii.  User Associated Qualities

The final set of the three categories of qualities involves the way in which digital technologies provide opportunities for users to locate, acquire and process information. The first three qualities, which I have grouped together because they represent a continuum, are perhaps indicative of a cross-over between what could be considered technical qualities—something inherent in the technology—and qualities that are primarily user focused. The final quality relates to the way in which the Digital Paradigm enables information creation in a multiauthorial sense.

a.  Availability, Searchability and Retrievability of Information

As I have earlier suggested, these qualities are associated with persistence of information and the information dynamic, but what is important is that the Internet enables the information to come to the user. No longer does the user have to go to the information. This recognises one of the fundamental realities that characterises the nature of information within the Internet space, and one that must also be considered in terms of information and use generally: the concept of ‘information flow’ to which I have already made reference. The Internet enables and enhances the flow of information towards the user, rather than the user directing him or herself towards the information. This is recognised by the availability of information which, as I have suggested, is associated with information persistence. What is significantly different with the Internet is that the information is constantly available, 24 hours a day, 7 days a week, 365 days of the year. The Internet is always on—it is always ‘open’. Information availability is not restricted by time or the presence of a librarian and is impeded only if the site where the information is held is down for some reason.



One of the problems with information, both in the sense of persistence and availability, is finding out where it is. Before the Internet went ‘public’, when it was essentially a university or research-based tool, users developed means of locating information, of which Gopher was one example. The arrival of the World Wide Web resulted in the development of various search engines, of which Google has now become the dominant force. Searchability of information means that the vast library of the Internet can reveal its treasures as long as one is competent in the use of a search engine.48 Most users employ fairly basic search terms, but more sophisticated construction of search terms narrows the scope of the information sought and returns more precise results. The important point is that searchability enhances the availability of information. The third part of the trilogy, of course, is retrievability. This means that available information that has been located by a search is instantly retrievable and, importantly, the information flow is towards the user, rather than the reverse. I suggest that these are fundamental characteristics of the Internet.

Locating information in the largely unindexed mass of information has been a challenge for Internet users from the days when the Internet was being developed by academics and before it ‘went public’. The development of search facilities did not begin with Google. The need for a means of locating information in the delinearised sense was first propounded by Vannevar Bush49 and further developed by Gerard Salton, whose teams at Harvard and Cornell developed the SMART information retrieval system, and Ted Nelson, who coined the term ‘hypertext’ in 1963 and developed Project Xanadu in 1960. Early search engines such as Archie, Veronica, Jughead and Gopher were created in the early 1990s. Archie was originally named ‘Archives’ but was shortened.
The names of the subsequent search engines developed from associations with comic book characters. Archie addressed the scattered nature of data by combining a script-based data gatherer with a regular expression matcher for retrieving file names matching a user query. Essentially Archie became a database of filenames which it would match with users’ queries. The University of Nevada System Computing Services group developed Veronica, which served the same purpose as Archie but worked on plain text files. Soon another user interface, named Jughead, appeared with the same purpose as Veronica; both of these were used for files sent via Gopher, which was created as an Archie alternative by Mark McCahill at the University of Minnesota in 1991. The development of the World Wide Web resulted in a proliferation of early search engines such as Hotbot, AltaVista, Dogpile and Ask Jeeves, but Google emerged in the mid-1990s and now dominates Internet search technologies.
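Archie’s approach—a stored database of file names matched against user queries—can be suggested in a few lines. This Python fragment is purely illustrative (the `match` helper and the sample file names are invented), mimicking a regular-expression query against a list of gathered file names:

```python
import re

# A stand-in for Archie's gathered database of file names
filenames = ["report1992.txt", "archie-readme", "gopher_notes.txt"]

def match(query_pattern):
    """Archie-style lookup: return stored file names matching a regular expression."""
    pattern = re.compile(query_pattern)
    return [name for name in filenames if pattern.search(name)]

match(r"\.txt$")   # → ['report1992.txt', 'gopher_notes.txt']
```

The essential point is that the user queried an index of names, not the files themselves—a design choice that anticipates the index-based search engines discussed below.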

48  As a quality of digital information its searchability has provided an answer to locating relevant material out of a very large dataset using what are known as e-discovery tools which will be discussed in ch 7.
49  Bush, above n 28.



Search engines consist of three main parts. Search engine spiders follow links on the web to request pages that are either not yet indexed or have been updated since they were last indexed. These pages are crawled and are added to the search engine index (also known as the catalog). When you search using a major search engine you are not actually searching the web, but are searching a slightly outdated index of content which roughly represents the content of the web. The third part of a search engine is the search interface and relevancy software. For each search query search engines typically do most or all of the following:

1. Accept the user-inputted query, checking to match any advanced syntax and checking to see if the query is misspelled in order to recommend more popular or correct spelling variations.
2. Check to see if the query is relevant to other vertical search databases (such as news search or product search) and place relevant links to a few items from that type of search query near the regular search results.
3. Gather a list of relevant pages for the organic search results. These results are ranked based on page content, usage data, and link citation data.
4. Request a list of relevant ads to place near the search results.50

Search engines are essential for the proper functioning of the Internet. Without them the information that is located in servers on the network would be largely inaccessible unless the user was aware of the location of that information. Thus steps to limit or restrict the operation of search engines, as was the case in the ‘right to be forgotten’ case of Google Spain SL, Google Inc v Agencia Española de Protección de Datos (AEPD), Mario Costeja González,51 which will be discussed in chapter 10, have a significant and detrimental effect upon the overall utility of the Internet.
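The index-then-query flow set out above can be caricatured in a few lines of Python. This is a toy sketch under obvious simplifications—no ranking, spelling correction or ad placement, and the `crawl` and `search` names are invented—but it shows why a query interrogates a catalog rather than the web itself:

```python
from collections import defaultdict

# The "catalog": an inverted index mapping each term to the pages containing it
index = defaultdict(set)

def crawl(pages):
    """Stand-in for the spider: index whatever page content it is handed."""
    for page_id, text in pages.items():
        for term in text.lower().split():
            index[term].add(page_id)

def search(query):
    """Return pages containing every query term (a crude version of step 3)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]   # intersect: all terms must appear
    return results

crawl({"p1": "digital information law", "p2": "internet law"})
search("law")   # → {'p1', 'p2'}
```

Note that `search` never touches the pages themselves—only the previously built index—which is exactly why search results can be ‘slightly outdated’ relative to the live web.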
b.  Participation and Interactivity

This final quality of the digital and Internet environments that differs from the print paradigm is that of interactivity, and I have already made reference to this in my discussion about dynamic information. Reading from print media is essentially a passive activity; any interactivity may be on the part of the reader making notes or writing down thoughts or concepts that develop as a result of the reading process. In the digital environment the reader may interact with the text as it


50  Aaron Wall, Search Engine History 2006–2015.
51  (2014) European Court of Justice C-131/12 12&language=EN. An alternative URL is here: en&nat=or&oqp=&dates=&lg=&language=en&jur=C%2CT%2CF&cit=none%252CC%252CCJ%25 2CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252 Cfalse%252Cfalse&num=C-131%252F12&td=%3BALL&pcs=Oor&avg=&page=1&mat=or&jge=&fo r=&cid=415122. The length of the URL for this reference demonstrates the need for search engines. It would be virtually impossible for this URL to be memorised much less transcribed accurately into the address section of a browser. The Internet (like all computer-based systems) is unforgiving of error.



is presented. In this respect the acquisition of information in the digital environment becomes associative and non-linear. In some respects participation in the context of social interaction is associated with online disinhibition and with the information dynamic. But the Internet enables a greater degree of participation in dialogue and discourse than might earlier have been the case. In the pre-digital paradigm, participation within a discussion or engagement with an issue may only have been available through the ‘letters to the editor’ column or perhaps, if one were so motivated, pamphleteering. The Internet now enables immediate participation within a debate and the ability to share one’s thoughts through the use of blogs, Twitter, Facebook and other forms of social media. Furthermore, the ability to participate, engage in debate, seek out information and engage with others is probably the greatest opportunity yet to embark upon a form of participatory democracy. In a global sense, that mirrors the Athenian form of participation, and it may even be the first time that the community has had such an opportunity to engage in this way. The quality of participation is driving many governments towards considering on-line voting, recognising that the Internet enables an opportunity for greater engagement by the community with the political system. It doesn’t stop there. The participatory possibilities of the Internet could well mean that in the future juries will hear trials on-line rather than being physically present in a court room.

B.  Some Observations

Cumulatively these various qualities have a significant effect upon our relationships with information—our expectations and use of information, how we process information and communicate it. These expectations in turn have an impact upon the values we place upon information and some of the behaviours associated with information and information technology use. One example may be found in what I call Dissociative Enablement or the Online Disinhibition Effect.

i.  Online Disinhibition

In an article entitled ‘The Online Disinhibition Effect’ John Suler52 examines the willingness of some people to self-disclose or act out more frequently or intensely online than they would in person. He suggests that there are some six factors that interact with one another to create this online disinhibition effect. He observes that often people say and do things in cyberspace that they wouldn’t ordinarily say and do in the face-to-face world. This online disinhibition effect can work in two possible directions. One is benign disinhibition, where people share very personal things about themselves, revealing secret emotions, wishes and fears.

52  John Suler, ‘The Online Disinhibition Effect’ (2004) 7 Journal of Cyberpsychology and Behaviour 321.



Toxic disinhibition, on the other hand, involves the use of rude or offensive language, harsh criticisms, anger, hatred, theft and threats. People may visit the dark underworld of the Internet, involving pornography, crime and violence, that they would never explore in the ‘real world’. Benign disinhibition may be indicative of an attempt to better understand and develop oneself—a form of working through or self-actualisation. On the other hand toxic disinhibition may simply be a blind catharsis, a form of repetition compulsion and an acting out of unsavoury needs without any personal growth at all.

Suler examines possible causes for online disinhibition and asks what elements of cyberspace lead to the weakening of psychological barriers that block hidden feelings and needs. He identifies a number of factors. The first is that of ‘dissociative anonymity’. This is one of the principal factors that create the disinhibition effect: people have an opportunity to separate their actions online from their ‘in person’ lifestyle and identity. As a result they feel less vulnerable about self-disclosing and acting out. In effect, the online self becomes a compartmentalised self. In the case of a person demonstrating toxic disinhibition, responsibility for expressed hostilities or other deviant actions can be averted, almost as if superego restrictions and moral cognitive processes have been temporarily suspended from the online psyche.

An aspect of dissociative anonymity, yet in some respects separate from it, is that of ‘invisibility’. In text-driven online environments people can’t see one another. This ‘invisibility’ gives people the courage to go places and do things which they would not otherwise do. Although it overlaps with anonymity, there are some important differences. In the text communication of email, chat, instant messaging and blogs, people may know a great deal about each other’s identity and lives as revealed in this environment.
However they still cannot see and hear one another, and the opportunity to be physically invisible amplifies the disinhibition effect. People don’t have to worry about how they look or sound when they type a message. Visual physical cues—a frown, a shaking of the head, a sigh, a bored expression—are no longer present and cannot act as inhibitors to behaviour. Suler sums it up saying ‘text communication offers a built in opportunity to keep one’s eyes averted’.53

Another quality identified by Suler is that of ‘asynchronicity’: in email, message boards and other forms of social media people don’t interact with one another in real time and thus do not have to deal with another person’s immediate reaction, which acts as a disinhibitor. Indeed, asynchronous communication may be seen as a form of running away after posting a message that is personal, emotional or hostile, and Kali Munro, an online psychotherapist, describes this as participating in an emotional hit and run.54

‘Solipsistic introjections’ is another aspect of the absence of face-to-face cues. As part of this aspect people may feel that their minds have merged with the mind

53  ibid, 322.
54  K Munro, unpublished observations 2003, cited in Suler, above n 52, 323.



of the online companion. Reading another person’s message may be experienced as a voice within one’s head, as if that person’s psychological presence and influence have been assimilated or introjected into one’s psyche. Thus when reading a message from another, one might ‘hear’ the online companion’s voice using one’s own voice. Suler suggests that people may sub-vocalise as they read, projecting the sound of their own voice into the other person’s text. This conversation may be experienced unconsciously as talking to or with oneself, which encourages disinhibition because talking to oneself feels safer than talking with others. Suler points out that ‘for some people, talking with oneself may feel like confronting one’s self which may unleash many powerful psychological issues’.55

‘Dissociative imagination’ is a force that possibly magnifies disinhibition. Consciously or unconsciously people may feel that the imaginary character that they have ‘created’ as an online persona exists in a different space, and that the online persona, along with the online others, lives in a make-believe dimension separate and apart from the demands and responsibilities of the real world. Emily Finch, an author and criminal lawyer studying identity theft in cyberspace, suggests that some people see their online life as a kind of game with norms and rules that do not apply to everyday living. Once they turn off the computer and return to their daily routine they believe they can leave behind that game and their game identity.56 Fantasy game environments provide a good example of the effect of dissociative imagination, where a user consciously creates an imaginary character that can influence many dimensions of online living. Difficulties may arise for those who have problems in distinguishing personal fantasy from social reality.
Anonymity amplifies the effect of dissociative imagination, but dissociative imagination and dissociative anonymity usually differ in the complexity of this dissociated sector of the self.

Within the online environment there is something of a democratisation that takes place, with a ‘minimisation of status and authority’. In the real world authority figures express their status and power in dress, body language and the trappings of their environmental settings. The absence of these, together with the loss of a person’s elevated position, means that they have less of an effect upon that person’s online presence and influence. On the Internet everyone has an equal opportunity to voice him or herself. The Internet provides a level playing field, and Internet philosophy holds that everyone is an equal and that the purpose of the Internet is to share ideas and resources among peers. This atmosphere and philosophy contribute to the minimisation of authority. Most people who would normally be reluctant to say what they really think as they stand before an authority figure are faced online with what is effectively a peer relationship, where the appearances of authority are minimised and people are more willing to speak out and misbehave.

55  Suler, above n 52, 323.
56  E Finch, unpublished observations cited in Suler, above n 52, 323.



However, the online disinhibition effect is not the only factor that determines how much people are prepared to self-disclose or act out in cyberspace. Individual differences play a role, such as the intensity of a person’s underlying feelings, needs and drives. People with histrionic styles tend to be very open and emotional, whereas compulsive people are more restrained. What must be recognised is that online disinhibition does not really reveal an underlying true self, but should instead be understood as the person shifting, while online, into an intrapsychic constellation that may be dissociated from the in-person constellation. It is an aspect of the total person. Thus when a person is shy in person but outgoing online, neither self-presentation is truer than the other; they are two dimensions of the same person, each revealed within a different situational context.

A further aspect of this disinhibition is what I refer to as dissociative enablement—where dissociative effects actually enable behaviours that would be more difficult in the real world. This is inherent within Internet technologies, which enable an Internet user to engage in cyber-stalking, to embark on a discussion utilising language and a tone that one would be reluctant to use with the correspondent or auditor face to face,57 and enable the Internet criminal to commit fraud or other forms of Internet crime without having to confront the victim. Dissociative enablement or disinhibition enables behaviours to take place on the Internet that might not otherwise take place within the physical context. Dissociative enablement or disinhibition, as I have suggested above, has an impact on the nature and quality of discourse within the Internet space. In many respects, it enables what perhaps may be a more robust form of discourse than might otherwise be the case, and inevitably becomes wrapped up in a vigorous and at times hysterical discussion about cyberbullying, online hate speech and censorship.
Whether or not that is a good thing is not for me to say. Yet it is behaviour that this quality enables that cannot be ignored, and, of course, it is ultimately tied in with the delivery of content and the nature and quality thereof.

C.  The Internet and How We Think

The discussion about online disinhibition gives an example of how the qualities of the new technology enable certain behaviours and magnify others. It should be remembered, as I have already suggested, that there is an overlap or merger between some of the qualities that I have identified. The quality of Information Persistence exists in tension with the dynamic nature of information, with format obsolescence

57  The disclosures in New Zealand made by the soi-disant group ‘Roastbusters’ on Facebook about their sexual exploits with young women is an example of dissociative enablement. In the ‘real world’ they may have communicated with a limited number of individuals within their peer group. Now they communicate their behaviour—and the consequent embarrassment of their victims—to the world. For example see ‘Roast Busters: Over 63k call for PM to take action’ NZ Herald (11 November 2013). This article is one of a large number published since the story broke on TV3 news on 3 November 2013.



and digital information non-coherence. Certainly the Internet and its protocols give greater emphasis to information persistence than to the other competing qualities, but it must be recognised that this example of a ‘qualities tension’ means that any analysis of the impact of Digital Paradigm qualities must be a nuanced one. On the other hand some approaches may only involve the application or consideration of some of the qualities. Internet-based analysis is not so likely to involve format obsolescence, which is very likely to arise in a consideration of e-discovery approaches.

In addition there is the wider issue of the effect the Internet may be having upon the way that we think. Will delinearisation change the way that we think? It certainly has an impact upon the way in which we deal with information, and upon our expectations of it. But the question becomes one of whether or not the internet changes us forever. Underlying this theory is the concept of neuroplasticity—the ability of the brain to adapt to and learn from new stimuli. The concept of neuroplasticity was picked up by Nicholas Carr in his book The Shallows: How the Internet is Changing the Way we Think, Read and Remember.58 His book, based upon an earlier article that appeared in the Atlantic, has as its thesis that the internet is responsible for the dumbing down of society, based upon the way in which our minds respond both to the wealth of information and to its availability.

The neuroplasticity argument is advanced by Susan Greenfield,59 who believes the web is an instant gratification engine, reinforcing behaviours and neuronal connections that are making adults more childlike and making children hungry for information that is presented in a super-simplistic way but in fact reduces their understanding of it. Greenfield is of the view that the web spoon-feeds us things to capture our attention.
This means we are learning to constantly seek out material that stimulates us, and our plastic minds are being rewarded for our ‘quick click’ behaviour. We want new interactive experiences and we want them now. This view is disputed by Aleks Krotoski,60 who first observed that there is no evidential support for Greenfield’s propositions, which presuppose that once we used the web we would forever be online and never log off again. According

58  Nicholas Carr, The Shallows: How the Internet is Changing the Way we Think, Read and Remember (London, Atlantic Books, 2010). See also Nicholas Carr, ‘Is Google Making Us Stupid’ Atlantic Magazine (1 July 2008) 59  See especially Susan Greenfield, ‘Living On-line is Changing Our Brains’ New Scientist (3 August 2011) For this and for her assertions of ‘internet addiction’ she has been criticised by Dr Ben Goldacre for claiming that technology has adverse effects on the human brain, without having published any research, and retracting some claims when challenged. Goldacre suggested that ‘A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to other scientists, in a scientific paper’; Ben Goldacre, ‘Serious Claims Belong in a Serious Scientific Paper’ The Guardian (21 October 2011) bad-science-publishing-claims. 60  Aleks Krotoski, Untangling the Web: What the Internet is Doing to You (London, Faber, 2013). Presentation by Aleks Krotoski at the Writers and Readers Festival, Auckland, 19 May 2013. Personal discussion between the author and Aleks Krotoski, 19 May 2013.



to Greenfield, says Krotoski, we become connected to our computers and other devices in a co-dependent, exclusive, almost biological way, ignoring where, how and why we are connecting. Krotoski, for example, disputes internet addiction, internet use disorder and neurological rewiring. Like Krotoski, William Bernstein61 rejects Carr’s thesis. Bernstein points out that neuroplasticity is a phenomenon well known to brain researchers. He then goes on to ask and answer Carr’s question:

Does the Web rewire your brain? You bet; so does everything you actively or passively experience. Literacy is possibly the most potent cerebral rewire of all; for five thousand years humans have been reassigning brain areas formerly needed for survival in the natural environment to the processing of printed abstractions. Some of this commandeered real estate has almost certainly been grabbed, in its turn, by the increasing role of computers and the Internet in everyday post-industrial life. Plus ça change.62

Bernstein then goes on to examine Carr’s theory that Internet use decreases concentration on the matter at hand, emphasising the use of hyperlinks. Bernstein accepts that we have better information retention if it is placed in front of us on one page rather than chased through a maze of hypertext links. On the other hand, he observes, real life rarely supplies us with precisely the information that we need in one document. Those skilled at following informational threads through different sources will succeed more often than those spoon-fed information.63 Bernstein finally confronts Carr’s argument in this way:

Carr’s thesis almost automatically formulates its own counterargument: Life in the developed world increasingly demands non-rote, nonlinear thought. Shouldn’t learning to navigate hypertext skilfully enhance the ability to make rapid connections? Shouldn’t such abilities encourage the sort of nonlinear creative processing demanded by the modern work environment, and make us smarter, more productive, and ultimately more autonomous and fulfilled… If the Web really is making Americans stupid, then shouldn’t citizens of more densely wired nations, such as Estonia, Finland and Korea, be hit even harder? The question answers itself.64

In some respects Carr and Greenfield are using the ‘low-hanging fruit’ of technological fear65 to advance their propositions. Krotoski’s rejection of those views is, on the other hand, a little too absolute, and in my view the answer lies somewhere in between. The issue is a little more nuanced than whether or not the Internet is dumbing us down or whether or not there is any evidence of that.


61  See Bernstein above n 36, esp at 323 and following.
62  ibid, 323–24.
63  ibid, 324.
64  ibid, 324–26.
65  Sometimes referred to as the Frankenstein Complex, a term coined by science fiction writer Isaac Asimov and used in his robot novels—see for example Isaac Asimov, I, Robot (New York, Gnome Press, 1950) and The Rest of the Robots (New York, Doubleday, 1964).


The Analytical Framework

My argument is that the impact of the internet lies in the way in which it redefines the use of information and the way we access it, process it, use it, respond to it and our expectations of it and its availability. This may not seem to be as significant as Carr’s rewiring or Greenfield’s neuroplasticity but it is, in my view, just as important. Our decision-making is based upon information. Although some of our activity could be termed responses to stimuli, or indeed it might be instinctive, most of the stimuli to which we respond, if not all of them, can in fact be defined as information. The information that we obtain when crossing the road comes from two of our senses—sight and hearing—but in many other of our activities we require information upon which we may deliberate and to which we respond in making decisions about what we are going to do, buy and so on. And paradigmatically different methods of information acquisition are going to change the way in which we use and respond to information. There are other changes taking place that arise from some of the fundamental qualities that underlie new digital communications technologies—and all communication technologies, from the printing press through the wireless and radio to television and into the digital paradigm, have such properties or qualities underlying them and attaching to them. It is just that digital systems are so fundamentally different in the way in which they operate, and in their pervasive nature, that they usher in a new paradigm.66

IV. Conclusion

Once there is a recognition of the fact that there are properties that underlie an information technology that influence the way in which we address content, and that will govern or moderate information activities, we begin to understand what Marshall McLuhan meant by his aphorism ‘The medium is the message’. Understanding the medium and the way it governs and moderates information activities allows us to understand the impact of the digital communications technologies—a convergence of everything that has gone before and the way in which it redefines the use of information and the way we access it, process it, use it, respond to it and our expectations of it and its availability. In some respects the paradigm shift can be seen in an inter-generational context. Marc Prensky, an American educator, spoke of the issues confronting education in the digital paradigm.67 He suggested that there was a growing culture


66  See above for the discussion of the qualities of digital information technologies.
67  Marc Prensky, ‘Digital Natives, Digital Immigrants’ (2001) 9 On the Horizon 1, www.emeraldinsight.com/journals.htm?issn=1074-8121&volume=9&issue=5&articleid=1532742&show=pdf. For a brief introduction to the development of Prensky’s theory see Wikipedia ‘Digital Native’.



of people who had grown up knowing nothing but the Internet, digital devices and seeking out information online. This group he called ‘Digital Natives’—those born after 1990. He contrasted this class with ‘Digital Immigrants’—those whose information-seeking habits and uses had developed before the advent of the Internet. Digital Immigrants used digital communications systems but their thought processes were not as committed to them as those of Digital Natives. Although they could speak the same language as the Digital Natives, they had a different accent that derived from an earlier information paradigm. Digital Immigrants have an approach to information that is based upon sequential thinking, single tasking and limited resources to enable communication, all underpinned by the fixity of text. For the Digital Immigrant text represents finality. A book is not to be reworked, and the authority of a text depends upon its finality.68 Information is presented within textual constraints that originate in the Print Paradigm.

Digital Natives inhabit a different information space. Everything is ‘multi’—multi-resource, multi-media, multi-tasking, parallel thinking. Information for the Digital Native may in its first instantiation be text but it lacks the fixity of text, relying rather on the dynamic, fluid, shifting qualities of the digital environment. Text does not mean finality. Text is malleable, copyable, moveable and text, like all other forms of information in the digital space, is there to be shared. In the final analysis, the differences between Digital Immigrants and Digital Natives can be reduced to one fundamental proposition—it is all about how we process information.
For Digital Natives the information resources are almost without limitation and the Digital Native mind shifts effortlessly between text, web-page hypertext links, YouTube clips, Facebook walls, Flickr and Tumblr, the terse, abbreviated tweet or text message, and all of it not on a desktop or a laptop but on a handheld smartphone. But there is more to this discussion than the content that media convergence, enabled by digital technologies, provides. Content, as McLuhan said, is ‘the juicy piece of meat carried by the burglar to distract the watchdog of the mind’.69 It is as important to understand how it is that digital information technologies work. We need to understand the underlying qualities or properties of digital technologies to understand the way in which they drive our information uses, activities and behaviours. This chapter has identified those qualities and how they collectively underlie paradigmatic change in the nature of information and its communication. Armed with this analytical background we may now move to consider aspects of law and law making in this new environment. I offer a caution. The new Information Paradigm will not impact on all legal doctrine. But some aspects of legal rules and law making are affected and it is to these that we shall now turn.

68  Ronald Collins and David Skover, The Death of Discourse (Durham NC, Carolina Academic Press, 2005) xix. For a more detailed discussion of the difference between fixed and digital texts see Ronald Collins and David Skover, ‘Paratexts’ (1992) 44 Stanford Law Review 509.
69  McLuhan, above n 1.

3 The Transition to the Digital Paradigm—Analogies and Functional Equivalence

I. Introduction

This chapter considers some of the difficulties in dealing with a communications technology that is paradigmatically different from what has gone before. It considers the way in which lawyers have tried to use analogy, using examples drawn from an earlier paradigm to make a common rule apply to the new paradigm. Often this results in an uncomfortable fit. That is not to say that the use of analogy between paradigms should be discarded. Rather, it is a method of reasoning that has to be applied with considerable care, and only after a proper consideration of the comparators has taken place. The chapter also considers the use of the term ‘functional equivalence’, which is used to suggest that there is a parallel between the earlier communications paradigm and the electronic world. The term has been used primarily in the field of e-commerce to validate electronic transactions. While this might have been a comfortable way of adapting old principles to the new paradigm, once again it makes a very uncomfortable fit if applied uncritically and without care, for it ignores the nature of the technology and its transformative character.

II. A Historical Perspective

All of our rules and principles have developed within an earlier communications paradigm that started with the written recording of information. That form of communication has been present for thousands of years and although to us it may seem to be unremarkable it has, over the centuries, managed to remain



contentious. Information recorded in writing has not always been authoritative in and of itself.1 Writing is a form of code—the representation of human speech in a system of agreed or accepted symbols which integrates cognition, technology and language, and thus demands multiple forms of analysis.2 From early times this form of code as recorded language or information had three dramatic effects. First, the written record replaced the need for memorisation of information. The written record became written memory. Second, those who understood the written record had significant advantages over those who could not. Literacy, even at this early stage, gave real meaning to the phrase ‘information is power’. Third, such a recording system probably served a role in the formation of centralised society and economy in the shape of communities that later became city states.3 Today we tend to assume that the written record can be more reliable than the ephemeral spoken word. A contemporaneous document will generally be seen as more reliable as a record of an event than a recollection from memory. Clanchy suggests that this assumption arises as a result of education in reading and writing from an early age, together with the constant use of written materials.4 This preference persists despite inherent problems with print. O’Donnell suggests that there were two factors that meant that print gradually became accepted, notwithstanding perceived drawbacks. One was that readers are prepared to accept an admixture of errors. Even today, with the most scrupulous copy-editing, grammatical and spelling errors still creep in. The value of the book is not degraded by such errors. Certainly, manuscript books were full of them and users were well used to deciphering imperfect books.
The second was that print introduced a system of communication that was so wide-ranging, fast and powerful, and that provided all the advantages associated with the communication of information, that any defects in the technology could be remedied. Proof reading, which would have been labour intensive in a scriptorium, was cost-effective in a print shop and essential if what was being printed were stock prospectuses where large sums of money were at stake.5 The history of the development of printing as a means of communication is well embedded and has been the subject of a substantial literature. The seminal work of Elizabeth Eisenstein,6 although not without its critics, still commands

1  See for example MT Clanchy, From Memory to Written Record: England 1066–1307 3rd edn (Chichester, Wiley Blackwell, 2013) 295 on the investiture crisis and the commentary on writing that Plato attributes to Socrates: Plato, Dialogues of Plato, The Phaedrus trans Benjamin Jowett (Project Gutenberg, 2013).
2  Julia MH Smith, ‘Writing in Britain and Ireland c 400 to c 800’ in Clare A Lees (ed), The Cambridge History of Early Medieval English Literature (Cambridge, Cambridge University Press, 2013) 20.
3  William Bernstein, Masters of the Word: How Media Shaped History from the Alphabet to the Internet (London, Atlantic Books, 2013) 22.
4  Clanchy, above n 1, 295.
5  James J O’Donnell, The Avatars of the Word (Cambridge MA, Harvard, 2000) 80–81.
6  Elizabeth Eisenstein, The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early Modern Europe, 1 Vol (Cambridge, Cambridge University Press, 1980).



respect. But one thing that is clear is that print became the primary way in which information was conveyed, other than orally, until the development of the telegraph in the early nineteenth century. The telegraph enabled messages—albeit encoded—to be conveyed over long distances instantaneously. The operation of this system depended upon the existence of an available infrastructure and the skill to encode and decode messages, which were then communicated in written or oral form. Other forms of point-to-point communication developed, such as the telephone—a one-to-one instantaneous communication system over a distance—and wireless radio, followed by ‘radio with pictures’ in the form of television. One feature that is common to all of these models of communication is that they depend upon a significant infrastructure to operate. Although telegraphy and telephony may be seen as a ‘one-to-one’ form of communication, behind those systems lie large provider organisations. Printing, radio and television demonstrate a very centralised form of communication in that information is created within one location and is sent to a large audience—a one-to-many model—and radio and television, absent the rise of sound and video recorders, may be seen as an ‘appointment’ form of communication where the listener or viewer must be present within view or hearing of the means of signal transmission.7 In those respects, radio and television are throwbacks to the oral form of communication, although with television the visual sense is engaged and the communication experience is therefore far more absorbing. Nevertheless the centralised distribution model has characterised the mass communication of information. A further characteristic of these communications technologies was that, because of their monolithic nature and the need for the deployment of centralised technologies, the control of information distribution remained in a few hands.
Certainly books were published on a wide variety of topics, but behind the printing of every book or newspaper were commercial imperatives that limited the type of content available. Although there were rather primitive equivalents of the ‘printing press’, like the typewriter, carbon paper and latterly Gestetner machines, the reach of such means of communication was very limited. The need for technological infrastructure made the copying of content difficult. The integration of the magnetic tape recorder with a radio receiver allowed broadcast radio (and probably more importantly music) to be recorded, and the advent of the videocassette recorder, which enabled time-shifting of television programmes, allowed freedom from the tyranny of appointment viewing—on the very cusp of the beginning of the Digital Paradigm and the development of home computers. As was the case with print, these other forms of communication, especially television, have been criticised and condemned for the damage that they might do

7  This model is being challenged by a multitude of on-demand services that even include arguments before the United Kingdom Supreme Court. Lord Neuberger made the observation ‘Now justice can be seen to be done at a time which suits you’: Supreme Court News Release, 5 May 2015.



to society, much in the same way that criticism was directed against the development of writing and print. But habits can change very quickly, especially when that change has been driven by technology. Within the last 50 years the process of writing has been changed by photocopiers, word processing and email. The older ways of making multiple copies, employing typewriters and carbon paper, are no longer present. Typewriter manufacturers have gone out of business, save in India where there is still a demand for this form of technology in the civil service. Postage stamps, although still used, no longer support postal services in the way that they did, and postal services are going through a period of retrenchment and reductions in delivery. Yet these older forms of reproduction and written communication are the products of the nineteenth century. The universality of printing and word processing, assisted in the Western world by a universal education system, means that we take writing and reading for granted. We forget, unless we read history in a specialised way, how much business once was carried on without them. It is against this communications paradigm of written information, inextricably associated with a medium, that many of our assumptions about communication have developed. Despite the availability of television and radio, print was the primary means of recording and preserving information. What we forget, in our uncritical acceptance of this form of communication and information recording, are the technological realities that underlie it. And the Digital Paradigm has introduced an entirely new, machine-mediated form of information creation. Print freezes the text on paper once and for all. The qualities of fixity and standardisation mean that the reader can be sure that the same text will be found on the same page of any particular book.
Variations may occur from edition to edition but by and large print sets text in an immutable form on a permanent or semi-permanent medium. The text will last as long as the medium upon which it is printed or written. The Digital Paradigm overturns that assumption, for text created on a computer returns us to a state of textual instability.

III. Digital Writing

Let us reflect for a moment upon the written word. Writing, as I have observed, is a form of encoded representation in and of itself, and conventions of expression and language determine how it is transcribed in handwriting or typed text upon a page. If one considers the keyboard of a typewriter, which is largely emulated in the QWERTY computer keyboard, the choices for what appeared on the page were limited to what was on the keyboard. The keyboard powered the arms (or golf ball) that carried the type, and transcription was thus limited. And once the key hit the ribbon and the letter was transferred to paper, the text was fixed. Now consider the process that takes place in transcribing text using a computer. Although the process looks the same on the screen as the typewriter transcription



there is a world of difference between the two. The computer works on a binary system in which a group of eight binary digits (1s and 0s) forms a unit known as a byte. The original ASCII8 encoding used seven of those bits, allowing 128 combinations to stand for numbers, letters and punctuation marks. The problem is that these 128 characters may be sufficient for English but not, say, for French, German or Scandinavian languages, which use the Roman alphabet but which are also characterised by the use of accents. When IBM developed the personal computer in the early 1980s it used the eighth bit to double the character set to 256, with a number of marked vowels and a few Greek letters, although the Scandinavian requirements were omitted. This 256-character variant is commonly known as extended ASCII. When one presses a key or a series of keys on a computer keyboard, the ‘letter’ associated with the key is transformed into a series of binary electronic impulses. These impulses are temporarily stored in the computer’s random access memory or RAM. I emphasise the word ‘temporarily’ because if the electricity or battery power to the computer is lost, the text is lost as well. The data stored in RAM can, however, be saved to a more permanent medium like a hard drive or some other form of storage. But the data is still in binary form. It is still in the form of electronic impulses. It cannot be ‘read’ by the human eye in this form and requires computer technology to ‘translate’ the electronic impulses back into familiar text. What has happened is that various interpreters built in to the computer ‘translate’ the text into a form that the computer can process. The text is converted into a series of 1s and 0s, which is machine code. Thus, digital information or text creation systems interpose a number of technical processes between the creator of the information and the medium or media upon which it is recorded.
Where scribal or printed language was one form of encoding, now a number of encoding systems come into play over which the creator has no control and which, as likely as not, he or she would be unable to understand. All information in digital form, be it digital video, the audio from a podcast, a web page, a word processing document or a digital TV signal, is ultimately a stream of 1s and 0s.
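The encoding chain described above can be made concrete in a few lines of Python. This is a minimal illustrative sketch only: the sample word, the variable names and the choice of the modern UTF-8 encoding are assumptions for the purposes of demonstration and do not come from the text.

```python
# Illustrative sketch: from 'letters' to the stream of 1s and 0s.

text = "café"  # the accented 'é' falls outside the original 7-bit ASCII set

# Each character has a numeric code; 7-bit ASCII covers codes 0-127 only.
for ch in text:
    code = ord(ch)
    label = "ASCII" if code < 128 else "beyond 7-bit ASCII"
    print(ch, code, format(code, "08b"), label)

# Saved to disk or sent over a network, the text is simply a stream of bytes.
raw = text.encode("utf-8")  # modern systems commonly use UTF-8
print(raw)                  # the 'é' occupies two bytes: 0xC3 0xA9

# The same bytes must be 'translated' back before a human can read them.
print(raw.decode("utf-8"))
```

The point of the sketch is simply that every stage between keyboard and screen is a numeric encoding, interposed between the writer and the medium, exactly as the passage above describes.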

IV. Change and Communication in the Digital Paradigm

How then do new digital technologies change things? And when does the nature of change become so dramatic, so disruptive, and result in such difference that it ushers in a new paradigm? The arrival of the printing press followed upon centuries of the scribal culture, which had developed into a static form of information communication.9 The printing press was the first information technology and


8  American Standard Code for Information Interchange.
9  Saint Bonaventura: ‘A man might write the works of others, adding and changing nothing, in which case he is simply called a “scribe” (scriptor). Another writes the work of others with additions



provided the basis for a number of changes in the way in which people thought and behaved. It demonstrated McLuhan’s aphorism: ‘We shape our tools and afterwards our tools shape us’.10 Within the pre-print culture orality dominated as the principal form of social communication. The printed book gave rise to the muting of orality as the reader retired into his or her own mind.11 Reading made different demands on people—immobility, isolation, silence, concentration, ‘the ability to immerse oneself in the thought processes of the writer and to remember and make links with the thoughts of writers as expressed in other texts’.12 Although reading had been a part of human existence for thousands of years before printing, the advent of printed material made the written word available to a wider audience. However, humans are not genetically structured for reading in the way that we are for oral language. Maryanne Wolff, in her book on the neuroscience of reading,13 argues that reading changes the way that our brains are organised, which has had an impact on the way in which the species evolved. It is based upon what neuroscientists refer to as the plasticity of the brain. As we acquire new skills, new connections are created in the brain and new neural pathways are developed. Wolff puts it this way: Thus the reading brain is part of highly successful two-way dynamics. Reading can be learned only because of the brain’s plastic design, and when reading takes place, that individual brain is forever changed, both physiologically and intellectually. For example, at the neuronal level, a person who learns to read in Chinese uses a very particular set of neuronal connections that differ in significant ways from the pathways used in reading English. When Chinese readers first try to read in English, their brains attempt to use Chinese-based neuronal pathways. The act of learning to read Chinese characters has literally shaped the Chinese reading brain.
Similarly, much of how we think and what we think about is based on insights and associations generated from what we read.14

Thus we can see how McLuhan’s aphorism begins to work. But the matter does not end there. According to Postman, reading fosters rationality, and the form of the printed book encourages what Walter Ong called ‘the analytic management of

which are not his own; and he is called a “compiler” (compilator). Another writes both others’ work and his own, but with others’ work in principal place, adding his own for purposes of explanation; and he is called a “commentator” (commentator) … Another writes both his own work and others’ but with his own work in principal place adding others’ for purposes of confirmation; and such a man should be called an “author” (auctor)’. Cited in Eisenstein above n 6, 121.

10  Marshall McLuhan, Understanding Media: The Extensions of Man (New York, McGraw Hill, 1964).
11  For a full discussion of the impact of the reading revolution see Neil Postman, The Disappearance of Childhood (New York, Vintage/Random House, 1994).
12  John Naughton, From Gutenberg to Zuckerberg—What You Really Need to Know About the Internet (London, Quercus, 2012) 24.
13  Maryanne Wolff, Proust and the Squid: The Story and Science of the Reading Brain (New York, Harper Collins, 2007).
14  ibid, 5.



knowledge’.15 Postman suggests that the printed text engages powers of classification, inference making and reasoning.16 Of course these forms of analysis and these qualities existed in the scribal era, which was dominated by an oral culture, but Postman is suggesting that print enhanced and developed them even further and resulted in the development of Typographical Man, for whom the written and printed word achieved dominance both consciously and, because of brain plasticity, subconsciously. Sven Birkerts confirms the private but active engagement with text that print fosters, observing the linear nature of print governed by syntax and the progression of the eye down the page.17 Lest one consider that the advent of the e-book or the Kindle will allow reading to continue unabated as before, Birkerts noted the physical nature of the book, its size and the systems necessary for its storage, which we understand and accept reflexively. But reflexes are modified by use and need. As Marshall McLuhan argued decades ago, technology changes reflexes, replacing them with new ones. Our rapidly evolving digital interface is affecting us on many levels, not least those relating to text and information. We read and absorb as the age demands, and our devices set the pace. I was in a crowd at a poetry reading recently, eavesdropping on the conversation behind me. Somebody referenced a poem by Wallace Stevens but couldn’t think of the line. Her neighbor said ‘Wait—’ and proceeded to Blackberry (yes, a verb) the needed words. It took only seconds. Everyone bobbed and nodded—it was the best of all worlds.18

Thus are our thought processes influenced by the medium. The Internet is at least as revolutionary a technology as the printing press was, and it is no accident that I referred to our present information era as ‘The Digital Paradigm’, because the new information systems that are available to us are as paradigmatically different from print as print was from the scribal culture. The networked media is like an ecosystem—a community of organisations, publishers, authors, end users and audiences which, along with their environment, function as a unit. Until the advent of the Internet our media ecosystem was dominated by monolithic ‘one-to-many’ media19 that shaped discourse and dominated entertainment and sport. The established and largely centralised media had a significant impact upon public and private life and culture. The discourse was limited to what was approved for print or broadcast. The ecosystem has changed dramatically. The Internet now overshadows mainstream media, and the continuing use

15  Cited in Postman, above n 11, 51. See generally Walter Ong, Orality and Literacy: The Technologising of the Word (Oxford, Routledge, 2002).
16  Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Showbusiness (New York, Penguin Books, 1986) 51.
17  Sven Birkerts, The Gutenberg Elegies: The Fate of Reading in an Electronic Age (Winchester MA, Faber, 1994) 122. Birkerts’ description of the linear nature of print contrasts with the ‘de-linearisation’ of information described in ch 2.
18  Sven Birkerts, ‘Resisting the Kindle’ The Atlantic (March 2009) archive/2009/03/resisting-the-kindle/7345/.
19  For discussion see above s II. Print, radio and television all shared these qualities.



of computers and the computing power of the mobile phone will mean that the Internet will replace mainstream media as the ‘dominant species’ within the media ecosystem. In the same way that Birkerts expressed concerns at the decline of reading, others have developed a dystopian view of the networked world that in some ways focuses attention upon the nature of the changes that are taking place—the way in which the tool of the Internet is beginning to shape us, as McLuhan would have it. The Internet seems to erode the capacity for contemplation and concentration. Nicholas Carr observed: Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.20

Yet the Internet is largely a text-based system and it may well be that we are reading more. The problem is that the nature of what we are reading, and the way that we process the material, is changing—once again Wolff’s brain plasticity theory. She worries that the style of reading promoted by the Net, a style that puts ‘efficiency’ and ‘immediacy’ above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace.21 Could it be that, within the next few decades, our dependence upon digital information and Internet technologies will make us functionally incompetent to engage in reasoned decision-making unless we are plugged into or have immediate access to cyberspace? The combination of the qualities that Internet information possesses with the way in which the use of a new communications technology affects our dynamic thought patterns and cognitive ability means that the Internet becomes an essential information resource to which we are adapting—or have become adapted?—and which will be the principal information resource for the Digital Natives as

20  Nicholas Carr, ‘Is Google Making Us Stupid’ The Atlantic (July/August 2008) www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868/. See also generally Nicholas Carr, The Shallows—How the Internet is Changing the Way we Think, Read and Remember (London, Atlantic Books, 2010). For a discussion of the neuroplasticity controversy see ch 2. The issue of the impact of new information systems upon cognition is referred to (citing Carr’s article) in Nicole L Waters and Paula Hannaford-Agor, ‘Jurors 24/7: The Impact of New Media on Jurors, Public Perceptions of the Jury System and the American Criminal Justice System’ (unpublished). I am grateful to Ms Hannaford-Agor for a copy of the article, which is to be published in a forthcoming encyclopaedia on criminology and criminal justice.
21  Wolff, above n 13.



Encyclopaedia Britannica was for those born in the mid-twentieth century. The sense of loss expressed by Birkerts and Carr can be explained in terms of cognitive and thinking abilities which were developed in the print paradigm, and they mourn its passing. The linear, side-to-side verticality of reading and processing information becomes replaced with a hypertexted system of information that is not only dynamic in itself but encourages dynamic behaviour on the part of its users, as they switch from a webpage to instant messaging to email to a Skype session. When Lord Chief Justice Judge was critical of the impact that technology and digital information systems were having on today’s children,22 what he failed to recognise is that the Digital Natives find such a means of absorbing information completely incompatible with the way in which their learning systems are becoming adapted, precisely as a result of the technological proficiency to which His Lordship refers. The means of information gathering is radically different from that acquired from a book, as I suggest above and as Birkerts observes: Information and contents do not simply move from one private space to another, but they travel along a network. Engagement is intrinsically public, taking place within a circuit of larger connectedness. The vast resources of the network are always there, potential, even if they do not impinge on the immediate communication. Electronic communication can be passive, as with television watching, or interactive, as with computers. Contents, unless they are printed out (at which point they become part of the static order of print) are felt to be evanescent. They can be changed or deleted with the stroke of a key. With visual media (television, projected graphs, highlighted ‘bullets’) impression and image take precedence over logic and concept, and detail and linear sequentiality are sacrificed.
The pace is rapid, driven by jump-cut increments, and the basic movement is laterally associative rather than vertically cumulative. The presentation structures the reception and, in time, the expectation about how information is organised. Further, the visual and non-visual technology in every way encourages in the user a heightened and ever-changing awareness of the present. It works against historical perception, which must depend on the inimical notions of logic and sequential succession. If the print medium exalts the word, fixing it into permanence, the electronic counterpart reduces it to a signal, a means to an end.23

V.  The Law’s Approach to Equating the Old with the New

It may be seen from the above discussion that the transition from an old technology to a new one is never easy. The fact that a new technology is almost always

22  Rt Hon The Lord Judge, ‘Jury Trials’ (Judicial Studies Board Lecture, Belfast 16 November 2010).
23  Birkerts, above n 17, 122–23.



disruptive is seen in terms of the way in which it upsets established processes or understandings. The transformative nature of disruptive change is rarely considered because we tend to focus upon the inconvenience or upsets that disruption causes. The law, and especially the common law, is designed to provide and maintain certainty by the preservation of existing principle. Precedent, as I shall discuss in chapter five, does not take kindly to disruptive change, but prefers to develop incrementally on a case-by-case basis. It looks to the past for a direction towards the future. In areas of principle based upon earlier technologies, and especially communications technologies, rules were developed which took for granted the properties and characteristics of the technology of the time. Judges and lawyers value principle because of the need for certainty, and use various methods of analysis of earlier cases and technologies to reach a result in a contemporary case. Two such methods of analysis may be employed—those of functional equivalence and analogy. Yet these tools are fraught with difficulty when the new technology is paradigmatically different from the old. This is not to say that these tools are of no use—far from it. What is necessary is to use them properly, and in a time of paradigmatic change this means that the tools cannot be used in an off-hand way—as a code or shorthand to avoid the inconvenience of close analysis—but rather require a careful consideration of the way the old principles developed within their context and whether they can validly be applied to a new paradigm.

VI.  Functional Equivalence

The concept of ‘functional equivalence’ in law arose primarily as a result of the development of electronic commerce (e-commerce) and the need to ensure that legal requirements prescribing the use of paper-based documentation for the purposes of recording transactions did not constitute a major or continuing obstacle to the development of e-commerce and the use of digital systems. It was also recognised that there should not be a wholesale removal of the rules and requirements surrounding paper-based transactions, which would disturb the legal concepts and principles that underpinned those requirements. These issues prompted the drafting of legislative initiatives by the United Nations Commission on International Trade Law (UNCITRAL) such as the Model Law on Electronic Commerce. The Model Law was designed to be a template for domestic jurisdictions to consider in drafting their own laws. It was intended as a legal framework to do electronically what had been done in the past on paper; the desire was not to engage in substantive new rule-making. The purpose of such laws, and of the Model Law itself, was to create legal recognition for electronic records, electronic signatures and electronic contracts, and to ensure



that the medium in which a record, signature or contract was created, presented or retained did not affect its legal significance.24

The Internet and e-commerce did not raise legal problems that were brand new. Contracting other than face to face has been present since the inception of the mail services, the telegraph and the telephone. Some of the issues raised by these forms of contract developed long before shrink-wrap terms delivered by mass-marketed software and Internet-based ‘click-wrap’ and ‘browse-wrap’ terms and conditions.25 But the most significant approach of the UNCITRAL Model Law in addressing issues raised by e-commerce and e-contracting was to develop the concept of functional equivalence. It is the basic underlying principle of the Model Law. Briefly put, it involved an examination of the function fulfilled by traditional form requirements (‘writing’, ‘signature’, ‘original’, ‘dispatch’ and ‘receipt’) and a determination as to how the same function could be transposed, reproduced or imitated in a dematerialised environment.26 Thus what was required was an analysis of the purposes or functions of paper-based requirements to provide criteria which, when met by non-paper records, gave them the same level of legal recognition as paper documents performing the same function. This primarily related to writing and verification by signatures, along with a consideration of the nature of an ‘original’ document in an environment where copying is a given and uniqueness is ephemeral.27

The UNCITRAL approach made it clear that in settling a ‘functionally equivalent’ rule there must be two paradigms under consideration: a recognition of the essential differences between the two, the purposes of the pre-digital or paper-based rule, the qualities of the digital system that allow for a consideration of whether there is room for a ‘functionally equivalent’ approach, and the articulation of the rule in that light.
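The kind of functional analysis the Model Law contemplates can be illustrated with a simple sketch. An ink signature, among other things, links a record to its maker and allows later alteration to be detected. The Python fragment below is an illustration only, not a statement of what any statute or the Model Law requires: it uses a keyed message authentication code (the key and contract text are invented) to show how those two functions might find a digital equivalent.

```python
import hmac
import hashlib

def sign_record(record: bytes, key: bytes) -> str:
    # The tag links the record to the holder of the key (identification)
    # and reveals any later alteration of the text (integrity).
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, key: bytes, tag: str) -> bool:
    # Constant-time comparison, as is usual for authentication tags.
    return hmac.compare_digest(sign_record(record, key), tag)

key = b"secret-held-by-the-signatory"           # hypothetical key material
contract = b"I agree to the terms set out above."

tag = sign_record(contract, key)
assert verify_record(contract, key, tag)             # authentic and unaltered
assert not verify_record(contract + b".", key, tag)  # alteration is detected
```

The point of the sketch is the one made in the text: the analysis starts from the *function* of the paper requirement (attribution and integrity), not from its physical form, and then asks whether a digital mechanism performs that function.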
The way in which the drafters of the Model Law approached the concept and applicability of functional equivalence demonstrates the need for considerable analytical care to be taken in determining first the function of the pre-digital requirement or rule and secondly how that particular function can be met in the digital environment. The example of the ‘original’ demonstrates that a precise match may not be achievable but what is required is a technological parallel or equivalent. Once the function or purpose has been determined then an examination of the technology is required to determine the achievability of the digital

24  Henry D Gabriel, ‘The Fear of the Unknown: The Need to Provide Special Procedural Protections in International Electronic Commerce’ (2004) 50 Loyola Law Review 307, 310.
25  ProCD, Inc v Zeidenberg 86 F.3d 1447, 1449 (7th Cir 1996); see also Christina L Kunz et al, ‘Click-Through Agreements: Strategies for Avoiding Disputes on Validity of Assent’ (2000) 57 Business Lawyer 401.
26  Jose Angelo Estrella Faria, ‘e-Commerce and International Legal Harmonization: Time to Go Beyond Functional Equivalence?’ (2004) 16 South African Mercantile Law Journal 529.
27  UNCITRAL Model Law on Electronic Commerce With Guide to Enactment 1996 With Additional Article 5 bis As Adopted in 1998 (New York, United Nations, 1999) 20 et seq, para 16 et seq. www.



equivalent and, if achievable, how that should be applied. The analysis must be penetrating rather than superficial. The problem is that the use of the term ‘functional equivalence’ in the legal realm may be a lazy way of merely equating a technological outcome with a pre-digital rule or practice.

Section 3 of the Electronic Transactions Act 2002 (NZ), which sets out the purpose of the legislation, states that the purpose of the Act is to facilitate the use of electronic technology by ‘(b) providing that certain paper-based legal requirements may be met by using electronic technology that is functionally equivalent to those legal requirements’. The use of the term in this context is somewhat slippery and potentially dangerous. The language of the sub-clause assumes that the ‘use of electronic technology’ can be functionally equivalent to legal requirements. A legal requirement may be a statutory direction or rule that is often coupled with a consequence or an adverse outcome for non-compliance or non-performance. ‘Using electronic technology’ refers to the deployment of a form of applied scientific knowledge. A legal requirement may dictate the use of a particular technology, but it cannot be the functional equivalent of a particular use of electronic technology.28 Electronic compliance can seek to achieve the same objectives or outcomes as ‘ink and paper’ compliance with particular legal requirements, but this only takes the matter so far. While there may be functional equivalence for, say, signature or writing, the drafters of the Model Law rejected the concept as the basis of a rule on document retention.29 Thus, the concept is not universally applicable.

In terms of the computer environment, the concept of functional equivalence is often used as a form of shorthand or a comparator akin to an analogy. In Cubby v CompuServe it was stated:

Technology is rapidly transforming the information industry. A computerised database is the functional equivalent of a more traditional news vendor, and the inconsistent application of a lower standard of liability to an electronic news distributor such as CompuServe than that which is applied to a public library, book store, or newsstand would impose an undue burden on the free flow of information. Given the relevant First Amendment considerations, the appropriate standard of liability to be applied to CompuServe is whether it knew or had reason to know of the allegedly defamatory Rumorville statements.30

That was followed in Stratton Oakmont v Prodigy,31 where Ain J adopted the language in Cubby. However, the fact of repetition does not make the comparison accurate. There has been no examination of the actual way in which a traditional news vendor, public library or bookstore operates, nor any consideration of the

28  Bob Dugan and Ben Dugan, Electronic Transactions: Electronic Transactions Act 2002 (Washington, LexisNexis NZ Ltd, 2004) 13–14.
29  UNCITRAL Model Law above n 27, para 18.
30  Cubby v CompuServe 776 F Supp 135 (SDNY 1991).
31  Stratton Oakmont v Prodigy (1995) NY Misc Lexis 229; 23 Media LR 1794.



fact that those distributors deal in print media which, in terms of its underlying properties, is vastly different from digital systems, as this chapter has demonstrated. The only area of commonality is that all are involved in some way or another in the distribution of content.

A.  Functional Equivalence Problems

In R v Hayes32 the New Zealand Court of Appeal stated:

Applying the principles of functional equivalence and technological neutrality, the approach to sentencing for computer based crime should start by reference to the penalties that would have been imposed had the crime been committed through paper based means.

The assumption underlying this statement is very wide indeed. It essentially equated paper-based fraudulent offending with fraudulent computer-based offending. Yet apart from the common elements of dishonesty and intention, the two types of offending are totally different in the means that are employed to commit the offence. The statement by the Court of Appeal is an example of the misuse of functional equivalence, because a careful examination of the way in which computer-based offending is carried out would reveal that there are considerable differences between computer fraud and ‘real world’ fraud. What the Court should have done was to undertake an analysis to justify using paper-based offending as a sentencing model by stating how or why it was that the two types of offending are indeed functionally equivalent. Reference was made to the way in which the concept of functional equivalence had developed in the context of the Electronic Transactions Act and the UNCITRAL Guide to the Model Law on Electronic Commerce. But apart from an historical discussion about the development of the concept of functional equivalence in the context of e-commerce, no analysis was undertaken other than to identify a number of factors that had nothing to do with technological systems, save for a reference to the importance of maintaining confidence in computer systems.33

32  R v Hayes (2006) 23 CRNZ 547 (CA) para [77]. An ‘equivalence based’ approach to sentencing for computer enabled offending was adopted in Sarah v R [2013] NZCA 446 para [35] et seq. In R v Durham [2011] NZCA 69 reference was made to the concept of functional equivalence, citing Hayes. Durham was a case involving charges of knowing possession of objectionable material and a challenge to the legality of a search. The Court considered the nature of possession of material in the electronic environment and referred to ‘a clear policy choice that has been made in this country to deal with electronic data in the same way as its paper-based equivalents. The policy is premised on principles of functional equivalence and technological neutrality’. Once again the concept of functional equivalence is used as a shorthand workaround for an absence of analysis of the basic paper-based function and the equivalent means in the digital paradigm. The essence of the shorthand analysis is paper = digital, without more. It is no accident that the Judge in Durham was the same Judge who wrote for the Court in Hayes.
33  Hayes above n 32 paras [75]–[76]. It is perhaps no accident that the Judge who wrote the decision in Hayes was, whilst in practice, a consultant to the New Zealand Law Commission in the preparation



B.  Functional Equivalence and Links

In the United States the issue of functional equivalence received attention in Universal City Studios v Reimerdes & Corley,34 involving the provision of hypertext links to sites from which DeCSS, a program that allowed circumvention of technological protection measures for DVDs and whose distribution was prohibited, could be downloaded. What the defendants had done was set up a webpage with hypertext links to other Internet sites where the program was available. Judge Kaplan held:

To the extent that defendants have linked to sites that automatically commence the process of downloading DeCSS upon a user being transferred by defendants’ hyperlinks, there can be no serious question. Defendants are engaged in the functional equivalent of transferring the DeCSS code to the user themselves.

What the Judge is saying is that by providing links to sites where DeCSS is available or may be obtained, the defendants were engaged in the ‘functional equivalent’ of transferring the code to the user themselves. This is the first appearance of the term ‘functional equivalent’ in the discussion, yet it is not in any way defined. It seems that the Judge used the phrase as a convenient way of sheeting liability home to the defendants where, on a rigorous interpretation of the statute, such liability did not exist. A hypertext link provides information about where the program may be located.35 It is rather like a phone book entry or a footnote, in that it directs the user to a location. It is the webmaster of that location who actively engages in the act of supply. The use of the term ‘functional equivalent’ suggests that by providing a link to another site, the link provider is a provider of the software. That defies reality and a clear interpretation of the language of the statute, which neither directly

of its report Electronic Commerce Part One: A Guide for the Legal and Business Community (Wellington, Law Commission, 1998) and later the lead Law Commissioner in the preparation of the report Electronic Commerce Part Two: A Basic Legal Framework (Wellington, Law Commission, 1999). The UNCITRAL Model Law and Commentary form the basis for many of the Commission’s recommendations in those reports, with which Heath J would have been very familiar.
34  111 F Supp 2d 294 (Dist Court, SD New York, 2000).
35  See Crookes v Newton [2011] 3 SCR 269 at para [30] where Abella J, writing for the majority, said: ‘Hyperlinks thus share the same relationship with the content to which they refer as do references. Both communicate that something exists, but do not, by themselves, communicate its content. And they both require some act on the part of a third party before he or she gains access to the content. The fact that access to that content is far easier with hyperlinks than with footnotes does not change the reality that a hyperlink, by itself, is content-neutral—it expresses no opinion, nor does it have any control over, the content to which it refers’. See further discussion of Crookes v Newton in the section of this chapter dealing with analogy. For Berners-Lee and Cailliau’s original proposal see T Berners-Lee and R Cailliau, World-Wide Web: Proposal for a Hypertext Project, World Wide Web Consortium www. For some of the early thinking that predated the web see Vannevar Bush, ‘As We May Think’ Atlantic Monthly (July 1945). Dr Bush was a computer pioneer, was the Director of the US Office of Scientific Research and Development and co-ordinated wartime research in the application of science to war.


nor by implication allows the insertion or utilisation of the concept of functional equivalence. In summary the vision of hypertext and its realisation may be expressed in this way: The hypertext vision, which is independent of any specific technological implementation, is of information perceived by the user to be joined into a single Document, with each user able to wander according to his or her own interests and motivations. To the user, the ‘World Wide Web’ facilitates the movement of the user from one document, commonly called ‘pages’ or ‘URLs’, to another. Technically, however, the Web facilitates movement of documents from a server to a client computer, preserving issues of ownership, boundaries, and territory. To the increasing numbers of Web squatters who want to entice users to return to (and stay within) their ‘site’ (the collection of documents over which an owner has complete control), the Web is a collection of destinations. Since these sites are increasingly in competition with each other for user attention, they are, therefore, decreasingly likely to pursue or support the original hypertext vision. In fact, it is more common that designers are expected to ask permission before linking their site to another. And in the emerging commercial world of the Internet, one designer might have to pay another in order for the second to establish a link to the first.36

Notwithstanding the way in which the Web has developed, it remains a matter of computing reality and functionality that a link is an enabling tool. The emphasis upon the link has shifted from a ‘mere’ tool to an ‘essential’ tool. It is nevertheless a ‘mechanical’ means of moving from one address where information is located to another, enabling the various protocols in HTML to undertake the transfer of information from the target server to the client computer. The link does not automatically provide the information sought. It is rather like a signpost, or a trigger for a number of other steps in the process, all of which must be completed to enable information to be made available. In considering whether links fulfil functions other than those of technical navigation, what has to be remembered is that a link in and of itself is content neutral, an aspect of links that has been recognised in cases such as Crookes v Newton,37 where the issue, broadly stated, was whether a link was the functional equivalent of publication of defamatory material. Hypertext linking in the context of copyright infringement was examined in Svensson v Retriever Sverige AB.38 The issue was whether anyone, other than the holder of copyright in a certain work, who supplies a clickable link to the work on his website, communicates the work to the public. Once again the functionality of

36  Michele Jackson, ‘Assessing the Structure of Communication on the World Wide Web’ (1997) 3 Journal of Computer Mediated Communication, doi/10.1111/j.1083-6101.1997.tb00063.x/full.
37  Above n 35.
38  Case C-466/12 Svensson v Retriever Sverige AB.



links was at issue. Was the provision of a link equivalent to a communication? The European Copyright Society, in an 18-page submission, stated that:

[B]ecause hyperlinks do not transmit a work, (to which they link) they merely provide the viewer with information as to the location of a page that the user can choose to access or not. There is thus no communication of the work.
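The European Copyright Society’s point—that a hyperlink carries only the location of a work, never the work itself—can be demonstrated mechanically. The short sketch below (the page markup and URL are invented for illustration) parses a fragment of HTML and shows that all the anchor element contains is a string giving an address; obtaining the work would require a separate request by the user’s client.

```python
from html.parser import HTMLParser

# A page containing a hyperlink: the markup holds only a *location*,
# never the bytes of the work to which it points.
page = '<p>See the decision <a href="https://example.org/judgment.pdf">here</a>.</p>'

class LinkExtractor(HTMLParser):
    """Collects the href attribute of each anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['https://example.org/judgment.pdf']

# Retrieving the judgment itself would require a further, separate
# request to example.org — an act the link alone does not perform.
```

Nothing in the parsed document is the work; the link is, as Abella J put it in Crookes v Newton, a reference that communicates that something exists and where, but not its content.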

In summary the Court of Justice of the European Union (CJEU) held that:

1. A clickable direct link to a copyright work made freely available on the internet with the authority of the copyright holder does not infringe.
2. It makes no difference to that if a user clicking on the link is given the impression that the work is on the linking site.
3. However, it seems that a clickable link will (unless saved by any applicable copyright exceptions) infringe if the copyright holder has not itself authorised the work to be made freely available on the internet.
4. If the work is initially made available on the internet with restrictions so that only the site’s subscribers can access it, then a link that circumvents those restrictions will infringe (again subject to any applicable exceptions and further discussion below).
5. The same is true where the work is no longer available on the site on which it was initially communicated, or where it was initially freely available and subsequently restricted, while being accessible on another site without the copyright holder’s authorisation.

There are a couple of observations that must be made about the CJEU decision. The first is that it is not a decision about the implications of hypertext links. There is no discussion about the technology or implications of hypertext links. In that respect the decision is a little disappointing. Secondly, what the decision is about is the nature of communication within the context of copyright law. By focusing on communication, the Court avoided the thorny issue of the function and essential meaning of hypertext links. This was probably by design. By restricting their decision strictly to the ambit of the questions posed by the national court, the CJEU adopted a narrow focus on the issue, restricting the decision to the questions at hand and further restricting the decision to the strict framework of copyright law.
The issues were addressed strictly within that context, and the content neutrality (or partiality) of hypertext links did not, therefore, have to be considered.

In the United States the case of Perfect 10 v Google39 represented a step away from a functional equivalence approach. The facts of the case were these. Perfect 10 marketed and sold copyrighted images of nude models. Some website publishers republished Perfect 10’s images on the Internet without authorisation. Once this occurred, Google’s search engine automatically indexed the webpages


39  Perfect 10 v Google 487 F 3d 701 (9th Cir 2007).



containing these images and provided thumbnail versions of images in response to user enquiries. When a user clicked on the thumbnail image returned by Google’s search engine, the user’s browser accessed the third-party webpage and in-line links to the full-sized infringing image stored on the website publisher’s computer. This image appeared, in its original context, on the lower portion of the window on the user’s computer screen, framed by information from Google’s webpage. Perfect 10 sued Google, claiming that the latter’s ‘Google Image search’ infringed Perfect 10’s copyrighted photographs of nude models when it provided users of the search engine with thumbnail versions of Perfect 10’s images, accompanied by hyperlinks to the website publisher’s page. The Court observed that apart from the ‘thumbnails’ Google did not store copies of the images. Thus it could not communicate copies, because there was no image to be communicated. The Court considered the way in which the technology operated:

Instead of communicating a copy of the image, Google provides HTML instructions that direct a user’s browser to a website publisher’s computer that stores the full-size photographic image. Providing these HTML instructions is not equivalent to showing a copy. First, the HTML instructions are lines of text, not a photographic image. Second, HTML instructions do not themselves cause infringing images to appear on the user’s computer screen. The HTML merely gives the address of the image to the user’s browser. The browser then interacts with the computer that stores the infringing image. It is this interaction that causes an infringing image to appear on the user’s computer screen. Google may facilitate the user’s access to infringing images. However, such assistance raises only contributory liability issues, see Metro-Goldwyn-Mayer Studios, Inc v Grokster, Ltd 545 US 913, 929–30, 125 SCt 2764, 162 L.Ed.2d 781 (2005), Napster, 239 F.3d at 1019, and does not constitute direct infringement of the copyright owner’s display rights.
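The two-transaction mechanism the Court describes can be modelled in a few lines. In the toy sketch below all server names, paths and content are invented for illustration: one dictionary stands in for the search engine’s server (which transmits only text containing an address) and another for the publisher’s server (which alone transmits the image bytes, in a second transaction initiated by the user’s browser).

```python
# Toy model of in-line linking: two "servers" represented as dicts.
publisher_server = {"/model.jpg": b"<raw image bytes>"}
search_engine_server = {
    "/results": '<img src="http://publisher.example/model.jpg">'
}

# What the search engine transmits is text -- HTML instructions.
# The image bytes appear nowhere in that response.
instructions = search_engine_server["/results"]
assert isinstance(instructions, str)
assert b"<raw image bytes>" not in instructions.encode()

# The browser, acting on those instructions, fetches the image in a
# second, separate transaction with the publisher's computer.
url = instructions.split('src="')[1].split('"')[0]
path = url.split("publisher.example")[1]
image = publisher_server[path]
assert image == b"<raw image bytes>"
```

The division the Ninth Circuit drew falls out of the model: the party whose server holds the image bytes distributes the copy; the party whose server holds only the address does not.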

Thus, because Google’s search engine communicated HTML instructions that told a user’s browser where to find full-size images on a website publisher’s computer, Google did not itself distribute copies of the infringing photographs. It was the website publisher’s computer that distributed copies of the images by transmitting the photographic image electronically to the user’s computer. Unlike Crookes v Newton,40 which attempted to lay down some bright-line rules about links and their general function within the legal framework, the decision in Perfect 10 is fact specific. Whilst, within the factual matrix of the case, Google was not liable for direct infringement, there was still the problem of assisting third-party websites in distributing their infringing copies of photographs to a worldwide market and assisting a worldwide audience of users to access infringing materials, for the purpose of Perfect 10’s contributory infringement claim. The decision is a step away from Judge Kaplan’s approach in Reimerdes & Corley, which relied on the ‘functional equivalence’ approach. The Court in Perfect 10 effectively adopted a content-neutrality approach to the issue of direct infringement,


40  Above n 35.



focusing especially upon the nature of the code and the HTML instructions, which it held were not equivalent to showing a copy. Applying the approach in Reimerdes & Corley would have amounted to a ‘distinction without a difference’, but the more recent cases seem to demonstrate a shift away from such a rationale towards a more nuanced understanding of links and their use as a coded reference point.

C.  Functional Equivalence—Final Thoughts

The discussion undertaken demonstrates that functional equivalence as a comparator must be treated with considerable care. As the examples and discussion have shown, to utilise functional equivalence properly requires more than a mere comparison or a wholesale transfer of conceptual thinking from one paradigm to another based on superficial similarities. To do that can result in either the perpetuation of a false comparison or a potential inhibition of the legal effects of the new paradigm by anchoring them in the outdated characteristics and properties of the old. As I have earlier suggested, what must be undertaken is a careful examination of the function of the earlier rule or concept. This was clearly done in the development of the Model Law on Electronic Commerce. What then needs to be done is to locate an equivalent—and I emphasise the word ‘equivalent’ rather than ‘parallel’. The paradigmatic differences in the properties of digital systems make this difficult—again a matter recognised by the drafters of the Model Law. Thus what is required is a digital process that achieves a similar outcome and that is valid within the digital paradigm. This necessarily requires an engagement with the technology by lawyers, judges and rule-makers, together with a more than superficial understanding of the technological implications of old rules and new equivalents. The simple shorthand of ‘functional equivalence’ or ‘distinction without a difference’ is lazy analysis and can lead to error or possible absurdity.

VII.  The Problem of Analogies

But there is another method of comparison often used by lawyers—the analogy. Analogy is a cognitive process of transferring information or meaning from a particular subject (the analogue or source) to another (the target), or a linguistic expression corresponding to such a process. In a narrower sense, analogy is an inference or an argument from one particular to another particular, as opposed to deduction, induction and abduction, where at least one of the premises or the conclusion is general. The word ‘analogy’ can also refer to the relation between the source and the target themselves, which is often, though not necessarily, a similarity, as in the biological notion of analogy. Phrases such as ‘and so on’, ‘and the like’, ‘as if’, and the very word ‘like’ also rely on an analogical understanding by the receiver of a message including them.



One has to be careful with analogies to ensure that the comparators—the source and the target—are alike. It is no good using apples as an analogy for oranges. In the context of the digital paradigm there is a problem in that the set of circumstances which provides the basis for comparison has often arisen in an environment quite different from the new one. Part of the problem, especially when dealing with information and communications technologies, is that lawyers tend to focus upon one aspect of the technology—the content layer—whereas information in the Digital Paradigm has different qualities and operates in a multi-layered environment. In Tamiz v Google41 Eady J observed that ‘one needs to be wary of analogies when considering modern technology’, and in this part I shall demonstrate the care that needs to be taken in developing analogies in the search for an applicable rule.

A.  The Nature of a Computer and the Scope of a Search

Courts have struggled with the concept of a computer as a data storage device and have attempted to draw analogies with non-computer comparators to try to give effect to or validate search exercises. Thus, in R (H) v Commissioners of Inland Revenue42 it was suggested that comparison with a filing cabinet was inexact. A hard disk was seen as a single object—a ‘thing’ for the purposes of the Taxes Management Act. To fall within the scope of the warrant it was necessary to identify the computer as a single ‘thing’, hence the necessity to reject the comparison with a filing cabinet. As a complete and assembled object a computer clearly is a ‘thing’, but that ignores the completely novel manner in which digital systems operate. There can be no doubt that in terms of law enforcement the approach adopted by the judge is a pragmatic one, but for reasons which will be developed, it fails to safeguard aspects of privacy, information confidentiality and privilege associated with an accumulation of information on computer storage media. In Faisaltex Ltd v Preston Crown Court43 Keene LJ considered that the issue turned on whether one was dealing with a single item or ‘thing’, such as a diary or letter, which might contain both relevant and irrelevant material, or with something like a container of a number of things, giving the example of a filing cabinet.44 In Kent Pharmaceuticals Limited v Director of Serious Fraud Office and Others45 Lord Woolf of Barnes CJ held that the hard drive of a computer


41  Tamiz v Google [2012] EMLR 24; [2012] EWHC 449 (QB).
42  R(H) v Commissioners of Inland Revenue [2002] EWHC 2164 Admin.
43  Faisaltex Ltd v Preston Crown Court [2009] 1 Cr App R 37; [2008] EWHC 2832 (Admin).
44  The Faisaltex approach was followed in Chief Executive, Ministry of Fisheries v United Fisheries [2011] NZAR 54; [2010] NZCA 356 and Southern Storm (2007) Limited v The Chief Executive, Ministry of Fisheries [2013] NZHC 117.
45  Kent Pharmaceuticals Limited v Director of Serious Fraud Office and Others [2002] EWHC 3023 Admin.

The Problem of Analogies


would be ‘a document’ within the meaning of section 2 of the Criminal Justice Act 1987 in the sense that it contained recorded information. In R v Misic Anderson J, writing for the Court of Appeal, considered that a computer was a document for the purposes of the fraud provisions of the Crimes Act 1961 (NZ). He developed the argument in this way:
Essentially, a document is a thing which provides evidence or information or serves as a record. The fact that developments in technology may improve the way in which evidence or information is provided or a record is kept does not change the fundamental purpose of that technology, nor a conceptual appreciation of that function. Legislation must be interpreted with that in mind.46

However, care must be taken to avoid a superficial way of comparing old and new technologies. In the case of a paper document and the storage of information the medium and the content were one. Simply put, the medium rendered the content static. Digital information is dynamic. Furthermore, to render digital information comprehensible requires a complex intermediation of hardware and software— something not necessary in the case of ink on paper. So there is immediately a difficulty in comparing computers or computer-based information with other forms of information capture or storage. Analogies in the field of computer based search and seizure are further challenged when the ‘plain view’ doctrine has to be considered. Briefly stated, the plain view doctrine applies where an enforcement officer, acting under a search warrant authorising a search for item A relating to particular offending, in the course of the search finds in plain view item B, which is not covered by the search warrant and evidences other offending. The problem when applying the plain view doctrine to electronic data lies in the way in which mixed and largely irrelevant data may be stored on a computer along with material within the scope of the search warrant. At present, the evaluation of such material is left to the investigating officer or those called upon to assist. These investigating officers may uncover evidence of other offending beyond the scope of the warrant. It could be argued that because the data is accessible and available (unless it is password protected or encrypted) it is in plain view. The issue that arises in such circumstances is whether or not the ‘first view’ of the recovered electronic data or the cloned hard drive should be reserved to the investigating officer or be conducted by a third party. A cloned copy of a hard drive preserves information at a point in time. A difficulty lies in the way in which the examination of that information to locate items of relevance to the inquiry should be carried out.
Evidence of matters that are not relevant to the particular inquiry, but that may disclose other information of interest to investigative bodies, may well be uncovered within the ‘filing cabinet’. The use of the ‘filing cabinet’ analogy provides a context, but the analogy fails when confronted with technical reality.


46  R v Misic [2001] 3 NZLR 1.


The Transition to the Digital Paradigm

Conducting a search of a physical filing cabinet pursuant to a search warrant often necessitates a close inspection of the cabinet’s contents in order to determine the existence of evidence related to the alleged offending. This is particularly so if the contents of the cabinet are predominantly documents. Nevertheless, if evidence of other unrelated offending is found in the cabinet pursuant to a search warrant then such evidence could be said to be in plain view only if it was visible when the cabinet drawer was opened—for example, drug paraphernalia or perhaps a document recording drug transactions placed on top of a pile of other documents. Should evidence of offending outside the scope of the warrant come to light only after all the documents within the filing cabinet have been scrutinised, then such evidence was not in plain view and therefore not able to be seized. Thus, where the search power is executed in relation to the contents of a filing cabinet, unless the evidential material was in plain view when the cabinet drawers were opened, any evidence of an offence outside the scope of the warrant found after the cabinet’s contents were removed and examined would not be able to be seized. Such evidence was not in plain view since further searching of the cabinet’s contents was required. It is at this point that applying the filing cabinet analogy to cyber searches breaks down. When a clone of a hard drive is taken, it is tantamount to copying the entire contents of the filing cabinet. Electronic data does not have physical properties akin to paper documents. The paper document, lying in ‘plain view’ in a drawer of the filing cabinet, has no immediate electronic parallel. A further problem arises from the assumption that the investigating officers have direct and immediate access to the data after seizure.
In this respect, the imposition of an independent layer between the action of cloning the data and its assessment by an individual officer such as that suggested in United Fisheries is necessary.47 Although there may be significant costs associated with the employment of an independent barrister or computer expert (or both), and it would be of advantage if both requirements could be present in the same person, it must be recognised that new technologies challenge concepts that were developed in a different paradigm and may make such added layers necessary. In the United States the Ninth Circuit has rejected reliance upon the plain view doctrine in the context of computer searches. In the case of US v Comprehensive Drug Testing48 the Court set out guidelines for those issuing search warrants and subpoenas for electronically stored information to ensure that the ‘plain view doctrine’ is not invoked and that a third party review of recovered data take place to locate and redact irrelevant or privileged material before it is released to investigators. The case of Riley v California addresses the care with which argument by analogy must be approached.49 The police had conducted a warrantless search of data

47  Above n 44.
48  US v Comprehensive Drug Testing 621 F 3d 1162 (9th Cir 2010).
49  Riley v California 134 SCt 2473 (2014); 42 Media LR 1925.

stored upon a suspect’s mobile phone. The question was whether a search incident to arrest applied to mobile phones. The Court looked at the nature of the technology and observed that
mobile phones are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy. A smart phone of the sort taken from Riley was unheard of ten years ago; a significant majority of American adults now own such phones.50

The approach adopted by the Court was to consider whether a search was exempted from warrant requirements by assessing the degree to which it intruded upon an individual’s privacy against the need for the promotion of legitimate governmental interests. It was argued on behalf of the government that the search of data stored on a mobile phone was materially indistinguishable from other sorts of searches, such as those of wallets, address books and purses. That analogy was quickly dismissed by the Court:
That is like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from point A to point B, but little else justifies lumping them together. Modern cell phones, as a category, implicate privacy concerns far beyond those implicated by the search of a cigarette pack, a wallet, or a purse. A conclusion that inspecting the contents of an arrestee’s pockets works no substantial additional intrusion on privacy beyond the arrest itself may make sense as applied to physical items, but any extension of that reasoning to digital data has to rest on its own bottom.51

The Court went on to explain why it was that the analogies offered failed: Cell phones differ in both a quantitative and a qualitative sense from other objects that might be kept on an arrestee’s person. The term ‘cell phone’ is itself misleading shorthand; many of these devices are in fact minicomputers that also happen to have the capacity to be used as a telephone. They could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers. One of the most notable distinguishing features of modern cell phones is their immense storage capacity. Before cell phones, a search of a person was limited by physical realities and tended as a general matter to constitute only a narrow intrusion on privacy. Most people cannot lug around every piece of mail they have received for the past several months, every picture they have taken, or every book or article they have read—nor would they have any reason to attempt to do so.52

The Court went into some detail to discuss issues like the storage capacity of modern mobile phones, the different types of data that they might contain and the consequences that this may have for privacy, and recognised the difference between the mobile phone and other items that a person might carry about.

50  ibid, 2484.
51  ibid, 2488–89.
52  ibid, 2489.


Once the inherent nature of the technology was understood and articulated, it was clear that the difference was paradigmatic. This concept of a paradigm shift did not occur to Alito J, who pointed to the fact that the approach of the majority led to an anomaly. For example,
the Court’s broad holding favors information in digital form over information in hard-copy form. Suppose that two suspects are arrested. Suspect number one has in his pocket a monthly bill for his land-line phone, and the bill lists an incriminating call to a long-distance number. He also has in his wallet a few snapshots, and one of these is incriminating. Suspect number two has in his pocket a cell phone, the call log of which shows a call to the same incriminating number. In addition, a number of photos are stored in the memory of the cell phone, and one of these is incriminating. Under established law, the police may seize and examine the phone bill and the snapshots in the wallet without obtaining a warrant, but under the Court’s holding today, the information stored in the cell phone is out.53

The anomaly arises because of the paradigmatic difference between ‘paper’ and digital information. And Alito J recognised that although such anomalies existed he could not see a workable alternative. The care that must be deployed in considering appropriate comparators is illustrated in Kyllo v US.54 In that case the Court held that the use of sense-enhancing technology to gather information regarding the interior of a home that could not otherwise have been obtained without physical intrusion into a constitutionally protected area constituted a ‘search’, as did the use of thermal imaging to measure heat emanating from a home. Stevens J suggested that an analogy could be drawn with the human senses.55 The problem with this suggestion is that it takes the human senses on the one hand, with which most of us are blessed and which convey essential information about the world around us, and equates them with a technology that has a limited and specific purpose. The comparators bear little resemblance to one another. Finally, Florida v Jardines56 offers an amusing example of an analogy which, once again, is not apt, involves vastly different comparators, and demonstrates that one must be very careful and very specific in the use of analogy with new technologies. The issue was whether the use of a sniffer dog to investigate whether cannabis was being cultivated upon premises amounted to a search. Kagan J opened her opinion as follows:
For me, a simple analogy clinches this case—and does so on privacy as well as property grounds. A stranger comes to the front door of your home carrying super-high-powered binoculars. He doesn’t knock or say hello. Instead, he stands on the porch and uses the


53  ibid, 2497.
54  Kyllo v US 533 US 27 (2001).
55  ibid, 43.
56  Florida v Jardines 133 SCt 1409 (2013).

binoculars to peer through your windows, into your home’s furthest corners. It doesn’t take long (the binoculars are really very fine): In just a couple of minutes, his uncommon behavior allows him to learn details of your life you disclose to no one. Has your ‘visitor’ trespassed on your property, exceeding the license you have granted to members of the public to, say, drop off the mail or distribute campaign flyers? Yes, he has. And has he also invaded your ‘reasonable expectation of privacy’, by nosing into intimacies you sensibly thought protected from disclosure? Yes, of course, he has done that too.57

The Judge equated a trained police dog with a person using a particular technology—the high-powered binoculars. There is a difference between allowing a person to come onto a property pursuant to an implied licence and a person examining the front path with a metal detector. The scope of the implied licence is limited not only as to area but as to purpose. The analysis based upon the scope of an express or implied licence to come onto a property to carry out an investigation using a specially trained dog provides an answer to the problem. To cast the answer as an imperfect analogy, once again using disparate comparators, is a further example of the dangers of loose analogies and the care that needs to be taken in employing this type of reasoning, especially in the area of new and paradigmatically different technologies.

B.  Defamation, Publication Analogies and the Digital Paradigm

Another area of law that has struggled with new digital technologies is that of defamation and in particular what amounts to publication. A progressive approach, one which recognises the issues posed by new communications technologies, has been adopted in England by Eady J, although it has been dampened somewhat by the Court of Appeal. Eady J’s approach has been to recognise the unique qualities of Internet-based information and has cautioned against the use of analogy. Two significant cases in which he delivered judgments were Metropolitan Schools Ltd v Designtechnica Corporation and Google58 and Tamiz v Google.59

i.  The English Approach

Metropolitan Schools involved Google searches which linked to a defamatory article and also provided snippets from the article. A most important issue was whether or not Google was a publisher at common law. Eady J acknowledged that analogies are not always helpful, but that there would be resort to analogy when the common law had to be applied to new and unfamiliar concepts. He then went on to consider the analogy of a search carried out in the card index or catalogue


57  ibid, 1418 (citations omitted).
58  Metropolitan Schools Ltd v Designtechnica Corporation and Google [2011] 1 WLR 1743; [2009] EMLR 27; [2009] EWHC 1765 (QB).
59  Above n 41.


of a library. Providing a reference to a book which may contain defamatory material would hardly attract liability. However, a catalogue may record that a particular book contains allegations of corruption against a living person or spells out information which could attract liability under the ‘repetition rule’.60 That being so, and by extending the analogy, Google should be liable for repeating a defamatory allegation in a snippet. However, Eady J then pointed out that whereas the compiler of a hard copy library catalogue will have chosen the wording of the comment about the content of a book on the card, that does not apply to a search engine because there has been no intervention on the part of a human agent. The content of a snippet is compiled by web-crawling robots.61 Eady J went on to consider the position once there had been notice given to Google that a snippet contained defamatory material. He referred to the decision of Morland J in Godfrey v Demon Internet where the acquisition of knowledge was critical. That is because the law recognises that one can become liable for publication of defamatory material by acquiescence. Eady J observed that someone hosting a website will generally be able to remove material that is legally objectionable. If this is not done, then there may be liability on the basis of authorisation or acquiescence. At this point Eady J recognised that platforms and applications that have been ‘bolted on’ to the Internet are different, and recognition of the different ways in which the technologies operate must be taken into account. Having referred to a website he went on to say:
A search engine, however, is a different kind of Internet intermediary. It is not possible to draw a complete analogy with a website host. One cannot merely press a button to ensure that the offending words will never reappear on a Google search snippet: there is no control over the search terms typed in by future users.
If the words are thrown up in response to a future search, it would by no means follow that the Third Defendant has authorised or acquiesced in that process.62

In Metropolitan Schools Google had taken steps to ensure that certain identified URLs had been blocked, but it needed to have specific URLs identified and was not in a position to put in place a more effective block on the specific words complained of without, at the same time, blocking a huge amount of other material which might contain some of the individual words comprising the offending snippet. Furthermore, although it had blocked specific URLs referred to by the plaintiff from its website, that would not stop someone going to www. to carry out a search to locate the information. In addition, the material that was the subject of the search could be moved to a different website with a different address, which would thereby avoid the block. Designtechnica could


60  Metropolitan Schools v Designtechnica, above n 58, para [52].
61  ibid, para [53].
62  ibid, para [55].

alter the code on its website to ensure that offending material was not picked up by search engine webspiders, and Eady J considered that the plaintiff’s remedy lay against that company. An injunction against Google would be an inadequate substitute. Against this background, including the steps so far taken by the Third Defendant to block the identified URLs, Eady J considered it was unrealistic to attribute responsibility for publication to the Third Defendant, whether on the basis of authorship or acquiescence. Thus the automatic process together with the lack of human or editorial control seemed determinative. But then Eady J went further. He considered that, even after notification of defamatory material, Google was still not a publisher because of its lack of control over future searches that might continue to throw up offending material. So where did the library analogy go? In a word, nowhere. In some respects it demonstrates the difficulty in relying on comparators from different paradigms. What Eady J did was to recognise that the library catalogue analogy went only so far, and that the main feature of the ‘repetition rule’ involved human intervention. In my opinion his concentration on the nature of the technology itself was the correct approach, which resulted in a decision which accorded with technological reality. Eady J’s approach in Metropolitan Schools was continued in Tamiz v Google,63 although this case dealt not with search results but with Google’s blog service, Blogger. It allowed any Internet user to create an independent blog. Was Google a publisher as the host of the Blogger site? Eady J held that in its role as a platform provider Google was entirely passive. It had a policy of not removing offending material even when notified, but merely passing the complaint on to the blogger concerned.
Although Eady J made no reference to the graffiti principle established in the United States,64 he nevertheless likened Google’s position to that of the owner of a wall that had been graffitied in that, although the owner could have it painted over, its failure to do so did not necessarily make it a publisher. In developing this analogy he issued a caution about analogies65 but said:
It may perhaps be said that the position is, according to Google Inc, rather as though it owned a wall on which various people had chosen to inscribe graffiti. It does not regard itself as being more responsible for the content of these graffiti than would the owner of such a wall.66

Tamiz went to the English Court of Appeal67 and the principles developed in Eady J’s line of cases mitigating the strictness of defamation law

63  Above n 41.
64  See Heller v Bianco 111 Cal App 2d 424 (1952) and Tacket v General Motors 836 F 2d 1042 (7th Cir 1987).
65  Above n 41.
66  ibid, para [10].
67  ibid.


for Internet hosts hit a ‘speed bump’. The Court held that Google could not be regarded as a purely passive communicator of information in the circumstances of this case. One of the dangers in using analogies, particularly at first instance, is that an Appeal Court may disagree with the particular comparators and substitute their own. This is precisely what happened when Tamiz went on appeal. Richards LJ referred to an earlier case of Byrne v Deane68 where a notice had been posted on the wall of a golf club and the club was held to be a publisher. This was the start of consideration of a competing analogy and it was an unwise one, especially given that the nature of the technology employed by Google is significantly distant from the means employed in Byrne v Deane.69 Then Richards LJ passed on to a more recent case—that of Davison v Habeeb70—which considered defamatory material posted on a blog hosted by Google itself. From that point alone HHJ Parkes QC considered it arguable that Google was a publisher but he went on to rely on Byrne v Deane, stating:
The analogy between the ISPs which Eady J was considering in Bunt v Tilley … and the postal service was an apt one, because the ISPs in that case, like the postal or indeed the telephone services, were simply conduits, or facilitators, enabling messages to be carried from one person, or one computer, to another. [Google Inc], by contrast, is not simply a facilitator, or at least not in the same way as the ISPs. It might be seen as analogous to a gigantic noticeboard which is in [Google Inc’s] control, in the sense that [Google Inc] provides the noticeboard for users to post their notices on, and it can take the notices down (like the club secretary in Byrne v Deane …) if they are pointed out to it. However, pending notification it cannot possibly have the slightest familiarity with the notices posted, because the noticeboard contains such a vast and constantly growing volume of material.
On that analogy, it ought not to be viewed as a publisher until (at the earliest) it has been notified that it is carrying defamatory material so that, by not taking it down, it can fairly be taken to have consented to and participated in publication by the primary publisher. The alternative is to say that, like Demon Internet in Godfrey v Demon Internet Ltd…, it chose to host material which turned out to be defamatory, and which it was open to anyone to download, so that at common law it was prima facie liable for publication of the material, subject to proof that it lacked the necessary mental state… Even if [Google Inc] should properly be seen as a facilitator, the mere provider of a gigantic noticeboard on which others published defamatory material, in my judgment it must also at least be arguable that at some point after notification [Google Inc] became liable for continued publication of the material complained of on the Byrne v Deane principle of consent or acquiescence.71

By using the notice board analogy, the Court reached the conclusion that Google assumes the role of publisher once it has been notified that it is carrying defamatory material. Thus,


68  Byrne v Deane [1937] 1 KB 818.
69  For further discussion of Byrne v Deane see below.
70  Davison v Habeeb [2011] EWHC 3031 (QB).
71  ibid, paras [38] and [47].

despite technological realities and differences between a ‘real world’ notice board and a blog which has elements of notification but operates in an entirely different way, the notice board analogy brings Google within the scope of the rule in Byrne v Deane. In Tamiz, Richards LJ picked up the ball and said of Eady J’s approach:
In relation to Blogger he said nothing about HHJ Parkes QC’s analogy with the provision of a gigantic notice board on which others post comments. Instead, he drew an analogy with ownership of a wall on which various people choose to inscribe graffiti, for which the owner is not responsible (see [16] above). I have to say that I find the notice board analogy far more apposite and useful than the graffiti analogy. The provision of a platform for the blogs is equivalent to the provision of a notice board; and Google Inc goes further than this by providing tools to help a blogger design the layout of his part of the notice board and by providing a service that enables a blogger to display advertisements alongside the notices on his part of the notice board. Most importantly, it makes the notice board available to bloggers on terms of its own choice and it can readily remove or block access to any notice that does not comply with those terms. Those features bring the case in my view within the scope of the reasoning in Byrne v Deane. Thus, if Google Inc allows defamatory material to remain on a Blogger blog after it has been notified of the presence of that material, it might be inferred to have associated itself with, or to have made itself responsible for, the continued presence of that material on the blog and thereby to have become a publisher of the material.
Mr White QC submitted that the vast difference in scale between the Blogger set-up and the small club-room in Byrne v Deane makes such an inference unrealistic and that nobody would view a comment on a blog as something with which Google Inc had associated itself or for which it had made itself responsible by taking no action to remove it after notification of a complaint. Those are certainly matters for argument but they are not decisive in Google Inc’s favour at this stage of proceedings where we are concerned only with whether the appellant has an arguable case against it as a publisher of the comment in issue.72

The problem with this approach is that by using the analogy of a noticeboard and adopting the approach in Byrne v Deane, the Court absolved itself of the need for any discussion of the nature of the technology employed, although it was hinted that there might be a further consideration of these matters should the case go to trial. But that is a convenient means of sidestepping the necessity to properly test the nature of the comparators and thereby test the utility of the analogy itself. The Court of Appeal not only ‘picked a winner’ in choosing the notice board and thereby adopting the Byrne v Deane approach, but it relied on Davison v Habeeb with the inference that although the case had been referred to Eady J, he made no comment on the notice board analogy. This provided another means by which the Court could comfortably adopt Davison v Habeeb and thereby Byrne v Deane without a rigorous examination of the comparators. As Eady J said, one has to be wary of analogies when dealing with new technologies.

72  Tamiz v Google, above n 41.


ii.  Looking at the Technology—Canada and Australia

In the Canadian case of Barrick Gold Corp v Lopehandia73 the defendant had uploaded a virtual ‘blizzard’ of defamatory messages. One of the first matters the Judge took into account was the nature of the Internet itself—as far more pervasive than print. The speed of delivery is much faster than traditional media, and hyperbole is at least as common in Internet discussions as carefully considered argument. The Judge considered that traditional approaches in the real world would fail to address the realities of the Internet world. The Court was therefore prepared to differentiate the Internet as a communications forum from other traditional forms of media and apply what could be described as the germ of a property-based argument, taking into account the particular properties, qualities and characteristics of the new digital paradigm. This was in contrast to the approach of the High Court of Australia in Dow Jones v Gutnick74 where a distinctly technology-neutral approach was taken. The dismissal of the distinction between the Internet on one hand and radio and television on the other, based on the capacity of the technology to disseminate, ignores the significant differences in disseminatory capacity between the Internet and what could be termed mainstream or analogue media. This amounts to a dismissal of the paradigmatic differences between earlier media and the Internet as being of little significance and in that way misinterprets the nature of the technology.75

a.  Crookes v Newton

Curiously enough, none of the cases referred to in the discussion about Google and defamation above contain any consideration of liability for simply providing a link to defamatory material. In all the cases there has been a deeper issue about services that are provided by Google, or the manner in which search results are displayed.
However, the case of Crookes v Newton,76 a decision of the Supreme Court of Canada, provides authority at the highest level for the treatment of links in defamation proceedings. Indeed, the finding of the Court on links could well provide guidance in other areas of law. Crookes v Newton holds that a hyperlink, by itself, is not publication of the content to which it refers. Publication will only occur if the hyperlink is presented in a way that repeats the defamatory content. In reaching this conclusion the Court started with a consideration of the history of the publication rule and of the importance that a defamatory statement had to


73  Barrick Gold Corp v Lopehandia (2004) 71 OR 3d 416 (ON CA).
74  Dow Jones v Gutnick (2002) 210 CLR 575; [2002] HCA 56.
75  The dismissal of the technology as a significant factor was also the case in Trkulja v Google [2012] VSC 533 and Duffy v Google [2015] SASC 170. These cases will be discussed in detail in ch 11 but it can be argued that, at least in the field of defamation, the Australian Courts are dismissive of the suggestion that digital systems are any different from those of the pre-digital paradigm.
76  Above n 35.

convey meaning. It considered the traditional rule, which looks to the form of the defendant’s acts and the manner in which they assist in causing defamatory material to reach a third party. The Court could then have considered the use of analogy but did not do so. Rather it looked at the nature of the technology and the fact that there would be a presumption of liability for all hyperlinkers, which would have a chilling effect upon the flow of information and freedom of expression on the Internet. In considering the nature of the technology the Court said:
Hyperlinks are, in essence, references, which are fundamentally different from other acts of ‘publication’. Hyperlinks and references both communicate that something exists, but do not, by themselves, communicate its content. They both require some act on the part of a third party before he or she gains access to the content. The fact that access to that content is far easier with hyperlinks than with footnotes does not change the reality that a hyperlink, by itself, is content-neutral. Furthermore, inserting a hyperlink into a text gives the author no control over the content in the secondary article to which he or she has linked. A hyperlink, by itself, should never be seen as ‘publication’ of the content to which it refers. When a person follows a hyperlink to a secondary source that contains defamatory words, the actual creator or poster of the defamatory words in the secondary material is the person who is publishing the libel. Only when a hyperlinker presents content from the hyperlinked material in a way that actually repeats the defamatory content, should that content be considered to be ‘published’ by the hyperlinker.77

Thus the Court is saying that hyperlinks are essentially content-neutral references to material that hyperlinkers: (1) have not created; and (2) do not control. Although a hyperlink communicates that information exists and may facilitate the transfer of information, it does not, by itself, communicate information. But the Court went on to consider a significantly wider issue—that of the Internet itself:

The Internet cannot, in short, provide access to information without hyperlinks. Limiting their usefulness by subjecting them to the traditional publication rule would have the effect of seriously restricting the flow of information and, as a result, freedom of expression. The potential 'chill' in how the Internet functions could be devastating, since primary article authors would unlikely want to risk liability for linking to another article over whose changeable content they have no control. Given the core significance of the role of hyperlinking to the Internet, we risk impairing its whole functioning. Strict application of the publication rule in these circumstances would be like trying to fit a square archaic peg into the hexagonal hole of modernity.78

This did not mean that there was a blanket defence available for those who hyperlinked to defamatory material. A hyperlink will constitute publication if it 'presents

77 ibid.
78 ibid, at [36].


The Transition to the Digital Paradigm

content from the hyperlinked material in a way that actually repeats the defamatory content'.79 This might occur, for example, where a person inserts a hyperlink in text that repeats the defamatory content in the hyperlinked material. In these cases, the hyperlink would be more than a reference; it would be an expression of defamatory meaning. However, this had not occurred in the present case, and the majority dismissed the appeal.

Thus it can be seen how a rigorous technologically-based argument can result in the formation of a rule. The Supreme Court looked at first principles but did not get involved in trying to locate comparators. Rather, it looked at the functionality of hypertext links to reach a conclusion that in and of themselves they did not contain meaning.80
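The rule in Crookes v Newton lends itself to a schematic illustration. The sketch below is hypothetical (the function, names and strings are illustrative inventions, not anything drawn from the judgment): a bare hyperlink is a content-neutral reference, and publication occurs only where the linking text itself repeats the linked material's words.

```python
# Schematic sketch of the Crookes v Newton rule: a bare hyperlink is a
# content-neutral reference; only repeating the linked words amounts to
# publication. All names and strings here are hypothetical illustrations.

def publishes_linked_content(linking_text: str, linked_words: str) -> bool:
    """True only if the linking page itself repeats the linked material's words."""
    return linked_words.lower() in linking_text.lower()

linked_words = "the author is a fraud"  # words hosted on the secondary site

bare_link = 'For criticism, see <a href="https://example.com/review">this review</a>.'
repeating_link = ('The review says "the author is a fraud": '
                  '<a href="https://example.com/review">read it here</a>.')

print(publishes_linked_content(bare_link, linked_words))       # False: reference only
print(publishes_linked_content(repeating_link, linked_words))  # True: content repeated
```

The point of the sketch is that the test turns on what appears in the linking text, not on the mere existence of the link.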

iii.  New Zealand—Wishart v Murray

The process of achieving a result by the use of analogy, and a demonstration of my suggestion that doing so at first instance lays a judge open to second-guessing on appeal, may be found in the decision of Courtney J in Wishart v Murray81 and the subsequent approach of the New Zealand Court of Appeal.82

As we have seen from the Google cases, problems arise in the case of content hosts and whether they are publishers. What is novel about some of the services available on the Internet is the ability of people—by means of what could broadly be defined as social networking sites—to add or contribute to content. Now that makes the contributor an author. But does the primary author—the 'owner' of the blog site or social network page—attract any liability for what others say?

Facebook presents an interesting example. When and under what circumstances will the host of a Facebook page become a publisher of comments by third parties? Is knowledge of the defamatory content required or, by virtue of hosting the page, is knowledge imputed?—that is, ought the content host to have known of the presence of the content rather than actually knowing of it? Courtney J, after an extensive examination of the cases, concluded that an imputed knowledge test had to be applied to the host of a Facebook page.

In Wishart v Murray the plaintiff had written a book about a high-profile and controversial murder case. He took a particular stance which attracted a

79 Crookes v Newton, above n 35, at para [42] per Abella J, referred to in Murray v Wishart [2014] NZCA 461 (CA) at para [31].
80 The temptation is to use the analogy of a footnote in a discussion of links but the Supreme Court of Canada has rendered that unnecessary. However, there has developed a suggestion that there could in some cases be publication by reference. See Jennings v Buchanan [2005] 2 NZLR 577; Ferrier Hodgson v Siemer (HC, Auckland CIV-2005-404-001808, 5 May 2005, Ellen France J) at [1] and [82]; Korda Mentha v Siemer (HC, Auckland CIV-2005-404-1808, 23 December 2008, Cooper J) at [12], [14] and [55]; and Siemer v Stiassny [2011] NZCA 106 at [30(b)]. In Karam v Fairfax [2012] NZHC 1331 it was decided to hold the issue over from a pre-trial application to a full trial where evidence is adduced and a factual matrix established.
81 Wishart v Murray [2013] 3 NZLR 246 (HC).
82 Murray v Wishart [2014] NZCA 461 (CA).



considerable degree of criticism. Mr Murray set up a Facebook page suggesting that the book should be boycotted and he posted comments on the page and on Twitter that were critical of the plaintiff. The Facebook page attracted 250,000 site visits and comments were posted which were abusive and defamatory.

What then about knowledge? Does the 'publication test' require actual or constructive knowledge of the defamatory statement? The 'notice board' cases assisted Courtney J in her approach. Courtney J's starting point was the case of Byrne v Deane referred to above.83 This case, it will be remembered, concerned an anonymous notice that was posted on the noticeboard of a golf club. The club rules prohibited notices being posted without the secretary's consent. The defendants had seen the notice but did not remove it. The Court of Appeal held that those with control over the noticeboard were publishers of material posted on it if it could be inferred that they had taken responsibility for it. They had the power to remove the notice and failed to do so.

Byrne v Deane was followed by the Supreme Court of New South Wales in Urbanchich v Drummoyne Municipal Council.84 This case concerned defamatory posters glued to bus shelters under the defendant's control. The defendant had actual knowledge of the posters and had been requested to remove them. Courtney J considered85 that Urbanchich held that there should be proof of facts from which the fact-finder could infer that the defendant had taken responsibility for, or ratified, the continued publication of the statements. The defendant in Urbanchich did in fact have actual knowledge and was asked to remove the material, but treating these facts as prerequisites for the defendant to be treated as a publisher does not accurately reflect the reasons for the decision.

In Wishart, Courtney J considered that the analogy of the noticeboard applied when considering whether the host of a Facebook page is a publisher.
The host of such a page may establish what is essentially a noticeboard, which may be public and to which anyone may post comments, or which may be private and restricted to posting from a specified group. In either case the host may control content and delete postings and may also block users. Furthermore, she held that those who host Facebook pages are not passive instruments (as was the case in Bunt v Tilley which, as noted, dealt not with content providers but content carriers) or mere conduits of content on the page. She held that there are two circumstances where they will be publishers of content:

1. If they know of the defamatory statement and fail to remove it within a reasonable time in circumstances that give rise to an inference that they are taking responsibility for it. A request by the person affected is not necessary.
2. Where they do not know of the defamatory posting but ought, in the circumstances, to know that postings are being made that are likely to be defamatory.


83 Above n 68.
84 Urbanchich v Drummoyne Municipal Council (1991) Aust Torts Reports 69.
85 Above n 81 at [87].



The noticeboard analogy is hardly apt. It deals with a particular form of notification that has little technological impact. One wonders whether the noticeboard simply had convenient similarities to a Facebook page. Yet there are fundamental differences. There may well be situations where the way in which a Facebook page is operated would make it more of a carrier and a passive instrument, as the Court of Appeal was to observe. To make a general statement about those who host Facebook pages, as Courtney J did, ignores the possibility of a number of fact-specific situations. A similar generalisation could not apply to noticeboards. Once again an examination of the way in which the board was used would have to be undertaken. Added to the mix for a Facebook page is the nature of the technology, together with the fact that one has to be careful not to generalise between platforms. A blog that invites comments is quite a different proposition from a Facebook page that invites postings on a wall.

a.  Wishart v Murray on Appeal

The case was appealed to the Court of Appeal.86 The judges unanimously held that a third party publisher—that is, the owner of the Facebook page that contains comments by others—was not liable as publisher of those comments. They rejected the suggestion that liability should attach because the owner of the page 'ought to have known' that there was defamatory material, even if he or she was unaware of the actual content of the comment. The Court adopted a more restrictive approach, holding that the host of a Facebook page would only be liable as a publisher if there was actual knowledge of the comments and a failure to remove them within a reasonable time in circumstances which could give rise to an inference that responsibility was being taken for the comments. The decision leaves open other aspects of publication on platforms other than Facebook, such as blogs.
As is the case with many aspects of the dynamic nature of the Internet, the law will continue to develop. But what was important was that the Court recognised some of the problems posed by the Digital Paradigm and, as a first step, considered how the Facebook page worked. This is an important and necessary stage in determining the proper application of existing rules. The Court then made the following significant comment about analogy:

The analysis of the cases requires the Court to apply reasoning by strained analogy, because the old cases do not, of course, deal with publication on the internet. There is a question of the extent to which these analogies are helpful. However, we will consider the existing case law, bearing in mind that the old cases are concerned with starkly different facts.87

This comment from the Court, together with its earlier analysis of how Facebook worked in this case, demonstrates the proper analytical path in considering

86 Murray v Wishart [2014] NZCA 461 (CA).
87 ibid, para [99].



paradigmatically differing technologies. There is a clear recognition in this statement that whereas in the past the lack of technological nuance meant that communication of content could be considered as simply that, in the Digital Paradigm the method and technological realities of content communication must be taken into account.

b.  The Use of Analogy

The Court observed that it was being asked to consider third-party Facebook comments as analogous with:

1. the posting of a notice on a noticeboard (or a wall on which notices can be affixed) without the knowledge of the owner of the noticeboard/wall;
2. the writing of a defamatory statement on a wall of a building without the knowledge of the building owner; and
3. a defamatory comment made at a public meeting without the prior knowledge or subsequent endorsement or adoption by the organiser of the meeting.

The Court then considered the circumstances in Emmens v Pottle, which established that a party can be a publisher even without knowledge of the defamatory material. The holding in that case was that a news vendor who does not know of the defamatory statement in a paper he or she sells is a publisher, and must rely on the innocent dissemination defence to avoid liability.

The news vendor in Emmens v Pottle did not provide an apposite analogy with a Facebook page host. The Court observed that a news vendor is a publisher only because of the role taken in distributing the primary vehicle of publication, the newspaper itself. This contrasts with the host of a Facebook page, who provides the actual medium of publication and whose role in the publication is completed before publication occurs. The Facebook page is in fact set up before any third party comments are posted. Thus the nature of the technology demonstrates the invalidity of the analogy.

So was the Facebook page more like the 'notice on the wall' situation described in Byrne v Deane?88 This analogy was not perfect either.
In Oriental Press Group Ltd v Fevaworks Solutions Ltd89 the Court found that posting a notice on a wall, as occurred in Byrne v Deane, was a breach of club rules and therefore amounted to a trespass. The Court of Appeal did not consider this a factor affecting the outcome, but rather that the club and its owners had not posted the defamatory notice and, until they became aware of it, were in no position to prevent or bring to an end the publication of the defamatory message. If a case arose where the defamatory message was posted on a community noticeboard on which postings were welcomed from anyone, the same analysis would apply. Furthermore,

88  89 

Above n 68. Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47.



in Byrne v Deane the post was truly anonymous. There was no way by which the person posting the notice could be identified. In the case of the Facebook host, the posting of messages in response to an invitation to do so is lawful and solicited by the host. Similarly, the Facebook host is not the only potential defendant, whereas in Byrne v Deane, as has been observed, the poster of the notice could not be identified. Once again a careful examination of the technology and the way that it works challenged the validity of the analogy.

The Court also considered that drawing an analogy between a Facebook page and graffiti on a wall was unhelpful.90 The owner of the wall on which the graffiti is written does not intend that the wall be used for posting messages. A Facebook host intends just that, depending upon the way in which the page is set up.

One argument that had been advanced was that an analogy could be drawn with a public meeting—although there is a danger in equating the physical world with the virtual. It was argued that if Mr Murray had convened a public meeting on the subject of Mr Wishart's book, Mr Murray would have been liable for his own statements at the meeting but not for those of others who spoke at the meeting, unless he adopted others' statements himself.

At this stage, having cautioned against the use of analogy, the Court took the opportunity to 'pick a winner'. The Court considered the public meeting analogy useful because it incorporated a factor that neither of the other two analogies did: the fact that Mr Murray solicited third party comments about Mr Wishart's book. In addition, speakers at a public meeting could be identified (and sued) if they made defamatory statements, just as many contributors to the Facebook page could be.
However, the public meeting analogy is not a perfect one, in that statements at a meeting would be oral and therefore ephemeral, unlike the written comments on the Facebook page. But it did illustrate a situation where even if a person incites defamation, he or she will not necessarily be liable for defamatory statements made by others. That is the case even if he or she ought to have known that defamatory comments could be made by those present at the meeting.

What is extraordinary is that the Court decided to travel this path in the first place. It had established a proper reasoning path in identifying the characteristics of the technology and approaching the matter from that stance. By choosing an analogy—and one as technologically removed from the Digital Paradigm as an example could be—the Court opened the door to the potential use of that analogy as a descriptor of the functionality of a Facebook page, which bears no resemblance to the way in which it actually operates.

It is clear that the temptation to follow the path of analogy was an irresistible one, but the Court should have turned away. It had correctly identified the

90  See for example the US cases of Heller v Bianco 111 Cal App 2d 424 (1952) and Tacket v General Motors 836 F 2d 1042 (7th Cir 1987) referred to by Courtney J.



problems with earlier analogies. Surely the matter should have been left there, and the Court could have relied upon its earlier approach to achieve its outcome. This is demonstrated in the subsequent discussion of the problems surrounding the 'ought to know' test, which did not require the utilisation of analogy.

c.  Problems with the 'Ought to Know' Test

The Court had concerns about the 'ought to know' test and Facebook hosts. First, an 'ought to know' test puts the host in a worse position than the 'actual knowledge' test does. In the 'actual knowledge' situation the host has an opportunity to remove the content within a reasonable time and will not be a publisher if this is done. In the 'ought to know' case publication commences the moment the comment is posted. What happens when a Facebook page host who ought to know of a defamatory comment on the page actually becomes aware of the comment? On the 'actual knowledge' test, he or she can avoid being a publisher by removing the comment in a reasonable time. But removal of the comment in a reasonable time after becoming aware of it will not avail him or her if, before becoming aware of the comment, he or she ought to have known about it, because on the 'ought to know' test he or she is a publisher as soon as the comment is posted.

Another concern was that the 'ought to know' test makes a Facebook page host liable on a strict liability basis, solely on the existence of the defamatory comment. Once the comment is posted the host can do nothing to avoid being treated as a publisher.

A further concern involved the need to balance the right of freedom of expression affirmed in section 14 of the NZ Bill of Rights Act 199091 against the interests of a person whose reputation is damaged by another. The Court considered that the imposition of the 'ought to know' test in relation to a Facebook page host gives undue preference to the latter over the former.
A fourth issue concerning the Court was the uncertainty of the test in its application. Given the widespread use of Facebook, it is desirable that the law defines the boundaries with clarity, so that Facebook page hosts can regulate their activities to avoid unanticipated risk. Finally, the innocent dissemination defence provided in section 21 of the Defamation Act would be difficult to apply to a Facebook page host, because the language of the section and the defined terms used in it are all aimed at old media and appear to be inapplicable to internet publishers.

Thus the Court concluded that the actual knowledge test should be the only test to determine whether a Facebook page host is a publisher. The decision therefore clarifies the position for Facebook page hosts and the test that should be applied in determining whether such an individual will be a publisher of third party comments. But there are deeper aspects to the case that are


91 And similar international instruments.



important in approaching cases involving new technologies, and new communications technologies in particular.

d.  The Deeper Aspects of the Case

The first is the recognition by the Court of the importance of understanding how the technology actually works. It is necessary to go below the 'content layer' and look at the medium itself and how it operates within the various taxonomies of communication methods. In this regard, it is not possible to make generalisations about all communications protocols or applications that utilise the backbone that is the Internet. Similarly it would be incorrect to refer to defamation by Facebook, or via a blog or a Google snippet, as 'Internet defamation', because the only common factor these applications have is that they bolt on to and utilise the transport layer provided by the Internet. An example in the intellectual property field, where an understanding of the technology behind Google AdWords was critical to the case, is Intercity Group (NZ) Limited v Nakedbus NZ Limited.92 Thus, when confronted with a potentially defamatory communication on a blog, the Court will have to consider the way in which a blog works, and also consider the particular blogging platform, for there may well be differences between platforms and their operation.

The second major aspect of the case—and a very important one for lawyers—is the care that must be employed in drawing analogies, particularly with earlier communications paradigms. The Court did not entirely discount the use of analogy when dealing with communication applications utilising the Internet. However, it is clear that the use of analogies must be approached with considerable care. The Digital Paradigm introduces new and different means of communication that often have no parallel in the earlier paradigm, other than that a form of content is communicated.
What needs to be considered is how that content is communicated, and the case demonstrates the danger of looking for parallels in earlier methods of communication. While a Facebook page may 'look like' a noticeboard upon which 'posts' are placed, or has a 'wall' which may be susceptible to scrawling graffiti, it is important not to be seduced by the language parallels of the earlier paradigm. A Facebook 'page' and a 'web page' are not pages at all. Neither has the physical properties of a 'page'. Each is in fact a mixture of coded electronic impulses rendered on a screen using a software and hardware interface. The word 'page' is used because in the transition between paradigms we tend to use language that encodes our conceptual understanding of the way in which information is presented. A 'website' is a convenient linguistic encoding for the complex way in which information is dispersed across a storage medium which may be accessible to a user. A website is


Intercity Group (NZ) Limited v Nakedbus NZ Limited [2014] NZHC 124.



not in fact a discrete physical space like a 'building site'. It has no separate identifiable physical existence. The use of comfortable encoding for paradigmatically different concepts—the resort to a kind of functional equivalence with an earlier paradigm—means that we may be lured into considering other analogous equivalencies as we attempt to make rules which applied to an old paradigm fit a new one.

The real subtext to Murray v Wishart, and indeed to all the cases discussed in this section, is that we must be careful to avoid what appears to be the comfortable route, and must carefully examine and understand the reality of the technology before we start to determine the applicable rule.
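The separation between content and carriage that this section has emphasised can be made concrete. The following sketch is a minimal loopback illustration (the payloads are hypothetical): three quite different items of 'content' are delivered over exactly the same TCP transport machinery.

```python
# A minimal loopback sketch (hypothetical payloads): whatever the content
# layer carries, the transport layer beneath it is the same TCP machinery.
import socket
import threading

def serve_once(srv: socket.socket, payload: bytes) -> None:
    """Accept one connection and send the payload."""
    conn, _ = srv.accept()
    conn.sendall(payload)
    conn.close()

received = []
for content in (b"a blog post", b"a Facebook comment", b"a search snippet"):
    srv = socket.socket()            # identical transport for each payload
    srv.bind(("127.0.0.1", 0))       # the OS picks a free port
    srv.listen(1)
    t = threading.Thread(target=serve_once, args=(srv, content))
    t.start()
    with socket.create_connection(srv.getsockname()) as cli:
        received.append(cli.recv(64))
    t.join()
    srv.close()

print(received)  # the content differs; the carriage did not
```

Whether the content layer carries a blog post, a Facebook comment or a search snippet, the transport beneath is indistinguishable, which is why generalisations about 'Internet' communication are hazardous.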

VIII. Conclusion

In this chapter I have discussed pre-digital communications systems and methods. As I have shown, there is often a reluctance to engage with new media, although this changes in time until the point is reached at which assumptions are made about methods of communication without any real consideration of what lies beneath the technology. In the past, content has been inextricably associated with the medium by which it is conveyed, and we uncritically focus on the content without thinking too deeply about its method of communication. Digital systems challenge almost all of our assumptions about information communication.

Settled and accepted means of engagement in the commercial space required the drafters of the Model Law to revisit what it was that writing and signatures actually did, in order to determine how an electronic equivalent—a functional equivalent—could be developed. This involved going below the content layer in each of the examples that I have discussed, and looking at the way in which a parallel outcome could be achieved in the digital space.

Sadly this level of analysis was not undertaken in the Reimerdes case. Instead of carefully considering the way in which hypertext links actually worked and the technology behind them, Judge Kaplan used the term 'functional equivalence' as a form of shorthand to achieve a desired result. The term cannot be used lightly in determining applicable rules in a time of disruptive change and co-existing technologies. Proper analysis of the comparators must be undertaken. The concept of functional equivalence is very useful, but only so long as the examination is carried out in a rigorous fashion. If this is not done, the currency of the term is cheapened and results are reached which retard the development of rules surrounding new technologies and, as a result of those rules, may retard the development, scope, use and innovation of the technology itself.
Defamation law is an area where content is king. The very content of a statement may or may not be defamatory. The digital paradigm has raised new and challenging issues as far as the element of publication is concerned. Questions of scale, the involvement of publishing conglomerates, the different means of



communicating defamatory content, and who is responsible for that communication, have in the past created little difficulty. The search for consistency and the utilisation of precedent have required courts to consider parallels or analogies from a previous paradigm when dealing with new communications technologies that are by their nature distributed in a way that could not be achieved in the pre-digital space. Analogy can be a powerful reasoning tool but once again it has to be used with care. Once again an understanding of the new technology is essential. Once again an understanding of 'what lies beneath' the content in the pre-digital comparator needs to be carefully considered.

In the cases of both functional equivalence and analogy, what must also be taken into account are some of the particular qualities or properties of new communications technologies that were discussed in chapter two. Courts and lawyers need to go beneath the content layer to understand the true nature of the difference between the old and the new, to grasp the implications of the new, and to devise rules that fit not only with what has gone before but with what will be necessary for what is to be.

I suggested in chapter two that one of the qualities of digital systems is continuing disruptive change. This change—disruptive but transformative—is having an impact on lawyers and judges. New communications systems are causing, and will continue to cause, disruption to what were established ways of thinking regarding rules that have had their basis or foundation in pre-digital communications systems. The disruption or transformation must come in the way in which lawyers and judges develop their arguments and judgments to recognise the realities of the Digital Paradigm.

4

Aspects of Internet Governance

I. Introduction

In this chapter I provide an overview of some of the issues surrounding Internet governance. This is a complex topic which has been the subject of considerable academic thought and writing, as well as consideration at national and international geopolitical levels.

Internet governance is a difficult term to define, and a number of attempts have been made. The Working Group on Internet Governance (WGIG) proposed the following:

Internet governance is the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures and programmes, that shape the evolution and utilisation of the Internet.1

An alternative definition, devised by academics in the field, is that Internet governance is collective action by governments and/or private sector operators of TCP/IP networks, to establish rules and procedures to enforce public policies and resolve disputes that involve multiple jurisdictions.2

These are generous definitions, going beyond 'governance by government' or formal structures imposed by state actors using traditional rule-making systems, and including a range of mechanisms, both formal and informal, for management and control. Internet governance, as this chapter will show, covers not only the regulation of the infrastructure for transmitting data—the transport and associated technical layers—but also the information content of the data—the content layer. On

1 WGIG, Report of the Working Group on Internet Governance (Chateau de Bossey, June 2005). This definition was adopted in the WSIS Tunis Agenda for the Information Society, Document WSIS-05/TUNIS/DOC/6(rev. 1)-E, adopted 18 November 2005.
2 M Mueller, J Mathiason and LW McKnight, Making Sense of 'Internet Governance': Defining Principles and Norms in a Policy Context (April 2004).



occasion it is difficult to differentiate between the two. Transport layer controls may have a content regulation objective. In addition there are tensions between the various proponents of different regulatory models: tensions between authority and legitimacy, between the cyberlibertarians and the digital realists, and between the Internet Exceptionalists and those who see the Internet as yet another technological novelty that will ultimately fall within nation state or international regulatory sway.

This chapter does not pretend to deliver an exhaustive analysis of Internet governance. Rather it will illustrate some of the differing approaches and models of Internet governance and demonstrate the tensions that exist. It will argue that the Internet is primarily a technological phenomenon, the governance or regulation of which requires an understanding of the nature of the technology and the practicality of regulating a distributed communications network. And it is here that the collision takes place, between traditional governance expectations on the one hand and the realities of the technology behind the Internet on the other.

The discussion will start with a consideration of recent international moves towards Internet governance through the Internet Governance Forum (IGF) and how that organisation came to be. The IGF itself demonstrates some of the fragility that underlies a consideration of Internet governance in the face of other international actors such as the International Telecommunication Union (ITU). I shall then go on to consider aspects of technical governance or regulation of the network by network engineers and those who define and approve the standards of the systems that allow the Internet to operate. This will include a discussion of the Internet addressing system—the Domain Name System (DNS)—which is under the supervision and effective governance of the Internet Corporation for Assigned Names and Numbers (ICANN).
I shall then discuss the various models of Internet governance, considering particularly the issue of Net Neutrality, which underpins many theories of Internet governance and which suggests that there should be no discrimination between types of Internet content—which is merely data—and that all data should be treated alike and free to move at the same speed across the network. This 'end-to-end' design is a fundamental reality of Internet architecture and should act as a basic proposition of any form of Internet regulation.

The models of Internet governance that will be discussed will be those proposed by the cyberlibertarians, who see the Internet as a different space that is not amenable to regulation by individual states but which should develop its own regulatory models. The transnational approach advocated by Henry Perritt, which relies on the formalisation of regulatory structures through international organisations, will then be considered. Professor Lawrence Lessig is well known for his theory regarding the use of code as a regulatory mechanism. In many respects this theory dovetails with the way in which technical governance may be developed, and leads to a discussion of the layers theory of Internet governance advanced by Lawrence Solum and Minn Chung.
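The neutrality principle described above can be put in schematic terms. A neutral network forwards packets without inspecting the content layer; a non-neutral network privileges certain traffic. The sketch below is a hypothetical illustration (the functions and traffic labels are invented for this purpose) rather than a model of any real router.

```python
# Hypothetical sketch contrasting a neutral network with a discriminatory one.
# Under the end-to-end / neutrality principle, packets are forwarded in
# arrival order regardless of what the content layer holds; a non-neutral
# network inspects content type and privileges favoured traffic.

def neutral_forward(packets):
    """Treat all data alike: forward strictly in arrival order."""
    return list(packets)

def discriminatory_forward(packets, favoured):
    """Privilege a favoured content type: the opposite of neutrality."""
    # sorted() is stable, so favoured packets move ahead but each group
    # otherwise keeps its arrival order.
    return sorted(packets, key=lambda p: p[0] != favoured)

arrivals = [("email", b"..."), ("video", b"..."), ("web", b"...")]

print([kind for kind, _ in neutral_forward(arrivals)])
print([kind for kind, _ in discriminatory_forward(arrivals, favoured="video")])
```

The neutral queue preserves arrival order; the discriminatory one reorders by content type, which is precisely the behaviour the end-to-end principle rules out.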



The Digital Realist School, which holds that territorial authorities have not lost the power to regulate any activity—even Internet activity—within their borders, will be discussed within the context of the writings of Judge Frank Easterbrook, Jack Goldsmith and Timothy Wu. This then will lead to a discussion of the concept of Internet Exceptionalism—that the Internet demands a separate series of Internet-specific rules on the basis of its exceptional character. Internet Exceptionalism does not exclude the actions of the State in regulating Internet activity. Quite the contrary. But what it does propound is that special rules need to be—and have been—made governing specific on-line behaviours that may not be reflected in general ‘off-line’ rules.

II.  The Internet Governance Forum

The onset of the Digital Paradigm and the revolution that developed after the Internet went commercial in 1993–94 fundamentally changed the way people think, behave, communicate, work and earn their livelihood.3 It has provided new ways to create knowledge, educate people and disseminate information. It revolutionised the way the world conducts economic and business practices, runs governments and engages politically. It provided for the speedy delivery of humanitarian aid and healthcare, and a new vision for environmental protection. It has created new avenues for entertainment and leisure. At the same time, concerns were noted that large sections of the world did not benefit from the advantages provided by the Internet—referred to as the digital divide.

Recognising that this new dynamic required a global discussion, the International Telecommunication Union, following a proposal by the Government of Tunisia, resolved in 1998 to hold a World Summit on the Information Society and place it on the agenda of the United Nations. The Summit proposal was endorsed by UN General Assembly Resolution 56/183 on 21 December 2001.4 Meetings were held, first in Geneva in 2003 and then in Tunis in 2005. The Tunis Agenda5 called for the Secretary General of the United Nations to convene a meeting of a new forum for multi-stakeholder policy dialogue to be called the Internet Governance Forum. The mandate of the Forum is to:

—— Discuss public policy issues related to key elements of Internet governance in order to foster the sustainability, robustness, security, stability and development of the Internet.

3  For a detailed discussion see Shane Greenstein, How the Internet Became Commercial—Innovation, Privatization and the Birth of a New Network (Princeton, Princeton University Press, 2015).
4  UN General Assembly Resolution 56/183—World Summit on the Information Society (21 December 2001).
5  Tunis Agenda for the Information Society (18 November 2005) clause 72, docs2/tunis/off/6rev1.html.


Aspects of Internet Governance

—— Facilitate discourse between bodies dealing with different cross-cutting international public policies regarding the Internet and discuss issues that do not fall within the scope of any existing body.
—— Interface with appropriate intergovernmental organisations and other institutions on matters under their purview.
—— Facilitate the exchange of information and best practices, and in this regard make full use of the expertise of the academic, scientific and technical communities.
—— Advise all stakeholders in proposing ways and means to accelerate the availability and affordability of the Internet in the developing world.
—— Strengthen and enhance the engagement of stakeholders in existing and/or future Internet governance mechanisms, particularly those from developing countries.
—— Identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations.
—— Contribute to capacity building for Internet governance in developing countries, drawing fully on local sources of knowledge and expertise.
—— Promote and assess, on an ongoing basis, the embodiment of World Summit on the Information Society (WSIS) principles in Internet governance processes.
—— Discuss, inter alia, issues relating to critical Internet resources.
—— Help to find solutions to the issues arising from the use and misuse of the Internet, of particular concern to everyday users.
—— Publish its proceedings.

It was proposed that the IGF would build on existing Internet governance structures, noting the various interests of governments, business entities, civil society and intergovernmental organisations. The IGF would have a lightweight, decentralised structure that would be periodically reviewed and would meet as required. Since 2006 there have been 10 IGF meetings, held annually. The IGF is primarily a discussion forum. It does not have any direct decision-making authority.
The principal objective seems to be to bring stakeholders together and strengthen engagement between them. The United Nations endorsed a five-year mandate for the IGF in April 2006 and, in 2010, after a re-evaluation of its continuation, the IGF was continued for a further five years.6 In December 2015 the mandate was further extended, this time for 10 years. During this period the IGF is expected to show progress on working modalities and on the participation of relevant stakeholders from developing countries.

6  General Assembly Resolution 65/141 Information and communications technologies for development (20 December 2010).



In extending the mandate the General Assembly of the UN emphasised the importance of multi-stakeholder participation: [R]ecognizing that effective participation, partnership and cooperation of Governments, the private sector, civil society, international organizations, the technical and academic communities and all other relevant stakeholders, within their respective roles and responsibilities, especially with balanced representation from developing countries, has been and continues to be vital in developing the information society.7

At the same time concern was expressed that significant digital divides remained, based not only upon lack of access but also upon discrimination in access to information and communications technology (ICT) by women. Associated with this was a recognition that the vision of the WSIS should be seen not merely as a function of economic development and the spread of information and communications technologies, but as a function of progress in the realisation of human rights and fundamental freedoms.8

Paragraph 34 of the Tunis Agenda provided a working definition of Internet governance ‘as the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures and programmes that shape the evolution and use of the Internet’. Within that very broad definition the IGF is expected to promote greater participation and engagement in the Internet governance discussions of
Governments, the private sector, civil society, international organizations, the technical and academic communities and all other relevant stakeholders from developing countries, particularly African countries, least developed countries, landlocked developing countries and small island developing States, and middle-income countries, as well as countries in situations of conflict, post-conflict countries and countries affected by natural disasters.9

A.  What does the IGF do?

Is the IGF merely a talk-fest? Does it serve a useful purpose? It first must be observed that it does not involve itself in active governance structures. What it does provide is a space for the sharing of information and the development of solutions on important Internet issues. It was decided from the outset that it would not be a decision-making body but rather an environment which allows participants to

7  Outcome document of the high-level meeting of the General Assembly on the overall review of the implementation of the outcomes of the World Summit on the Information Society (13 December 2015) clause 3. 8  ibid, clause 9 and in more detail clauses 41–47. 9  ibid, clause 61.



speak freely on an equal footing without the limitations that may be linked to the negotiation of formalised outcomes. What emerges from IGF discussions is material that plays an essential role in shaping the decisions taken by other groups that assist in the running of the Internet.10 Anyone who has a stake in the future of the Internet can go and be heard. The IGF was founded and operates on the principles of being bottom-up, transparent and inclusive. Without it, there would be no common ground where people who have a stake in the future of the Internet could develop local solutions with a global impact, addressing issues such as human rights on the Internet, spam and unsolicited commercial electronic messages, security and privacy, online surveillance and infrastructure development. The IGF identifies issues that need to be dealt with by the international community and puts them on the international policy agenda, shaping the decisions that will be taken elsewhere. It informs the decision-makers and shapes the policy-making processes of other institutions and governments. It relies therefore upon a moral persuasive power backed by the mandate given to it by the United Nations.11

B.  IGF Development

The way in which the IGF developed through the proceedings of the Geneva and Tunis Conferences demonstrates the tension, and indeed the collision, between the interests and views of state actors on the one hand, who base governance on principles of sovereign power, and, on the other, civil society groups, who form a substantial stakeholder interest in the Internet and who, in the main, favour alternative approaches to policy-making that are often more ‘universalist’ in approach. The collision is between two approaches to authority and legitimacy in policy-making for the Internet. The approach of state actors looks to a centralised, state-centred, exclusive approach which is the antithesis of the distributed, meritocratic and open policy-making mechanisms of private and civil society stakeholders.12

This collision was apparent even before the first UN-sponsored World Summit on the Information Society. Internet governance had been an issue in the late 1980s and early 1990s as the Internet moved towards commercialisation. As matters stood then, neither the diverse Internet community nor national government structures could unilaterally set Internet policy. The United States had adopted a relatively ‘hands-off’ approach to the Internet.13

10  These groups include those responsible for the naming and addressing space such as ICANN and IANA, local, national, regional and global policy developers, those involved in education, the users themselves and the technical community and those responsible for open standards development—see The Internet Society ‘The Internet Ecosystem’ ecosystem_020514_en.pdf.
11  The Internet Governance Forum
12  Dmitry Epstein, ‘The Making of Institutions of Information Governance: The Case of the Internet Governance Forum’ (2013) 28 Journal of Information Technology 137, 141.

A clear example of this may be seen in the creation of ICANN in 1998. This is a private, US-based, not-for-profit corporation which has authority over the addressing and domain system that is vital to the operation of the Internet. It was a response by the United States Government to increasing pressures for the internationalisation of this important arm of Internet governance, for there is no doubt that ICANN policies have a significant impact upon Internet operation.14 By the time that the ITU had identified Internet governance as a strategically important area in communications networks, there were already a number of structures in place that exercised primarily technical supervisory ordering of the operation of the Internet. Organisations such as the Internet Society (ISOC) and the Internet Engineering Task Force were non-state actors playing an important and significant role in Internet operations. When the initial WSIS discussions were proposed there was a reluctance to include these non-state players in UN-sponsored deliberations. However, it became apparent that these pre-existing non-governmental institutions would need to play a role in the discussions.15

C.  State Actors and Stakeholders—The Creation of the IGF

The Geneva Conference was significant in that it recognised that non-state actors had a role in Internet governance. This had been a fundamental disagreement in the early stages of the WSIS. State-based actors were resistant to recognition of non-state actors having any sort of role in governance and policy setting, which had traditionally been the preserve of nation states or formal international bodies. Nevertheless the recognition of the role of non-state actors was balanced against implicit recognition of the claim by nation states for exclusive authority over public policy-making. The tensions between state actors and other stakeholders led to the establishment of the Working Group on Internet Governance (WGIG), which was to consider a working definition of Internet governance and identify what policy issues would fall within its purview. The WGIG was an initial step towards identifying the role of non-state interests in Internet governance. In itself this was an

13  K Hafner and M Lyon, Where Wizards Stay Up Late: The Origins of the Internet (New York, Simon & Schuster, 1996).
14  It took some time for the US Government to divest itself entirely of supervisory control over ICANN, which it maintained for many years via Memoranda of Understanding between the US Department of Commerce and ICANN. The final links were severed on 30 September 2016 and ICANN is now governed by a multi-stakeholder group including businesses, users and representatives of governments. Increased pressure for US government divestiture of its involvement in ICANN followed the Edward Snowden revelations of widespread NSA surveillance of Internet users.
15  Epstein observes that the non-state actors, particularly the civil society groups, had to go through a rapid process of institutionalisation. They had to adapt to the UN-specific ways of engaging in the deliberation process. Epstein above n 12, 140.



innovation within the UN system, in that non-state stakeholders were included on an equal footing with state actors and the group was organised by the Secretary General of the UN, thus according the process a level of UN legitimacy. What transpired was a division of responsibility. Governments possessed the ultimate binding decision-making authority. Their area of responsibility was for national, regional and international policy-making and implementation, as well as the development and adoption of laws, regulations and standards, among other activities. The non-state actors were responsible for self-regulation and the development of best practice and standards. The civil society interests were to engage in policy development, represent the interests of marginalised groups and engage in capacity building.

The definition of Internet governance that was later adopted at Tunis, but was already being discussed at Geneva, meant that Internet governance was wider than questions of management and control over critical Internet resources and infrastructure. In essence it made it possible to define any communication-information policy as an issue of Internet governance.16 A number of mechanisms were available to develop and apply governance systems—that is, the principles, norms and procedures for decision-making. First there was to be an inclusive ‘space’ for dialogue among stakeholders, emphasising the inclusion of developing countries. Four governance models were proposed involving the creation of new bodies. One was a UN-based Global Internet Council (GIC) which would be led by nation-state governments. The GIC would hold ICANN accountable and replace the Governmental Advisory Committee (GAC) of ICANN with an International Internet Council, giving national governments oversight over ICANN. This desire for active national government supervision of ICANN has continued down to the present time.
An alternative was that a Global Internet Policy Council would replace ICANN with the World Internet Corporation for Assigned Names and Numbers (WICANN) based at the UN. It was recognised, however, that neither an exclusively ‘nation-state centric’ nor an exclusively ‘non-state actors’ solution was politically feasible. In addition there were competing interests within the UN system that wished to mould the debate about Internet governance. In February 2004 the ITU held an expert meeting on Internet governance. This emphasised the multi-institutional and multidimensional nature of the Internet governance debate. In March 2004 the Information and Communication Technology (ICT) Task Force of the UN organised a global forum on Internet governance. This was viewed as a ‘counter-conference’ to the ITU meeting. What was of interest was that individuals who had participated in

16  ML Mueller, Networks and States: The Global Politics of Internet Governance (Cambridge, MIT Press, 2010) 67. The WGIG Report identified four areas that constitute the Internet governance domain. These were: issues of infrastructure and management of critical Internet resources (eg management of the DNS), issues related to the use of the Internet (eg spam), issues that go beyond the Internet and have existing institutions addressing them (eg copyright), and the link between Internet governance and development. See Epstein above n 12, 144.



the WSIS and WGIG deliberations took part in the discussions at these competing fora. What came out of these meetings was a recognition that the challenges that had confronted the WGIG were still present. But what also came out of these competing meetings was the basis for the second phase of the WSIS, which took place in Tunis in 2005. However, it was clear that there was still considerable resistance to the proposition that non-state actors had any role in Internet governance. State actors such as Brazil, Russia, Iran, China and Syria17 continued to challenge the legitimacy of non-state actors’ involvement in drafting the diplomatic language of any document that might have the hint of a binding nature. Despite this, there was a shift towards a compromise.

What came out of the Tunis deliberations was the creation of the IGF. The final document from the Conference displayed a consensus on three points. First, the viability of present Internet governance arrangements, with private sector responsibility for the day-to-day management and future development of Internet technologies.18 Second, the unilateral authority of the US Government over ICANN was eroded through an emphasis on the policy-making role of nation states and their sovereignty over the management of country-code top level domains (ccTLDs), a significant step towards changing ICANN and the nature of the GAC. Finally, there was the creation of the IGF itself.

The creation of the IGF was widely understood to be the kind of agreement that could get the WSIS out of its impasse; it allowed the critics to continue raising their issues in an official forum but, as a non-binding discussion arena, could not do much harm to those interested in preserving the status quo.19

The creation of the IGF has been described as the result of a collision between two visions of Internet governance. Kleinwachter has described it as a clash between a view of globalisation, which anticipated a decline of the system of sovereign states in favour of global institutions and transnational corporations, largely owing to the evolution of media and communication technologies; and a view of ‘glocalisation’, which highlighted the centrality of physical space and left governments a central role, while redefining the concept of sovereignty.20 Mueller saw it as a clash between two models of global governance, focused on the one hand upon private leadership and on the other upon nation-state ordering.21 However, the compromise was reached

17  As will be seen, these nations and others tried to assert nation-state governance over the Internet at the 2012 ITU Conference at Dubai.
18  Effectively this affirmed the public authority of ICANN over management of critical Internet resources.
19  Mueller above n 16, 78.
20  W Kleinwachter, ‘Multistakeholderism, Civil Society and Global Diplomacy: The Case of the World Summit on the Information Society’ in WJ Drake and EJ Wilson III (eds), Governing Global Electronic Networks (Cambridge, MIT Press, 2008) 535–82.
21  Mueller above n 16.



as a result of a recognised need for common ground and a recognition of the fact that both sets of interests, especially the private ones, were well established in the field, and that different national policies would involve themselves in the regulation of private sector interests in varying and disparate ways—something that would result in chaos for a world-wide distributed communications system. However, the formation of the IGF did not bring the collision between nation-states and the user/stakeholder community to an end. It is in the nature of governments to attempt to assert control, and the events of 2011 and 2012 provide an example. The context of these efforts was the Deauville meeting of the G8 in 2011 and the ITU meeting in Dubai.22

D.  Nation-state Initiatives

The revival of nation-state assertion of control over the Internet arose in part from the events in the Middle East early in 2011 which became known as the ‘Arab Spring’, a term that refers to the anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali, which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness.23 There has been some debate about the influence of social media on the political activism of the Arab Spring. One suggestion is that the availability of digital communications systems gave rise to a ‘digital democracy’ in parts of North Africa.24 On the other hand there is the suggestion that the issue is more nuanced, and that other factors came into play, such as unemployment, economic hardship and political corruption.25 Nevertheless there is evidence of an increased uptake of Internet and social media usage over the period of the events. This was emphasised during the uprising in Egypt when President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook and on 27 January

22  It seems clear that the ITU will continue to be instrumental in asserting nation-state control over aspects of Internet governance.
23  See ‘Arab Social Media Report’ Mohammed Bin Rashid School of Government, Dubai, www.; and see Carol Huang, ‘Facebook and Twitter key to Arab Spring uprisings’ The National (Abu Dhabi, 6 June 2011).
24  See ‘Social Media: Power to the People?’ (NATO Review, 2011) social_medias/EN/index.htm.
25  See ‘Social Media: Cause, Effect and Response’ (NATO Review, 2011) review/2011/social_medias/21st-century-statecraft/EN/index.htm; and see Raymond Schillinger, ‘Social Media and the Arab Spring: What Have We Learned?’ Huffington Post (20 September 2011).



2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.26

In May 2011, just before the G8 Summit in France, at what was called the e-G8 Forum in Deauville, French President Nicolas Sarkozy issued a call for stronger Internet regulation. Present at the meeting were executives of Google, Facebook, Amazon and eBay. Sarkozy stated the classical nation-state position:

The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.27

Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures suspected of having had extramarital affairs, but he did not go as far as Mr Sarkozy, who was pushing for a ‘civilized Internet’, implying wide regulation.28 The communiqué that issued from the Deauville meeting did not extend as far as Sarkozy’s stated position. While it affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, security of networks, and a crackdown on trafficking in children for sexual exploitation, it did not advocate state control of the Internet. But it did affirm that governments had a role in Internet governance.

In December 2012 the ITU held a Conference in Dubai. The ITU, it will be remembered, was behind the initiatives for the WSIS and WGIG that led to the formation of the IGF. The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was founded in 1865 and in the past has been concerned with technical communications issues such as the standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to information and communication technology. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.29

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and

26  ‘Egypt Severs Internet Connection Amid Growing Unrest’ BBC News (28 January 2011) This, of course, is the archetypal blunt force regulatory solution to the Internet. Cut the wires. 27  Nicholas Sarkozy at the opening of the E-G8 reported in the Huffington Post (27 May 2011) www. 28  Eric Pfanner, ‘G-8 Leaders to Call for Tighter Internet Regulation’ New York Times (24 May 2011) 29  For a history of the ITU see



information technologies, the International Telecommunication Regulations (ITRs), which had last been revised in 1988, were no longer in keeping with modern developments. Thus the objective of the 2012 meeting was to revise the ITRs to suit the new age. This was no more and no less than a revisiting of the concept of nation-state control over the Internet. Clearly the ITU was not prepared to let the matter lie following the Tunis Agenda definition of Internet governance. Concerns had earlier been expressed about the ITU agenda by no less a person than Vint Cerf, co-developer with Robert Kahn of the TCP/IP protocol, one of the technologies that made the Internet possible. He said:

But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success. This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union—the ITU—a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai … Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries see an opportunity to create regulatory authority over the Internet. Last June, the Russian government stated its goal of establishing international control over the Internet through the ITU.
Then, last September, the Shanghai Cooperation Organization—which counts China, Russia, Tajikistan, and Uzbekistan among its members—submitted a proposal to the UN General Assembly for an ‘international Code of Conduct for Information Security’. The organization’s stated goal was to establish government-led ‘international norms and rules standardizing the behavior of countries concerning information and cyberspace’. Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to ‘increase the role of ITU in Internet governance’. As a result of these efforts, there is a strong possibility that this December the ITU will significantly amend the International Telecommunication Regulations—a multilateral treaty last revised in 1988—in a way that authorizes increased ITU and member state control over the Internet. These proposals, if implemented, would change the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.30

30  Testimony of Vinton Cerf, Hearing before the Subcommittee on Communications and Technology of the Committee on Energy and Commerce House of Representatives One Hundred Twelfth Congress Second Session ‘International Proposals to Regulate the Internet’ (31 May 2012) HHRG-112-IF16-WState-CerfV-20120531.pdf.



After the Dubai meeting the Final Acts of the Conference were published.31 One controversial issue was a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates.32 The proposal was ultimately withdrawn, but its governance model had defined the Internet as an ‘international conglomeration of interconnected telecommunication networks’, provided that ‘Internet governance shall be effected through the development and application by governments’, and gave member states ‘the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance’. This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand33 and Japan insisted that the ITU treaty should apply only to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should ‘continue to take the necessary steps for ITU to play an active and constructive role in the multistakeholder model of the Internet’.34

However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for ‘not signing’). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.35 Thus even among nation-state actors there was a lack of overall consensus about the way in which the Internet should be governed.
The attempt through a technological forum such as the ITU to establish state-based governance of the Internet indicates that governments wish to control not only content but also the various transmission and protocol layers of the Internet, and possibly even the backbone itself. Continued attempts to interfere with aspects of the Internet, or to embark upon an incremental approach to regulation, have resulted in expressions of concern from

31  See; and see epub_shared/GS/WCIT-12/E/web/flipviewerxpress.html.
32  ‘Proposals for the Work of the Conference’!!MSW-E.pdf.
33  ‘NZ Rejects International Telecommunications Treaty Changes’ (Media Statement, Amy Adams, Minister for Communications and Information Technology, 14 December 2012) stories/PA1212/S00296/nz-rejects-international-telecommunications-treaty-changes.htm.
34  ibid.
35  For a report of the Dubai Conference from the perspective of a member of the United States delegation see Eli Dourado, ‘Behind Closed Doors at the UN’s Attempted “Takeover of the Internet”’ (21 December 2012)


Aspects of Internet Governance

another Internet pioneer, Sir Tim Berners-Lee, who has claimed that governments are suppressing online freedom.36 Following the disclosures of Internet surveillance by Edward Snowden, the issue of Internet governance came into sharp focus. The activities of the National Security Agency (NSA) and the Government Communications Headquarters (GCHQ), as revealed by the Snowden disclosures, provide an example of indirect government interference with the Internet and of challenges to the use of the new communications technology by individuals. Attempts to undermine encryption and to circumvent security tools posed challenges to the liberty of individuals to communicate frankly, openly and without state surveillance. Some states, concerned that privacy had been compromised, considered ways in which localisation or indeed balkanisation of the Internet could take place. Countries such as Brazil and Germany considered encouraging regional online traffic to be routed locally rather than through the distributed network that included the United States—the first steps in a significant shift in the way in which the Internet works. Following the Snowden revelations Brazil’s government published ambitious plans to promote Brazilian networking technology, encourage regional Internet traffic to be routed locally, and moved to set up a secure national email service.37 In India government employees were advised not to use Gmail services, and German privacy commissioners called for a review of whether Europe’s Internet traffic could be kept within the EU and beyond the reach of NSA and GCHQ surveillance. The Bali IGF meeting in 2013, and subsequent meetings in Istanbul in 2014 and João Pessoa in 2015, expressed concern about Internet-based surveillance by state actors and the blatant privacy infringements and breaches it involved.
ICANN saw the NSA surveillance activities as undermining the trust and confidence of users, and Daniel Castro, a senior analyst at the Information Technology & Innovation Foundation in Washington, said the Snowden revelations were pushing the Internet towards a tipping point with huge ramifications for the way online communications worked.38 That tipping point has not been reached, but the violation of the integrity of the Internet by a nation state shook the confidence not only of the organisations that were major Internet

36  See; and see ‘Tim Berners-Lee: Internet Freedom Must be Safeguarded’ The Guardian (26 June 2013) CMP=twt_gu. In addition, Berners-Lee issued a call for a Digital Magna Carta: ‘An online Magna Carta: Berners-Lee calls for a bill of rights for web’ The Guardian (12 March 2014) www.theguardian.com/technology/2014/mar/12/online-magna-carta-berners-lee-web.
37  Matthew Taylor, Nick Hopkins and Jemima Kiss, ‘NSA surveillance may cause breakup of internet, warn experts’ The Guardian (1 November 2013) nsa-surveillance-cause-internet-breakup-edward-snowden.
38  ibid.



stakeholders, but of businesses as well, which require a stable and reliable platform upon which to base e-commerce activities. In some ways the actions of the NSA, and indirectly the US government, increased confidence in organisations such as the IGF. Even so, the IGF occupies an advisory space within the realm of Internet governance. It does not purport to be a rule-making authority, nor does it fulfil a supervisory role over existing institutions. But given that many existing governance institutions—especially technical governance and standard-setting organisations—are among the IGF stakeholders, there must inevitably be a ‘flow-on’ effect from IGF deliberations and recommendations to those organisations.

III.  Technical Governance

The Internet Society and its associated technical organisations fall within the ambit of stakeholders in the IGF. These associated organisations provide examples of what may be described as a form of governance by the use of technology. At the same time, they have no state-supported or sovereign authority in the sense normally understood by nationally or internationally mandated authority. Nevertheless, the influence of these bodies is significant: they exercise considerable control over the engineering aspects of the Internet and set the standards that are essential to its continued functioning and the way in which it operates. It is to a consideration of these organisations that the discussion now turns.

Technical control and superintendence of the operation of the Internet lies with network engineers and computer scientists. There is no organisational charter. The structures within which decisions are made are informal. They involve a network of interrelated organisations the names of which convey an aura of legitimacy and authority. The Internet Society (ISOC) is essentially an umbrella body governed by a diverse Board of Trustees.39 It engages in a wide spectrum of Internet issues including policy, governance, technology and development. Everything that it does is based on ensuring that a healthy, sustainable Internet is available to everyone.40 ISOC was founded in 1992 to provide leadership in Internet-related standards, education and policy around the world. Several other organisations are associated with ISOC. One of these, which appoints some of the Trustees of ISOC, is the Internet Engineering Task Force (IETF).41




The IETF is a separate legal entity whose stated goal is to make the Internet work better. It does this by producing high-quality, relevant technical documents that influence the way people design, use and manage the Internet. Five cardinal principles guide it in the pursuit of its mission: open process; technical competence; reliance on a volunteer core; rough consensus and running code; and ownership of, and responsibility for, protocols and functions that touch or affect the Internet.42

The Internet Architecture Board (IAB) is an advisory body to ISOC and is also a committee of the IETF which exercises an oversight role. Within ISOC there is also the IETF Administrative Support Activity, which is responsible for the fiscal and administrative support of the IETF Standards Process. The IETF Administrative Support Activity has a committee,43 the IETF Administrative Oversight Committee (IAOC), which carries out its responsibilities in support of the Internet Engineering Steering Group (IESG)44 working groups, the IAB, the Internet Research Task Force (IRTF)45 and its Steering Group (IRSG). The IAOC oversees the work of the IETF Administrative Director (IAD), who has day-to-day operational responsibility for providing fiscal and administrative support through other activities, contractors and volunteers.

Although the various structures suggest quite a level of complexity, the central hub of these organisations is the IETF. It has no coercive power. But, as has been observed, it is responsible for establishing the standards that enable the efficient functioning of the Internet. Some of these standards—such as TCP/IP—are core to Internet operations and are not optional. But the fact that core non-optional standards have been imposed does not arise from any form of official regulatory power, nor is there any coercive sanction for departure from standards.
The continued successful operation of the Internet acts as a positive incentive, and the adoption of standards arises from a critical mass of network externalities involving Internet users. In time standards become economically mandatory. That overall acceptance of IETF standards maintains core Internet functionality is a good example of the IETF principle of rough consensus and running code.

Another element of the operations of the IETF, and indeed of all the technical organisations involved in Internet functionality, is that of open process. Theoretically any person may participate. The settlement of standards as an example of rough consensus has been mentioned above, but rough consensus is a characteristic of the way in which decisions are made throughout the technical community. Proposals are circulated in the form of a Request for Comments (RFC) to members of the various Internet, scientific and engineering communities. The discussion and exchanges that follow from this collaborative process lead to the adoption of a new standard or protocol.



In this regard there can be no doubt that a considerable amount of responsibility rests with the organisations that set and maintain standards. Engineering and standards controls mean that these organisations have developed a power structure that has little or no official governmental or regulatory oversight. In addition there may well be ongoing legitimacy issues. The Internet has become an essential infrastructure, but by the same token the objective of organisations such as the IETF is a purely technical one that has few if any public policy ramifications. Their ability to work outside government bureaucracy enables greater efficiency.

What is overlooked as an aspect of Internet governance is that the essential infrastructure and its continued efficient operation, although superintended in an open and transparent manner within a model based on technical, collaborative and consensus-based practices, are not driven by public interest considerations other than efficient engineering principles. The reality is that the technical operation and maintenance of the Internet is superintended by organisations that have little or no interaction with any of the formalised power structures that underlie the various ‘governance by law’ models of Internet governance. The ‘technical model’ of Internet governance is an anomaly arising not necessarily from the technology, but from its operation.46

Of the various technical organisations that are involved in the operation of the Internet, the Internet Corporation for Assigned Names and Numbers (ICANN)47 has been subjected to the most penetrating examination and criticism.

A. ICANN—The Internet Corporation for Assigned Names and Numbers

ICANN was formed in October 1998 at the direction of the Clinton Administration to take responsibility for the administration of the Internet’s Domain Name System (DNS). Since that time ICANN has been dogged by controversy and criticism from all sides. ICANN wields enormous power as the sole controlling authority of the DNS, which has a ‘chokehold’ over the Internet48 because it is the only aspect of the entire decentralised, global system of the Internet that is administered from a single, central point. By selectively editing, issuing or deleting net identities, ICANN is able to choose who is able to access cyberspace and what they will see when they are there. Further, if ICANN chooses to impose conditions

46  For a consideration of issues of technical governance see generally LA Bygrave and J Bing (eds), Internet Governance: Infrastructure and Institutions (Oxford, Oxford University Press, 2009) and especially in that volume LB Solum, ‘Models of Internet Governance’ at 48; LA Bygrave and T Michaelsen, ‘Governors of Internet’ at 92; and H Alvestrand and HW Lie, ‘Development of Core Internet Standards: The Work of IETF and W3C’ at 126.
47
48  CS Kaplan, ‘A Kind of Constitutional Convention for the Internet’ New York Times Cyber Law Journal (23 October 1998) html.



on access to the Internet, it can indirectly project its influence over every aspect of cyberspace and the activity that takes place there. In this respect ICANN is the first institution to wield significant governance power and demonstrate a capacity for effective enforcement, by means of its Uniform Dispute Resolution Policy (UDRP), set up to deal with and resolve disputes about entitlement to use domain names.

After it was launched ICANN attempted to create democratic structures and processes to ensure that the election of its governing board would result in a representative structure responsive and alive to issues of importance affecting the Internet. Internet users could become members of ICANN and then have a say in how the corporation used its substantial power. ICANN, it was hoped, would be the first functioning example of cyberspace’s capacity for ‘bottom-up’ substantive centralised law-making using the medium of the Internet. This ‘democratic’ model of governance was attacked as unaccountable, antidemocratic, subject to regulatory capture by commercial and governmental interests, unrepresentative, and excessively Byzantine in structure. ICANN was initially unresponsive to these criticisms. Only after concerted publicity campaigns by opponents did the board publicly agree to change aspects of the process.

Furthermore ICANN has struggled with the criticism that it is a tool of the United States government as a result of its initial establishment and the continued involvement—albeit in a very ‘hands off’ way—of the US Department of Commerce. This involvement has now come to an end, and the ‘de-coupling’ was accelerated after the embarrassment of the Snowden revelations. ICANN now operates with a degree of autonomy and it retains a formal character. Its power is internationally based.
Its sources of authority are private rather than public, in that its power derives from relationships with registries, Internet Service Providers (ISPs) and Internet users rather than from sovereign states, notwithstanding continued calls from nation states for the de-coupling of US involvement. Finally, it is evolving towards a technical governance methodology, despite an emphasis on traditional decision-making structures and processes. However, as may be seen from the discussion above, it may still face further challenges as nation states try to assert control over the root.
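The ‘chokehold’ described above—a single administered root on which every resolution ultimately depends—can be illustrated with a toy model. The zone names and addresses below are invented for the example, and real DNS resolution is far more elaborate, but the dependency structure is the point:

```python
# A toy model (invented names and addresses, not real DNS) of the delegation
# chain that gives the root zone its 'chokehold': every lookup starts at the
# single root, so an entry removed there severs the whole subtree.
ROOT = {"com": "tld-com", "org": "tld-org"}
ZONES = {
    "tld-com": {"example.com": "93.184.216.34"},
    "tld-org": {"wikipedia.org": "208.80.154.224"},
}

def resolve(name: str):
    tld = name.rsplit(".", 1)[-1]
    delegated = ROOT.get(tld)          # step 1: ask the single central root
    if delegated is None:
        return None                    # no delegation: the name cannot exist
    return ZONES[delegated].get(name)  # step 2: ask the delegated server

print(resolve("example.com"))  # prints 93.184.216.34
del ROOT["com"]                # an edit at the root...
print(resolve("example.com"))  # prints None: the entire .com subtree is gone
```

Deleting one entry at the root severs an entire top-level domain, which is why control of the root carries governance significance out of all proportion to its technical simplicity.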

IV.  Models of Internet Governance

ISOC and its associated organisations represent an example of one of a number of Internet governance models that have been propounded by Lawrence Solum49—the model of code and Internet architecture, based on the proposition that many decisions of a regulatory nature are made by the communications protocols that determine how the Internet operates.

49  Solum above n 46, 56.



Before moving to consider these models, there is an important issue that operates almost as an umbrella over all discussions of Internet governance—that of Net Neutrality.

A.  Net Neutrality and Regulation at the Ends50

Network Neutrality is a term used to describe certain features of the Internet and its architecture that render it less amenable to regulation. Fundamentally, the architecture of the Internet is neutral among applications. TCP/IP, for example, does not discriminate among applications. This illustrates the principle that underpins network neutrality, namely that all Internet traffic should be treated the same.51
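The application-blindness of TCP/IP mentioned above can be sketched in a few lines of Python. The packet layout follows the standard IPv4 header format, but the function and the addresses are invented for illustration: the forwarding decision reads only header fields and never inspects the payload, whatever application produced it.

```python
import struct

# Illustrative sketch (not a real router): forwarding uses only IP header
# fields; the application payload is carried as opaque bytes.
def forward_decision(packet: bytes) -> str:
    version_ihl, = struct.unpack_from("!B", packet, 0)
    ihl = (version_ihl & 0x0F) * 4               # header length in bytes
    dst = struct.unpack_from("!4B", packet, 16)  # destination IP address
    payload = packet[ihl:]                       # never inspected
    return "route to %d.%d.%d.%d (%d payload bytes, contents ignored)" % (*dst, len(payload))

# A minimal 20-byte IPv4 header followed by arbitrary application data.
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 25, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([93, 184, 216, 34]))
print(forward_decision(header + b"GET /"))
```

Whether the five bytes after the header are an HTTP request, part of an email or a video frame makes no difference to the route the packet takes; that indifference is the architectural fact on which the neutrality principle rests.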

In the United States, Network Neutrality has been described in the following way: An Open Internet means consumers can go where they want, when they want. This principle is often referred to as Net Neutrality. It means innovators can develop products and services without asking for permission. It means consumers will demand more and better broadband as they enjoy new lawful Internet services, applications and content, and broadband providers cannot block, throttle, or create special ‘fast lanes’ for that content.52

The Federal Communications Commission (FCC) Open Internet Rules were adopted on 26 February 2015 and are designed to protect free expression and innovation on the Internet and promote investment in the nation’s broadband networks. The FCC Rules contain several bright-line principles:

1. No Blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.
2. No Throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
3. No Paid Prioritisation: broadband providers may not favour some lawful Internet traffic over other lawful traffic in exchange for consideration of any kind—in other words, no ‘fast lanes’. This rule also bans ISPs from prioritising content and services of their affiliates.

This approach is clearly a regulation of providers to ensure consistency and equality of service delivery via the Internet. The main focus of Net Neutrality is what goes on ‘in the middle’ and addresses the way in which ISPs manage and otherwise

50  I am indebted to Susan Chalmers, former policy lead for InternetNZ, for her assistance with this section and for allowing me to use material from her paper ‘Network Neutrality’ delivered to the New Zealand Law Society Technology Conference in May 2015.
51  Ministry for Business, Innovation and Employment (New Zealand) Review of the Telecommunications Act 2001 Discussion Paper (2012).
52  Federal Communications Commission Open Internet



shape traffic for various reasons. The principle of Net Neutrality is opposed to such discriminatory treatment and helps define when and where such treatment may occur. The FCC Rules make that clear.

Net Neutrality principles have been articulated in other jurisdictions. The telecommunications regulator in Norway explains that ‘the overall goal for net neutrality is to ensure that the Internet represents an open and non-discriminating platform for all types of communication and content distribution’.53 The Internet Society describes it as ‘the founding principle of the Internet and what allows the Internet to be the largest and most diverse platform for expression in recent history’.54 In a 2014 report, the European Commission defined Network Neutrality as ‘the principle that all electronic communication passing through a network is treated equally. That all communication is treated equally means that it is treated independent of (i) content, (ii) application, (iii) service, (iv) device, (v) sender address and (vi) receiver address’.55

There have been legislative approaches to ensuring Net Neutrality in various jurisdictions. The Brazilian Senate approved the Marco Civil da Internet—the Civil Rights Framework for the Internet—in April 2014. This legislation emphasises equal treatment of traffic subject to very limited exceptions. For example, Article 9 restricts an ISP from blocking, monitoring, filtering or analysing the content of data packets. This provision in particular reflects the end-to-end principle: the ISP ‘in the middle’ is discouraged from undertaking activities other than the common carrier’s role of providing transit. The structure of Marco Civil’s Article 9 reflects careful consideration of the various elements involved in Network Neutrality. It enforces neutrality as a behaviour both in network management and in the commercial practices of the ISP (eg non-discriminatory commercial conditions).
At the same time, it qualifies ‘neutrality’ by recognising the practical need for ISPs to discriminate, to the extent they must, in order to manage their networks effectively. Importantly, the law protects the interests of the end-user and requires transparency from ISPs in their traffic management practices. Other legislative examples include the Netherlands, which introduced Network Neutrality into its Telecommunications Act in 2012,56 becoming the second country, after Chile, to legislate for Network Neutrality.57 Slovenia also passed a law in the same year.58


53  Norwegian Communications Authority, The Norwegian Model (on Net Neutrality) (2013).
54  Internet Society page on Net Neutrality:
55  European Commission Staff Working Document Implementation of the EU regulatory framework for electronic communications (2014).
56  Government of the Netherlands, The Dutch Telecommunications Act 2012 (English translation), Art 7.4(a).
57  Claudio Ruiz for Global Voices Online, ‘Chile: First Country to Legislate Net Neutrality’ (2010)
58  EDRI ‘Slovenia Has a Net Neutrality Law’ (2013)



These examples demonstrate the way in which nation states may protect Network Neutrality principles within their own territories. A further way in which Network Neutrality may be protected or guaranteed is through what may be described as a ‘co-regulatory’ approach. In 2009, the Norwegian Post and Telecommunications Authority (NPT) published Network Neutrality guidelines based upon the following three principles:59

1. Internet users are entitled to an Internet connection with a predefined capacity and quality;
2. Internet users are entitled to an Internet connection that enables them to send and receive content of their choice, use services and run applications of their choice, and connect hardware and use software of their choice—so long as these choices do not harm the network; and
3. Internet users are entitled to an Internet connection that is free of discrimination with regard to type of application, service, or content, or based on sender or receiver address.

The principles encourage transparency, user choice and access, and non-discrimination. The NPT took a broad, multi-stakeholder-based approach to designing these guidelines, which were adopted with enthusiasm at the time.60 Two years later, however, one of the original signatories—Telenor—withdrew from the guidelines by announcing its position that content providers should pay Telenor to reach Telenor customers.61

A final way in which Net Neutrality may be addressed within the context of regulatory models is self-regulation, whereby ISP industry members develop a code of conduct. Network Neutrality proponents are almost always sceptical of this third approach, which provides neither a legislative backstop to incentivise neutral behaviour nor adequate representation of the consumer interest. This view may be reinforced by recent developments. A review of the communications sector is currently being undertaken in New Zealand.
Some of those in the ‘supply’ industry consider that there is more than enough capacity in New Zealand networks to ensure that net neutrality issues do not arise, and that there is accordingly no need for regulatory intervention.62

59  Frode Sorensen, ‘Norwegian Communications Authority, The Norwegian Model for Net Neutrality’ (2013)
60  Nate Anderson for ArsTechnica, ‘Norway gets Net Neutrality - Voluntary, but Broadly Supported’ (2009)
61  Øyvind Herseth Kaldestad, ‘Norwegian ISP wants content providers to pay up’ (26 January 2011)
62  Kordia Submission on the Telecommunications Act Discussion Paper (3 November 2015) www. Kordia%20submission.pdf. This view is supported by the Telecommunications Forum (TCF Response



Network Neutrality argues for egalitarian treatment of Internet services and data. The examples that have been given demonstrate how States that generally follow the Western democratic tradition have taken steps to ensure that effective communication enabled by the Internet will remain open and free from interference based upon discriminatory treatment of information or data. This ‘umbrella’ concept should be kept in mind as I move to a discussion of some of the models of Internet governance.

B.  The Five Models

Solum suggests that there are five basic models of Internet governance. One is the cyber-libertarian model, whose theory is that the Internet is a self-governing realm of individual liberty beyond the reach of nation-state controls. The transnational theory considers that the Internet transcends international boundaries and is therefore amenable to institutions that are either transnational quasi-private organisations or international organisations based on treaty arrangements between national governments or under the aegis of established international organisations such as the UN. In this respect the IGF is a good example of the transnational model. The model of code and architecture and the layer theory, of which ISOC and the IETF are good examples, suggests that regulatory decisions are made by engineering standards, protocols and software: the code limits what can and cannot be done. The national government model—which may be termed the school of the digital realists—suggests that as the Internet grows in importance, regulatory decisions will be made by national governments through legal regulation. This theory is based on traditional territorial jurisdiction principles, in that states may order and regulate that which occurs within their boundaries. Finally Solum proposes that market regulation and economics will drive the fundamental decisions about the nature of the Internet.

to Regulating Communications in the Future, technology-communications/communications/regulating-the-telecommunications-sector/review-of-the-telecommunications-act-2001/submissions/Telecommunications%20Forum%20submission.pdf), by Craig Shrive, Christopher Graf and Edward Bowie of Russell McVeagh, Solicitors, Regulating Communications for the Future (10 September 2015) ViewPublication/tabid/176/Title/regulating-communications-for-the-future/pid/416/Default.aspx, where the authors suggest that it would be surprising if there was an appetite for complex net neutrality regulation in New Zealand. However, Nick Crang of law firm Duncan Cotterill pointed out that ‘without rules protecting net neutrality, the principle will become eroded over time as internet traffic increases and commercial pressures on telcos and ISPs build. If net neutrality is to be protected, some sort of statutory rules or statutory backstop is likely needed’: Nick Crang, ‘Government mulls changes in telecommunications and broadcasting’ National Business Review (2 October 2015) article/government-mulls-changes-telecommunications-and-broadcasting-179571.



i.  Cyber-libertarian Theory

The cyber-libertarian model is one of the oldest theories of Internet governance. It encompasses a wide spectrum of views, from the impassioned Declaration of the Independence of Cyberspace by John Perry Barlow to the more reasoned private-ordering writings of David Post and David Johnson.63 The cyber-libertarian theories were based upon the assumption either that the Internet could not be controlled, so that governance in the commonly accepted meaning of that term was a dead issue, or that the Internet could be controlled but should not be.

Post and Johnson argued that the net was able to prosper because of its decentralised architecture and lack of a centralised rule-making authority.64 They suggested that the sense of freedom the Internet allowed and encouraged meant that sysops were free to impose their own rules on users. However, the ability of users to choose which sites to visit, and which to avoid, meant that the tyranny of system operators was avoided and the adverse effect of any misconduct by individual users was limited. They recognised, however, that as Internet use grew and became more complex, a free-wheeling approach might have its limits:

Anarchy, after all, has costs. It just won’t do for packets, for example, to be systematically misrouted. People won’t trust their important commercial and private dealings to a network where a domain name might be translated to a different IP address depending on where the message happens to originate. Nor, indeed, will large numbers of users visit online spaces if they encounter systematic fraud or vandalism or other activities they view as harmful or antisocial. There are activities that, when permitted even in only a few online venues, impose costs on all others, and against which individuals may want to protect themselves.
Spamming is a form of wrongdoing that may be beyond the capacity (or desire) of a particular local sysop to control but that can make lots of users of lots of other systems miserable. The same could be said of launching destructive code.65

Thus Johnson and Post suggested that there were four likely models of Internet governance that might arise. The first was that nation states might seek to extend their jurisdiction over the Internet and govern those aspects of Internet activity and behaviour that impacted upon their citizenry. The second model still envisaged sovereign-state activity: sovereigns would enter into multilateral agreements to establish uniform rules for Internet activity. The third was that a new international organisation might attempt to establish new rules and enforce them. The final model was that de facto rules might emerge from individual decisions by domain name and IP registries, by sysops and by users.

63  D Johnson and D Post, ‘And How Shall the Net be Governed?’ in B Kahin and J Keller (eds), Coordinating the Internet (Cambridge MA, MIT Press, 1997). Barlow’s Declaration of the Independence of Cyberspace of 8 February 1996 is available at
64  ibid.
65  ibid.



Greenleaf saw Johnson and Post’s work as addressing moral and political issues rather than analysing cyberspace regulation. He was of the view that Lessig advanced a more comprehensive theoretical approach to cyberspace regulation. Johnson and Post’s views are summarily dismissed by the digital realists and are the subject of critique by the transnational theorists.66

Centralisation or semi-centralisation characterises the first three models. The fourth is essentially de facto self-regulation. Johnson and Post recognised that whether the rules of the Internet should be encoded into software by a technical elite with no mechanism of accountability to the online population is a question that answers itself. They support a variation of the third option: a new international organisation that resembles a federalist system, termed ‘net federalism’. Within ‘net federalism’ individual networked systems, rather than territorial sovereigns, are the units of governance. Within this model multiple network confederations could emerge. Each might have individual ‘constitutional’ principles—some permitting and some prohibiting, say, anonymous communications; others imposing strict rules regarding redistribution of information; and still others allowing freer movement—enforced by means of electronic fences prohibiting the movement of information across confederation boundaries.

ii.  Transnational Theory

Transnational theory suggests that rather than the Internet being controlled by national governments, it should be governed by transnational institutions such as ICANN or the IETF which are outside national government control and are answerable to the ‘Internet community’ or a community of network engineers.67 An alternative version accepts the international character of the Internet but would replace the existing transnational institutions with organisations modelled upon, or similar to, the ITU or WIPO. Both models recognise a requirement for institutional structures. The difference between them concerns the involvement of national governments. ICANN and the IETF are not the creation of treaties underpinned by consensus between states; WIPO and the ITU are agencies of the United Nations and have a nation-state underpinning.

In some respects this latter approach reflects the transnational theory of Internet governance articulated by Henry Perritt, who suggests that governance of the Internet can best be achieved not by a multitude of independent jurisdiction-based attempts but via the medium of public international law. On this view international law represents the ideal forum for states to harmonise divergent legal trends and traditions into a single, unified theory that can be more effectively

66  G Greenleaf, ‘An Endnote on Regulating Cyberspace: Architecture vs Law?’ (1998) 21(2) UNSW Law Journal 593.
67  Solum above n 46, 59.

Models of Internet Governance


applied to the global entity of the internet.68 Perritt’s view is that the Internet can foster international co-operation by:
1. strengthening international law;
2. strengthening economic interdependence;
3. empowering non-governmental organisations and improving their abilities to contribute productively to the development of international regimes designed to deal with global problems; and
4. supporting international security mechanisms.
Perritt suggests that the nature of the internet leads inevitably to a transnational approach to governance. Traditional sovereign states are geographically constrained. By contrast, the Internet is global in nature and thereby indifferent to geographically constructed political boundaries. This international character ‘facilitates the development and extension of international political and legal institutions’ and the development of the internet ‘as a set of virtual legal institutions, as a market, and as a political entity, has enormous implications for the evolution of international law’.69 Perritt’s approach is based upon an international source of authority deriving from nation state involvement in the international institutions.

iii.  Code is Law—Code Writers and Engineers
The ‘Code is Law’ theory was propounded by Lawrence Lessig70 and claims that the nature of the Internet is determined by the software and hardware that implements it. This in turn involves an understanding of how the Internet works and its essential architecture. Code theory accepts to a certain degree that the distributed nature of the Internet challenges traditional notions of territorial sovereignty, a challenge which in turn arises from the code or software that makes location irrelevant. Thus, argues Lessig, code is the prime regulator in cyberspace. This is further enhanced by the fact that the Internet as a data transfer system is essentially neutral. It cannot determine the nature of the data passing through the network, nor whether that data utilises the HTTP, FTP or SMTP protocols. This has been characterised as the ‘stupidity’ of the Internet or its transparency to the applications which are deployed upon it. Applications that ‘bolt on’ to the network are relevant to it only as senders or receptors of data which travels via a non-discriminatory network, and the innovation arising from this permissionless quality is itself decentralised. These applications sit on top of the network and help to explain a further aspect of code and architecture known as ‘the layers’—a key structural characteristic of the Internet.
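The neutrality described above can be illustrated with a short sketch: the transport layer carries opaque bytes, and nothing in the network distinguishes an HTTP request from an SMTP command. The following Python fragment is illustrative only and is not drawn from the text; it uses a local socket pair to stand in for a network connection.

```python
import socket

# A connected pair of stream sockets stands in for a network path.
a, b = socket.socketpair()

# The payload could equally be HTTP, SMTP or anything else; the
# transport neither knows nor cares -- it delivers bytes unmodified.
a.sendall(b"GET / HTTP/1.1\r\n")
received = b.recv(1024)
print(received == b"GET / HTTP/1.1\r\n")  # True: delivered verbatim

a.close()
b.close()
```

The point of the sketch is that any interpretation of the bytes as a particular protocol happens only at the endpoints, which is what permits new applications to be ‘bolted on’ without permission from the network.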

68  Henry Perritt, ‘The Internet as a Threat to Sovereignty? Thoughts on the Internet’s Role in Strengthening National and Global Governance’ (1998) 5 Indiana Journal of Global Legal Studies 423.
69  ibid, 162.
70  Lawrence Lessig, Code and Other Laws of Cyberspace (New York, Basic Books, 1999).


Aspects of Internet Governance

a.  The Layer Theory
Solum and Chung71 and Solum72 propose a modularised, interconnected layered system consisting of six layers for the purposes of considering possible regulatory interventions—the physical, link, internet protocol (IP), transport, application and content layers. Choucri and Clark73 have simplified that model by suggesting a four-layered model that captures the essential features of interest: the physical layer forms the internet’s physical foundations; the link, IP and transport layers encapsulate the logical functions of an application; information is stored and transmitted through programs in the application layer; and users make decisions and carry out actions with information in the content layer. Lest it be thought that the layer theory is merely descriptive of Internet architecture and has little relevance for issues of governance or control, it should be emphasised that layer theory identifies possible points within the architecture where governance or regulatory activity may take place. An example may be seen in Andy Yee’s discussion of a conceptual framework for the regulation of Bitcoin.74 Yee’s approach is to conceptualise the Bitcoin system using layer theory. He then goes on to identify the various control points within the layers where regulatory structures may be deployed and concludes by arguing that governments should adopt an adaptive and novel regulatory approach to ensure that society may benefit from the development of Bitcoin. Yee observes:
The layers principle distills the complexities of the ecosystem into control points and guidelines for regulation that respect the integrity of the layers. It is concluded that Bitcoin intermediaries in the information layer are appropriate targets of regulation. This approach provides policymakers a way to curtail online harms and avoid interference on online architectures and innovation. 
In establishing a regulatory regime, policymakers need to adopt an adaptive and novel approach to ensure that illicit activities can be deterred while ensuring that society can fully benefit from the innovation and creativity on the Bitcoin network.75
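The relationship between Solum and Chung’s six layers and Choucri and Clark’s four-layer simplification can be sketched as a simple mapping. The data structure below is merely illustrative; the layer names follow the text above, and the grouping of link, IP and transport under a single ‘logical’ layer reflects the description given there.

```python
# Choucri and Clark collapse Solum and Chung's six layers into four by
# grouping the link, IP and transport layers as the Internet's logical
# functions. The grouping below follows the description in the text.
FOUR_LAYER_MODEL = {
    "physical": ["physical"],                # the Internet's physical foundations
    "logical": ["link", "ip", "transport"],  # the logical functions of an application
    "information": ["application"],          # programs storing and transmitting information
    "content": ["content"],                  # where users act upon information
}

# All six of Solum and Chung's layers are accounted for exactly once.
six_layers = [layer for group in FOUR_LAYER_MODEL.values() for layer in group]
print(sorted(six_layers))
# ['application', 'content', 'ip', 'link', 'physical', 'transport']
```

A mapping of this kind makes the regulatory point concrete: a proposed intervention (a filtering mandate, say, or a rule aimed at intermediaries) can be located within a specific layer before its wider effects on the other layers are assessed.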

iv.  National Governments
The national government model is based upon the proposition that, like any other sphere of human activity, the Internet should be subject to regulation and control.

71  L Solum and M Chung, ‘The Layers Principle: Internet Architecture and the Law’ (2004) 79 Notre Dame Law Review 815–948.
72  Solum above n 46.
73  N Choucri and D Clark, ‘Integrating Cyberspace and International Relations: The Co-Evolution’ (2012) Working Paper No 2012-29, Political Science Department, Massachusetts Institute of Technology.
74  Andy Yee, ‘Internet Architecture and the Layers Principle: A Conceptual Framework for Regulating Bitcoin’ (2014) 3 Internet Policy Review.
75  ibid 7.



This approach has been characterised as that of ‘Digital Realism’, the basis of which was encapsulated by Judge Frank Easterbrook’s comment that ‘there [is] no more a law of cyberspace than there [is] a “Law of the Horse”’.76 Easterbrook was emphasising that law should be taught and studied as a set of general rules or principles. Cases might deal with the sale of horses, injury by horses or the licensing and racing of horses, but this does not justify a course on ‘the law of the horse’, which would overlook unifying principles. Rather, general legal principles should be applied to particular and isolated areas. Using the analogy of the ‘law of the horse’, Easterbrook’s comment is a succinct summary of the general position of the digital realism school: the internet presents no serious difficulties, so the ‘rule of law’ can simply be extended into cyberspace, as it has been extended into every other field of human endeavour. Accordingly, there is no need to develop a ‘cyber-specific’ code of law. In this way, territorial sovereign states may apply their own sets of rules to Internet activity rather than engaging in an international endeavour. China provides an example. The Chinese government has a monopoly over all Internet connections going in and out of the country. The government acts as the gatekeeper to the Internet and actively regulates the content that is available to Chinese internet users, of whom there are some 700 million. Although it is possible for China to regulate material in the way that it does, there is a conflict between its regulatory model and the model of code and architecture. For example, the addressing system was not designed to create an address space rigidly associated with national boundaries. All IP numbers look alike as far as geography is concerned. In that regard China’s approach is costly and complex. China is not defensive about its censorship and Internet control activities. 
Indeed, it bases its approach upon a vision of Internet sovereignty, at least as far as Chinese users are concerned. China leads the world in e-commerce, and the widespread adoption of the Internet by Chinese citizens suggests that the ‘Great Firewall’ is not a deterrent. The approach has been described as cyber-governance with Chinese characteristics.77 Internet regulation, particularly of the content layer, is workable, as the example of China demonstrates. There have been other examples where Internet content has breached local rules. In 2000 Yahoo ran an auction site which featured an auction of Nazi memorabilia. The site was accessible in France, whose law prohibited the exhibition of Nazi artefacts for sale. Two French organisations commenced proceedings in the Tribunal de Grande Instance de Paris seeking relief.78

76  Frank H Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) University of Chicago Legal Forum 207.
77  Simon Denyer, ‘China’s Scary Lesson to the World: Censoring the Internet Works’ Washington Post (23 May 2016).
78  Tribunal de Grande Instance de Paris No RG:00/5308, 22 May 2000.



The French Court ordered that Yahoo block access to the site by French citizens, and Yahoo was required to take steps to comply notwithstanding objections based on technical grounds.79 In the event the auction did not proceed. But Yahoo itself applied to the United States District Court for Northern California seeking a declaratory judgment that the French Court’s orders were not enforceable under United States law. This brought into play US Constitutional considerations, the issue being whether it was consistent with the Constitution of the US for another nation to regulate speech by a US resident within the United States simply because that speech could be accessed by Internet users in that nation. The Yahoo! case reveals a problem with the model of national regulation and presents a classic conflict of laws dilemma. Although national regulation may be workable in respect of Internet use by citizens within the local territory, the difficulty comes in regulating the activity of other Internet users in other jurisdictions.

v.  Market Regulation and Economics
If national regulation is not a complete solution to Internet governance, Solum asks, what of market forces?80 This model views Internet governance in economic terms in the wider context as a market for goods and services. The example given is in the field of domain names, with the provision of domain names—a form of finite and limited resource—subject to the laws of supply and demand. At the heart of the domain name system (DNS) is what is known as the root directory.81 A domain name is an alphanumeric string that is associated with an Internet Protocol (IP) number. The goal of the DNS is to ensure that no two computers have the same domain name and that all parts of the Internet know how to convert domain names into IP numbers so that data packets may be sent to the correct address. The DNS is a distributed database which contains information that allows domain names to map to their allocated IP numbers. The data files with this particular information are known as ‘roots’ and the servers in which these files are located are known as ‘root servers’ or ‘root name servers’. The top root servers hold the Masterfile of registrations in each Top Level Domain (TLD) and provide

79  LICRA and UEJF v Yahoo! Inc and Yahoo! France, Order in Summary Proceedings by the Superior Court of Paris rendered on 22 May 2000 by First Deputy Chief Justice Judge Jean-Jacques Gomez.
80  Solum above n 46, 75.
81  For a detailed discussion of the issues surrounding domain names see Lee A Bygrave, Susan Schiavetta, Hilde Thurman, Annebeth B Lange and Edward Phillips, ‘The Naming Game: Governance of the Domain Name System’ in Lee A Bygrave and Jon Bing (eds), Internet Governance: Infrastructure and Institutions (Oxford, Oxford University Press, 2009) 147 and following.



information about which other computers contain authoritative information regarding TLDs in the naming structure. The management of the DNS is organised hierarchically. Administrative power devolves from the TLDs to various sub-domains. The addition of new TLDs can only be carried out by ICANN. In contrast to other aspects of Internet architecture, the hierarchical structure of the DNS means that the addition of new second level domains may only be carried out by administrators of TLDs, and only administrators of second level domains may add new third level domains. The activities of ICANN are inextricably intertwined with this particular theory.82 The DNS root is managed by ICANN. There have been conflicts with respect to the operation of the DNS. One area related to the way in which domain names were allocated—primarily upon a first come, first served basis with no obligation to justify entitlement to a domain name. Another relates to those domain names that may or may not be permitted. Yet another problem relates to the operation of WHOIS services which act as a form of registry, providing information about the registrants of domain names. ICANN adopted a restrictive policy on expansion of the root and the availability of domain names up until 2008. This resulted in a scarcity of domain names, albeit one that was artificially created. In addition, the allocation of available domain names was not especially efficient. After 2008, ICANN radically liberalised its policy. There was a greater availability of top level domains, and domain names in scripts other than Roman lettering were permitted. Since 2008, ICANN’s liberalisation policies have extended further. In this respect, ICANN’s superintendence of the domain name space has a strong public interest element to it, but the absence of competitive alternatives and the policy decisions made prior to 2008 may well have been contrary to the public interest. 
Indeed, the absence of competitive institutions in itself demonstrates a problem with the market economics model of Internet governance.
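The name-to-address mapping at the heart of the DNS can be seen from the client side with Python’s standard library. The sketch below simply queries the system resolver, which for a public name would walk the hierarchy described above (root servers, then TLD servers, then the authoritative name servers for the domain) on the caller’s behalf; ‘localhost’ is used here only because it resolves without network access.

```python
import socket

def resolve(name: str) -> str:
    """Return an IPv4 address for a domain name via the system resolver.

    The resolver hides the DNS hierarchy from the caller: for a public
    name it would consult the root, then the relevant TLD servers, then
    the authoritative name servers, caching answers along the way.
    """
    return socket.gethostbyname(name)

print(resolve("localhost"))  # conventionally 127.0.0.1
```

The answer returned for any real domain depends on the resolver consulted and the zone’s current records, which is precisely why control of the root and of TLD registries is a point of governance leverage.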

C.  Internet Exceptionalism
Internet Exceptionalism is a term used to describe the governance approach which holds that the Internet demands laws that are specific to it and that diverge from regulatory precedents in other media. This theory aligns very closely with the cyberlibertarian theories of Johnson and Post. Internet Exceptionalism reflects what is special and unique about the Internet. In 1997, a judge called the Internet ‘a unique and wholly new medium of worldwide human communication’.83 The novelty of the new communications

82  For discussion of ICANN see above s III.A.
83  Reno v ACLU 521 US 844 (1997) per Stevens J.



paradigm prompted the development of internet-specific laws. Goldman suggests that Internet Exceptionalism has developed in three waves.84 The first wave was during the mid-1990s, at which time many saw the Internet as a communications ‘utopia’ that would not be beset by many of the problems apparent in other communications media. One United States statute provides an example: it immunises online providers from liability for the publication of most types of third party content. It was enacted ‘to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation’.85 This is a safe harbour provision that affords online providers a special treatment not enjoyed by offline publishers. Indeed, all safe harbour provisions that provide protection for content and service providers can be seen as examples of Internet Exceptionalism.86 Goldman’s second wave took place in the latter part of the 1990s. Instead of favourable treatment, legislators treated the Internet more harshly than they did ‘off-line’ content providers. Goldman provides the following example: [I]n 2005, a Texas website called announced that it would offer ‘Internet hunting’. The website allowed paying customers to control, via the Internet, a gun on its game farm. An employee manually monitored the gun and could override the customer’s instructions. The website wanted to give people who could not otherwise hunt, such as paraplegics, the opportunity to enjoy the hunting experience. The regulatory reaction to Internet hunting was swift and severe. Over 3 dozen states banned Internet hunting. California also banned Internet fishing for good measure. However, regulators never explained how Internet hunting is more objectionable than physical space hunting.87

Although this example demonstrates a more restrictive approach to Internet use, it is nevertheless an example of Internet Exceptionalism in that it addresses an Internet-specific phenomenon. The third wave of Internet Exceptionalism addresses new Internet-based technologies as they develop. The permissionless innovation that has allowed the development of increased interactivity and a wide variety of new communications platforms has resulted in legislation to regulate, for example, blogs, virtual worlds and social networking sites. The Harmful Digital Communications Act 2015 (NZ) is a classic example of the third wave of Internet Exceptionalism in that it primarily targets interactive websites but also extends to other forms

84  Eric Goldman, ‘The Third Wave of Internet Exceptionalism’ Santa Clara Magazine (Winter 2008). For the original unedited version, to which reference is made in this section, see ‘The Third Wave of Internet Exceptionalism’ Technology & Marketing Law Blog (11 March 2009).
85  47 USC 230—Protection for private blocking and screening of offensive material.
86  For example see ss 92B et seq Copyright Act 1994 (NZ) and ss 23–25 Harmful Digital Communications Act 2015 (NZ).
87  Above n 84.



of digital communications such as SMS messaging. The Australian Enhancing Online Safety for Children Act 2015 (Cth) specifically targets social media websites and, as will be discussed in chapter 11, is more limited than the New Zealand example. Both pieces of legislation are examples of Internet Exceptionalism. Is there a problem with Internet Exceptionalism? In a sense there is, because it challenges the quality of permissionless innovation but does so in an ex post facto way. In some respects it represents a panic-based response to a new technology that demonstrates a lack of understanding about the way in which the technology operates. Often Exceptionalist laws like the Harmful Digital Communications Act 2015 (NZ) are enacted as a populist knee-jerk response to a perceived problem.88 The wider implications of that legislation are that it discriminates between online and off-line communications which may nevertheless have precisely the same content and be just as harmful. In this way the marketplace between web-based enterprises and their off-line competition may be distorted. In extreme conditions, as in the Internet hunting example discussed above, an online enterprise may go out of business. Before enacting Exceptionalist legislation two important steps must be taken. The first is to understand that the Internet is not simply a content delivery system and that the qualities of the Internet discussed in detail in chapter two underlie Internet communication. The second is that any legislation must carefully identify its precise target in order to avoid the difficulties of unintended consequences. Tim Wu89 asked whether or not there was such a thing as Internet Exceptionalism, acknowledging the difference between the Internet and the networks that preceded it, although querying whether it was different in a lasting way. 
Wu points out that the opposite side of the Internet Exceptionalist coin is the cyber realist school of Judge Frank Easterbrook and Jack Goldsmith. Wu himself falls into this category, for in 2006 he co-authored the book Who Controls the Internet.90 The book challenged the so-called conventional wisdom of the time, which was that the Internet could not be regulated. It argued that, despite its novelty, the Internet did not challenge national legal systems, which are reliant on threats of physical force as a means of enforcement of rules, and that nations would assert their power over the network where they considered it necessary. As Wu and Goldsmith said: We are optimists who love the internet and believe that it can and has made the world a better place. But we are realistic about the role of government and power in that future, and realists about the prospects for the future.91


88  For discussion see ch 11.
89  Tim Wu, ‘Is Internet Exceptionalism Dead?’ in Berin Szoka and Adam Marcus (eds), The Next Digital Decade—Essays on the Future of the Internet (Washington DC, TechFreedom, 2010).
90  Jack Goldsmith and Tim Wu, Who Controls the Internet: Illusions of a Borderless World (Oxford, Oxford University Press, 2006).
91  ibid, Introduction.



The behaviour of nation states since, especially China, has proven the truth of that statement. But by the same token Wu recognises that the Internet has changed communications behaviours and amounts to an exception in its difference from the way that other mass communications media have operated. In essence the fundamental change lies in convergence—the coalescing of various communications platforms into the one delivery system. It is perhaps this convergence that amounts to the exception that the Internet represents. Wu is of the view that, even though it is too early to tell, the transience of all earlier examples of new communications systems suggests that what the Internet represents at the moment will fade. The reasons are many. It might simply be that the underlying ideas just discussed turn out to have their limits. Or that they are subject to an almost natural cycle—excessive decentralization begins to make centralization more attractive, and vice versa. More sinisterly, it might be because forces disadvantaged by these ideas seek to undermine their power—whether concentrated forces, like a powerful state, or more subtle forces, like the human desire for security, simplicity and ease that has long powered firms from the National Broadcasting Corporation to Apple, Inc. Whatever the reasons, and while I do think the Internet is exceptional (like the United States itself), I also think it will come to resemble more ‘normal’ information networks— indeed, it has already begun to do so in many ways.92

Of the various exceptions posed by the Internet Wu considers that the concepts of Net Neutrality and the ‘end-to-end’ view of the Internet are easily upset. Discrimination in information systems has previously been the rule rather than the exception, and there are commercial advantages to such an approach. In this way the touted Exceptionalism of Net Neutrality is under threat. Wu and Goldsmith’s approach to Internet Exceptionalism was in answer to the cyberlibertarian position taken by Johnson and Post. But there can be no doubt that under the umbrella of national legal systems, where Wu and Goldsmith argue the regulation and governance of the Internet will lie, rules specific to the Internet will be made. It may well be that the rules will be different between jurisdictions, but they will be based upon the exceptional nature of the Internet and will recognise the fundamental principle underlying Westphalian sovereignty that individual nation states will set their own rules for the Internet. It is perhaps within this concept that the future of Internet governance lies.

V. Conclusion
For some entities, Internet governance is a means of controlling a diverse and multi-jurisdictional communications system. National governments became particularly concerned at the way in which the Internet was being used to support

92  Wu above n 89, 187.



and co-ordinate protest or to challenge existing political systems or regimes. At the same time moves were afoot to give consideration to the issue of Internet governance through international organisations such as the IGF and the ITU. However, because of the distributed nature of the network and the fact that the technical infrastructure is located in various jurisdictions, national sovereignty principles mean that obtaining a universal and consistent consensus on Internet governance at the technical or infrastructural level is going to pose almost insuperable difficulties. At the same time there appears to be some difficulty in understanding exactly what it is that is to be regulated. Is it the infrastructure—the backbone—that underpins the Internet? Is it the various technical standards that have been put in place by Internet engineers and approved by organisations such as the IETF or ICANN? Is it the various platforms—such as Facebook, Twitter, Instagram and Snapchat—that have been ‘bolted on’ to the Internet and that enable various methods of communication and content? Or is it the content of the communication itself, together with the associated metadata such as who said or sent what, to whom, from which location and at what time? The disclosures by Edward Snowden and the professed policy of the government of the People’s Republic of China have made it clear that content and associated information are of considerable interest to national governments and, in China’s case, are controlled rigorously. Cybersurveillance may not be an aspect of Internet governance in and of itself, but when it is associated with the control of access to content it becomes a powerful tool in the overall control of communication utilising the network. 
A further difficulty with a unified approach to Internet governance lies in the existing structures, none of which seem to be subservient to any identifiable state, that are responsible for setting the engineering and technical standards that allow the Internet to work. Over the years and up until recently the United States government adopted a ‘hands-off’ approach to the activities of ICANN. Now that organisation is independent of any prospect of formal US Government supervision, but it is certainly subject to potential ‘capture’ by individual nation states or by interest blocs. The addressing system of the Internet is a vital and strategic prerequisite to the use of the Internet. Perhaps there is a grudging recognition that developing a satisfactory system of universal governance of the network is simply not possible from a technical or political point of view. To further complicate matters, certain fundamental principles such as Net Neutrality, which create tensions between providers and Internet advocates, pose further difficulties for potential regulators, be they governments or businesses. In addition, although the models of Internet governance propose possible solutions, none of those models provides a full solution to the problem. Indeed, the layers theory demonstrates the difficulty in identifying the target of the regulatory or governance structure. Thus, nation states are thrown back to traditional ways of regulating Internet activity within their own boundaries, utilising well-known concepts of territorial sovereignty. Perhaps the best solution for Internet governance on a wider scale is



if there is some form of harmonisation of national laws within a bloc of nations that will apply standard rules within those jurisdictions. The issue with territorial regulation or governance is that localised laws based on local values are applicable. This in turn results in a disparate set of rules of varying strengths and an approach to law-making over the Internet space which results in examples of Internet Exceptionalism—differentiated and often discriminatory treatment of Internet-based behaviours when compared with other ‘off-line’ behaviours. The problem that arises here is that, depending upon whether or not a nation-state bases its concepts of jurisdiction over online behaviour on whether the effects of a certain Internet-based activity are experienced within the territory of the state, a person visiting from another jurisdiction may find him or herself subject to some sort of sanction for online activity that is legitimate within the home state.93 It is clear that no one nation is going to exercise governance or regulatory power over the entire network and its infrastructure. It is in this respect that the real collision in the new paradigm arises. The Internet challenges existing models of control and governance. The distributed nature of the network is one part of the challenge. The Internet has developed in a different way from other communications technologies, which have attracted the attention of states and have been subjected, from an early stage, to some form of state supervision.94 The relatively free-wheeling way in which standards have developed, and which has allowed for permissionless innovation and a lack of control over what can be bolted on to the network, presents another challenge. As Tsutomu Shimomura said, ‘the Internet began as a unique experiment in building a community of people who shared a set of values about technology and the role computers could play in shaping the world. 
That community was based largely on a shared sense of trust’.95 The values of those who were responsible for the early development of the Internet advanced a regulatory hands-off approach to the new technology. Despite the various theories of governance that have been discussed in this chapter, the properties of the technology themselves present the challenge to governance and give rise to the collision between the expectations of the powerbrokers of nation states on the one hand and the continuing disruptive nature of the Internet on the other. However, despite these realities there are other possibilities that I have suggested elsewhere:96 There are a number of possible futures for Internet governance. A gloomy outlook is that the Internet will get more and more fragmented and re-nationalised. A growing

93  There are numerous examples of this sort of problem—see Dow Jones v Gutnick (2002) 210 CLR 575; US v Thomas 74 F 3d 701 (6th Cir 1996); US v Sklyarov and Elcomsoft 7 July 2001, US District Ct, Northern District California, Case No 5-01-257.
94  Telephony and broadcasting (both radio and TV) are examples.
95  Tsutomu Shimomura, Takedown (New York, Hyperion, 1996) 494.
96  David Harvey, 4th edn (Revised) (Wellington, LexisNexis, 2016) 57.



number of governments will start to define a ‘national Internet segment’ and develop policies to surveil, censor and control access to and use of the Internet. National firewalls will separate the ‘domestic Internet’ from the global Internet and an exit and entrance regime into networks will be introduced where users need passwords, handed out by governmental authorities on an annual basis, to go from one domain to another. Political battles among governments over critical Internet resources, cybersecurity and human rights will dominate international discussions, no global agreement will be reached, the voice of non-governmental stakeholders will be ignored and the mandate of the IGF will not be renewed. A positive future scenario is that there will be a more secure Internet in the future with more freedoms, more privacy and more involved stakeholders which enhance their cooperation on equal footing in a further growing global Internet governance ecosystem. The rule of law will govern surveillance activities, internationally and domestically, and will be restricted on the basis of proportionality to cases where clear evidence is available for illegal activities. Internet use will increase and a new wave of innovative services and applications will eventuate, based on the concept of permissionless innovation, where objects are linked to the Internet creating new market opportunities, jobs and spaces for all kind of commercial, cultural and social activities improving the quality of life of billions of users around the globe. Between these two scenarios is the possibility of more of the same: ‘stumbling forward’, as the former US president Bill Clinton once described Internet governance.97 There will be hot political debates with numerous papers and controversial proposals, but little outcome. 
Some small steps could be taken, such as the successful start of some new top-level domains, some arrangements on confidence-building measures to enhance cybersecurity, or a global agreement on some high-level non-binding basic principles for Internet policy-making. But a lot of other open and orphan issues under discussion will remain unresolved and postponed.98

97 Wolfgang Kleinwachter, 'Is ICANN Stumbling Forward? GAC Advice and Shared Decision Making Procedures' (24 October 2012); see forward_gac_advice.
98 Wolfgang Kleinwachter, 'Internet Governance Outlook 2014: Good News, Bad News, No News?' (31 December 2013); see 2014_good_news_bad_news_no_news.

5

The Property Problem

I. Introduction

In the New Zealand courts between 2014 and 2015 there was considerable debate about whether or not a digital file could amount to property. In July of 2014 the Court of Appeal in the case of Dixon v R1 held, in the context of the crime of dishonestly accessing a computer to obtain property, that digital information or a data file did not fall within the definition of property contained in the New Zealand Crimes Act 1961. It adopted what it referred to as the 'orthodox approach'—that there was no property in information.

The decision was the subject of considerable critical comment. It was even suggested that, as the result of the Court's holding, the current legislation relating to computer crime was unfit for purpose. Yet the decision should not have come as any surprise. There is a substantial body of authority, primarily in the civil arena, that supports the Court's conclusion. A similar finding was reached in the subsequent case of Watchorn v R.2

In October of 2015 the Supreme Court overturned the Court of Appeal3 and held that the digital files at issue were in fact property, irrespective of whether they were tangible or intangible. The fundamental characteristic of property was that it was something capable of being owned or transferred.

This chapter examines the underpinning of the decisions in Dixon. It argues that there are in fact two rationales for the Court of Appeal finding. One is based upon legal principle. The other is based upon technological reality. It also critiques the decision of the Supreme Court which, it is respectfully suggested, was wrong and was based more on expediency than principle and technological reality.
The particular collision in the digital paradigm is that, with so much information being digitised—and important information at that—it may well be that current remedies for breach of confidence, copyright infringement and the like do not provide a sufficient remedy nor deterrent particularly when the behaviour is accompanied by clear instances of dishonesty associated with the appropriation


1 Dixon v R [2014] NZCA 329; [2014] 3 NZLR 504 (CA).
2 Watchorn v R [2014] NZCA 493 (CA).
3 Dixon v R [2015] NZSC 147 (SC).



of information which can be converted into something of value. The difficulty is, as was observed in the case of Your Response Limited v Data Team Business Media Limited,4 that the law of unintended consequences may come into play.

II. Information as Property—The Debate in the Digital Paradigm

The shift to digital systems has re-opened the debate about and interest in information as property, especially given the fact that digital information is incoherent and tangible. The issue of property rights to information with the oncoming information economy was the subject of an article by R Grant Hammond where, in the context of a digital economy, it was noted that problems may arise out of the ascription of property rights to information.5 Sole ownership is vastly complicated in the case of information and, in the context of the criminal law, the act of theft is often impossible to detect and difficult to prove, especially where the information appropriated has not left the control of the original 'owner'. Hammond points out that unused information is generally of no use, but the moment information is used it reveals both its existence and content, and thus value is difficult to ascribe.

Certainly Anglo-American jurisprudence provides remedies for the unauthorised disclosure of information which is imparted in confidence, but the basis or cause of action differs between the British Commonwealth jurisdictions and those of the United States. The majority of United States courts have tended to espouse a proprietary theory, whereas Commonwealth courts have opted in favour of an equitable obligation of good faith. Hammond observes that criminal sanctions against the appropriation of information have been enacted but the results have been paradoxical. The argument that 'property' is necessarily associated with 'things' has been weakened. The notion that information can and will be legally protected may have received the imprimatur of the criminal law in certain areas but is by no means a complete solution. That seems to have reinforced, at least in the United States, the notion that trade secrets are proprietary in character.
In terms of the nature of information stored in electronic impulses—that is electromagnetically charged particles on a medium—it must be observed that these electronic impulses exist only for a moment in time as hardware and software interact together. Hammond suggests that there seems no reason in principle why improper interception of or interference with such an impulse could not be


4 Your Response Limited v Data Team Business Media Limited [2014] EWCA Civ 281.
5 R Grant Hammond, 'Quantum Physics, Econometric Models and Property Rights to Information' (1982) 27 McGill Law Journal 47.



subjected to criminal sanctions, but the problem is a complex one. Information recorded upon a paper medium poses little difficulty in that it is neither dynamic nor volatile. Offences relating to dishonest forging or dealing in documents focus on the medium rather than what is contained thereon. Information in electronic form associated with electronic media poses an entirely different problem in that the information is not immediately plain and requires the intermediation of hardware and software to render it into comprehensible form. The problem remains within the British Commonwealth that judges have been reluctant, in the absence of an express statutory provision, to hold that information is property for any purpose, although Hammond observes6 that such holdings appear to be no more than assertions. It seems unusual that in the second decade of the twenty-first century this debate should be continuing and has not as yet been resolved; it certainly needs to be, as more and more activity takes place in the digital space and as the digital economy increases.
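Hammond's point about intermediation can be made concrete. The following Python sketch is an illustration of my own (it is not drawn from Hammond's article or from any of the judgments discussed in this chapter): the same stored sequence of bytes is nothing but numbers until software interprets it, and the wrong intermediation yields gibberish rather than the information.

```python
# The stored form of the word "résumé": nothing but a sequence of numbers.
data = "résumé".encode("utf-8")
print(list(data))              # [114, 195, 169, 115, 117, 109, 195, 169]

# The appropriate intermediation (UTF-8 decoding) renders the information.
print(data.decode("utf-8"))    # résumé

# The wrong intermediation renders the very same bytes as gibberish.
print(data.decode("latin-1"))  # rÃ©sumÃ©
```

On this view the comprehensible information is not the stored impulses themselves but the product of those impulses and the hardware and software that interpret them.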

III. The British Commonwealth Approach

Throughout the British Commonwealth there has generally been a reluctance, in the absence of a specific statutory provision, to hold that information is property for any purpose. By and large this has been an assertion.7 In the main the generalised proposition that information is not property is articulated in the context of a particular set of circumstances, generally in relation to the form of remedy sought.

Thus in the case of OBG v Allan8 the House of Lords held that wrongful interference with contractual rights could not constitute the tort of conversion because the tort applied only to chattels and not to choses in action. The position was articulated by Lord Hoffmann, who pointed out that historically conversion was a tort against a person's interest in a chattel and expressed the view9 that the whole of the statutory modification of the law of conversion had proceeded on the assumption that the tort applies only to chattels. Although it was suggested by Lord Nicholls and Lady Hale that the tort of conversion should be extended to cover the appropriation of things in action, Lord Brown rejected that proposition on the grounds that it would sever the link between the tort of conversion and the wrongful taking of physical possession of property.

Thus OBG v Allan makes clear the sharp distinction in the common law between tangible and intangible property. The issue of tangibility is an important one in considering whether there may be a property right in information. Information in

6 R Grant Hammond, 'Theft of Information' (1984) 100 Law Quarterly Review 252.


7 ibid.
8 OBG v Allan [2007] UKHL 21; [2008] 1 AC 1.
9 ibid para [97].



and of itself has no tangibility at all. Information incorporated into a document is associated with a medium and in such a situation conversion could apply, but it relates to the medium—the document—thereby creating an unlawful interference with a physical object to which a commercial value can be attached. In contrast to chattels, choses in action are intangible things and incapable of the physical possession necessary to support a claim for conversion.

The case of Your Response Limited v Data Team Business Media Limited10 is another case involving issues of information and property but, rather than being considered within the context of a remedy for conversion, the issue was whether or not a possessory lien could apply to a database. Data Team Business Media Limited carried on business as a database manager. It offered customers the service of holding electronic databases and amending them as required in order to ensure that the information contained was up to date. In 2010 Your Response engaged Data Team to hold and maintain its database of subscribers. Following non-payment of fees Data Team refused to release the database or give Your Response access to it until all outstanding fees were paid.

In the proceedings that followed, the Judge at first instance held that the data manager, Data Team, was entitled to withhold the data until those fees were paid and rejected Your Response's argument that the exercise of a lien was inconsistent with the terms of the contract and that it was not possible to exercise a lien over intangible property, in this case the electronic data. In his decision the Judge at first instance drew an analogy between information kept in hard copy in the form of ledgers, over which a bookkeeper could exercise control by means of physical possession, and information kept in electronic form, over which the data manager could exercise control by electronic means.
The Court of Appeal observed that the Judge had not had his attention drawn to the case of OBG Ltd v Allan11 and went on to consider the nature of a common law possessory lien, observing the possessory aspect of the remedy and the requirement for there to be actual possession of goods, requiring tangibility, in contradistinction to a chose in action—essentially a personal right of property which could be claimed or enforced by action and not by taking physical possession.12 It was observed13 that there are indications that information of the kind that makes up a database—usually but not necessarily maintained in electronic form—if it constitutes property at all, does not constitute property of a kind that is susceptible of possession or of being the subject of the tort of conversion. Under the provisions of the Copyright Designs and Patents Act 1988 (UK) the nature of the protection accorded to the makers of databases by that legislation reflects a


10 Above n 4.
11 Above n 8.
12 Torkington v Magee [1902] 2 KB 427.
13 Above n 4 at para [17].



recognition that databases do not represent tangible property of a kind that is capable of forming the subject matter of torts concerned with the interference of possession. Davis LJ observed14 that the subtext of the argument on behalf of Data Team was that the courts should not leave the common law possessory lien stuck in its eighteenth and nineteenth century origins and developments but should go on to give it a twenty-first century application. Although that appealed to modernism and had its attractions, it should be resisted. Davis LJ observed that although that approach found favour with the minority in OBG v Allan it did not find favour with the majority.

The second point made by Davis LJ was more far-reaching. He observed that the law of unintended consequences is no part of the law of England and Wales but it is worth paying attention to it in the appropriate case. He observed that if a common law possessory lien could arise in a case such as Your Response v Data Team it would be a right in rem and not a right in personam. Furthermore a right to such a possessory lien could have an impact upon creditors of the company and could confer rights in an insolvency which other creditors would not have. In addition the position of lenders could be affected and, given the number of IT companies and businesses, the impact of Data Team's arguments, if accepted, could be significant. Davis LJ also observed that if a database is to be regarded as tangible property it may have implications for other areas of the law altogether, for example the law of theft (as contrasted with the legislation relating to misuse of computers).

Davis LJ's observations about unintended consequences found favour with Floyd LJ. He made the observation that an electronic database consists of structured information which may give rise to intellectual property rights but again emphasised that the law had been reluctant to treat information itself as property.
He observed that when information is created and recorded there are sharp distinctions between the information itself, the physical medium on which the information is recorded and the rights to which the information gives rise. Whilst the physical medium and the rights are treated as property, the information itself never has been, and to accept Data Team's arguments would result in a fundamental change in the law.15


14 ibid para [38].
15 There have been a number of other cases which have held that information does not amount to property. In the case of Boardman v Phipps [1967] 2 AC 46 it was held that confidential information was not property. The position in Australia and in New Zealand is similar—see TS and B Retail Systems v Three Fold Resources No 3 [2007] FCA 151 and Farah Constructions Pty v Say-Dee Pty [2007] HCA 22. A similar conclusion has been reached in Hunt v A [2007] NZCA 332; [2008] 1 NZLR 368. In Money Managers Limited v Foxbridge Trading (Unreported, High Court Hamilton, CP 67/93, 15 December 1993, per Hammond J) the observation was made that 'extreme caution should be exercised in granting proprietary protection to information and that if protection is to be granted at all, it should be in very narrowly circumscribed terms'. The rejection of the argument that information is property was also upheld in Taxation Review Authority 25 [1977] TRNZ 129.



A. Information as Property for the Purposes of Theft

The leading English case on whether information is capable of being the subject of theft is that of Oxford v Moss.16 Within the context of the Theft Act 1968 the question was whether or not information could constitute property. 'Property' is defined as including money and all other property, real or personal, including things in action and other intangible property. In that case the defendant was a student who dishonestly obtained the proofs of an examination paper for an examination to be held in his university in June of 1976. He read its contents, memorised them and returned the paper whence he had obtained it. The university authorities charged him with theft of confidential information. At first instance the charge was dismissed on the grounds that there had been no appropriation of property. The argument by the prosecutor was that confidential information could indeed be property within the provisions of the Theft Act.

The Court observed that by any standards the conduct of the student should be condemned and for the layman it would readily be described as cheating, but the issue was whether or not that conduct fell within the scope of the criminal law. A number of the authorities cited emanated from the area of trade secrets and matrimonial secrets, but those cases concerned a duty of good faith in circumstances wherein a breach of confidence may arise. In such cases courts could restrain such actions by way of an injunction or damages. Certainly the student had interfered with the owner's right over the paper (the medium) but he had not permanently deprived the owner of any intangible property, especially given that there was no intention to steal the paper itself.
In a comment on Oxford v Moss the observation was made that it was unrealistic to consider the proprietary interest in the examination paper as consisting solely in the piece of paper without regard to what is inscribed on it.17 A blank piece of paper is of negligible value, but an examination question paper, the preparation of which can involve hours of work by several skilled persons, is a relatively valuable thing.18 It was suggested that if more than one person had been involved the case could have been dealt with as a conspiracy to defraud—an agreement by two or more by dishonesty to injure some proprietary right of another. There was another difficulty in Oxford v Moss, and that was the lack of an intention to appropriate the piece of paper. The decision related to the content or message contained on the piece of paper rather than the fate of the paper (or the medium) itself.

In Canada the case of Stewart v R19 considered whether confidential information could be subject to theft. The question posed by Lamer J was: while one can

16 Oxford v Moss (1979) 68 Cr App R 183.
17 'Case and Comment—Theft—Oxford v Moss' (1979) Crim LR 119.
18 This concept of 'added value' underpins American theory as to the proprietary interest in information.
19 Stewart v R (1988) 1 RCS 963.



steal a document containing confidential information, does obtaining without authorisation the confidential information, by copying the document or memorising its content, constitute theft? Is it fraud?

In this case a union sought to obtain the names, addresses and telephone numbers of employees of a hotel. Because of hotel policy that information was treated as confidential. The employer also barred union representatives from the premises. Stewart, a self-employed consultant, was hired by someone he assumed to be acting for the union to obtain the names and addresses of the employees. He offered a security guard a fee to obtain this information. The security guard informed the hotel authorities. The case was argued throughout on an agreed statement of facts. It was agreed that no tangible object, such as a list containing the information sought, would have been taken had the scheme been carried out. The security guard reported the offer to his chief and to the police and Stewart was charged with counselling the security guard to commit the indictable offence of theft.

Canadian legislation provided that theft was committed by a person who fraudulently and without colour of right takes, or fraudulently and without colour of right converts to his use or to the use of another person, anything whether animate or inanimate, with intent to deprive, temporarily or absolutely, the owner of it, or a person who has a special property or interest in it, of the thing or of his property or interest in it. To determine whether or not confidential information could be the object of theft it was necessary to determine the meaning of 'anything'.
Under Canadian law that was held to encompass certain choses in action, which are intangibles such as a bank credit in a bank account.20 However courts in Canada were of the view that information, even when considered as being confidential, did not qualify as 'anything' within the meaning of the Act because it was intrinsically incapable of being an inanimate thing. The court observed that confidential information per se was a pure intangible and that the word 'anything' in the legislation was restricted in two ways. The first was that, whether tangible or intangible, it had to be the subject of a proprietary right. Second, it had to be capable of being taken or converted in a manner that resulted in the deprivation of the victim, which did not include memorising or copying. It was observed that the protection that is afforded to confidential information in civil cases arises from an obligation of good faith or a fiduciary relationship rather than a proprietary interest. It was the nature of the relationship and the circumstances under which the information was obtained that amounted to the critical factor.21

The court considered that the realm of information must be approached in a comprehensive way, taking into account the competing interests in the free flow of information, in one's right to confidentiality, or again in one's economic interests in certain kinds of information. The choices to be made rest upon political

20 In contradistinction to the position in New Zealand in 1999: see R v Wilkinson [1999] 1 NZLR 403.
21 In making this observation the court referred to Oxford v Moss.



judgements and matters of legislative action rather than judicial decision. Thus the court was of the view that confidential information should not amount to property.

The second restriction on the scope of the word 'anything' was that property must be capable of being taken or converted in a manner that results in the deprivation of the victim. It was observed that tangible things present no difficulty, but pure intangibles, as they have no physical existence, can obviously only be converted and not taken. The court went on to observe that conversion—an active interference with a chattel in a manner inconsistent with the rights of another whereby that other is deprived of its use and possession—is not available for confidential information because it is not of such a nature that it can be converted. If one appropriates confidential information without taking a physical object—for example by memorising or copying it—the owner is not deprived of the use or possession thereof. Without deprivation there can be no conversion.

B. The Dixon Case

The New Zealand case of Dixon v R centred on the use of a computer system to dishonestly obtain property—a digital file—in breach of section 249(1)(a) of the Crimes Act 1961 (NZ). In that case the Court of Appeal held that a digital file cannot be property for the purposes of the criminal law.22 This depends upon the way in which various definitions contained in the Crimes Act, coupled with the nature of the charge, were interpreted by the court. The New Zealand Supreme Court reversed the Court of Appeal and adopted a different and, in my opinion, incorrect approach.

The facts were that Mr Dixon had been employed by a security firm in Queenstown. One of the clients of the firm operated a bar in Queenstown and had installed a closed circuit TV system in the bar. In September 2011 the English rugby team was touring New Zealand as part of the Rugby World Cup. The captain of the team was a Mr Tindall who had recently married the Queen's granddaughter. On 11 September Mr Tindall and several other members visited the bar and there was an incident involving Mr Tindall and a female patron which was recorded on the CCTV system.

Mr Dixon found out about the existence of the footage and asked one of the bar's receptionists to download it onto a computer that was used at work. This was done under the impression that Mr Dixon required it for legitimate work purposes. The footage was located and saved onto the computer. Mr Dixon accessed the computer, located the relevant file and transferred it onto a USB stick belonging to him. He then attempted to sell the footage but that proved to be unsuccessful and he posted it on a video sharing site, YouTube, resulting in a storm of publicity both in New Zealand and in the United Kingdom. At his trial the Judge found that Mr

22 Above n 1.



Dixon had done this out of spite and to ensure that no one else would have the opportunity to make any money from the footage. A complaint was laid with the Police and Mr Dixon was charged under section 249(1)(a) of the New Zealand Crimes Act.23 The indictment against Mr Dixon alleged that he had accessed the computer system and thereby dishonestly and without claim of right obtained property—the video file. The issue before the court was whether or not that file and digital footage stored on a computer amounted to property as defined in the Crimes Act. The New Zealand Crimes Act24 contains a definition of property which is as follows:

[P]roperty includes real and personal property, and any estate or interest in any real or personal property, money, electricity, and any debt, and any thing in action, and any other right or interest.

The court considered the legislative history of the definition, noting that in the bill that introduced the computer crimes sections in 1999 a separate definition of property specifically for those crimes had been provided. However this definition was discarded by the select committee, which rejected the suggestion that there should be different definitions of the word 'property' for different offences. The court also made reference to an earlier case of Davies v Police25 where it was held that internet usage—the consumption of megabytes and the transmission of electronic data—is property, but in that case the Judge specifically distinguished internet usage from the information that was contained in the data. Thus Dixon was the first case where the court had to consider property as defined in the context of electronically stored footage or images.

The Court of Appeal was of the view that the trial Judge had been influenced by the very wide definition of property and the inclusion of intangible things, and that the footage in question seemed to have all the normal attributes of personal property. However, the operator of the CCTV system was not deprived of the file. What in fact it lost was the right to exclusive possession and control of it. The court considered that the trial Judge's holding that the files were within the scope of the definition of property reflected 'an intuitive response that in the modern computer age digital data must be property'.26 Digital files are not property within

23 That section provides as follows:
Accessing computer system for dishonest purpose
(1) Every one is liable to imprisonment for a term not exceeding 7 years who, directly or indirectly, accesses any computer system and thereby, dishonestly or by deception, and without claim of right,—
(a) obtains any property, privilege, service, pecuniary advantage, benefit, or valuable consideration; or
(b) causes loss to any other person. (emphasis added)


24 s 2.
25 Davies v Police [2008] 1 NZLR 638 (HC).
26 R v Dixon above n 1 para [20].



the definition in the Crimes Act. Thus Mr Dixon could not obtain property and was charged under the wrong part of section 249(1)(a). Rather, held the court, he should have been charged with accessing a computer and dishonestly and without claim of right obtaining a benefit.27

The court went on to consider whether or not the digital footage might be distinguishable from confidential information. Once again it noted the distinction between the information or data and the medium upon which it was contained, observing that the computer disc containing the information was property whilst the information contained upon it was not. It noted that a digital file arguably does have a physical existence in the way in which information in non-physical form does not. The reality of the matter is, however, that a digital file does have a physical existence but not in coherent form. One of the subtexts to the Court of Appeal's observations on the electronically stored footage was that when stored electronically it has a continuity similar to film footage. For reasons that I will discuss below that is not in fact the case. The Court then went on to discuss the nature of information in the electronic space and stated:

[I]t is problematic to treat computer data as being analogous to information recorded in physical form. A computer file is essentially just a stored sequence of bytes that is available to a computer program or operating system. Those bytes cannot meaningfully be distinguished from pure information. A Microsoft Word document, for example, may appear to us to be the same as a physical sheet of paper containing text but in fact is simply a stored sequence of bytes used by the Microsoft Word software to present the image that appears on the monitor.28
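The court's Microsoft Word example can be made concrete. A modern .docx file is in fact a ZIP archive of XML parts, and the following Python sketch—my own illustration, using a hand-built stand-in for a Word file rather than one produced by Microsoft Word—shows that at the level at which it is stored such a file is nothing but a sequence of bytes, opening with the ZIP container's 'PK' marker rather than anything resembling a document.

```python
import io
import zipfile

# Build a minimal stand-in for a .docx file: a ZIP archive holding one XML part.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml",
                "<w:document><w:body><w:p>Hello</w:p></w:body></w:document>")

raw = buf.getvalue()
print(raw[:4])    # b'PK\x03\x04' — the ZIP format's marker, not prose
print(type(raw))  # <class 'bytes'>
```

Nothing in `raw` distinguishes a 'document' from any other stored data; it is the word-processing software that presents those bytes as a page of text on the monitor.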

Although the court gave this consideration to the technological aspects of digital information, it took what could be described as the 'orthodox' approach. What must be remembered is that the definition of property in the Crimes Act followed the decision of R v Wilkinson29 where it was held that credit extended by a bank was not capable of being stolen because the definition of 'things capable of being stolen' in the Crimes Act as it stood at that time was limited to moveable, tangible things. Although the definition of a document in the Crimes Act in fact extended to electronic files, the word 'document' (which would have extended the definition of property to include electronic files) did not appear in the definition of property.

The important thing is that the conclusion in Dixon did not make section 249(1) of the Crimes Act meaningless, even though some commentators have subsequently suggested that the provisions are no longer 'fit for purpose'. The section still extended to cases where, for example, a defendant accesses a computer

27 In its consideration the court referred to Oxford v Moss, noting that it was not a closely reasoned decision but one which remained good law in England and had been followed by the Supreme Court of Canada in Stewart v R above n 19.
28 Dixon v R above n 1 para [31].
29 Above n 20.



and uses fraudulently obtained credit card details to unlawfully obtain goods. In this case it was observed by the court that Mr Dixon had been charged under the wrong part of the section. Indeed other charges were available, including obtaining a benefit, and the Court of Appeal substituted that charge for the charge of obtaining property, considering that the advantage that Mr Dixon obtained fitted within the concept of a benefit.

The issue of benefit within the digital space was further considered in the case of Watchorn v R.30 In that case Mr Watchorn was an employee of an exploration company engaged in the prospecting and production of oil and gas. In the course of his employment he downloaded extensive Geoscience data from his employer's computer system onto a portable hard drive. This information included a 'secret recipe' which related to data dealing with the discovery of sites of oil and gas. The information had very high value to the employer and had it been disclosed to a competitor it would have been damaging to the company and beneficial to that competitor. On another occasion Mr Watchorn loaded similar information onto a USB memory stick and at the same time gave notice of his intention to resign from his employment and commence employment with another, competing, company.

The Court of Appeal noted its decision in Dixon and confirmed the holding that the obtaining of data by accessing a computer system could not amount to obtaining property within the meaning of section 249(1)(a) of the Crimes Act, accepting that that analysis must apply to Mr Watchorn. The court then went on to consider whether or not it could properly substitute an alternative charge of obtaining a benefit.
The first thing the court did was to consider whether or not there had to be a dishonest purpose for obtaining a benefit. Despite the fact that the heading to section 249 states 'Accessing computer system for dishonest purpose', that is not an accurate summary of the offence itself. The ingredients of section 249(1) do not include a dishonest purpose: what the Crown must prove is that the accused accessed a computer system and thereby dishonestly or by deception and without claim of right obtained a benefit. In light of the definition of 'dishonestly' in section 217 of the Crimes Act, all the Crown had to prove was that Mr Watchorn did not have his employer's authorisation to download the data that he acquired. It should be noted that the New Zealand Supreme Court in R v Hayes31 stated that 'dishonestly' within the definition contained in section 217 requires an absence of a belief that there was consent or authority and that it is not necessary to prove that the belief was reasonable.

The Court of Appeal observed that if Mr Watchorn actually believed he was authorised to download the data then the element of dishonestly obtaining would not have been proven. The evidence before the court was that his employer's

30  Above n 2.
31  R v Hayes [2008] 2 NZLR 321.

The British Commonwealth Approach


executives had claimed that Mr Watchhorn had no authority, implied or otherwise, to take its Geoscience data, and there were conflicting claims on Mr Watchhorn's part as to whether or not he was authorised to download the files. The issue of absence of claim of right was different. Dishonesty addressed whether Mr Watchhorn believed he was authorised to download the data. Claim of right addressed whether or not he believed, even if he was not authorised, that such downloading was permissible. There was no evidence in Mr Watchhorn's case that any implied entitlement did exist and no evidence that he believed that it did. The fact that he had downloaded data from previous employers did not provide a proper foundation for a finding that he was lawfully entitled to do so in this case. Once those aspects of the matter had been considered the court went on to consider the nature of obtaining a benefit. In Dixon the benefit had been the opportunity to sell digital CCTV footage that had been obtained by accessing his employer's computer. On this occasion there was no evidence that Mr Watchhorn tried to sell the data; the issue was whether a benefit was limited simply to financial advantage or was something wider. What constituted a benefit in Watchhorn involved a more nuanced situation than that in Dixon. The court considered that it was arguable on the facts of Watchhorn's case that the advantage that he gained was his ability to access the data outside his work environment, and without the supervision of his colleagues, after he had left his former employment. Indeed it could be argued that he did not in fact exploit the advantage given to him by selling the data or making it available to his new employer. To make matters more complex, the Crown did not actually formulate the precise nature of the benefit that Mr Watchhorn might have received.
The failure to articulate such a benefit meant that he did not have notice of any allegation that he could properly contest. This case differed from Dixon, where the court was able to identify the benefit Mr Dixon hoped to obtain from the facts proven at trial. Both Dixon and Watchhorn demonstrate the importance of bringing the proper charge.

i.  Dixon in the Supreme Court

It is difficult to understand why Mr Dixon appealed to the Supreme Court. Although the Supreme Court held that the Court of Appeal was in error, it restored the original conviction. Pyrrhic victory does not adequately describe the outcome from Mr Dixon's point of view. The focus of the decision was upon the definition of property in the Crimes Act to which reference was made in section 249.32 Arnold J, writing for the Court, noted that the definition was inclusive rather than exclusive, that it was circular in that property was defined as including real and personal property and finally


32  For the text of the definition of property and s 249 see above fn 23.


The Property Problem

in wide terms it included tangible and intangible property. The Court then went on to consider the definition of goods, noting, as has been discussed above, that the term 'goods' in consumer legislation is defined to include computer software for the purpose of avoiding doubt.33 It will be remembered that the specific inclusion of software was necessary because of conceptual difficulties, especially surrounding the issue of tangibility, which was an essential characteristic of goods. Upon a linear reading of the decision it is not immediately apparent why the Court embarked upon this discussion, but the importance of tangibility was to become apparent at a later stage in the decision. The Supreme Court did not consider that it was called upon to reconsider the 'orthodox' approach to property that had been adopted by the Court of Appeal and in doing so effectively avoided the real issue in the case. It appears that this may have resulted from the way in which the case was argued. Before the Supreme Court counsel for the Crown stepped away from arguing that pure information was property. Rather, the argument was focused upon the fact that digital files were property because they could be owned and dealt with in the same way as other items of personal property. Thus the Court was able to sidestep dealing with the major finding of the Court of Appeal and could approach the problem from a different angle. Another reason for the Court not considering the 'pure information as property' issue was that Mr Dixon had dismissed his lawyer prior to the hearing and, accordingly, the point was not fully argued. Therefore it was considered that it was not an appropriate occasion to reconsider what the Court of Appeal had referred to as the orthodox view. Rather, the Supreme Court took a contextual approach to the issue of property.
It started by observing that the meaning of the word 'property' varies with context.34 It referred to a comment made by Gummow and Hayne JJ in Kennon v Spry where they stated: 'The term "property" is not a term of art with one specific and precise meaning. It is always necessary to pay close attention to any statutory context in which the term is used.'35 The Court then went on to observe that within the context of section 249(1)(a) and in light of the definition of 'property' in section 2, there was no doubt that the digital files at issue were property and not simply information. The Court considered that digital files were identifiable, had a value and were capable of being transferred to others. They also had a physical presence, although one that could not be detected by means of the unaided senses. It may be that digital files could be classified as tangible or intangible. The Supreme Court did not say which, and thus avoided a significant issue. The Court also omitted to discuss the inconvenient issue of the necessity of exclusive possession as an element of property. What

33  Within the New Zealand context see the Commerce Act 1986, the Consumer Guarantees Act 1993, the Fair Trading Act 1986 and the Sale of Goods Act 1908.
34  Dixon v R above n 3 para [25].
35  Kennon v Spry [2008] HCA 56, (2008) 238 CLR 366 at [89].



the Court did was to classify digital files as property for the purposes of section 249(1)(a), but only for the purposes of section 249. The Court considered the history of the legislation, observing that a proposed definition of property, rather than an earlier concept of 'things capable of being stolen', would have put the issue of whether a digital file was property beyond doubt. However, Parliament did not adopt that extensive definition. The Court was of the view that the use of the word 'property' in section 249 was broad in its application and looked at the definition of a 'computer system', which included items such as 'software' and 'stored data'. The focus upon data as an integral element was enhanced by the term 'access', which is defined to include receiving data from a computer; data is received even though it is copied rather than permanently removed. The Court observed: Given that Parliament contemplated situations where a person copied stored data from a computer, which of the offences might apply where the person taking the data did so without authority? There are three possibilities—ss 249, 250 and 252. It is not obvious that s 250 would apply. If someone simply took a copy of existing data, but did not damage, delete or modify it, could it be said that the person 'interfered with' or 'impaired' the data? We rather doubt that it could. Section 252 could apply. It creates an offence of intentionally accessing a computer system without authority and provides for a maximum penalty of two years' imprisonment. However, that offence focuses on unauthorised access simpliciter; it does not address the issue of dishonest purpose. Where the access is for a dishonest purpose, s 249 applies and there are significantly higher maximum penalties.36

The Court then went on to consider the situation where a person without authority located, copied and dealt with valuable digital files contrary to the interests of the file’s owner. The inclusion of that conduct is consistent with the features of the legislation to which reference had been made. Looking at the issue conceptually, of those concepts identified in section 249(1)(a)—property, privilege, service, pecuniary advantage, benefit or valuable consideration—property seemed most apt to capture what was obtained by Mr Dixon as the result of the unauthorised access. Thus from a conceptual view of what it was that the accused did and what he took, the word ‘property’ seemed the most suitable word to encompass the situation. In considering what Mr Dixon took the Court noted that the file had an economic value and was capable of being sold. Although the files remained on the CCTV system, the compilation contained what was valuable in the full files. The compilation had a material presence. It altered the physical state of the medium upon which it was stored—the computer disk or USB stick—illustrated by the fact that electronic storage space can be fully utilised. This aspect of material presence led to a discussion of some American cases where a different approach to computer files as property has been adopted. The


36  Above n 3 at [36].



Court referred to the case of South Central Bell Telephone Company v Barthelemy37 where the physical processes and characteristics of software were examined. In response to the suggestion that software was merely knowledge or intelligence—perhaps another way of stating information—the Court observed that the software was knowledge recorded in a physical form which had a physical existence, which took up space on a tape, disk or hard drive, and which made physical things happen that could be perceived by the senses. The software was ultimately recorded and stored in physical form on a physical object. The Court also referred to a number of other American cases, although it noted that the US Courts had not been consistent on the point, with some holding that software is intangible property. In Erris Promotions Limited v CIR,38 where the argument was whether or not software code was tangible property, Ronald Young J held that it was intangible rather than tangible. The issue of tangibility or intangibility is something of a red herring, having regard to the fact that property can be tangible or intangible according to the definition of property in section 2 of the Crimes Act 1961, but the Supreme Court had earlier stepped away from considering whether a data file was tangible or intangible. The reason why it did so was that in the Court's mind it did not matter. The definition of property stated that it included tangible and intangible property. What emerged from the brief discussion of the US authorities is that although they differ as to whether software is tangible or intangible, there is general agreement that software is property. The Court then encompassed data files as property by observing, 'There seems no reason to treat data files differently from software in this respect'.39 But it is important to note that the Court's holding applies to the concept of property within the bounds of section 249 of the Crimes Act.
The trouble with the approach of the Supreme Court is that it ignores technological reality. For example, what is it about a digital file that is capable of being owned and transferred, and that provides economic value? As will be discussed below, the file is no more and no less than a series of electronic impulses that require hardware and software to render them comprehensible. It is this intermediation of technologies that determines what the user sees and does. Otherwise the file is useless. When the file is copied the original or exemplar remains on the host computer. To whom does the file belong? And how is the proprietary nature of a computer file separate from the data or information that it contains? By approaching the matter in reliance on such a distinction, the Court has essentially allowed information to be treated as property. A further problem arises from limiting the applicability of the definition of property to section 249 of the Crimes Act. If a computer file is property for those purposes, should it not also be property for the purposes of the crime of receiving property knowing it to be dishonestly obtained? This could have significant

37  South Central Bell Telephone Company v Barthelemy 643 So 2d 1240 (Lou 1994).
38  Erris Promotions Limited v CIR [2004] 1 NZLR 811 (HC).
39  Dixon v R above n 3 para [50].



adverse consequences for those who received digital files unlawfully obtained by a hacker or a ‘whistleblower’. I shall now proceed to consider the technological basis which I argue justifies the approach of the Court of Appeal rather than that of the Supreme Court.
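The point made above about the technological reality of 'taking' a digital file can be sketched in a few lines of Python (a toy illustration; the file name and contents are invented): after a copy is made, two byte-identical sequences exist and the exemplar never leaves the host medium, which is precisely what distinguishes the situation from the taking of a chattel.

```python
import os
import shutil
import tempfile

# "Taking" a digital file is copying: after the copy, two identical byte
# sequences exist, and the exemplar never leaves the host medium.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "footage.dat")      # hypothetical file name
with open(src, "wb") as f:
    f.write(b"CCTV compilation")                # the exemplar on the host

dst = src + ".copy"
shutil.copyfile(src, dst)                       # the "obtained" file

with open(src, "rb") as a, open(dst, "rb") as b:
    identical = a.read() == b.read()
print(identical)                # True: copy indistinguishable from original
print(os.path.exists(src))      # True: the exemplar remains in place
```

Nothing in the copy operation deprives the owner of possession; the legal question is what, if anything, has been 'obtained'.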

C.  Information or Data in the Digital Space

The issue of where property lies within the medium/information dichotomy has clearly been present for some considerable period of time, as the cases discussed demonstrate. Even in the case of a tangible item such as a book I can have property in the book itself, but I do not own the content because that is the intellectual property of the author. The particular property right there—the copyright—gives the author control over the use of the content of the book. Thus the author may lose possession and control of the medium but he or she does not lose control of the message. But that is something that has developed through specific statutory provisions rather than the common law. Those legislatively created special property rights do not extend to the provisions of the criminal law, even although copyright owners frequently quote the mantra that 'copyright infringement is theft'. But to understand clearly the import of the decision in Dixon it is necessary to understand the nature of information or data in the digital space. The Court of Appeal refers to 'information' because that is the basis of the orthodox conclusion that it reached. 'Information' implies a certain continuity, contiguity and coherence that derives from the way in which it was communicated in the pre-digital paradigm. Lawyers are so used to obtaining information that is associated primarily with paper that the medium takes second place to the message. Lawyers focus upon the content layer—an approach that must be reconsidered in the digital paradigm. Rather than use the word 'information', and for reasons that I shall develop, perhaps the word 'data' should be substituted. The properties of electronic and digital technologies and their product require a review of one's approach to information. The nature of the print and paper-based information medium and its digital equivalent are radically different.
Apart from occasional incidents of forgery, with paper-based documents what you saw was what you got. There was no underlying information embedded or hidden in the document, as there is with metadata in the digital environment. The issue of the integrity of the information contained on a static medium was reasonably clear. Electronic data is quite different to its pre-digital counterpart. Some of those differences may be helpful to users of information. Electronic information may be easily copied and searched, but it must be remembered that electronic documents also pose some challenges. Electronic data is dynamic and volatile. It is often difficult to ensure that it has been captured and retained in such a way as to ensure its integrity. Unintentional modifications may be made simply by opening and



reading data. Although the information that appears on the screen may not have been altered, some of the vital metadata which traces the history of the file—and which can often be incredibly helpful in determining its provenance and may be of assistance in determining a chronology of the events, and when a party knew what they knew—may have been changed. To understand the difficulty that the digital paradigm poses for our conception of data it is necessary to consider the technological implications of storing information in the digital space. It is factually and paradigmatically far removed from information recorded on a medium such as paper. If we consider data as information written on a piece of paper it is quite easy for a reader to obtain access to that information long after it was created. The only thing necessary is good eyesight and an understanding of the language in which the document is written. It is ‘information’ in that it is comprehensible. It is the content that informs. Electronic data in and of itself does not do that. It is incoherent and incomprehensible, scattered across the sectors of the electronic medium upon which it is contained. In that state it is not information in that it does not and cannot inform. Data in electronic format, as distinct from writing on paper, is dependent upon hardware and software. The data contained on a medium such as a hard drive requires an interpreter to render it into human readable format. The interpreter is a combination of hardware and software. Unlike the paper document the reader cannot create or manipulate electronic data into readable form without the proper equipment in the form of computers.40 There is a danger in thinking of electronic data as an object ‘somewhere there’ on a computer in the same way as a hard copy book is in the library. 
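The dependence of data upon interpretation can be shown with a short Python sketch (purely illustrative): the stored byte values remain the same throughout, yet they 'inform' only once software decodes them, and a different interpreter extracts a different meaning from the identical states.

```python
# What a medium holds is a sequence of byte values; only a decoding
# layer (here, UTF-8) turns them into something that can inform.
raw = bytes([72, 101, 108, 108, 111])   # the stored states
print(raw.hex())                        # 48656c6c6f: faithful, but opaque
print(raw.decode("utf-8"))              # Hello: legible only via software

# The same bytes under a different interpretation "mean" something else:
print(int.from_bytes(raw, "big"))       # the identical states read as an integer
```

The bytes themselves never change; comprehensibility is supplied entirely by the interpreting layer.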
Because of the way in which electronic storage media are constructed it is almost impossible for a complete file of electronic information to be stored in consecutive sectors of the medium. Data on an electronic medium lacks the linear contiguity of a page of text or a celluloid film. An electronic file is better understood as a process by which otherwise unintelligible pieces of data are distributed over a storage medium, assembled, processed and rendered legible for a human reader or user. In this respect 'the information' or 'file' as a single entity is in fact nowhere. It does not exist independently from the process that recreates it every time a user opens it on a screen.41 Computers are useless unless the associated software is loaded onto the hardware. Both hardware and software produce additional evidence that includes, but is not limited to, information such as metadata and computer logs that may be relevant to any given file or document in electronic format. This involvement of technology makes electronic information paradigmatically different from traditional information where the message and the medium are
40  Burkhard Schafer and Stephen Mason, 'The Characteristics of Electronic Evidence in Digital Format' in Stephen Mason (ed), Electronic Evidence, 3rd edn (London, LexisNexis Butterworths, 2012) 2.05.
41  ibid 2.06.
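The fragmentation described above can be modelled with a toy 'disk' in Python (the sector numbers and contents are invented): the legible file exists only as the output of a reassembly process that consults an allocation structure, not as a contiguous object residing anywhere on the medium.

```python
# A toy "disk": fragments of one file scattered across non-consecutive
# sectors. The legible file exists only as the product of a read process
# that consults an allocation table and reassembles the fragments.
sectors = {93: b"lo, wo", 7: b"Hel", 21: b"rld"}   # sector number -> data
allocation_table = [7, 93, 21]                     # logical fragment order

def read_file(table, disk):
    """Reassemble scattered fragments into a human-readable whole."""
    return b"".join(disk[s] for s in table).decode("ascii")

print(read_file(allocation_table, sectors))  # Hello, world
```

Delete the allocation table and the fragments persist, but 'the file' as a coherent entity ceases to be recoverable without reconstruction.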



one. It is this mediation of a set of technologies that enables data in electronic format—at its simplest, positive and negative electromagnetic impulses recorded on a medium—to be rendered into human readable form. This gives rise to other differentiation issues, such as whether or not there is a definitive representation of a particular source digital object. Much will depend, for example, upon the word processing program or internet browser used. The necessity of this form of mediation for information acquisition and communication explains the apparent fascination that people have with devices such as smartphones and tablets. These devices are necessary to 'decode' information and allow for its communication and comprehension. Thus, the subtext to the description of electronically stored footage, which seems to suggest a coherence of data similar to that contained on a strip of film, cannot be sustained. The 'electronically stored footage' is meaningless as data without a form of technological mediation to assemble and present the data in coherent form. The Court of Appeal in Dixon made reference to the problem of trying to draw an analogy between computer data and non-digital information or data and referred to the example of the Word document. This is an example of the nature of information as process that I have described above. Nevertheless there is an inference of coherence or contiguity of information in a computer file that is not present in the electronic medium—references to a 'sequence of bytes' are probably correct once the assembly of data prior to presentation on the screen has taken place, but the reality is that throughout the process of information display on a screen there is constant interactivity between the disc or medium interpreter, the code of the word processing program and the interpreter that is necessary to display the image on the screen.
In the final analysis there are two approaches to the issue of whether or not digital data is property for the purposes of theft. The first is the orthodox legal position taken by the Court of Appeal, which is preferred as intellectually more rigorous than the Supreme Court's exercise in expediency and its abandonment of basic principle. The second is the technological reality of data in the digital space. Even although the new definition of property extends to intangibles such as electricity, it cannot apply to data in the digital space because of the incoherence of such data. Even though a file may be copied from one medium to another it remains in an incoherent state. Even though it may be inextricably associated with a medium of some sort or another, it maintains that incoherent state until it is subjected to the mediation of hardware and software that I have described above. The Court of Appeal's information-based approach becomes even sharper when one substitutes the word data for information. Although there is a distinction between the medium and the data, the data requires a storage medium of some sort and it is this that is capable of being stolen. It is quite clear from both the decisions in Dixon and Watchhorn that any charge alleging the obtaining of property, where what in fact has been obtained is digital material, cannot be sustained, and one of the alternatives in section 249 must be considered. For this reason the exposition by the Court of Appeal of the nature of a benefit, and the crystallisation of that benefit, must be undertaken by a prosecuting



authority. It is perhaps in this direction that a possible solution lies. However, the final paragraph of the decision of the Court of Appeal in Watchhorn is instructive. The court said 'the decisions of this court in Dixon and the present case have identified some drafting issues and inconsistencies in some Crimes Act provisions. We respectfully suggest that consideration be given to remedial legislation'.42 The difficulty is whether or not such legislative approaches may create the unintended consequences referred to in Your Response Limited v Datateam Business Media Limited.43 The issue of the nature of any property in digital data needs to be considered but at the same time must be carefully thought out. Although it might be attractive for the definition of property simply to include digital data, the problem that arises is that what amounts to copyright infringement within the digital space could well become a criminal offence, and the presently incorrect adage offered by copyright owners that copyright infringement is theft could well become a reality.44 In the United States there are some subtle differences in approaching the nature of information, the property that may exist in it and the potential consequences for information in the digital space.

IV.  The United States' Position

The United States has a different approach to the issue of information as property. In Carpenter v US45 one Winans was co-author of a Wall Street Journal investment advice column. He entered into a scheme with one Felis and a stockbroker who, in exchange for advance information from Winans as to the timing and contents of the column, bought and sold stocks based on the column's probable impact on the market and shared their profits with Winans. The particular issue was whether or not Winans had fraudulently misappropriated property within the meaning of the mail and wire fraud statutes. White J, writing for the court, noted that the object of the scheme embarked upon by Winans was to take the Journal's confidential business information, but observed that its intangible nature did not make it any less property protected by the mail and wire fraud statutes. The Court then went on to observe that confidential business information has long been recognised as property and cited Ruckelshaus v Monsanto Co,46 Dirks v SEC47 and Board of Trade Chicago v Christie Grain and Stock Co.48

42  Above n 1.
43  Above n 4.
44  This may be the subtext to the New Zealand Supreme Court's limitation of a digital file as property to s 249 of the Crimes Act.
45  Carpenter v US 484 US 19 (1987).
46  Ruckelshaus v Monsanto Co 467 US 986, 1001-1004 (1984).
47  Dirks v SEC 463 US 646, 653 (1983).
48  Board of Trade Chicago v Christie Grain and Stock Co 198 US 236 (1905).



Confidential information acquired or compiled by a corporation in the course and conduct of its business was a species of property to which the corporation has the exclusive right and benefit, and which a court of equity would protect. The Journal, the court held, had a property right in keeping confidential and making exclusive use, prior to publication, of the schedule and contents of Winans' column. It observed that news matter, however little susceptible of ownership or dominion in the absolute sense, is stock in trade, to be gathered at the cost of enterprise, organisation, skill, labour, and money, and to be distributed and sold to those who will pay money for it, as for any other merchandise.49

In this case the confidential information was generated from the business, and the business had a right to decide how to use it prior to disclosing it to the public. The court held that it was sufficient that the Wall Street Journal had been deprived of its right to exclusive use of the information, for exclusivity is an important aspect of confidential business information and of most private property. It is perhaps important to note that the subtext to the court's decision about business information as property places considerable weight upon the manner and purpose of the acquisition of the information. The gathering of information for business purposes implicitly has an impact upon the way in which the business conducts its affairs and, ultimately, its profitability. In many circumstances such information will be sensitive, will necessarily involve aspects of confidentiality and will inevitably be 'commercially sensitive'. In this respect, information becomes part of the stock in trade of the business and seems to occupy an equal standing with its inventory. This particular theory conflates information acquired in circumstances where there is a confidential element with fiduciary duties arising within an employer-employee relationship. It seems also to have resonances of Locke's 'sweat of the brow' theory of property. Californian jurisprudence also provides some interesting examples of circumstances in which information may be considered property. In the cases of People v Parker50 and People v Dolbeer51 the defendants devised schemes for obtaining and copying supplementary lists of new telephone customers which were not generally available to the public and which the telephone company considered confidential.
The defendants argued that the lists were not property but mere information, and because the original lists were returned after they were copied, there was no specific intent to deprive the telephone company of them permanently and therefore no theft. In both Parker and Dolbeer the court held that the lists were in fact property, they had value and they were subject to theft. This provides an interesting contrast with the approach taken by the court in Oxford v Moss.52 In Williams


49  Above n 45 at 26, referring to International News Service v Associated Press 248 US 215, 236 (1918).
50  People v Parker (1963) 217 Cal App 2d 422, 31 Cal Rptr 716.
51  People v Dolbeer (1963) 214 Cal App 2d 619, 29 Cal Rptr 573.
52  Above n 16.



v Superior Court,53 attorneys who had illicitly obtained a photocopy of an insurance company's investigative file, of which the insurance company retained the original, were held to have received concealed stolen property. These cases focused on whether the information contained in the documents, as distinguished from the paper upon which the information was recorded, was property the subject of theft,54 on whether the information had value55 and on whether the copy itself had been taken from the rightful possessor.56

A.  Intangibles and Conversion—A United States Approach

The Ninth Circuit Court of Appeals dealt with a case involving intangibles, property rights and conversion theory in the case of Kremen v Cohen57 using the theory of 'document merger'. In the early days of the Internet domain names were virtually free for the asking. An entrepreneur, Gary Kremen, became the owner of the domain name sex.com, which was registered to his business, Online Classifieds, and he listed himself as the contact. Stephen Cohen also saw the potential of the domain name. Network Solutions, the registrar of domain names, was sent a letter from Cohen enclosing what purported to be a letter from Online Classifieds claiming that the company had been forced to dismiss Kremen but 'never got around to changing our administrative contact with the internet registration and … the board of directors have decided to abandon the domain name'. The letter explained: because we do not have a direct connection to the internet, we request that you notify the internet registration on our behalf, to delete our domain name sex.com. Further, we have no objections to your use of the domain name sex.com and this letter shall serve as our authorisation to the internet registration to transfer sex.com to your corporation.

Despite the letter's claim that a company called Online Classifieds had no internet connection, Network Solutions made no effort to contact Mr Kremen. It accepted the letter at face value and transferred the domain name to Cohen. When Kremen contacted Network Solutions some time later he was told it was too late to undo the transfer. Cohen went on to turn sex.com into a lucrative online porn empire. Kremen commenced proceedings against Cohen and Network Solutions in what was to become lengthy and complex litigation.58 When the case reached the Ninth Circuit Court of Appeals the central issue was the nature of a domain name for the purposes of the tort of conversion.


53  Williams v Superior Court (1978) 81 Cal App 3d 330, 146 Cal Rptr 311.
54  See Dolbeer above n 51 and Williams above n 53.
55  Parker above n 50.
56  Williams above n 53.
57  Kremen v Cohen 337 F3d 1024 (9th Cir 2003).
58  For a readable account of the case and its protagonists see Kieren McCarthy, Sex.com (London, Quercus, 2007).



Kozinski J, writing for the Court, addressed the issue and the nature of the property right to which Kremen was entitled. The starting point was that conversion was originally a remedy for the wrongful taking of another's lost goods and so applied only to tangible property, although Kozinski J observed that virtually every jurisdiction had discarded that rigid limitation to some degree. The court discussed the concept of merger of intangible rights in a tangible item such as a document. This theory, developed in the American Restatement of Torts, provides:
1. Where there is conversion of a document in which intangible rights are merged, the damages include the value of such rights.
2. One who effectively prevents the exercise of intangible rights of the kind customarily merged in a document is subject to a liability similar to that of conversion, even though the document is itself not converted.59

An intangible is merged in a document when, by the appropriate rule of law, the right to immediate possession of a chattel and the power to acquire such possession is represented by the document, or when an intangible obligation is represented by the document, which is regarded as equivalent to the obligation. The Court at first instance had applied this test and found that there was no evidence that Kremen’s domain name was in fact merged in a document. Kozinski J observed that courts routinely applied the tort to intangibles without inquiring whether they were merged in a document and, while it was often possible to find a document to which the intangible was connected, it was seldom one that represented the owner’s property interest. The court considered that the merger requirement was minimal, requiring only some connection to a document or a tangible object. According to Kozinski J, Kremen’s domain name fell within that class of property. Kremen had argued that the relevant document was in fact the domain name system or DNS. The DNS is a distributed electronic database that associates domain names with the Internet Protocol or IP numbers of particular computers connected to the internet. The Ninth Circuit Court of Appeals agreed that the DNS was a document or, more accurately, a collection of documents stored in electronic form rather than in ink and paper. Kozinski J observed that

it would be a curious jurisprudence that turned on the existence of a paper document rather than an electronic one. Torching a company’s file room would then be conversion while hacking into its mainframe and deleting its data would not. That is not the law, at least not in California.60
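The structure the court was describing can be pictured as a mapping of records: each DNS entry associates a domain name with an IP number and, at the registrar, with registrant details, and altering the stored record is what effects the transfer of the intangible. The toy sketch below is illustrative only — the record fields, function names and example data are invented, and the real DNS is a distributed, hierarchical system rather than a single table.

```python
# Toy model of the DNS/registrar database as a collection of
# electronic records. All field names and data are illustrative.

registry = {
    "example.com": {"ip": "93.184.216.34", "registrant": "Example Corp"},
}

def resolve(name: str) -> str:
    """Return the IP number the record associates with a domain name."""
    return registry[name]["ip"]

def transfer(name: str, new_registrant: str) -> None:
    """Alter the stored record -- on the merger theory, changing this
    electronic 'document' is what transfers control of the intangible."""
    registry[name]["registrant"] = new_registrant

print(resolve("example.com"))                  # -> 93.184.216.34
transfer("example.com", "New Owner")
print(registry["example.com"]["registrant"])   # -> New Owner
```

The point of the sketch is that the domain name has no existence apart from this record: whoever controls the record controls the name, which is why the court could treat the DNS entry as the ‘document’ in which the intangible right was merged.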

Kozinski J also observed that although conversion was a strict liability tort, there was nothing unfair about holding a company responsible for giving away someone else’s property even if it was not at fault. Although Cohen was the guilty party who

59  American Law Institute, Restatement (Second) of Torts (4 vols) § 242 (Philadelphia, American Law Institute, 1965).
60  Kremen v Cohen, above n 57, 1034.


The Property Problem

should have paid for his theft, the issue was whether Network Solutions should be open to liability for its decision to hand over Kremen’s domain name. Negligent or not, Network Solutions did just that; Kremen did not do anything. It was not unfair to hold Network Solutions responsible and force it to try to recoup its losses by seeking its remedies against Cohen.

There can be no doubt that the Ninth Circuit court’s decision in Kremen v Cohen constitutes an expansion of the legal interests classified as property. Some writers consider the best way to characterise the legal status of a domain name is by analogy to a telephone number and that, in fact, a domain name is not actually property. In Canada, for example, when registering a domain name the registrant agrees, as a contractual condition, that it acquires no property right in the domain name. The decision in Kremen v Cohen is a creative approach to the problem of intangible property, which is tenuously given tangibility by the merger theory. Nevertheless it provides an interesting example of efforts undertaken by the courts to provide effective, if somewhat analogical, solutions to wrongdoing in cyberspace.

The decision in this case relies upon association with a document. It is interesting to note that the document in question is a digital one, and from the point of view of civil proceedings there is little difference between a paper document and a digital one. But that all depends upon the proposition that information in the digital space can be considered a ‘document’ in the same way that information contained on paper is a document. We probably use the word ‘document’ because it is a comfortable and convenient means of expression rather than a true or realistic representation of the way in which the information is contained and presented. As has previously been observed, digital information in and of itself has no coherence, whereas writing and its paper medium are inextricably associated.
In the case of Dixon, as has been observed, the definition of ‘document’ in the Crimes Act includes information electronically stored, but information did not appear as one of the examples falling under the definition of ‘property’ in the Crimes Act. In that respect Kremen v Cohen allows for a more flexible approach. However it is unlikely, having regard to the House of Lords decision in OBG v Allan, that the approach adopted in Kremen v Cohen and the application of conversion to an intangible—especially a digital intangible such as a domain name associated with information contained in the DNS—would be available within Commonwealth jurisdictions.

V.  Property or Cyberproperty

The term ‘cyberproperty’ has been used in the academic literature to suggest a ‘right to exclude others from access to network connected resources’.61 This exclusionary aspect of the overall bundle of property rights overlooks that of exclusive possession but, within the digital context, the exclusionary aspect may be the digital equivalent of possession of a tangible item. Early commentary on cyberproperty theory included Judge Frank H Easterbrook suggesting that there should be property rights where presently there were none to make bargains possible62 and Trotter Hardy, who argued for a property entitlement in cyberspace because of low transaction and boundary-monitoring costs.63 Richard Epstein suggested that ‘the rules that govern ordinary space provide a good template to understand what is at stake in cyberspace’.64 Epstein argued that there should be cybertrespass rules because unauthorised entry has been a per se violation under ordinary trespass principles.65 On the basis of equating exclusionary features with a possessory element, proponents of cyberproperty argued that the owner of a network-connected system should have an absolute right to prevent other users from making electronic contact with the chattel.

The Digital Paradigm has thrown up two issues which further challenge the information/data as property debate. One is that of virtual property. The other relates to the matter of ‘digital assets’ and what happens to one’s data located in social media sites or in online services after death.

61  R Polk Wagner, ‘On Software Regulation’ (2005) 78 Southern California Law Review 457, 496. See also Patricia L Bellia, ‘Defending Cyberproperty’ (2004) 79 New York University Law Review 2164, 2169.

A.  Virtual Property

Virtual property or virtual goods are immaterial items that are usually associated with online activities such as gaming or social media virtual worlds. Virtual worlds are persistent, dynamic, computer-based and computer-moderated environments in which interconnected users interact with each other and the virtual worlds around them.66 Most worlds allow for an ‘inworld’ property model, whereby users accumulate virtual products.67 End User Licence Agreements or EULAs usually limit any claims that a user might wish to assert against an operator. It is debatable whether a court would ignore EULA terms and uphold a user’s claim to a virtual product as against an operator.

62  Frank H Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) University of Chicago Legal Forum 207, 212.
63  Trotter Hardy, ‘Property (and Copyright) in Cyberspace’ (1996) University of Chicago Legal Forum 217, 236–58.
64  Richard A Epstein, ‘Intellectual Property: Old Boundaries and New Frontiers’ (2001) 76 Indiana Law Journal 803, 818.
65  Richard A Epstein, ‘Intel v Hamidi: The Role of Self-Help in Cyberspace’ (2005) 1 Journal of Law, Economics & Policy 147, 163.
66  Virtual worlds are often referred to as Massively Multiplayer Online Role-Playing Games (‘MMORPGs’). For background on virtual worlds, see Wikipedia, Virtual World, https://en.wikipedia.org/wiki/Virtual_world.
67  Jack M Balkin, ‘Virtual Liberty: Freedom to Design and Freedom to Play in Virtual Worlds’ (2004) 90 Virginia Law Review 2043, 2070.



There has been academic writing on the subject of whether there are property rights in virtual goods.68 Fairfield has described a virtual property right as a property right in a virtual product,69 despite the fact that the item has no physical existence beyond the environment within which it has any relevance. A distinction may be made between virtual property and intellectual property rights in that virtual property rights apply to rivalrous goods whereas intellectual property rights apply to non-rivalrous goods. Horowitz gives the example of a domain name, arguing that a virtual property right can protect a domain name given its underlying exclusivity whereas anyone can own a copy of a popular CD without making others worse off.70 Horowitz argues that:

The content of a virtual property right is also different from that of an intellectual property right. Like real property rights, virtual property rights typically provide for the right to use, exclude others from, and alienate or transfer objects. Intellectual property rights, by contrast, prohibit copying or producing similar ideas, expressions, or products.71

Despite the fact that items of virtual property associated with online activity have no tangible or physical existence, trade in virtual products is extensive. As long ago as 2005 a spokesman for Sony Online Entertainment estimated the sale of virtual world products as a $200 million market.72 The terms and conditions of EULAs delineate any property claims that a user may have against an operator. Indeed, many virtual world EULAs prohibit the trade in virtual products but these conditions are usually ignored by users. The EULA of Blizzard’s ‘World of Warcraft’ makes the matter clear:

You may not purchase, sell, gift or trade any Account, or offer to purchase, sell, gift or trade any Account, and any such attempt shall be null and void. Blizzard owns, has licensed, or otherwise has rights to all of the content that appears in the Program. You agree that you have no right or title in or to any such content, including the virtual goods or currency appearing or originating in the Game, or any other attributes associated with the Account or stored on the Service. Blizzard does not recognize any virtual property

68  For example see W Erlank, ‘Acquisition of Ownership Inside Virtual Worlds’ (2013) 46 De Jure 770; W Erlank, ‘Property and Sovereignty in Virtual Worlds’ in JC Smith (ed), Property and Sovereignty: Legal and Cultural Perspectives (Farnham, Ashgate Publishing, 2013) 99; F Gregory Lastowka and Dan Hunter, ‘The Laws of Virtual Worlds’ 92 California Law Review 1; F Gregory Lastowka, ‘Decoding Cyberproperty’ (2007) 40 Indiana Law Review 23; Wayne Rumbles, ‘Theft in the Digital: Can You Steal Virtual Property’ (2011) 17 Canterbury Law Review 354; Joshua Fairfield, ‘Virtual Property’ 85 Boston University Law Review 1047.
69  ibid, Fairfield, ‘Virtual Property’ 1052–64 (describing the characteristics of virtual property rights, including rivalrousness, persistence, and interconnectivity).
70  Steven J Horowitz, ‘Competing Lockean Claims to Virtual Property’ (2007) 20 Harvard Journal of Law and Technology 443.
71  ibid 444.
72  Tom Leupold, ‘Spot On: Virtual Economies Break Out of Cyberspace’ GAMESPOT (6 May 2005). Castronova found that one virtual world had so much trade that he was able to calculate, among other things, that world’s GNP and currency exchange rate against the US dollar. See Edward Castronova, ‘Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier’ 31–33 (CESifo Working Papers, Paper No 618, 2001).



transfers executed outside of the Game or the purported sale, gift or trade in the ‘real world’ of anything related to the Game. Accordingly, you may not sell items for ‘real’ money or otherwise exchange items for value outside of the Game.73

Thus users do not have any rights to virtual goods or the accounts for which they pay. There is no right to buy, sell, gift or trade in such goods, although this provision is regularly breached. For example, if one wishes to ‘purchase’ an item at a marketplace in World of Warcraft, one must have gold, which is the in-game currency.74 Gold can be acquired by various means including mining. It is well-known that players will go into the game and do nothing but mine gold. Some mining operations are so well organised that players go online in shifts so that the mining operation is ceaseless. The gold so acquired is then sold in off-line transactions for bitcoin or hard currency and an ‘in game’ transfer of the gold takes place. The player purchasing the gold is then able to embark upon in-game transactions using in-game currency.75

Notwithstanding EULA provisions, Horowitz argues that where users are trading in millions of dollars worth of virtual property then there must be property rights.76 He states:

As a practical matter, users seem to have exclusive possession of the virtual products, and they have the ability to transfer those products to other users. It would be ignorant or naive, according to this argument, to deny the existence of property rights under such circumstances. But the pragmatic argument fails as well. First, trade among users may suggest the existence of rights among users, but it does little to indicate the structure of rights between users and operators. Second, pragmatic concerns can shape the opposing argument—the very conditions that give rise to putative property rights are controlled by the virtual world operators, and despite appearances, operators possess the virtual products insofar as they possess the entire world.77

Second Life presents a different scenario in the way in which it deals with ‘in-game’ property. Linden Labs, the developers of Second Life, purport to protect the virtual and intellectual property rights of users.78 Indeed the developers consider Second Life as a form of ‘developing nation’, noting that if people cannot own property, the wheels of capitalism cannot turn.79 Linden Lab even sells land directly to users, who can have property for a lump sum payment and a lesser monthly sum thereafter. However Linden Lab’s terms of service state that it retains

the perpetual and irrevocable right to delete any or all of your Content from Linden Lab’s servers and from the Service, whether intentionally or unintentionally, and for any reason or no reason, without any liability of any kind to you or any other party.

73  ‘World of Warcraft: Terms of Use Agreement’ § 8 (as at 11 Jan 2007). The current terms of use may be found at
74  The in-game currency in Second Life is the Linden Dollar, which can be acquired with real money.
75  Many Korean virtual worlds (such as Flyff) and other worlds outside that country (such as Archlord and Achaea, Dreams of Divine Lands) operate entirely by selling items to players for real money. Such items generally cannot be transferred and are often used only as a means to represent a Premium subscription via a method which is easily integrated into the game engine. See http://

In addition the terms of service make it clear that Linden has no obligation to protect the value of user property, and reserves the right to do anything it likes with it. Had it not been for the fact that the case of Bragg v Linden Research Inc80 settled, there may well have been judicial consideration of the nature of virtual property rights within the Second Life virtual world. Bragg accumulated Second Life property worth thousands of dollars, some of which he purchased through a loophole in an auction system, and some of which he accumulated through legitimate means. When Linden Research became aware of this it seized all of his in-game assets including land, items and approximately $2,000 in real world money held in his account. One of the preliminary rulings by the Court prior to settlement was to deny Linden’s application to compel arbitration as stated in its terms and conditions. It held that the terms were a contract of adhesion that was unjustly biased towards Linden Labs.

A Dutch Court, however, dealt with a case of theft of virtual property.81 The Court had to find that there was a protectable property right before the offenders could be convicted of theft. In that case the defendants had used physical violence to force the victim to hand over virtual goods in the MMO game RuneScape. The virtual goods, a mask and an amulet, were transferred from the victim’s account to the other defendant’s account in RuneScape. The court confirmed in its verdict that the said virtual goods qualify as goods under Dutch law. This was a prerequisite for the actions of the defendants, the forced transfer of the virtual goods by using physical violence on the victim, to qualify as robbery (diefstal met geweld) under Article 312 of the Dutch Criminal Code.
It would have been easy for the Court to have convicted the boys of an assault but it chose to deal with the facts advanced by the prosecution—that there had been a theft of virtual property, thus sustaining the charge of robbery. The court referred to the fact that virtual worlds have become a huge phenomenon and that players attach a lot of value to their ‘virtuele goederen’ (virtual goods). It found that the virtual property had value both for the complainant and the accused. It then went on to state that the items did not need to be physical items (stoffelijke voorwerpen) and

79  Posting of Aleks Krotoski to Guardian Unlimited Gamesblog, ‘Second Life and the Virtual Property Boom’
80  Bragg v Linden Research Inc 487 F Supp 2d 593 (ED Penn 2007).
81  Runescape LJN: BG0939, Rechtbank Leeuwarden, 17/676123-07 VEV.



that it could be equated to electricity and money held in an account (giraal geld). Another important aspect is that there has to be a transfer of factual (not physical) control from the victim to the accused. That case, of course, must be considered within the context of Dutch law and is not of universal application, but it does establish that a Court was prepared to recognise that there was such a thing as virtual property and that it was important enough to claim the protection of the law.

How, then, should one justify a property interest in something that is intangible and which has no existence independent of the operator-constructed virtual environment within which it is used? Horowitz argues for a Lockean approach to the issue of virtual property and develops the argument in this way:

Users in virtual worlds acquire possessions that would otherwise lie in their natural state, for example behind a dragon or in the store of a virtual shopkeeper’s goods. In the process of acquiring these goods, a user must labor to distinguish them from goods that remain in their natural state. My dragon saber is different from all other dragon sabers insofar as the rest remain in the possession of a vicious and wild virtual beast. There are various objections to this account, however, especially when user and operator claims conflict.82

There are issues that arise from this approach. Is labour possible for players in a game? By the same token, real world professional athletes receive significant sums for playing. A further problem is that users within a game acquire goods that may have been produced through the labour of others. Horowitz exemplifies the problem in this way:

[O]ne might earn an item by defeating a virtual foe that carries it, where the operators have created both the foe and its item. Often, even new products are simply combinations of existing in-world products created by the operators. In proprietary worlds, most resources from which users claim to acquire property rights are owned by the operators. Or, at the very least, operators have a strong Lockean claim to those resources such that users must provide sound arguments to overcome the initial claims. If there is no common for virtual products—for users or for operators—then the defect that fails to confer to the operators a right to the world itself would likely vitiate users’ claims as well. If the Lockean appropriation can justify virtual property rights at all, users should have no greater claim to the resources of virtual worlds than operators do.83

The reality of the matter is that, depending upon the wording of the EULA, users will probably not have a strong claim to virtual property that can be asserted against game operators. If a Lockean labour-based argument is to be advanced it would have to overcome the competing claims of operators because, for the vast majority of products in the virtual world, operators would have a stronger Lockean claim to virtual property than users. It is unlikely that in England or the Commonwealth any rights to virtual items in an online environment could be sustained as between participants. As between users and the operator, as observed, the matter would be determined by reference to terms and conditions. Nevertheless, within the context of the game or online environment many of these ‘virtual property’ items may have value which could be relevant if considered within the context of a property dispute following a marriage breakdown, or a realisation of assets in the context of a deceased estate. It is to the latter proposition that I shall now turn.

82  Horowitz above n 70, 454.
83  ibid 455.

B.  ‘Digital Assets’

Planning for control of personal information after death used to be as simple as telling executors about the desk drawer or the fireproof box or the safe deposit box at the local bank. In the era of smartphones and cloud computing services, that same information may be stored in digital formats on servers scattered across the globe. Documents may be kept online, or email may be used as a repository for paperless receipts, insurance information or financial transactions. Photos, videos and musings may be left behind at social media sites like Facebook, Twitter, LinkedIn, blogs, Instagram and Flickr.

In addition, access to this information may be difficult. Providers that store digital content are restricted in how they can disclose it to someone other than the account holder, and may be bound by local privacy laws or by terms and conditions of service which may go so far as to determine ‘property’ in the data stored on the providers’ servers, or which may prevent companies from disclosing that material to anyone without a court order. The rules for deceased estates are still very much grounded in aspects of property, the traditional distinction between choses in possession and choses in action, together with concepts of ownership. As Rick Shera points out, ‘in the online world, digital assets are often not owned—they are created or purchased, and their existence and legal attributes are governed by, the licenses granted by whatever platform they are purchased from, uploaded to or shared on’.84 In addition there is the vexed question of metadata, generated with every Internet interaction, as revealing and as valuable as the content with which it travels, and which usually ‘belongs’ to the provider platforms to monetise as they see fit as provided in their terms and conditions of service.
The way in which assets may be disposed of pursuant to a will may not apply to digital ‘property’ because the existence of that information is bounded by the licence terms accompanying the use of the platform that has allowed for the acquisition of the program or the use of the data stored. Shera points to two potential problems:
1. The account itself, governed by the terms of use or end user licence, may not be able to be transferred on the death of the account holder or, if it can be, those terms may limit this in some way;

84  Rick Shera, ‘The Internet: It’ll Be the Death of Us All’ (NZLS IT and Online Law Conference Papers, New Zealand Law Society, Wellington 2015).



2. The objects that we have bought, created or uploaded may not in fact belong to us or may have been licensed in perpetuity to the platform so they cannot be the subject of an exclusive testamentary bequest or they may be subject to some other limit on their use or transfer, all dictated by the terms of use or EULA.85

To make the matter more difficult there is little consistency between platforms in the way in which they deal with data in a user’s account upon death. Facebook allows memorialisation, Twitter is not entirely clear, some platforms explicitly allow transmission, others do not, whilst yet others may provide for automatic account termination, which could be distressing if family photos and digital memorabilia were lost forever. Google, for example, offers a tool to help its users deal with the problem. Called Inactive Account Manager, it allows a user to designate up to 10 people to receive content from sources like mail, documents or blogs. A user may also choose to have content deleted after they have died. When the account becomes inactive, those designated by the user are notified and receive the content the user chose to share. They do not receive a means of logging in to the account. Shera suggests two changes are needed:
1. Uniform rules governing the rights of executors to deal with the accounts and digital assets of the deceased.
2. Adding these types of digital assets and accounts as another class of assets that can be the subject of a testamentary bequest, as of right.86

Given the gestation of most online platforms in the US, a uniform law has been promoted for adoption by US States to deal with the first issue.87

VI.  Conclusion

The increased reliance upon digital systems and the consequent use of digital files as a repository of information means that there is an anomaly if some form of legal protection is unavailable for these items. Although digital files fall beyond the classic definition of property in English and Commonwealth jurisdictions, American jurisprudence recognises that the value of such files means that there must be some protection afforded by law. The New Zealand Law Commission recognised this in 1999 when it stated:

It is necessary to protect commercial information which may be of immense value. For many businesses operating in this environment, the information which is stored on their computer system will be its most valuable commodity. It is important to recognise and

85  ibid.
86  ibid, 4.





protect the intellectual capital of information stored on a computer. The importance of information as a business asset in the knowledge economy may justify redefinition of information as a property right for both civil and criminal law purposes. In essence it is both the information and the systems which we are proposing to protect in our recommendations in this report.88

In a later report the Law Commission made this observation: Until particular cases involving the misuse of information come before our courts pleading reliance upon a common law or equitable cause of action, it is difficult to be emphatic that existing causes of action will provide a remedy. Our provisional view is that the protections offered by the action for breach of confidence (which is generally regarded as being of equitable origin), the tort of unlawful interference with economic relations and the claim of unjust enrichment (which is considered by some to be quasi contract in nature and others as a restitutionary claim), as well as the wide ranging nature of section 9 of the Fair Trading Act 1986 (designed to provide a remedy for misleading or deceptive conduct in trade, or for conduct likely to mislead or deceive), should be sufficient to deal with most cases.89

The proposal that information be regarded as property was considered inappropriate. There would be difficulty in determining what is property and what is not, especially in terms of who was the creator, and difficulty in asserting a property right where there was a collaborative creation. The Law Commission was concerned that by defining information as property for the purposes of the civil law, the remedies provided by the criminal law would be outstripped, and property per se is not protected by the criminal law: it is the way people deal with property that is the subject of sanction. Furthermore, creating a new cause of action for ‘information as property’ would cut across existing causes of action in, say, intellectual property, passing off and breach of confidence, thus muddling existing law. Finally, a likely consequence of introducing a cause of action around ‘information as property’ may be that parties may no longer have adequate incentives ‘to make their own provision for the importance of information to them’.90

I suggest that the time has come for there to be a reassessment of the suggestion that a digital file is not capable of civil or criminal law protections. Underpinning this suggestion, and essential, is whether there should be a reconsideration of the concept of property in ‘information’. The New Zealand Parliament in 2003, when it enacted the new definition of property in the Crimes Act 1961, resisted the suggestion of the Law Commission that it have a special definition of information as property for the purposes of computer crime, because it would create confusion about the nature of property. We have come a long way down the path of the Digital Paradigm since then. The Court of Appeal decisions in Dixon and Watchorn,

88  New Zealand Law Commission, Computer Misuse (Wellington, New Zealand Law Commission, 1999) para 36 (emphasis added).
89  New Zealand Law Commission, Electronic Commerce Part Two: A Basic Legal Framework (Wellington, New Zealand Law Commission, 1999) para 209.
90  ibid para 229.



the unavailability of the remedy of a possessory lien in Your Response all point to the fact that something of value to the owner is unprotected from the depredations or failings of others. Perhaps the answer lies in considering adopting the American approach, together with clarifying the fact that digital data, to exist, must be associated with a medium, be it a hard drive, a USB drive or storage in the Cloud. It is this aspect of a digital file that gives it its tangibility.

The issue of virtual property remains an open question and much depends upon the nature of the terms and conditions that exist between the provider and the customer. It may be that legislation will address this problem in the future. What must be understood is that, in a paradigm of continuing disruptive change, changes to perceptions of what may fall within the category of intangibles that have value need to be recognised. In addition there may need to be a recognition that existing remedies under ‘traditional’ fields of law such as intellectual property and breach of confidence may be too limited to accord sufficient protection. The concept of no property in pure information could remain. Information that is not associated with a medium could remain intangible. But the digital file associated with a medium would have a level of tangibility sufficient to attract the protection of the civil and criminal law.

6

Recorded Law—The Twilight of Precedent in the Digital Age

When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We see the world through a rear-view mirror. We march backwards into the future.1

I. Introduction

I commenced chapter two with the quotation from Marshall McLuhan that appears above. I have repeated it for two reasons. The first is that it sums up one of the features of our precedent-based common law system. The second is that it emphasises a common theme about law and new technologies: we often fail to look ahead, assess a new communications technology and work out possible futures and how we might adapt. This chapter will suggest that new technologies may have significantly impacted upon the law’s reliance on the ‘rear-view mirror’. In chapter two I considered the underlying qualities or properties of new information media. Against that analytical framework, which is by no means absolute but is nuanced and at times contradictory, in this chapter I consider the impact of the Digital Paradigm upon the information matrix that is the law. I argue that the authoritative basis of the law lies in the way that it is communicated, and that many of our assumptions about the certainty of law and its foundations, particularly of the doctrine of precedent, have been built upon print technology. I suggest that the Digital Paradigm and an understanding of the new media for communicating legal information present some fundamental challenges to our assumptions about law and may well revolutionise established legal institutions and doctrines. In the course of this discussion I challenge the often advanced and convenient escape route that suggests that what the Digital Paradigm offers is merely content in a different delivery system which may be ‘functionally equivalent’ to that which has gone before. I argue that this escape route is now closed off in light of the paradigmatically different manner by which content is delivered in the Digital Space.

1 Marshall McLuhan, Understanding Media: The Extensions of Man (London, Sphere Books, 1967).



II. Law and Precedent in the Print and Digital Paradigms

The assumptions that underlie the doctrine of precedent provide an example of how Digital Paradigm qualities present a new challenge to the law. For hundreds of years law was declared or ‘discovered’ by common law judges on a case by case basis. Judges might follow the decisions of other judges in similar cases but, because of the paucity of adequate or authoritative written records2 and the distance between courts, coupled with inadequate transportation and communication systems, there was considerable variance between and even within jurisdictions. Law was highly localised and individualised. The goal of the Monarch’s law may have been, as Maitland put it in the context of feudal contract, to swallow all other law, but it was no easy task to accomplish at the time.3 The advent of the Print Paradigm and the qualities of print affected the structure, the capabilities and the functioning of law in various ways. It is not, as may be thought, the ‘fine print’ that characterises the law, but print itself. Print effected and affected the organisation, growth and distribution of legal information. The processes of law, the values of law and many of the doctrines of law required a means of communication superior to handwriting and handwritten manuscripts to store and communicate information. Ethan Katsh puts forward the proposition that changes in the means used to communicate information are important to law because law has come to rely on the transmission of information in a particular reliable and authoritative form. Katsh propounds that law does not simply produce information but structures, organises and regulates it.4 It does this primarily through the medium of print.

2 Early modern lawyers made extensive use of privately compiled notebooks in which they recorded cases but on occasion these could be inaccurate. An example appears in the comment by Fitzherbert J, ‘put that case out of your books for it is not the law’: see Year Books 27 Hen 8 23 (London, Tottell, 1556) folio 11 STC 9963. For a discussion of the use of notebooks by lawyers see David J Harvey, The Law Emprynted and Englysshed: The Printing Press as an Agent of Change in Law and Legal Culture 1475–1642 (Oxford, Hart Publishing, 2015) ch 3.4 and ch 4.5. The approach to statutory interpretation was a little more flexible and depended on factors that were often extraneous to the text. For example in the 14th century judges were often members of the King’s council and they would have been present when a law was adopted. The written record of legislation might have mattered less than a judge’s own recollection of what had been decided. The text would be a reminder of what had taken place. This is reflected by the statement made by a judge to a lawyer in 1305, ‘do not gloss the statute, for we understand it better than you; we made it’, when the lawyer was arguing why a statute had been enacted. See Peter Tiersma, Parchment, Paper, Pixels: Law and the Technologies of Communication (Chicago, University of Chicago Press, 2010) 146.
3 Jim Dator, ‘Judicial Governance of the Long Blur’ Futures, Vol 35, No 1, January 2001; Frederick Pollock and Frederick William Maitland, The History of English Law Vol 1 2nd edn (Cambridge, Cambridge University Press, 1968) 460.
4 Ethan Katsh, The Electronic Media and the Transformation of Law (New York, Oxford University Press, 1989).



Law before Gutenberg was different from law today in significant ways. The printing press made it possible for the past to control the future as never before. Prior to the printing press scribes merely took notes, under the judge’s direction, of what was said and done. Without a verbatim transcript of judicial proceedings, later judges could not be certain what was said and done previously. Thus, with most law residing in the minds of judges and not in black and white on paper, judges could innovate and invent while pretending to follow strict precedent. This ended with the printing press, printed judicial decisions, printed positive law and especially printed constitutions.5 Printing and the qualities identified by Eisenstein6 enabled many copies of one text to be distributed throughout a community or a country. It meant that the mistakes, errors and glosses that had previously been a characteristic of the scribal culture were no longer perpetuated. It meant that the words that were printed and read by a person in London were the same as those read from the same edition by a person in New Orleans. The printed word could not be changed. Once it was on paper it was immutable. Printing replaced the brittle oral and script forms of communication with a stable, secure and lasting medium. Memory, so vital for the oral tradition, could now be committed to print. Instead of looking for the earliest or original manuscript that had not received the attention of glossators, one seeking information would look in the latest print edition.7 In the medieval period, oral contracts were often preferred over written ones. The nature of writing and the idea of placing reliance upon or consulting a written document were not common.
Memory was considered to be more trustworthy than something written, and practical questions were answered by oral testimony and not by reference to a document.8 If there was a dispute over land ownership and a written charter needed interpretation or was contradicted by what was remembered, memory took precedence over written proof, and the principle that an oral witness deserved more credence than written evidence was a legal commonplace.9 The development of movable type resulted in a product more fixed and stable than the work of a scribe. Forgery and careless copying became less common. Granted, printed works could contain errors, but a large number of standardised copies provided works that were not easily changed. That in itself gave a sense of authority and authenticity that had been lacking earlier. A reader could assume that the printed words were the words of the author.


5 ibid 12.
6 Elizabeth Eisenstein, The Printing Press as an Agent of Change (Cambridge, Cambridge University Press, 1980). For an interesting discussion of the law in print and the impact of the changing appearance of the law on the printed page, and the suggestion that the appearance of the law in print matters, see Kasia Solon Cristobal, ‘From Law in Blackletter to “Blackletter Law”’ (2016) 108 Law Library Journal 181.
7 For a detailed study see Harvey above n 2.
8 Michael T Clanchy, From Memory to Written Record: England 1066 to 1307 (Oxford, Blackwell, 1993).
9 For a more detailed discussion see ch 3.



The advent of print provided a keystone for the legal process. The development of the common law by a system of precedent is expedited when lawyers and judges have a common reference point and can rely on the fact that there are exact copies of a case or a statute in different places. Thus, lawyers and judges are assured that the language that they are using is identical to the language consulted by others. Printing enabled the standardisation of legal information and the words on paper began to acquire an authority that had been lacking in the scribal period.10 Thus law, previously associated with custom, the remembered words of judges and what was contained in the Year Books, gave way to law based upon books. Printing was introduced into England in 1476 and five years later the first law books were printed. In 1485 the printing of Parliamentary Session Laws began. Prior to the printing of reports a strong judge could still say that the cases alleged had never been decided, or that the decision was contrary to what was submitted. Fitzherbert J, in the reign of Henry VIII, had a good collection of manuscripts which he used frequently in court, preferring his own notes to those of others.11 His well-recorded statement ‘put that case out of your books for it is not the law’ demonstrates the fluid way in which manuscript reports were used and the problems of variable recording. Ellis-Lewis suggests that inaccurate compilations of manuscript notes or reports could find their way into the hands of printers.
The development of printing was a driver for change in reporting style, as authors took control of the printing of their reports, as demonstrated by Plowden and Coke.12 This, as I have demonstrated elsewhere, was not the sole driver for putting reports into print.13 Certainly the development of written pleadings, which directed the attention of the court to a specific issue, coupled with the development of case citation, led to a change of focus in the way that cases were reported, from the style of the Year Books to that of Dyer, Plowden and Coke. Coke observed that during the Year Book period counsel rarely cited specific cases but appealed in general terms to principles that were well established. However, by Coke’s day counsel were citing particular cases—what

10 Nevertheless there was a period of co-existence. During this period manuscript was referred to and treated as authoritative, notwithstanding the properties of print. For example there are a number of cases where manuscript versions of statutes were preferred to the printed copy. See Harvey above n 2 esp 201–02. See also JH Baker, The Lost Notebooks of Sir James Dyer (London, Selden Society, 1993) lxi fn 25 citing the cases of Stowell v Lord Zouche (1569) where there was an error in the printed statute of Edward I; Vernon v Stanley & Manner (1571) where the printed statute was corrected by sense and by ‘librum scriptum domini Catlyn’; Ligeart v Wisheham (1573) where the printed statute was at odds with ‘lestatute script’; and Taverner v Lord Cromwell (1572) where French and English versions of the statutes were compared along with Rastell’s edition and the manuscript. There were other interpretative principles that were developing in the field of defamation, evidencing the general interest in interpretation, and, because many of these cases did not see print, this supports the view that print was not such a significant contributor to issues of interpretation—see IS Williams, ‘He Creditted More the Printed Booke’ (2010) Law and History Review 39, 47.
11 T Ellis-Lewis, ‘The History of Judicial Precedent—Part III’ (1931) 47 LQR 411, 413–14.
12 T Ellis-Lewis, ‘The History of Judicial Precedent—Part IV’ (1932) 48 LQR 230, 231 citing Plowden’s Commentaries Preface.
13 Harvey above n 2 esp at ch 4.



Coke referred to as ‘a farrago of authorities’.14 But it was Coke himself who utilised the preservative and disseminatory qualities of print in ensuring that his Reports and later his Institutes were put in print.15 Ellis-Lewis, whose focus was upon the development of a theory of precedent, traced the various printed reporters from Coke to Burrow.16 He observes that Style’s Reports17 were written expressly for publication. It is clear from the preface that the reports were published for the purpose of being cited as precedents. Yet the practice of posthumous publication, based on manuscript notebooks, continued into the eighteenth century. This created some difficulties because many of those notebooks were inaccurate.18 It was not until 1765 that the reports of James Burrow changed the face of law reporting. Burrow was the first of a regular series of authorised reporters attached to a particular Court to report decisions for publication. It is interesting to note that from the late fifteenth century the technology was available to ensure a printed report of cases. But there were still problems. The utility of a precedent depends upon a reliable and accurate reporting system. The development of printed reports alone did not guarantee that.
As late as 1849 the report of the Special Committee on the Law Reporting System remarked on the unsatisfactory state of law reporting, on the importance of accurate and reliable reports, on the fact that the publication of law reports arises from a desire by the publishers to sell books, and on the importance of reports vis-à-vis an established doctrine of precedent.19 The steps that were taken in England leading up to the establishment of the official Law Reports demonstrate the concerns that were outstanding at the time at the lack of an official reporting system and the problems caused by inaccuracies and gaps in reported cases.20 Lindley observed that one of the requirements of the profession for law reports was that: [R]eports should be accurate, full in the sense of conveying everything material and useful, and as concise as is consistent with these requirements. The points contended for by counsel should be noticed, and the grounds on which the judgment is based should receive especial attention. The whole value of a report depends on this part of it, and on the distinctness with which it is brought out. In this respect much of course depends on the Judge and the care he takes to make plain the grounds of his decision. But much also depends upon the reporter. Even when a judgment is written, much of it may relate to matters requiring decision but not worth reporting; and it should be shortened accordingly.21

By the time of the development of a proper and authorised law reporting system in the nineteenth century in England, printing as a means of recording law had long

14 10 Coke Reports (1614) Preface.
15 It must be acknowledged that Coke himself saw the First Institutes, Coke on Littleton, into print. The other volumes of the Institutes were printed after his death.
16 Above n 12, 240 et seq.
17 Covering the period 1645–1656.
18 Ellis-Lewis above n 12, 244.
19 WTS Daniel, The History of the Origin of the Law Reports (London, Wildy & Sons, 1883) 4 et seq.
20 ibid 11.
21 N Lindley, ‘The History of the Law Reports’ (1885) 1 LQR 137, 144.



been a given. The time for the co-existence of print and manuscript as a means of recording and, in the case of manuscript, coterie sharing of legal information, had passed.22 The focus of the activities described by Daniel and Lindley was upon achieving an accurate system of reporting. Print, as a means of recording, preserving, disseminating and referencing, was the medium for the communication of accurate reports. No consideration was given to factors such as the physical qualities of print and how these limited as well as enabled the publication of cases. What is clear, especially from Lindley’s statement about the accuracy of the reports, is that the minutiae of a case could often be omitted where there was an articulated principle. Thus although there was a means of recording cases, the information that was to be made available would be limited. This was not influenced only by the need for reports to state principles. There was a limit to the volumes that lawyers could afford to buy. The cost of law reports was a matter referred to by both Daniel and Lindley, and the economical availability of reports was a matter which had to be considered.23 Thus, although print was an enabler of the circulation of legal information and the provider of an accurate record, it had built-in physical and economic limitations which had an impact upon the volume of legal information that was available. Thus the development of law reporting as we know it could not have taken place without print,24 and the history of the doctrine of judicial precedent is intimately bound up with the history of law reporting.
By the eighteenth century the printed word was sufficiently accepted that ‘Each single decision standing by itself had already become an authority which no succeeding Judge was at liberty to disregard’.25 In 1765 Lord Camden claimed that if the law was not found in the books it was not law.26 By the end of the eighteenth century the importance of law reports was such that Edmund Burke claimed ‘to put an end to the reports is to put an end to the law of England’,27 but, as has been observed above, it was not until the 1860s that a reliable system of accurate reporting was developed. Thus the development of print and the development of precedent, a foundation stone of our common law legal structure, are linked. Precedent provides fairness, consistency and predictability, in that like cases should be treated alike, and an aid to judicial decision-making by preventing unnecessary reconsideration of established principles.28

22 For discussion of the co-existence of print and manuscript in the law see Harvey above n 2 esp ch 4.
23 Lindley above n 21, 138.
24 T Ellis-Lewis, ‘History of Judicial Precedent—Part I’ (1930) 46 LQR 207.
25 William Markby, ‘Elements of Law’ in R Pound and TFT Plunkett (eds), Readings on the History and System of the Common Law (Rochester NY, Lawyers Co-Operative, 1927) 125.
26 Entick v Carrington [1765] EWHC KB J98; 19 Howell’s State Trials 1029 (1765).
27 Quoted in William Holdsworth, Some Lessons from our Legal History (New York, Macmillan, 1928) 18.
28 F Schauer, ‘Precedent’ (1987) 39 Stanford Law Review 571, 595–60.



The development of precedent provides certainty and security in the law so that citizens may rely upon it to order their affairs. Yet, by the same token, the legal process is not renowned as innovative and has rarely been at the forefront of change. Rather, it slows change down by way of precedent. Precedent has been adopted by the legal process to align legal change with the pace of change in society. If the law is to become more tolerant of change the role of precedent will continue to evolve. It will not disappear as a concept but it will not be the concept to which we have become accustomed. The development of precedent has been somewhat serendipitous. Holdsworth observed that ‘One of the main conditions for the success of the system of case law is a limit on the number of case reports’,29 and Grant Gilmore has observed that:

When the number of printed cases becomes like the number of grains of sand on the beach, a precedent-based case law system does not work and cannot be made to work … the theory of precedent depends, for its ideal operation, on the existence of a comfortable number of precedents, but not too many.30

Bruno Leoni argued that codification and the over-publication of opinions were an existential threat to the common law because they sapped its vitality and destroyed its predictability.31 Too many opinions by appellate courts on hard cases challenge trial judges as they try to reconcile them. Given that Leoni focused upon the virtue of the certainty that the common law provided over rapidly changing and often arbitrary legislation, it is implicit that he favoured fewer published opinions. The authority of case law has been enhanced by a slow development in which reported decisions are not rapidly modified. Leading cases not only settle a particular point of law but also add to the general authority of decisions because they settle a point with some finality. And importantly, the nature of printing technology imposed a limitation on the number of cases that could be reported and printed and the speed with which they could be published. So it is that the very nature of the print paradigm has placed certain boundaries upon the development of law and has allowed for the development of the doctrine of precedent to the point where we are today.

A. Print and Precedent as a Brake on Change

Thus the law had an ally in working towards its goal of maintaining a measured pace of change. The silent partner which has assisted in fostering a public image of law as an institution that is both predictable and flexible is the communications medium that has dominated the legal process for the past 500 years, the medium of print. As the new digital media of the twentieth and twenty-first centuries have


29 Holdsworth above n 27, 19.
30 Grant Gilmore, ‘Legal Realism: Its Cause and Cure’ (1961) 70 Yale Law Journal 1037.
31 Bruno Leoni, Freedom and the Law expanded 3rd edn (Indianapolis, Liberty Fund, Inc, 1991).



taken on some of the duties performed by print, one of the consequences will be to upset the balance the law has worked so diligently to achieve over several centuries.32 Diana Botluk describes the challenges posed by Internet publication in the following way: ‘Publication on the Web can often bypass … traditional methods of filtering information for quality, thus making the end user of the information more responsible for the evaluation process.’33 The traditional methods to which she refers include determining that a) ‘an authoritative source’ has written or published the information; b) that the information has been ‘authenticated by editorial review’; and c) that it has been ‘evaluated by experts, reviewers, subject specialists or librarians’.34 The importance of publication—up until recently in print—as an authoritative concept is put into sharp focus by the following comment by Professor Robert Berring: The doctrines of the law are built from findable pieces of hard data that traditionally have been expressed in the form of published judicial decisions. The point of the search is to locate the nugget of authority that is out there and use it in constructing one’s argument. Because legal researchers are so accustomed to this idea, it is difficult to realize how unique this concept is in the world of information. In most fields in the humanities or social sciences, a search of the literature will reveal certain orthodoxies or prevailing views, certain points in contention with each side having its own warrior-like adherents, but there are no points of primary authority. There are no nuggets of truth or treasure … Legal researchers believe that there are answers out there that are not just powerfully persuasive, but are the law itself.35

Precedent has been a brake on change. To continue the motoring metaphor, within the law it has also encouraged the rear-view mirror36 as a mode of thinking about the present and future. Paul Levinson suggests, in the context of media studies, that we frequently use backward-looking metaphors for the new digital environment. RealAudio becomes equated with ‘radio’—a metaphor enhanced by


32 Katsh above n 4, 62.
33 Diana Botluk, Evaluating the Quality of Web Resources, published 3 April 2000.
34 ibid.
35 Robert C Berring, ‘Collapse of the Structure of the Legal Research Universe: The Imperative of Digital Information’ (1994) 69 Washington Law Review 9, 11 and 14.
36 Paul Levinson, Digital McLuhan—A Guide to the Information Millennium (London, Routledge, 1999). According to McLuhan’s laws of media, an environment obsolesces or reverses at the moment of fever pitch. The information environment, during its moment of superabundance, becomes obsolescent. It has passed into cliché, if we can see it at all. ‘When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavour of the most recent past. We see the world through a rear-view mirror. We march backwards into the future’: Marshall McLuhan and Quentin Fiore, The Medium is the Massage: An Inventory of Effects (Berkeley, Gingko, 2001).



streaming content. Research takes place in a ‘digital library’. An online chat room is treated as a ‘café’. Information provided in a web browser is a ‘web page’, and information that does not appear on the screen extends the print metaphor to one from the newspaper world—the information is ‘below the fold’. These analogies, according to Levinson, call attention to a possible benefit of walking into the future with our eyes upon the past: we use linguistic terms with which we are familiar and comfortable. But Levinson also points out that the mirror may blind us to ways in which the new medium is not analogous to the media of the past. If we use the Internet as a library then, unlike in a real-world library, when the Internet connection crashes it is impossible to continue reading the text; if for some reason the lights go out in the library, an alternative light source can be found. Levinson demonstrates the problem in this way: If we stare too long into the rear-view mirror, focussing only on how the new medium relates to the media of the immediate past, we may crash head-on into an unseen, unexpected consequence. On the other hand, if we look only straight and stiffly ahead, with no image or idea of where we are coming from, where we have just been, we cannot possibly have a clear comprehension of where we are going. … A quick glance in the rear-view mirror might suggest that electronic ink is an ideal solution: it allows the convenience of paper, with the word processing and telecommunication possibilities of text on computers with screens. But, on more careful examination, we find that we may not have been looking at the most relevant part of an immediately past environment.
One of the great advantages of words fixed on traditional paper is indeed that they are stationary with an ‘A’: we have come to assume, and indeed much of our society has come to rest upon the assumption, that the words in books, magazines, and newspapers will be there for us, in exactly the way we first saw them, any time we look at them again in the future. Thus, the stationery as stationary, the book as reliable locus, is a function as important as their convenience in comparison to text on computers. Of course, we may in the future develop electronic modes of text that provide security and continuity of text equivalent to that on paper—modes that in effect allow the liberation of text without any diminution of its reliability—but current electronic ‘inks’ and ‘papers’ are ink and paper only via vision in a rear-view mirror that occludes a crucial desirable component of the original.37

Using Levinson’s rear-view mirror, and developing that metaphor further, we can recognise that we are in a state of movement and in transition—moving away from the print paradigm and moving towards the digital paradigm, not yet divorced from the one and not fully attached to the other—a state of flux and co-existence. It took a generation for print technology to move from the lectern-based bible of Gutenberg to the convenience of a handheld book that could be included in a traveller’s pack. Henry VII, who came to the throne 10 years after Caxton introduced the press, recognised the value of the new technology by appointing a Stationer to the King—an office which later became the King’s Printer. Nearly 100 years after Caxton, Edmund Plowden recognised the damage that could be done to his

37 ibid, Levinson 176–77.



reputation if he did not supervise the printing of his Commentaries. Lawyers and judges were giving credit to printed material in the early seventeenth century38 at the same time as Sir Edward Coke was ensuring his approach to the law would be disseminated and preserved by the printing of his Reports and Institutes. In some respects this explains why it is that we seek to explain and use new communications phenomena by the term ‘functional equivalence’. Functional equivalence is in itself a manifestation of rear-view mirror thinking—an unwillingness to let go of the understandings of information that we had in the past. It roots us in an environment where the informational expectations no longer pertain—where the properties of the equivalent technology are no longer applicable or valid. This reflection upon the transition from the scribal culture to that of print, whilst recognising that the two co-existed for a considerable period, demonstrates that we must adapt to new technologies and at the same time adapt the old. But we should not allow the values arising from the qualities of the old technology to infect or colour our understanding of, or approach to, the new. Certainly the use of precedent, by its very nature, involves use of the rear-view mirror. This is not to decry the importance and necessity of precedent as a means of creating certainty and consistency in the law. But, as the argument develops, it may be seen that we may lose those two elements of the law that we take so much for granted as we move into an environment of constant, dynamic and disruptive change. We are so familiar with the paradigm of print that we do not give its ramifications or its qualities a second thought. The qualities of dissemination, standardisation, fixity of text and the opportunity to cross-reference to other printed sources go unnoticed. We have become inured to them. Yet they provide the foundation for our acceptance of printed law as reliable and authoritative.
Earlier editions of The Bluebook, the American uniform system of citation not unlike the rules provided in the Oxford Standard Citation of Legal Authorities,39 provided that citations should be to paper versions. Rule 18.2 provided: This rule requires the use and citation of traditional printed sources, except when the information is not available in a printed source, or if the traditional source is obscure or hard to find and when the citation to an Internet source will substantially improve access to the same information contained in the traditional source. In the latter case, to the extent possible, the traditional source should be used and cited.40


Williams above n 10, 24. Oxford Standard Citation of Legal Authorities, Faculty of Law, University of Oxford 4th ed (Oxford, Hart Publishing, 2012) See also Geoff McLay, Christopher Murray and Jonathan Orpin, The Australian Guide to Legal Citation published by the Melbourne University Law Review Association—; and the McGill Law Journal’s Canadian Guide to Uniform Legal Citation published by Carswell. 40  The Bluebook: A Uniform System Of Citation 17th edn (Columbia Law Review Ass’n et al eds,. 2000). See also the reasons for an early reluctance to cite Internet materials including a lack of confidence in their reliability and accuracy. Many Web sites are transient, lack timely updates, or may have had their URLs changed. Thus many Internet sources did not consistently satisfy traditional criteria for cite-worthiness. See Colleen Barger ‘On the Internet Nobody Knows You’re A Judge: Appellate Court’s Use of Internet Materials’ (2002) Journal of Appellate Practice and Process 417 at 425. 39 


Recorded Law—The Twilight of Precedent in the Digital Age

Since that was written there have been two subsequent editions of The Bluebook, but the directive preferring print sources remains the same, although print may be forgone if ‘there is a digital copy of the source available that is authenticated, official, or an exact copy of the printed source’.41 In its preference for printed sources, The Bluebook impliedly recognises that information in the Digital Paradigm, by its nature and with its different underlying qualities, presents an entirely different information environment.42

One of the most significant aspects of the Digital Paradigm is continuing disruptive change. Moore’s Law43 is as applicable to information in cyberspace as it is to the development of microprocessor technology. New information becomes available and is disseminated more quickly and exponentially via the Internet than previously through the print media. The Internet enables the distribution of court decisions within hours of delivery rather than the months that it took for cases to be edited and printed in law reports, and the information flow normally experienced, where the student or lawyer would go to a library to access information, is reversed—the information now is delivered to a local device. New information can be manipulated more quickly by virtue of the dynamic document and participation.

In the legal environment new cases and new developments may be publicised more rapidly but, by the same token, unlike a paper document, a digital document ‘bears little evidence of its source or author’.44 In addition, greater credit may be given to ‘image-based’ formats, but Rumsey and Schwartz do not believe that ‘non-imaged’ documents should receive the same treatment as paper.45 Continuing change challenges even the certainties that law librarians try to ascribe to certain formats—new information is constantly replacing old information and old information appears to be less and less relevant to the solution of modern problems.
Our legal system, particularly in terms of the development of principle, has moved at a measured pace. The availability of large amounts of new information, and the change in perspective that that new information introduces, create challenges for a system that is accustomed to looking backwards towards precedent and that moves at a sedate pace.


41 The Bluebook, ibid, 19th edn.
42 Katsh above n 4; Richard Susskind, The Future of Law: Facing the Challenges of Information Technology (Oxford, Oxford University Press, 1996).
43 Moore’s law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E Moore, who described the trend in his 1965 paper—Gordon Moore ‘Cramming More Components onto Integrated Circuits’ moore.pdf. His prediction has proven to be accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.
44 Mary Rumsey and April Schwartz, ‘Paper vs Electronic Sources for Law Review Cite Checking: Should Paper be the Gold Standard’ (2005) 97 Law Library Journal 31, 42.
45 ibid, 46.
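The doubling rule described in the Moore’s Law footnote above reduces to simple arithmetic: a quantity that doubles every two years grows by a factor of 2^(t/2) over t years. A minimal illustration in Python (the function name is mine; the 2,300-transistor starting figure is the Intel 4004’s 1971 transistor count, used purely as a worked example):

```python
def moores_law(initial_count: float, years: float) -> float:
    """Project a quantity that doubles every two years (Moore's law)."""
    return initial_count * 2 ** (years / 2)

# The Intel 4004 (1971) held roughly 2,300 transistors. Projected forward
# 20 years, a doubling every two years gives ten doublings:
projected = moores_law(2300, 20)
print(round(projected))  # 2300 * 2**10 = 2355200
```

The same growth curve is what the main text invokes when it applies Moore’s Law by analogy to the volume of legal information online.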

Law and Precedent in the Print and Digital Paradigms


The development of precedent is characterised by what could be referred to as landmark decisions, or leading cases, from higher appellate courts in a jurisdiction. These settle a particular point of law. They also add to the general authority of judicial decisions because they appear to settle the question with finality.46 The digital environment provides us with more material in more recent cases that more swiftly modifies the broad statements of principle contained in landmark decisions.

Other qualities also come into play, many of which challenge those of the print paradigm upon which the law relies. In my discussion about delinearisation of information I made reference to the fact that the primary text may no longer be considered the principal source of information, and that text could be considered within a wider informational context and could change the linear approach to analysis.47 This quality may underlie a challenge to a strict form of analysis based upon a previously accepted line of cases. Information persistence and endurance is what the law requires for its certainty—something that the Print Paradigm has been able to give it—but a tension arises with dynamic information which constantly develops, grows and changes.

This is associated with the quality of volume and capacity. The storage capacity of computer systems is so large as to be almost unlimited. At a time when print libraries are nearing capacity with the amount of printed information available, digital systems can be seen as a blessing, but they also pose serious challenges to established legal thinking. One of these lies in the necessity for a critical mass of decisions for the development of a precedent-based principle.48 The quality of high data volumes challenges this.
The information is available, searchable and retrievable and because of the higher volumes of case law available the minutiae of fact situations or the nuance of legal interpretation becomes apparent, eroding the earlier certainties that were present with the ‘critical mass’ of information that was a characteristic of precedent.

46  However, note R v Jogee and Ruddock v R [2016] UKSC 8, appeals before the UK Supreme Court and the Privy Council in which the principle of secondary liability, sometimes called ‘joint enterprise’ or ‘parasitic accessorial liability’, was reconsidered and early authority was found to be wrong. The principle originated in Chan Wing-Siu v R [1985] AC 168 in the Privy Council and was endorsed by the House of Lords in R v Powell and R v English [1999] 1 AC 1. Those authorities were cited and approved 25 times over a 30-year period. The UK Supreme Court in Jogee stated that Chan was based on a fundamental conceptual error. The Court said:

we do not consider that the Chan Wing-Siu principle can be supported, except on the basis that it has been decided and followed at the highest level. In plain terms, our analysis leads us to the conclusion that the introduction of the principle was based on an incomplete, and in some respects erroneous, reading of the previous case law, coupled with generalised and questionable policy arguments. We recognise the significance of reversing a statement of principle which has been made and followed by the Privy Council and the House of Lords on a number of occasions.

47 See ch 2.
48 See the comments of Holdsworth and Gilmore above nn 27 and 30.



B.  The Digital Revolution and the Legal Process

In 1996, in his book The Future of Law: Facing the Challenges of Information Technology,49 Richard Susskind suggested that today we are between the phases of print and information technology and, in essence, are in a transitional phase50 similar to the co-existence of the scribal and print cultures in the law in the late sixteenth and early seventeenth centuries. He was of the view, correctly in my opinion, that in the days of the oral tradition and the scribal culture, change was a rarity. In the Digital Paradigm, information is dynamic and subject to regular alteration rather than remaining in static form, and from this arise a number of consequences or features. Susskind described them as unfortunate, although it may be that there is a certain inevitability arising from the qualities of information in the Digital Paradigm.

C.  Hyper-regulation, the Internet and Too Much Law

One of the features identified by Susskind is what he describes as the hyper-regulated society. Susskind identifies the phenomenon, which I suggest is driven by the qualities of the Digital Paradigm. Being hyper-regulated means that there is too much law for us to manage, and our current methods for managing legal materials are not capable of coping with the quantity and complexity of the law which governs us. Another of the difficulties pointed to by Susskind is that hyper-regulation is aggravated by difficulties in the promulgation or notification of legislation and case law. One of the requirements of Lon Fuller in his book The Morality of Law was that a failure to publicise and make available rules that citizens are expected to observe results in bad or, at worst, no law.51

Although the digital environment has not been solely responsible for the hyper-regulation described by Susskind, certainly the ability to generate large quantities of printed material has moved from the print shop to the photocopier and then onwards to the word processor with high capacity laser printers and now to digital space via the Internet. Technology allows text to be transmitted and disseminated at minimal cost. Susskind’s consideration of hyper-regulation is further evidenced by the large volume of legal material that was available in print but is now available in greater quantities in electronic form from the courts and from legislatures. Whilst one should resist the suggestion that such volume is overwhelming, certainly there is more material available for consideration. Pressures, particularly upon legislators and the judiciary, to perform within set time frames simply mean that much potentially relevant


49 Susskind above n 42.
50 ibid 91.
51 Lon L Fuller, The Morality of Law revised edn (New Haven, Yale University Press, 1969).



material may well be overlooked. A further consequence of the hyper-regulation described by Susskind is the wide variety of information resources provided by new technologies. For example, television is no longer a limited number of network channels but encompasses cable TV, satellite systems, Internet TV and online content distribution services such as Hulu and Netflix, allowing an almost infinite number of sources of information. The television screen has become an Internet portal for the home. Thus, we have before us a huge selection of informational alternatives. The information received from each source may be different in appearance and content from the information received by others. Furthermore, the nature and content of information will change. The day is with us where information from the courts, not only in terms of decisions but in terms of the arguments in the cases themselves, is akin to watching a breaking news story on YouTube or social media as more and more information becomes available from the courts and is disseminated or becomes the subject of commentary.52

As a result of hyper-regulation and the vast amount of material provided by the digital environment, which, by its nature, is dynamic and subject to rapid and constant change, a lawyer or judge searching for relevant cases now has more material to sift through, more detail to assimilate and more flexibility in terms of potential arguments or outcomes. The consequence of this could be to change the nature of legal argument from what could be described as a linear progression through a line of cases in the development of a precedent to the point where the authority of those cases is diminished or negated by the wealth of material available. Holdsworth comments that a system of precedent:

Will not work so satisfactorily if the number of Courts, whose decisions are reported, is multiplied.
The law is likely to be burdened with so great a mass of decisions of different degrees of excellence that its principles, so far from being made more certain by the decisions of new cases, will become sufficiently uncertain to afford abundant material for the infinite disputations of professors of general jurisprudence. A limitation is needed in the number of reported cases … English lawyers have hardly realised that it was a condition precedent for the satisfactory working of our system of case law.53

The more cases that are available the greater the flexibility in the creation of a legal argument, but the adverse consequence is that in terms of developed principle the link with precedent becomes more ephemeral. The delinear approach may introduce an alternative to the linear progression that has marked the development of principle. The Internet has made more legal information available to more people more immediately than at any other time in human history. Although this fulfils the

52 In May 2015 the UK Supreme Court launched its ‘video on demand’ service in addition to the existing live streaming service. Lord Neuberger said ‘Now justice may be seen to be done at a time which suits you.’ Supreme Court News Release 5 May 2015.
53 Holdsworth above n 27, 22.



philosophical and societal ideals of bringing law to the people and providing for a fully informed populace, the implications for informational reliability and for precedent are substantial. Internet availability of judgments at a number of levels means that decisions are accessible everywhere, cross-jurisdictionally. The prohibitions on the citation of unpublished opinions in the United States may well crumble in the face of this technological revolution. It is clear that the increased availability of and access to judicial pronouncements, and the number of opinions and judgments that are available in addition to traditional hard copy reported decisions, have serious ramifications both for the precedential value of those decisions and for the concept of precedent itself.

There is no doubt that the law is unable to resist the tides of change. The question is: during this transitional period, how may the law accommodate change and maintain its integrity in providing the rules that regulate the activities and relationships of citizens within the community? The law traditionally looks back to precedent, but the digital environment means that the depth of field is shorter, focused upon what is closer while infinity becomes a blur. The problem is: with the vast amount of material that is available, how can one maintain a precedent-based system that will rely upon dynamic changing material rather than the reliability provided by the printed law report? In addition, an overly large volume of decisions may mean that cases come to be determined not on a carefully refined and developed legal principle, but on factual similarities. The authority of precedent in the past has depended upon the fact that the legal process does not rapidly modify reported decisions.54

D.  Informing Judicial Decisions

The idea of a world in which any individual could access all cases, legislation and commentary on any area of law, with immediate access to other lawyers in that field and without having to go through intermediate steps involving the utilisation of Google, LexisNexis, LinkedIn and blogs, is an attractive one, but in many respects this places the law and legal information into a form or subset of Big Data. In terms of the development of law reporting, although this has been a private and selective business, the law reports themselves, constrained by the properties of print technology, have not only contributed to the content of the law but also to that critical mass necessary for the proper development of principle. Lord Neuberger concluded the first annual Bailii lecture by quoting Lord Lindley who stated in 1885 that:

The law reports are so valuable, not only to legal practitioners but to all persons who care for English law as a scientific study or who take an interest in its development and


54 Katsh, above n 4, 46.



improvement that every member of the profession ought to the best of his ability to assist in supporting and perfecting them.55

However, in more recent times the development of principle has come under threat not only from the wide factual disparity that arises from an overly representative data set of cases available online, but from the fact that in more and more cases litigants are having to represent themselves. This means that, whether there is a move towards a more inquisitorial process or the adversarial approach continues, there will be fewer and fewer cases that are sufficiently well argued on both sides to warrant treating the court’s decision as binding law. Lord Dyson MR, in a speech entitled ‘Are the Judges too Powerful?’,56 made the observation that the development of the common law is often said to be incremental, with some increments being of bold and major significance. Examples may be found in Hedley Byrne & Co Ltd v Heller & Partners,57 which involved a far-reaching extension of the law of negligence, and the change in the law in England on marital rape in R v R (Rape: Marital Exception), where Lord Keith of Kinkel said the common law is ‘capable of evolving in the light of changing social, economic and cultural developments’.58 That incremental development has necessarily been slow notwithstanding, from time to time, being radical.

The importance in the development of principle of proper reference to important and relevant cases is critical, and judges are well aware of the problem of over-citation of authority. In some law reports, such as the Weekly Law Reports, additional cases may be cited in argument or referred to in written skeleton arguments which are not mentioned in the judgment. This enables any later court to ascertain whether or not a critical case was cited to the earlier court and, if not, to consider confidently whether the decision was made per incuriam, that is, a decision by a court which appears to have been reached on an incomplete appreciation of existing law.
The difficulty arises, where a plethora of cases has been cited, in determining whether or not a common principle runs through those cases or whether the decisions are essentially fact specific. If, as has been suggested by Paul McGrath,59 there is a possibility of a move towards an inquisitorial system arising from the increase in litigants in person, a traditional stare decisis approach may continue or there may be a transition to some other form of guidance from earlier cases. It may well be that the erosion of the adversarial process will in its own way contribute further to the difficulties that precedent may face from an overabundance of decisions or legal data. The

55 Lord Neuberger, First Annual Bailii Lecture ‘No Judgement No Justice’ (20 November 2012) citing N Lindley, ‘The History of the Law Reports’ (1885) 1 LQR 137, 149. docs/speech-121120.pdf.
56 Lord Dyson ‘Are the Judges Too Powerful’ Bentham Association Presidential Address 2014 www.
57 Hedley Byrne & Co Ltd v Heller & Partners [1964] AC 465.
58 R v R (Rape: Marital Exception) [1992] 1 AC 599, 616.
59 Paul McGrath, ‘The End of the Road for the Common Law’ ICLR Blog posted on 30 Apr 2014 in Law Reporting at



shift in focus in the adversarial process has been noted by the Lord Chief Justice, Lord Thomas, in a speech entitled ‘Reshaping Justice’60 and by the President of the Family Division in England, Sir James Munby, in a speech entitled ‘Family Justice Reforms’ given in April 2014.61 Lord Thomas said:

[W]e have to keep an open mind even on radical options. For example, to some a change to a more inquisitorial procedure seems like the obvious or the only solution to the present situation we find ourselves in with the increase in litigants-in-person and the need to both secure a fair trial for all whilst doing so within limited and reducing resources that have to be distributed equitably amongst all those who need to resort to the courts. It might be said then that to attach to it the label of inquisitorial was doing it a disservice, as it was really little more than the active interventionism characteristic of much pre-trial procedure, case and trial management. But I think it is right to refer to it as inquisitorial, because the essence of the change would be a much greater degree of enquiry by the Judge into the evidence being brought forward.62

Sir James Munby observed that in the court room

we must adapt our processes to the new world of those who, not through choice, have to act as litigants in person. We need to think anew about the appropriate roles in the court room of McKenzie friends and other lay advisors. We will need to make our judicial processes more inquisitorial … our system, and for good reason, is essentially adversarial, even in the family court. But it is a system very different from the adversarial system of yore. Then the Judge’s function was little more than that of an umpire, adjudicating on whatever claim the litigant chose to bring, the only limitations being the need for some recognised cause of action and the requirement that the evidence had to be both relevant and admissible. Those days have long since gone.63

In the new world of ‘Big Data’ legal information and the ever increasing abundance of case law online there will still be precedents. Rules and principles will be stated. Guidelines will be given. Issues of law will be resolved and those cases will be reported. Paul McGrath suggests that the role of those precedents and reports will be less cardinal and their authority less binding. He attributes this to the absence of full adversarial argument by expert lawyers with the tools and training to cite all the relevant preceding cases. But my view is that, in addition to that and perhaps more importantly, the sheer volume of preceding cases will be overwhelming unless, of course, judges use data analytical tools to separate the legal wheat from the chaff. In some respects the problem has been addressed in the United States by separating cases that may have some precedential use from those that do not.64

60 Lord Thomas of Cwmgiedd, ‘Reshaping Justice’ 3 March 2014 uploads/JCO/Documents/Speeches/lcj-speech-reshaping-justice.pdf.
61 Sir James Munby, ‘Family Justice Reforms’ 29 April 2014 the-family-justice-reforms-remarks-by-sir-james-munby/.
62 Lord Thomas above n 60.
63 Sir James Munby above n 61.
64 See Anastasoff v US 223 F 3d 898 (8th Cir 2000). For a discussion of the Anastasoff decision and the rationale for limiting precedential authority to only published decisions see Thomas R Lee and



A number of rules surround the ability of lawyers to cite cases as an authority for a proposition in United States courts.

III.  The Twilight of Precedent?

How do we maintain the fundamentals of precedent in the Digital Paradigm? It seems that there may be two possible alternative ways forward.

One solution focuses upon the content layer of the Digital Paradigm and effectively ignores the fact that its qualities make the nature of information and its communication different from what went before. In addition, content itself lacks the fixity or stability of printed text. It may be suggested that a number of rules might be developed around which the challenges posed by digital qualities may be met. Such an approach artificially tries to maintain a reality that is no longer present. We are merely anchoring ourselves in an unsafe harbour, ignoring the winds of change that are blowing and hoping to ride them out. By the same token, technological co-existence will allow the status quo to continue, at least for a reasonable period of time, in the same way that manuscript and scribal habits continued well past the introduction of the printing press. The pace of change, as suggested by Susskind, will overtake co-existence within a generation or so, if that, rather than over a period of centuries. The real test will probably come as lawyers are drawn from the ranks of those commonly described as Digital Natives65—those who have grown up in the Digital Paradigm and know no other means of information communication apart from device-driven digital ones.

But what if there is a second way, where the technology itself may provide an answer—a technological solution to the problems that digital qualities pose? This may be termed the ‘Charles Clark’ solution, deriving from his oft-quoted answer to challenges to intellectual property in the Digital Paradigm—‘the answer to the machine is in the machine’.66 Artificial intelligence may be the path to follow.
This has been discussed by Richard Susskind as a means by which process facilitation may be achieved in online courts.67 Putting the matter very simplistically, legal information, either in the form of statutes or case law, is data which has meaning when properly analysed or interpreted.

Lance S Lehnhof, ‘The Anastasoff Case and the Judicial Power to Unpublish Opinions’ (2001) 77 Notre Dame Law Review 135.

65 See the discussion of Prensky’s theory in ch 2.
66 Charles Clark, ‘The Answer to the Machine is in the Machine’ in P Bernt Hugenholtz (ed), The Future of Copyright in a Digital Environment: Proceedings of the Royal Academy Colloquium organized by the Royal Netherlands Academy of Sciences (KNAW) and the Institute for Information Law (Amsterdam, 6–7 July 1995) (The Hague, Kluwer Law International, 1996).
67 Richard Susskind, ‘Online Dispute Resolution’ 02/Online-Dispute-Resolution-Final-Web-Version1.pdf.



Apart from the difficulties in the location of such data, the analytical process is done by lawyers or other trained professionals. Already a form of data analysis or artificial intelligence (AI) variant is available in the form of databases such as LexisNexis, Westlaw or Bailii. Lexis and Westlaw have applied natural language processing (NLP) techniques to legal research for 10-plus years. The core NLP algorithms were all published in academic journals long ago and are readily available. The hard (very hard) work is practical implementation against good data at scale. Legal research innovators like Fastcase and RavelLaw have done that hard work, and added visualisations to improve the utility of results.

The usual process involves the construction of a search which, depending upon the parameters used, will return a limited or extensive dataset. It is at that point that human analysis takes over. What if the entire corpus of legal information is reduced to a machine-readable dataset? This would be a form of Big Data with a vengeance, but it is a necessary starting point. The issue then is to:

(a) reduce the dataset to information that is relevant and manageable; and
(b) deploy tools that would measure the returned results against the facts of a particular case to predict a likely outcome.

Part (a) is relatively straightforward. There are a number of methodologies and software tools deployed in the e-disclosure space that perform this function. Technology-assisted review (TAR, or predictive coding) uses natural language and machine learning techniques against the gigantic data sets of e-discovery. TAR has been proven to be faster, better, cheaper and much more consistent than human-powered review (HPR). It is assisted review, in two senses. First, the technology needs to be assisted; it needs to be trained by senior lawyers very knowledgeable about the case.
Second, the lawyers are assisted by the technology, and by the careful statistical thinking that must be done to use it wisely. Thus, lawyers are not replaced, though they will be fewer in number. TAR is the success story of machine learning in the law. It would be even bigger but for the slow pace of adoption by both lawyers and their clients.68

Part (b) would require the development of the necessary algorithms that could undertake the comparative and predictive analysis, together with a form of probability analysis, to generate an outcome that would be useful and informative. There are already variants at work now in the field of what is known as Outcome Prediction utilising cognitive technologies. There are a number of examples of legal analytics tools. Lex Machina,69 having developed a set of intellectual property (IP) case data, uses data mining and

68 Michael Mills, ‘Artificial Intelligence in Law: The State of the Play 2016 (part 2)’ (23 February 2016) Thomson Reuters Legal Executive Institute.
69



predictive analytics techniques to forecast outcomes of IP litigation. Recently, it has extended the range of data it is mining to include court dockets, enabling new forms of insight and prediction. LexPredict70 developed systems to predict the outcome of Supreme Court cases, at accuracy levels which challenge experienced Supreme Court practitioners. Premonition71 uses data mining, analytics and other AI techniques ‘to expose, for the first time ever, which lawyers win the most before which Judge’.72

This proposal, of course, immediately raises the issue of whether or not we are approaching the situation where we have decision by machine. As I envisage the deployment of AI systems, the analytical process would be seen as a part of the triaging or Early Case Evaluation (ECE) process, rather than as part of the decision-making process. The advantages of the process lie in the manner in which the information is reduced to a relevant dataset automatically and faster than could be achieved by human means. Within the context of an Online Court (OC) process it could be seen as facilitative rather than determinative. If the case reached the decision-making process it would, of course, be open to a judge to consider utilising the ‘Law as Data’ approach with, of course, the ultimate sign-off. In that way the decision would still be a human one, albeit machine assisted.

But what if the machine does not provide an answer and digital qualities do force a re-assessment of precedent as a result of the challenges posed by the qualities of digital information systems? What shape will precedent and the common law then take? Will the detailed principles developed by precedent become a series of broadly stated principles rather than the refined and intricate intermeshing of decisions that exists at present? Will the common law as we understand it wither or perhaps be replaced by a rule-based system similar to that of some European countries?
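The relevance-triage that underlies TAR, as described above, can be illustrated with a toy classifier: a senior reviewer labels a handful of documents as relevant or not, and the machine scores the remainder. This is only an illustrative sketch in plain Python using a naive Bayes model; real TAR platforms use far richer features and statistical validation, and all documents, labels and function names here are invented:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns word counts and document totals per label."""
    counts = {}          # label -> Counter of words seen in that label's documents
    totals = Counter()   # label -> number of training documents
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals, label):
    """Log-probability of `label` for `text` under naive Bayes with add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    words_in_label = sum(counts[label].values())
    logp = math.log(totals[label] / sum(totals.values()))  # prior
    for w in text.lower().split():
        logp += math.log((counts[label][w] + 1) / (words_in_label + len(vocab)))
    return logp

def classify(text, counts, totals):
    return max(totals, key=lambda lbl: score(text, counts, totals, lbl))

# Invented training set: the reviewer tags a handful of documents.
training = [
    ("contract breach damages supplier", "relevant"),
    ("invoice dispute contract terms", "relevant"),
    ("office party friday cake", "not_relevant"),
    ("parking arrangements staff", "not_relevant"),
]
counts, totals = train(training)
print(classify("dispute over contract damages", counts, totals))  # prints "relevant"
```

In practice it is the ranked scores, rather than hard classifications, that drive the review queue, with statistical sampling used to validate recall, which is the ‘careful statistical thinking’ the text refers to.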
Given the suggestion that print sources incline one towards legal principles while keyword searches are more apt to generate groups of cases based upon similarities of fact,73 will litigants, frustrated by a lack of clarity, consistency and predictability of outcome where judges rely only upon fact-specific outcomes, turn to arbitrators and mediators who are quicker, cheaper and less troubled by the procedural arcana of a court? Or will the adversarial system be replaced by an inquisitorial one? It may well be that by travelling the digital path (and that journey, once started, cannot be retraced) we are irrevocably committed to a course that will change the doctrine of precedent as we know it.



72 Michael Mills, ‘Artificial Intelligence in Law: The State of the Play 2016 (part 3)’ (10 March 2016).
73 F Allan Hanson, ‘From Key Numbers to Key Words: How Automation Has Transformed the Law’ (2002) 94 Law Library Journal 563, 583.

7

Digital Information—The Nature of the Document and E-discovery

I.  Introduction

One of the collisions that has taken place between pre-digital rules and the realities of the digital paradigm has been in the field of discovery in civil proceedings. As computer use gradually increased, so too did the use of computer media for the storage of information and documents. The advent of independent and portable devices that enable storage of data has compounded the problem. Whereas data was once stored on a desktop computer or on a business server, now data may be spread across laptops, mobile phones, tablets and iPads, portable storage media such as USB thumb drives and the Cloud. Large volumes of information created by computers and computer programs remained in digital format and were not printed out.

The location and volume of data posed significant problems in the proper conduct of discovery. Isolating discoverable documents was potentially a time-consuming and therefore expensive process. The manual review of every piece of computer information required a trained—that is, legally qualified—eye to assess it for relevance and to determine whether or not it was discoverable. Section 3 of the Cresswell Report identified the problem in the following way:

The stages a party to a commercial litigation dispute now has to go through in order to comply with its disclosure obligations in relation to electronic documents are as follows: (1) identify how many of the documents which might be relevant to the case have been created by electronic means; (2) identify whether these electronic documents have been preserved and where they might be stored; (3) retrieve, and search for, any relevant electronic documents; (4) conduct a review of the electronic documents; and (5) then produce the electronic documents, ideally, in an agreed format.
When disclosing electronic documents as opposed to paper documents there are additional costs and burdens on the parties at each of these stages as discussed below.1

1  A Report of a Working Party Chaired by the Honourable Mr Justice Cresswell Dated 6 October 2004 uk/docs/electronic_disclosure1004.doc para 3.5–3.6.



For commercial litigation, and increasingly for other civil litigation, discovery was becoming even more of an obstacle in terms of time and cost of litigation than it had been previously. The matter was further complicated by the case of Compagnie Financière et Commerciale du Pacifique v Peruvian Guano Co (The Peruvian Guano Test) where Brett LJ said:

It seems to me that every document relates to the matters in question in the action, which not only would be evidence upon any issue, but also which, it is reasonable to suppose, contains information which may—not which must—either directly or indirectly enable the party requiring the affidavit either to advance his own case or to damage the case of his adversary. I have put in the words ‘either directly or indirectly’ because, as it seems to me, a document can properly be said to contain information which may enable the party requiring the affidavit either to advance his own case or to damage the case of his adversary, if it is a document which may fairly lead him to a train of inquiry, which may have either of these two consequences.2

This train of inquiry test had the effect of extending discovery well beyond the particular issues of the case and the relevance of documents to those issues. The problems posed by Peruvian Guano have been recognised throughout the common law world. They were discussed in detail by Lord Woolf in his Access to Justice: Final Report (July 1996).3 He concluded that discovery had become disproportionate, particularly in larger cases where very significant numbers of documents had to be identified and listed, even though very few would have any significance to the issues in the trial. His sentiments were echoed in Australia, Canada and New Zealand.4 This chapter examines how the collision of the analogue approach to discovery and the problems posed by digital documents was addressed so that it did not become a train wreck. Early recognition of the problems posed by the new paradigm came from an organisation known as The Sedona Conference. The influence of this organisation upon the development of e-discovery solutions cannot be overstated. The Sedona Principles sit like a foundation under the approaches to e-discovery that have developed in most jurisdictions. The chapter considers the various rule systems that have developed to deal with e-discovery. In some cases—and the United States is a prime example—the changes have been evolutionary. In New Zealand’s case the changes were revolutionary. Finally, the chapter examines the way in which technology has been deployed to provide effective solutions to the problem of reducing volume in the quest for relevance.

2  Compagnie Financière et Commerciale du Pacifique v Peruvian Guano Co (The Peruvian Guano Test) (1882) 11 QBD 55.
3  Available at htm.
4  Australian Law Reform Commission, Managing Justice: A Review of the Federal Civil Justice System (ALRC 89 2000) para 6.67; Canadian Bar Association, Systems of Civil Justice Taskforce Report (1996) at 43; Rules Committee Consultation Paper (New Zealand) Proposals for Reform of the Law of Discovery.



II. The Development of E-discovery Rules

A. Introduction

In Anglo-American jurisprudence, discovery has not been the subject of specific legislation but is viewed rather as a procedural matter, governed by the rules of court. Thus the development of e-discovery can be traced through rule changes, amplified by the decisions of the courts. In this section I will discuss the rules relating to e-discovery, how they have developed and how they encourage the use of technology. I shall consider how technology is used to confront and address the problem of reducing the volume of material and isolating the information that is relevant to the case. Jurisdictions have taken different approaches to a common problem, but one clear theme is that the deliberations of the Sedona Conference provide an umbrella of principles that have largely been reflected, with subtle variations, in the various jurisdictions discussed.
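The core volume-reduction idea mentioned above—using searches and agreed selection criteria to isolate potentially relevant material—can be illustrated with a deliberately minimal sketch. The function name, keywords and sample documents below are invented for illustration only; real e-discovery platforms layer indexing, de-duplication, metadata filtering and privilege review on top of this basic step.

```python
# Illustrative only: cull a document collection down to those documents
# that contain any of the keywords agreed between the parties.
def keyword_cull(documents, keywords):
    """Return the subset of documents containing at least one agreed keyword."""
    lowered = [k.lower() for k in keywords]
    return [d for d in documents if any(k in d.lower() for k in lowered)]

# Hypothetical three-document corpus.
corpus = [
    "Minutes of board meeting re guano shipment contract",
    "Office party invitation",
    "Email re breach of shipment contract terms",
]
responsive = keyword_cull(corpus, ["contract", "breach"])
print(len(responsive))  # → 2 of the 3 documents survive the cull
```

The point of the sketch is not the trivial string matching but the workflow it stands in for: a mechanical first pass that reduces the volume of material a legally qualified reviewer must examine.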

B. The Sedona Conference

The Sedona Conference was founded in 1997 by Richard G Braman. The organisation behind the Sedona Conference is dedicated to the advanced study of law and policy and the forward development of the law in the areas of anti-trust, intellectual property rights and complex litigation. Its mission is to drive the reasoned and just advancement of law and policy by stimulating ongoing dialogue amongst leaders of the bench and bar to achieve consensus on tipping point issues. TSC brings together the brightest minds in a dialogue based, think-tank setting with the goal of creating practical solutions and recommendations of immediate benefit to the bench and bar.5

In 2002 the Working Group Series was established: a series of focused ‘think-tanks’ that would develop guidelines and best practices in targeted areas. One of the first of the Working Groups was WG1, organised in mid-2002, which addressed electronic document retention and production. WG1 has the goal of setting out concise principles supported by developed dialogue and explanation. The Working Group sought to address what was seen as an element of disarray in the way in which the field of discovery and document production had developed, especially in light of the growing array of solutions offered by vendors and a lack of precedent and consistent guidance. What WG1 endeavoured to do was produce principled, practical guidelines that would be of assistance as the case law in the field began to develop.

5  ‘About the Sedona Conference’.



In 2003 the first public comment version of the Sedona Principles was published and it was revised in 2004.6 There were further revisions in 2005 and 2007. The 14 principles reflected the law as it is or as it ought to be, and were designed to be of practical assistance to the Bench and Bar. Underpinning the Sedona Principles was a concern for discovery to be based upon reasonableness and balance, together with a desire to produce principles that would be long lasting in that they were reasonably flexible and scalable and could be developed and refined over time. The 14 Principles are as follows:

1. Electronic data and documents are potentially discoverable under Federal Rule of Civil Procedure 34 or its state law equivalents. Organisations must properly preserve electronic data and documents that can reasonably be anticipated to be relevant to litigation.
2. When balancing the cost, burden and need for electronic data and documents, courts and parties should apply the balancing standard embodied in Federal Rule of Civil Procedure 26(b)(2) and its state law equivalents, which require considering the technological feasibility and realistic costs of preserving, retrieving, producing, and reviewing electronic data, as well as the nature of the litigation and the amount in controversy.
3. Parties should confer early in discovery regarding the preservation and production of electronic data and documents when these matters are at issue in the litigation, and seek to agree on the scope of each party’s rights and responsibilities.
4. Discovery requests should make as clear as possible what electronic documents and data are being asked for, while responses and objections to discovery should disclose the scope and limits of what is being produced.
5. The obligation to preserve electronic data and documents requires reasonable and good faith efforts to retain information that may be relevant to pending or threatened litigation. However, it is unreasonable to expect parties to take every conceivable step to preserve all potentially relevant data.
6. Responding parties are best situated to evaluate the procedures, methodologies and technologies appropriate for preserving and producing their own electronic data and documents.
7. The requesting party has the burden on a motion to compel to show that the responding party’s steps to preserve and produce relevant electronic data and documents were inadequate.
8. The primary source of electronic data and documents for production should be active data and information purposely stored in a manner that anticipates future business use and permits efficient searching and retrieval. Resort to disaster recovery backup tapes and other sources of data and documents requires the requesting party to demonstrate need and relevance that outweigh the cost, burden and disruption of retrieving and processing the data from such sources.
9. Absent a showing of special need and relevance a responding party should not be required to preserve, review or produce deleted, shadowed, fragmented or residual data or documents.
10. A responding party should follow reasonable procedures to protect privileges and objections to production of electronic data and documents.
11. A responding party may satisfy its good faith obligation to preserve and produce potentially responsive electronic data and documents by using electronic tools and processes, such as data sampling, searching or the use of selection criteria, to identify data most likely to contain responsive information.
12. Unless it is material to resolving the dispute, there is no obligation to preserve and produce metadata absent agreement of the parties or order of the court.
13. Absent a specific objection, agreement of the parties or order of the court, the reasonable costs of retrieving and reviewing electronic information for production should be borne by the responding party, unless the information sought is not reasonably available to the responding party in the ordinary course of business. If the data or formatting of the information sought is not reasonably available to the responding party in the ordinary course of business, then, absent special circumstances, the costs of retrieving and reviewing such electronic information should be shifted to the requesting party.
14. Sanctions, including spoliation findings, should only be considered by the court if, upon a showing of a clear duty to preserve, the court finds that there was an intentional or reckless failure to preserve and produce relevant electronic data and that there is a reasonable probability that the loss of the evidence has materially prejudiced the adverse party.

The Sedona Principles have not been just ‘blue-sky’ thinking, nor have they been mere ‘comfortable words’.
They have had a practical impact and have been cited in a number of cases7 as well as in the development of local and national rules such as amendments to the Federal Rules of Civil Procedure (FRCP) and the Ninth Circuit Draft Model Rule. They have been cited in numerous articles addressing e-discovery and in judicial and legal education programmes. They are used as resources in university courses as well as by businesses, and have provided guidance for software developers and vendors.
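Principle 11’s reference to ‘data sampling’ can likewise be sketched in code: drawing a random sample from the documents a search excluded, and reviewing only that sample to estimate how much responsive material the cull left behind. The figures, helper names and 5% responsiveness rate below are entirely hypothetical, chosen only to show the shape of the technique.

```python
import random

def estimate_missed_rate(excluded_docs, is_responsive, sample_size, seed=0):
    """Estimate the proportion of responsive documents remaining in the
    excluded set by reviewing only a random sample of it."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(excluded_docs, min(sample_size, len(excluded_docs)))
    hits = sum(1 for d in sample if is_responsive(d))
    return hits / len(sample)

# Hypothetical excluded set: 1,000 documents, of which 5% are in fact responsive.
excluded = ["responsive" if i % 20 == 0 else "irrelevant" for i in range(1000)]
rate = estimate_missed_rate(excluded, lambda d: d == "responsive", sample_size=100)
print(round(rate, 2))
```

A low estimated miss rate supports the reasonableness of the culling criteria; a high one suggests the criteria should be renegotiated — which is the sense in which sampling lets a party demonstrate a good faith effort without reviewing every document.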

7 Including Zubulake v UBS Warburg, 220 FRD 212 at 217 (SDNY 22 Oct 2003) (‘Zubulake IV’). The case relied on the Sedona Principles for the proposition that ‘as a general rule … a party need not preserve all backup tapes even when it reasonably anticipates litigation’. That proposition has been followed in Consolidated Aluminum Corp v Alcoa, Inc No 03-1055-C-M2, 2006 WL 2583308, *5 (MD La 19 July 2006) where the court held that Alcoa was not required to preserve every shred of paper but only those documents of which it had ‘actual knowledge’ that they would be material to future claims. See also E*Trade Securities LLC v Deutsche Bank AG, 230 FRD 582, 592 (D Minn 2005) (holding in reliance on the Sedona Principles that when backup tapes are used to preserve evidence that was not preserved through a litigation hold, the backup tapes should be produced).



C. The United States’ Experience—The Federal Rules of Civil Procedure

The United States rules about discovery are contained in the Federal Rules of Civil Procedure (FRCP). These Rules were first created in 1938. The English rules of procedure have provided a model for most of the Commonwealth jurisdictions. However, changes reflecting the development of e-discovery have been somewhat slower than those in the United States, and, with the exception of New Zealand, have been in the form of practice directions or protocols. In the United States the Federal Rules were amended in 1970 to include FRCP 34. This provides for discovery of ‘electronic data compilations from which information can be obtained only with the use of detection devices’. This applied mainly to large organisations such as banks, insurance companies, academic institutions and government agencies.8 The rise in the use of computers led to increased volumes of electronic material, and the question whether computer-based information could constitute a document for the purposes of discovery arose, as did the proper scope of discovery.9 At this early stage of the development of e-discovery, issues of costs and proportionality were on the horizon. In 1983 FRCP 26(b)(1) was amended to address the issue of costs in discovery by putting a limit upon the frequency and the extent of use of discovery methods. Courts were able to examine discovery that was ‘unreasonably cumulative … or is obtainable from some other source that is more convenient, less burdensome and less expensive’. The Court was able to take into account the needs of the case, the amount in controversy, limitations on resources and the importance of the issue at stake in the litigation.10 Sadly, the 1983 Rules did not have the desired effect. Costs were not reduced and new Rules were adopted in 1993 in an attempt to reduce the burdens associated with discovery.
The 1993 amendments created automatic disclosure provisions, limits on interrogatories and depositions, and an early conference so that a discovery plan could be developed. As the Sedona Conference was developing, but prior to the first version of the Principles in 2003, there were further changes to the FRCP in 2000, prompted by a desire to reduce costs in e-discovery by reducing the amount of information that had to be disclosed. By these changes it was hoped that ‘over-discovery’ might be contained and time limits established for taking depositions. A further objective of the 2000 amendments was to develop national uniformity of discovery in federal courts. The 2000 changes have been described as ‘modest’ and

8  Kenneth Withers, ‘Electronically Stored Information: The December 2006 Amendments to the Federal Rules of Civil Procedure’ (2006) 4 Northwestern Journal of Technology & Intellectual Property 171, 173. 9  Richard L Marcus, ‘Confronting the Future: Coping with Discovery of Electronic Material’ (2001) 64 Law & Contemporary Problems 253, 258–60. 10  Fed R Civ P 26(b)(2)(C).



designed to clarify that electronically stored information stood on the same footing as documents.11 Rule 26(b)(1) of the FRCP created a two-level approach to discovery. The first level involved discovery of material that was essential to the claim or the defences of the respective parties. This discovery was in the hands of the parties. Level two was limited to discovery once good cause had been established. This meant that discovery was limited to material that was relevant to the claim or the defence, together with an exception for wider discovery only where there was good cause to seek material that was relevant to the subject matter of the action. Prior to 2000, this form of discovery did not require judicial intervention.12 There was a problem, however, in that the advisory committee for the changes to the FRCP failed to define or limit what amounted to ‘good cause’. In some cases litigants ignored the amendments.13 Writing in 2004, William C Gleisner III observed:

When it comes to electronic evidence, it seems that the law changes slowly or not at all. The bench and bar have for the most part elected to deal with electronic evidence by subjecting it to rules that were created to solve the problems of a paperbound world. While the existing rules of civil procedure and evidence have been used with some measure of success to manage the electronic revolution to date, we must fundamentally modify our procedural and evidentiary rules so that they are responsive to our electronic world.14

Gleisner’s concern was that although the Rules had been moderately successful in managing the electronic revolution, they did not go far enough. He particularly advocated changes to the Rules that took into account the many ways in which electronic evidence differed from that accumulated in the paper paradigm. One problem the Rules could not solve was essentially cultural. Although some courts adopted ‘creative rules and well-thought-out decisions to deal with the electronic revolution’15 in the main, courts struggled with electronic data, treating

11  Thomas Y Allman, ‘The “Two-Tiered” Approach to E-Discovery: Has Rule 26(b)(2)(B) Fulfilled Its Promise?’ (2008) 14 Richmond Journal of Law & Technology 1 at 6. v14i3/article7.pdf. 12  This form of wide-ranging discovery is similar to what may be referred to as Peruvian Guano discovery that was (and in some jurisdictions still is) a feature of the English and Commonwealth discovery regimes. 13  Henry S Noyes, ‘Good Cause is Bad Medicine for the New E-Discovery Rules’ (2007) 21 Harvard Journal of Law and Technology 49, 57; for an example of the type of dispute over whether discovery related to a claim, defence or wider subject matter see Thompson v Dep’t of Hous & Urban Dev 199 FRD 168, 171 (D Md 2001). 14  William C Gleisner III, ‘Electronic Evidence in the 21st Century’ (2004) 77 Wisconsin Lawyer (No 7) ArticleID=664. 15  ibid. For an example of the ‘creative approach’ in 2004 see David J Waxse, ‘“Do I Really Have to Do That?” Rule 26(a)(1) Disclosures and Electronic Information’ (2004) 10 Richmond Journal of Law & Technology 50, referring to the Guidelines for Electronic Discovery in the US District Court of Kansas. The Spring 2004 issue of the Richmond Journal of Law and Technology contains some useful articles on e-discovery issues providing a snapshot of the law as e-discovery was beginning to become a major issue.



it as they had its paper predecessor. There seemed to be a lack of consistency of approach in dealing with electronic data in the context of discovery. Although there had been mandatory disclosure obligations, many courts chose to opt out, and disparate approaches continued notwithstanding the 2000 amendments to the FRCP. Further amendments to the FRCP came in April 2006. These changes were more wide-ranging than those of 2000 and covered Rules 16, 26, 33, 34, 37 and 45 along with changes to Form 35. The ‘good cause’ issue was clarified and the changes were put in place to deal specifically with electronic discovery. A proportionality approach was introduced.16 Courts were required to limit discovery when it was unduly burdensome or the cost of the discovery outweighed the benefit. Discovery would not be available if it was unduly burdensome or costly, and that would apply even if the information sought was relevant. Once again a two-level approach was adopted.17 This approach, contained in the new Rule 26(b)(2)(B), was specific to electronically stored information and reflected some of the Sedona Principles. At the first level, a party could withhold material from production that was ‘not reasonably accessible because of undue burden or cost’, without resorting to a court order, provided there was an appropriate identification of the sources of electronically stored information that were not being produced. The issue of accessibility was a matter that had to be determined by the court, and the assessment of burden and costs had to consider the technology being used. The Rule is technology neutral: it was neither possible nor wise to limit its scope or applicability to a particular technology or set of technologies.
One of the factors that a court must take into account is the medium or media upon which data may be stored.18 At the second level, a requesting party may move to compel discovery and must establish good cause, taking into account the limitations imposed by Rule 26(b)(2)(C). The methods of discovery must be limited when the burden or expense of the proposed discovery outweighs its likely benefit, taking into account the needs of the case, the amount in controversy, the parties’ resources, the importance of the issues at stake in the litigation, and the importance of the proposed discovery in resolving the issues.19

An evaluation of the following issues must take place to determine whether or not good cause exists:
—— the specificity of the discovery request;
—— the quantity of information available from other and more easily accessed sources;

16  Referred to in Parkdale Am v Travelers Cas & Sur Of Am, No 3:06-CV-78-R 2007 WL 4165247, at *12 (WDNC Nov 19, 2007). 17  A useful critique of the two-tiered approach may be found in Allman, above n 11. 18  Zubulake 1, 217 FRD 309, 318 (SDNY 2003). 19  FRCP 26(b)(2)(C).



—— the failure to produce relevant information that seems likely to have existed but is no longer available on more easily accessible sources;
—— the likelihood of finding relevant, responsive information that cannot be obtained from other, more easily accessed sources;
—— predictions as to the importance and usefulness of the further information;
—— the importance of the issues at stake in the litigation; and
—— the party’s resources.

The Rules also require that a party identify unsearched sources of information that were considered inaccessible, and make initial disclosure of potential sources of electronic information. One of the other significant features of the 2006 amendments involved the responsibilities of counsel. Rather than a fiercely adversarial approach to the issue of discovery, co-operation is now key. Rule 16(b) provides that counsel meet and confer about discovery issues and prepare for a case conference. There is further impetus towards the ‘meet and confer’ obligations provided under Rule 26(b)(5)(B), and there must be negotiation towards a case management order which must specify how the parties will conduct discovery, preserve electronic evidence, identify sources of electronic information, agree on forms of production and determine issues of cost shifting. An agreement is not mandatory and the court cannot require one, thus preserving the power of the court to make orders. Furthermore, the court exercises a supervisory power as the case continues. For example, a case management plan may provide for preservation of certain electronic data, but that may prove to be unduly onerous as discovery continues.20 The FRCP can be described as organic. A year after the 2006 amendment, further changes were proposed in 2007, changing the language of Rule 26 to make it more easily understood and to introduce a consistent style throughout the Rules. Thus the changes were stylistic rather than substantive.
Further changes to the Rules came in 2010 and 2015. The 2010 changes related to concerns about expert discovery. Amendments to Rule 26(a)(2) require disclosure regarding expected expert testimony of those expert witnesses not required to provide expert reports, and limit the expert report to facts or data (rather than ‘data or other information’, as in the current Rule) considered by the witness. Rule 26(b)(4) was amended to provide work-product protection against discovery regarding draft expert disclosures or reports and—with three specific exceptions—communications between expert witnesses and counsel. The focus of these Rule changes was upon attorney–expert communications and information rather than directly upon discovery of electronic information. The 2015 changes to the FRCP sped up the e-discovery process. The revised Rule 16 of the Federal Rules of Civil Procedure shortens the time within which the court must issue a scheduling order after a lawsuit has been filed. Specifically, revised Rule 16(b)(2) requires the judge to issue the scheduling order

as soon as practicable, but unless the judge finds good cause for delay … within the earlier of 90 days [down from 120 days in the current rules] after any defendant has been served … or 60 days [down from 90 days] after any defendant has appeared.21

20  Carolyn Southerland and Jake Frazier, ‘Top Ten Considerations When Negotiating an E-Discovery Case Management Order’, Digital Discovery and E-Evidence Oct 2005, at 12, cited in BT Ward, JS Sipior, JP Hopkins, C Purwin and L Volonino, ‘Electronic Discovery: Rules for a Digital Age’ [2012] Boston University Journal of Science & Technology Law 150 at 187, fn 280.

In addition, the revised Rule 16(b)(1) requires direct and simultaneous communications with and between all parties early in the litigation during an initial scheduling conference, which the Rules Committee believed would avoid time delays caused by more indirect communications. This rule emphasises the positive ‘consult and confer’ obligations required of counsel, and accelerates the time frame within which the Court embarks upon its management function. The 2015 changes to Rule 26(b)(1) were directed towards the scope of discovery. Information is discoverable under the revised Rule 26(b)(1) if it is relevant to any party’s claim or defence and is proportional to the needs of the case. This continued the focus of the 1983 amendments on over-discovery, although that approach appeared to have been softened by changes in 1993. Those changes subdivided Rule 26(b)(1) into two paragraphs for ease of reference and to avoid renumbering of paragraphs (3) and (4). The problem was that the subdivision was done in such a way that it could be read to separate the proportionality provisions as ‘limitations’, so that they were no longer an integral part of the (b)(1) scope provisions, notwithstanding that it was the intention of the 1993 changes to enable the court to keep a tighter rein on discovery. The 2000 amendment added further limitations to reinforce the necessity of using the limitations as the Rules intended. The 2015 changes restored:

[T]he proportionality factors to their original place in defining the scope of discovery. This change reinforces the Rule 26(g) obligation of the parties to consider these factors in making discovery requests, responses, or objections.22

The objectives of the FRCP in the area of e-discovery were informed by the principles articulated by the Sedona Conference. In essence, an approach of reasonableness and proportionality lies behind discovery, and e-discovery in particular. Co-operation between counsel—emphasised by the meet and confer provisions and directed not towards applications for orders but towards presenting a case management conference with some clear discovery proposals—gave the court a wider supervisory power than it had previously enjoyed. It seems that some of the early difficulties experienced in achieving the objectives of the Rules were occasioned by a resistance to change, which could be a cultural matter as much as anything else, alongside a lack of understanding about the true nature of digital information and


21  FRCP 16(b)(2).
22  Committee Notes on the Rules, 2015 Amendment—may be accessed from rules/frcp/rule_26.


the paradigmatic differences occasioned by the very nature of digital information. However, as will be seen, the courts are now grasping the nettle, and the decisions in Zubulake together with the work of Magistrate Judges Peck, Facciola and Grimm have redefined the e-discovery landscape in the United States. In many respects, however, the United States’ experience has been a proving ground for e-discovery approaches and has been of assistance in the way in which e-discovery rules and protocols have developed in other jurisdictions. In the United States technological competence is developing as a part of overall counsel competence requirements. Even without the 2015 changes to the federal rules, courts are becoming increasingly savvy to e-discovery practices and procedures, and increasingly frustrated with practitioners and clients who do not stay on top of them. Indeed, the ABA’s Model Rules of Professional Conduct counsel that:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.23

At heart, e-discovery is an issue of ethics and one that practitioners and clients must take seriously in order to avoid potentially draconian consequences.

D. The English Experience

The rules governing discovery, and e-discovery in particular, are contained in Part 31 and the Practice Direction to Part 31, Paragraph 2A, of the Civil Procedure Rules.24

23  ABA Model Rule of Professional Conduct 1.1, cmt 8 (2015) (emphasis added).
24  Pt 2A provides as follows:
ELECTRONIC DISCLOSURE
2A.1 Rule 31.4 contains a broad definition of a document. This extends to electronic documents, including e-mail and other electronic communications, word processed documents and databases. In addition to documents that are readily accessible from computer systems and other electronic devices and media, the definition covers those documents that are stored on servers and back-up systems and electronic documents that have been ‘deleted’. It also extends to additional information stored and associated with electronic documents known as metadata.
2A.2 The parties should, prior to the first Case Management Conference, discuss any issues that may arise regarding searches for and the preservation of electronic documents. This may involve the parties providing information about the categories of electronic documents within their control, the computer systems, electronic devices and media on which any relevant documents may be held, the storage systems maintained by the parties and their document retention policies. In the case of difficulty or disagreement, the matter should be referred to a judge for directions at the earliest practical date, if possible at the first Case Management Conference.
2A.3 The parties should co-operate at an early stage as to the format in which electronic copy documents are to be provided on inspection. In the case of difficulty or disagreement, the matter should be referred to a Judge for directions at the earliest practical date, if possible at the first Case Management Conference.



The Practice Direction was introduced following the recommendations of the working party chaired by Mr Justice Cresswell, whose report provides a useful background both to the Practice Direction and to the move in England towards a special regime for e-discovery. The Cresswell Report was released on 6 October 2004. Its purpose was to investigate, and make recommendations as to, the particular problems thrown up by the disclosure of emails and other electronic documents and how the current Civil Procedure Rules and Commercial Court Guide on disclosure apply to electronic documents.25 The Committee reviewed the case of Zubulake v UBS Warburg LLC26 as well as the Sedona Principles of January 2004. Even prior to the Cresswell Report, the Courts in England were being challenged by the developing Digital Paradigm. In the case of Derby & Co Ltd v Weldon (No 9)27 Vinelott J held that a computer database, so far as it contains information capable of being retrieved and converted into readable form, is a document (within the meaning of Rules of the Supreme Court Order 24). He stated the principle as follows:

It must, I think, apply a fortiori to the tape or disc on which material fed into a simple word processor is stored. In most businesses, that takes the place of the carbon copy of

2A.4 The existence of electronic documents impacts upon the extent of the reasonable search required by Rule 31.7 for the purposes of standard disclosure. The factors that may be relevant in deciding the reasonableness of a search for electronic documents include (but are not limited to) the following:
(a) The number of documents involved.
(b) The nature and complexity of the proceedings.
(c) The ease and expense of retrieval of any particular document. This includes:
(i) The accessibility of electronic documents or data including e-mail communications on computer systems, servers, back-up systems and other electronic devices or media that may contain such documents taking into account alterations or developments in hardware or software systems used by the disclosing party and/or available to enable access to such documents.
(ii) The location of relevant electronic documents, data, computer systems, servers, back-up systems and other electronic devices or media that may contain such documents.
(iii) The likelihood of locating relevant data.
(iv) The cost of recovering any electronic documents.
(v) The cost of disclosing and providing inspection of any relevant electronic documents.
(vi) The likelihood that electronic documents will be materially altered in the course of recovery, disclosure or inspection.
(d) The significance of any document which is likely to be located during the search.

2A.5 It may be reasonable to search some or all of the parties’ electronic storage systems. In some circumstances, it may be reasonable to search for electronic documents by means of keyword searches (agreed as far as possible between the parties) even where a full review of each and every document would be unreasonable. There may be other forms of electronic search that may be appropriate in particular circumstances.

25  A Report of a Working Party Chaired by the Honourable Mr Justice Cresswell, above n 1, para 1.1.
26  Zubulake v UBS Warburg LLC (2003) 217 FRD 309.
27  Derby & Co Ltd v Weldon (No 9) [1991] 2 All ER 901.


Digital Information

outgoing letters which used to be retained in files. Similarly, there can be no distinction in principle between the tape used to record a telephone conversation in Grant v Southwestern and County Properties Ltd [1974] 2 All ER 465, [1975] Ch 185, which was an ordinary analogue tape on which the sound wave is, as it were, mimicked by the pattern of chemical deposit on the tape, and a compact disc or digital tape on which sound, speech as well as music, is mapped by co-ordinates and recorded in the form of groups of binary numbers. And no clear dividing line can be drawn between digital tape-recorded messages and the database of a computer on which the information which has been fed into the computer is analysed and recorded in a variety of media in binary language.28

The former Rules, now replaced by the Civil Procedure Rules, were changed to adopt a more extensive definition of a document to encompass material such as databases and disks holding information in electronic form.29 These developments were important because they made it clear that the meaning of document was no longer to be restricted to writing upon paper, and a more conceptual approach was adopted. By the time of the Cresswell Report such a conceptual approach had been adopted in Australia and, as has been discussed, in the United States.

The Cresswell Report identified a number of reasons why discovery in the electronic space presented difficulties. These reasons include the huge volume of documents which are created and stored electronically, the ease of duplication of electronic documents, the lack of order in the storage of electronic documents, the differing retention policies of the parties, the existence of metadata and the fact that electronic documents are more difficult to dispose of than paper documents.30 After considering the experience of the United States and the 2004 Sedona Principles, the report concluded that although some of the problems with e-discovery had been identified there did not appear to be a coherent pattern of decisions. In addition, it considered that the Sedona Principles were not suitable for wholesale adoption in England. However, when one considers the Sedona Principles alongside the Cresswell recommendations, it is clear that elements of Sedona were considered useful for adoption. In this respect it is clear that the Sedona Conference was influential in the consideration of e-discovery matters.

There was, however, a problem with the way in which the changes were introduced in 2005. The drafting of the Practice Direction was perfectly satisfactory but in a sense the importance of the e-discovery elements was understated and often overlooked by lawyers and Judges.
One commentator has suggested that the e-discovery provisions were treated as optional because almost no judge wanted to touch the subject ‘and there was a subliminal conspiracy which meant that nobody raised it—after all, if the judge is not interested, which lawyer is likely to provoke 28  ibid at 905. The decision of Vinelott J was followed in a number of cases including, for example, Victor Chandler International v Customs & Excise [2000] 2 All ER 315. 29  RSC R 31.4. The Rule extends to cover computer databases and emails, word processed documents, imaged documents and metadata held on computer databases. It also covers electronically recorded communications and activities such as instant messaging on online systems and multi-media files including voicemail and videos. 30  Cresswell Report, above n 1, para 3.3.



judicial interference?’31 This meant that early e-discovery cases were retrospective in nature, dealing with matters which, had proper attention been given to the Direction, need not have occurred. The case of Digicel (St Lucia) Ltd et al v Cable & Wireless PLC et al32 provides a useful illustration of the way in which the English Court had to deal with e-discovery from a retrospective position. The case is also useful because it provides a discussion of the background to the Rules together with a discussion of the Cresswell Report and the approaches of the US Courts.

The case was about specific disclosure of certain classes of electronic documents. One of the orders sought was for restoration of back-up tapes to allow a search for the email accounts of former employees. Another addressed electronic documents that had already been disclosed and sought an order that the defendants carry out a further search across those documents by reference to a set of additional key words/phrases as identified by the claimants—an application for additional search terms. The basic issue was whether or not the scope of disclosure sought was reasonable within the meaning of Rule 31.7 which dealt with the duty of search. This had to be considered against a background of the clear provisions of the Practice Direction dealing with what is referred to in the United States as the obligation to consult and confer. The Court observed:

[T]he parties should at an early stage in the litigation discuss issues that may arise regarding searches for electronic documents. Paragraph 2A.5 of the PD states that where key word searches are used they should be agreed as far as possible between the parties. Neither side paid attention to this advice. In this application the focus is upon the steps taken by the Defendants.
They did not discuss the issues that might arise regarding searches for electronic documents and they used key word searches which they had not agreed in advance or attempted to agree in advance with the Claimants. The result is that the unilateral decisions made by the Defendants’ solicitors are now under challenge and need to be scrutinised by the Court. If the Court takes the view that the Defendants’ solicitors’ key word searches were inadequate when they were first carried out and that a wider search should have been carried out, the Defendants’ solicitors’ unilateral action has exposed the Defendants to the risk that the Court may require the exercise of searching to be done a second time, with the overall cost of two searches being significantly higher than the cost of a wider search carried out on the first occasion.33

However, the defendants argued that what was a reasonable search had to be decided by the solicitor in charge of the disclosure process. With that proposition Morgan J agreed but pointed out that the Practice Direction made it clear that some parts at least of the process ought to be discussed with the opposing solicitor with a view to achieving agreement so as to eliminate, or at any rate reduce, the risk

31  Chris Dale ‘A Tribute to Former Senior Master Steven Whitaker’ The E-Disclosure Project 10 April 2014 32  Digicel (St Lucia) Ltd et al v Cable & Wireless PLC et al [2008] EWHC 2522 (Ch), [2009] 2 All ER 1094. 33  ibid para [47].



of later dispute. If a solicitor’s decision as to what is a reasonable search is later challenged on a specific disclosure application, the Court may well be influenced, in the solicitor’s favour, if it sees that the solicitor was very fully informed as to the issues arising in the case, and had made a fully considered decision applying all the factors in Rule 31.7 and paragraph 2A.4 of the Practice Direction.34

Ultimately, however, it was for the Court to decide what was a reasonable search, either in advance or with the benefit of hindsight. In doing so, the Court had to act on its own view rather than review the decision-making process of the solicitor involved, which would deflect the enquiry from determining what was a reasonable search. The Court also observed that disclosure was a continuing process and later events might require a different approach from that earlier determined by a solicitor.

As far as the back-up tapes were concerned, Morgan J found that there had not been a reasonable search. The defendants had omitted to search for the email accounts of seven specified individuals to the extent that those accounts might exist in the back-up tapes. The issue was how the order would be crafted. Essentially the Judge adopted a ‘consult and confer’ approach. The solicitors for the parties were to meet almost immediately to discuss how the restoration exercise could best be accomplished. Experts could be present and minutes kept of the discussions. After the discussions the defendants were to embark, so far as was reasonably practicable, upon restoration of the back-up tapes for the purpose of identifying and enabling a search of relevant e-mail accounts. The Court expected the parties’ solicitors to cooperate fully with each other, to maintain a dialogue, and to exchange questions and answers as to whether anything further could or should be done.

The approach of the defendants to the use of keywords to narrow the number of documents that needed to be reviewed came in for criticism from Morgan J. They had acted unilaterally in choosing keywords and conducting the search. This disregarded the clear advice in Part 31 of the Practice Direction and meant that the defendants could be exposed to a court order requiring a further search.
The Judge examined the keywords that were used and considered that some eight other search terms should have been included. This finding emphasises the importance of compliance with ‘consult and confer’ obligations and the expectation enshrined in the Rules that counsel co-operate rather than act unilaterally. The case is an interesting and early example of the shift in approach occasioned by the Rules and the realities of managing discovery in the Digital Paradigm. The importance of proactive case management and the use of judicially crafted tools to assist in e-discovery in the form of the ESI (Electronically Stored Information) Questionnaire is demonstrated in the decision of Senior Master Steven Whitaker in Goodale v Ministry of Justice.35 The case sets out the problems raised by electronic documents and the rules which cover them together with a clear

34  35 

ibid para [51]. Goodale v Ministry of Justice [2009] EWHC B41 (QB).



template for the type of analysis that a judge needs to undertake once it is clear that there are e-discovery issues. Goodale involved a Group Litigation Order—a type of case that requires cost-effective management of multiple but often low-value claims; the fact that nearly all the documents tend to lie with the defendants; the fact that they often involve questions of comparative treatment or long-term policy, and so may turn on documents which are scattered in both storage and creation terms. In addition, they often involve public money which gives the state an additional incentive in managing the process.

The case involved the complaints of several convicted prisoners admitted into the prison system in England and Wales over a period of time since 2000, who were, at the time they were admitted, dependent on opiates, either because they were being weaned off them in the community by being administered doses of methadone or because when admitted they were dependent on illicit ‘street’ drugs such as heroin. The complaint was that, because of systemic failures by the defendant in the prisons to which these prisoners were admitted, there was a policy to submit them to a ‘one size fits all’ detoxification regime rather than, in the case of those already on methadone, to continue them on that treatment or, in the case of those dependent on street drugs, to offer them methadone treatment.

The main problem was the lack of agreement by the defendants to the production of electronic documents. They did not want to carry out a search for ESI and argued that it would be disproportionate to do so, especially given the time frame within which the issues arose. Although there is no difference as to the discovery test between paper documents and ESI, ESI differs in that it is of greater volume, more easily created, often duplicated and more difficult to find. 
Master Whitaker’s decision is a brief one—28 paragraphs—but it is a significant one and every paragraph is important. Master Whitaker is careful to spell out not merely what must be done but why it is necessary in this case that it should be done, and goes on to set out how disclosure might be handled in a way which goes to the most likely sources of useful information first. In the absence of agreement by the parties, Master Whitaker set out an approach which could be followed, with recommendations for further steps. But significantly the parties were required to complete the questionnaire which would give the Court adequate information should there be a need for further applications for discovery directions. The obligations of the parties are set out. Emphasis is placed upon the fact that the burden must be limited to what is necessary although, like Digicel, he recognises that discovery is an on-going process which may need to be revisited. Master Whitaker clearly demonstrated a familiarity with the available technology that could be utilised and that might reduce the volume of material to a sensible level.

The decision ends with an order for the completion of the questionnaire as a means of providing the claimant and the Court with the necessary information in a structured manner. This emphasises the importance of gathering sufficient information for the parties and the court to determine the most effective way of managing the case within the scope of the Rules, using the discretions provided in the Rules to meet their objective.



III.  Common Themes in the Development of E-discovery in Asia-Pacific Jurisdictions

A.  Introductory

Australia, Singapore, Hong Kong and New Zealand36 have all adopted special rules to address the e-discovery phenomenon. Australia, Hong Kong and Singapore have addressed the issue within existing rule structures by the provision of special protocols or Practice Directions dealing specifically with e-discovery and which run parallel with what may be termed conventional discovery rules. New Zealand followed a different course and changed its rules, replacing earlier discovery provisions in the High Court Rules with a new set of rules that not only changed the scope of discovery—moving it away from Peruvian Guano type discovery to an issue relevance system—but also provided for a universal approach to discovery be it conventional or e-discovery.37

The institution of the new Rules followed the issue of a consultation paper in September 2009 containing proposals for the reform of the law of discovery. Following submissions on a number of options, a new set of discovery rules and schedules to the High Court Rules were put in place in 2012. The District Court Rules were amended to reflect the High Court provisions in 2014. The various protocols, Practice Directions and rules in the Asia-Pacific (APAC) region are largely similar but contain certain variations. This section demonstrates some of the common themes and the impact of some of the variations.

B.  Engagement Threshold

A characteristic of the protocol or Practice Direction approaches to e-discovery is the provision of an engagement threshold. In Australia the Federal Court Practice Note requiring a plan for the utilisation of electronic documents at a hearing and e-discovery in general will be engaged where some 200 or more documents relevant to the proceeding have been created or stored in electronic format and where the use of technology and document management will facilitate the quick, inexpensive and efficient resolution of the matter. In Singapore and in Hong Kong an engagement threshold based upon the sum in dispute is one of the criteria that has to be taken into account. In Singapore the


36  All common law countries in the Asia Pacific (APAC) region.
37  For a detailed discussion of the New Zealand Rules see David Harvey and Daniel Garrie, ‘E-Discovery in New Zealand Under the New Amended Rules’ (2012) 9 Digital Evidence and Electronic Signature Law Review 7 and David Harvey, ‘Reasonable and Proportional Discovery in the Digital Paradigm’ (2014) 3 Journal of Civil Litigation and Practice 103.


dispute must involve a claim or a counterclaim of more than SGD 1 million. In Hong Kong the claim or counterclaim must exceed HKD 8 million. Singapore and Hong Kong, like Australia, have a document threshold as well. In Singapore the Practice Direction will be engaged where discoverable documents exceed 2,000 pages. In Hong Kong the document threshold is one where the case requires the parties to search for at least 10,000 documents. In Singapore a further criterion is that discoverable documents are predominantly in electronic format. Both Hong Kong and Singapore vest a power in the Court to order that the Practice Direction may be applicable. In Hong Kong the parties may also opt in to the Practice Direction should they so desire.

The Singapore Practice Direction first came into effect in 2009 and at that stage engagement occurred where the parties opted in. In 2012 the Practice Direction was reviewed and the opt-in provision was dropped in favour of a criteria-based approach. The Hong Kong Practice Direction came into effect in 2014 and was due for review in 2015. Some concerns have been expressed about an absence of judicial proactivity or enthusiasm for the 2014 Practice Direction. In New Zealand the Rules engage automatically as a result of the universal approach to discovery.
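The thresholds just described can be sketched as simple decision functions. This is an illustrative sketch only: the function and parameter names are invented, and the criteria are treated mechanically and disjunctively, whereas under the actual Practice Directions they are factors for the court rather than a rigid test.

```python
# Illustrative only: encodes the engagement thresholds described in the text.
# Function and parameter names are invented for this sketch.

def singapore_pd_engaged(claim_sgd: float, pages: int, mostly_electronic: bool) -> bool:
    """Criteria under the post-2012 Singapore Practice Direction (simplified):
    claim or counterclaim over SGD 1 million, more than 2,000 pages of
    discoverable documents, or documents predominantly in electronic form."""
    return claim_sgd > 1_000_000 or pages > 2_000 or mostly_electronic

def hong_kong_pd_engaged(claim_hkd: float, documents: int, opted_in: bool) -> bool:
    """Hong Kong Practice Direction (simplified): claim or counterclaim over
    HKD 8 million, a search of at least 10,000 documents, or the parties opt in."""
    return claim_hkd > 8_000_000 or documents >= 10_000 or opted_in

def australia_fc_engaged(electronic_documents: int) -> bool:
    """Federal Court of Australia Practice Note (simplified): some 200 or more
    relevant documents created or stored electronically."""
    return electronic_documents >= 200

print(singapore_pd_engaged(claim_sgd=1_500_000, pages=500, mostly_electronic=False))  # True
print(hong_kong_pd_engaged(claim_hkd=2_000_000, documents=3_000, opted_in=False))     # False
```

In both Singapore and Hong Kong the court retains a power to order that the Practice Direction applies even where no threshold is met, which is a further reason the sketch should be read as indicative rather than determinative.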

C.  Court and Judicial Management

A second theme common throughout the rules is a shift away from a ‘parties driven’ approach to discovery—where conduct of discovery in the litigation is in the hands of the parties who seek the direction of the court in cases of dispute—to a court management and judicial control approach to discovery. All of the APAC e-discovery rules provide for a case management conference at which time the parties must have their discovery and document management proposals in place. The parties must satisfy the judge that proper consideration of discovery issues has been undertaken and that progress has been made in considering the nature of material to be discovered, the processes by which this may be undertaken and some of the technological solutions that may be available. Where technology is to be used it is the expectation of the court that there will be some agreement as to search methods and search terms.

The case management conference is a critical milestone in the e-discovery process. The management role of the court and judge is emphasised and the case management conference provides an opportunity for supervision by the court of the discovery process. This is quite a different role for the court when compared with what may be termed conventional discovery but it is of interest that in the protocol or Practice Direction jurisdictions of Australia, Hong Kong and Singapore conventional discovery processes sit alongside specific provisions for e-discovery. This means that



different standards and management approaches apply to different types of discovery. This is not the case in New Zealand, where no distinction is made between conventional discovery and e-discovery in terms of the supervisory role undertaken by the court: all cases—e-discovery or not—go to a case management conference and the judge must be satisfied that proper arrangements have been made to effect discovery, whether by conventional means or by the utilisation of electronic systems.

D.  Consult, Confer, Cooperate

By the time the milestone of the case management conference has been reached the parties will necessarily have had to consult and confer. Consult and confer requirements are common to all of the APAC e-discovery regimes. This requirement emphasises the shift away from confrontation, adversarialism and the ‘cards close to the chest’ approach to litigation that was a feature of the pre-digital age and recognises that there must be cooperation to ensure proper compliance with the rules and a cost-effective approach.

The theme of cooperation, effected by consult and confer requirements, is necessary to achieve a further theme common to all of the APAC e-discovery rules: that of ensuring that the scope of discovery is reasonable and proportional. This is a shorthand way of stating that the costs of discovery must not spiral out of control; that the scope of discovery should be reasonable having regard to factors such as the sums in dispute, the documents to be discovered and the issues to be determined; and that the scope of e-discovery and the consequential costs should not be disproportionate to the matters in dispute.

E.  Reasonableness and Proportionality Approaches

The extent of reasonableness and proportionality of an e-discovery exercise may be governed by the particular circumstances of the case. Proportionality has been the subject of guidelines in Singapore and Hong Kong. In Singapore the matters to which regard must be had in determining proportionality and economy include:

a) the number of documents involved;
b) the nature of the case and the complexity of the issues;
c) the value of the claim and the financial position of each party;
d) the ease and expense of retrieval of any particular electronically stored document or class of electronically stored documents including:
   i. the accessibility, location and likelihood of locating any relevant documents;
   ii. the costs of recovering and giving discovery and inspection including the supply of copies of any relevant documents;



   iii. the likelihood that any relevant documents will be materially altered in the course of recovery, or the giving of discovery or inspection;
e) the availability of electronically stored documents or class of electronically stored documents sought from other sources; and
f) the relevance and materiality of any particular electronically stored document or class of electronically stored documents which are likely to be located to the issues in dispute.

Under the Hong Kong rules proportionality will depend upon:

a) the number and significance of electronic documents;
b) the nature and complexity of the proceeding;
c) the ease and expense of document retrieval.

F.  The Checklist Approach

Cooperation and consultation require a continued awareness of reasonableness and proportionality. To this end all of the rules contain a reference to a checklist or guideline document contained in an appendix to the rules. A consideration of the various rule structures in all of the APAC jurisdictions suggests that the rules themselves are broad brush statements of principle. The detail involved in ensuring compliance with the rules, and the matters that have to be considered, is contained in the various checklists. In Australia the related materials with the Practice Note include:

a) a pre-discovery conference checklist;
b) a default document management protocol;
c) an advanced document management protocol;
d) a pre-trial checklist.

The Australian rules not only address e-discovery issues but also document management and presentation at a court hearing. It is for this reason that the document management protocols are included among the checklists and associated guidance documents. In New Zealand a separate document management protocol dealing with document presentation at court was recently introduced.

The Singapore Practice Direction contains a checklist of issues for what is described as good faith collaboration between the parties—the Singaporean expression for cooperative ‘confer and consult’ obligations upon counsel. In addition, an agreed electronic discovery plan is provided for, containing details of search terms, the scope and format of a list of documents, review for privileged material and the method of inspection. The Hong Kong rules provide for an Electronic Document Discovery Questionnaire or EDDQ which is based upon the English e-discovery questionnaire. In addition there is a sample protocol for discovery of electronic



documents. In New Zealand the rules provide for a checklist of matters that counsel must take into account in approaching electronic or indeed any discovery. One of the advantages of the questionnaire or checklist approach is that such documents can also provide an agenda for the judge at case management conference. Such an approach means that counsel who have been lax in their consult and confer obligations or in their consideration of e-discovery approaches will face a careful and critical judicial examination at the case management conference.

G.  Early Case Assessment

A further theme within the various APAC e-discovery rule systems is that of the necessity for early case assessment and the identification of data sources. In Hong Kong the parties must serve a draft of an electronic document questionnaire with their initial pleadings. Thus the court expects parties to consider discovery issues as soon as litigation is contemplated. The Early Case Assessment requirement is not so clearly spelled out in the Singapore Practice Direction although in my view, because e-discovery must be seen as a process—and that theme is common to all of the APAC rule systems—it would be foolhardy for counsel to delay discovery assessments. Under the New Zealand rules it is necessary to provide copies of essential documents when the statement of claim is filed, thus emphasising the need for counsel to give early consideration to discovery obligations.

Early case assessment has always been a critical part of the e-discovery process and it is of interest that the most recent Electronic Discovery Reference Model (EDRM) emphasises the importance of information governance as a first stage in the e-discovery process, preceding identification and early case assessment.

Figure 1:  Electronic Discovery Reference Model



IV.  The Rules and Utilisation of Technology

The various rule systems in the APAC region necessarily contemplate the utilisation of technology, starting with basic keyword searching all the way through to technology assisted review (TAR). Some of these terms are defined in the Hong Kong and New Zealand rules with some specificity. The Hong Kong and Singapore rules seem to suggest that preliminary evaluation and development of data sets is done by keyword searching and emphasis is placed upon the elimination of document duplication. The New Zealand rules go into some detail in the glossary of technological terms, if only to ensure that the parties are clear about what the various technological solutions may achieve. The level of technological sophistication must of course depend upon issues such as reasonableness and proportionality. Expensive and time-consuming solutions are not necessarily going to be practical in the case of the small claim.

The Singapore rules appear to be technology neutral in that there is reference only to the development of search terms together with the prevention of duplication and limiting recovery of documents. The Hong Kong rules go into a little more detail by suggesting tools to be used to achieve particular objectives such as the categorisation of documents, date ranges and the like. The Hong Kong rules also suggest processes for agreeing upon keywords for searches, concept searches and data sampling together with provision for the identification of privileged or non-discoverable documents and associated redaction techniques. Both the Singapore and Hong Kong rules suggest a staged discovery process in terms of the utilisation of technologies. Initial discovery or general discovery may take the parties to a certain point and a further stage of discovery may be necessary to advance the matter.
An advantage of a staged discovery approach is that the parties can continually keep an eye upon issues of reasonableness and proportionality and of course cost. In New Zealand there is provision for a tailored discovery approach that may be made by agreement with the assistance of a judge. A tailored discovery order may limit or restrict discovery or alternatively, if justification can be advanced, broaden the scope of discovery even so far as a Peruvian Guano type of discovery. Such an approach would rarely be granted in my view and would have to be demonstrably necessary.
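The elimination of duplicate documents, which both the Singapore and Hong Kong rules emphasise, is in practice commonly achieved by comparing cryptographic hashes of document content, so that only one member of each group of exact duplicates need be reviewed. A minimal sketch under that assumption follows; the function name and the sample documents are invented for illustration, and near-duplicates (the same letter with a different footer, for instance) require fuzzier techniques than this.

```python
import hashlib

def dedupe(documents: dict[str, bytes]) -> dict[str, list[str]]:
    """Group document IDs by the SHA-256 hash of their content.

    Exact duplicates share a hash, so only one member of each group
    needs review and the rest can be suppressed from the set."""
    groups: dict[str, list[str]] = {}
    for doc_id, content in documents.items():
        digest = hashlib.sha256(content).hexdigest()
        groups.setdefault(digest, []).append(doc_id)
    return groups

docs = {
    "email-001": b"Please find the contract attached.",
    "email-002": b"Please find the contract attached.",  # exact duplicate of email-001
    "memo-003":  b"Board minutes, 3 March.",
}
groups = dedupe(docs)
unique = [ids[0] for ids in groups.values()]
print(len(docs), "documents,", len(unique), "unique")  # 3 documents, 2 unique
```

Even this trivial reduction illustrates why de-duplication is usually the first culling step: it shrinks the review set without any risk of excluding a relevant document, since every suppressed item is byte-identical to one that remains.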

A.  The Use of Technical Expertise

The Hong Kong rules make provision for technical expertise to be obtained to address the following matters prior to the first case management conference:

a) the categories of ESI that are within the control of the parties or contained in their computer systems and devices;
b) the scope of a reasonable search of ESI;



c) the deployment of techniques to reduce the burden and costs of discovery of ESI, such as key word or automatic searching, the elimination of duplicated documents and the identification of and dealing with privileged material;
d) the preservation of ESI;
e) the formats in which lists of documents in ESI are to be produced; and
f) the digitisation of paper documents.

B.  Technology Solutions—Keywords and TAR

In general the Rules or Practice Notes do not specify the use of technology or, if they do, it is by oblique reference. Singapore does specifically mention the use of keywords as a preliminary or first step in isolating relevant material and as a prelude to the possible use of other search methods at a later stage. It does seem that a sub-text of the Singapore approach is that the careful and proper use of keyword searching may eliminate any further technological steps in the discovery process.

It is when one examines the checklists, which are in many respects the engine room of the developing e-discovery regimes, that one sees a reference to the use of technology and how this may be employed. The New Zealand Rules provide an example. Schedule 9 Part 1 3(2)(a)(ii) provides:

Methods and strategies for locating documents: [The parties must] seek agreement on what methods and strategies are appropriate to conduct a reasonable and proportionate search for the documents as identified in paragraph (a), including (but not limited to) the following:
(A) appropriate keyword searches; and
(B) other automated searches and techniques for culling documents (including concept searching, clustering technology, document prioritisation technology, email threading, and any other new tool or technique); and
(C) a method to be used to identify duplicate documents; and
(D) whether specialist assistance is required to locate documents efficiently and accurately;

It will be noted that specific technologies are mentioned in this part of the checklist and, to assist, the New Zealand Rules also contain a glossary of terms which includes a non-exclusive list of technologies that might be employed.38 I shall now consider two of these technologies in detail.

38  Such as clustering, concept searching, de-duplication, document description, document prioritisation technology (predictive coding), e-mail threading and keyword search, along with metadata and native electronic document or native file format.

The Rules and Utilisation of Technology


i.  Keyword Searching

The use of keyword searching, as I have observed, appears in Rules or in checklists. Keyword searching is a fairly blunt instrument. Keywords create a black or white scenario based upon whether or not a document contains a particular word. The difficulty with keyword searching is that it may result in irrelevant documents being identified, because the keyword selected may have a different meaning or context from the one desired. The important thing to remember with keyword searching is that the construction of the search itself is critical, together with an understanding of the limitations of the method. Ideally, the construction of the search string or keywords should be discussed with the other parties so that the keywords may be agreed. Because of its limitations, keyword searching is not an ideal method of culling and filtering documents, and other automated searches may be preferable. But, under the principles of the rules, if keyword searching is to be used it is important to agree an approach with the other side to avoid conflict. Nevertheless, keyword searching has its place and purpose in the overall scheme of e-discovery. The use of keyword searching is very likely to be on the agenda of a case management conference. A court should look carefully at whether keyword searching, particularly in big cases, is appropriate and whether more sophisticated techniques and software should be used. Judges should not decide on keywords without evidence of the number of ‘hits’ particular terms throw up.39 The UK Practice Direction warns against the dangers of keyword searching.
In ‘10 Key E-Discovery Issues in 2011’, Lender and Peck observed that, following on decisions by Magistrate Judges Facciola and Grimm, Judge Peck’s decision in William A Gross Construction Assoc v American Manufacturers Mutual Ins Co, 256 FRD 134, 134, 136 (SDNY 2009) (Peck, MJ), constituted a ‘wake up’ call to the Bar, as follows:

This Opinion should serve as a wake-up call to the Bar in this District about the need for careful thought, quality control, testing, and cooperation with opposing counsel in designing search terms or ‘keywords’ to be used to produce emails or other electronically stored information (‘ESI’) … Electronic discovery requires cooperation between opposing counsel and transparency in all aspects of preservation and production of ESI. Moreover, where counsel are using keyword searches for retrieval of ESI, they at a minimum must carefully craft the appropriate keywords, with input from the ESI’s custodians as to the words and abbreviations they use, and the proposed methodology must be quality control tested to assure accuracy in retrieval and elimination of ‘false positives’. It is time that the Bar—even those lawyers who did not come of age in the computer era—understand this.

Even if the steps suggested in the William A Gross decision are followed, keyword searching will produce less than 50% of responsive ESI. There are more sophisticated search tools available, such as clustering and concept searching techniques instead of or in

39 See William A Gross Construction Associates, Inc v American Manufacturers Mutual Insurance Co 256 FRD 134, 136 (SDNY 2009) (Peck, MJ).



combination with keyword searches that may be considered. We are not aware of any published judicial decision addressing these tools.40

With this strong caution in mind, keyword searches may still be helpful, especially at the Early Case Assessment (ECA) phase of litigation. How can a party go about doing better than just brainstorming keyword terms? A mixed approach should be undertaken, sampling and testing returns for different proposed search criteria. This provides a quantitative approach to estimating how many documents will be returned with different keywords. The next step is to take a sample of the documents returned by the disputed keywords and determine, through manual review, the percentage of relevant documents brought back by the search. This approach involves the construction of superior search keywords through an iterative methodology of keyword searches on a representative sample, followed by manual review of the returned documents from the sample to develop an expanded list of keywords. An approach like this helps to give magistrates and judges a better basis to include or exclude disputed search terms.

In William A Gross Construction Associates, Inc v American Manufacturers Mutual Insurance Co,41 Peck MJ complained that he was in the ‘uncomfortable position’ of having to construct a search term methodology without sufficient input from the parties or the relevant custodian. He ruled that in addition to one party’s proposed terms, the search should incorporate the names of the parties’ personnel involved in the courthouse project, but rejected a much expanded keyword list for which no further justification had been offered. He criticised the parties’ search term methodology as ‘just the latest example of lawyers designing keyword searches in the dark’. All keyword searches are not of like quality.
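The iterative, quantitative approach described above—running proposed keywords, sampling the hits, and manually reviewing the sample to estimate the proportion that are relevant—can be sketched in a few lines of Python. The corpus, keywords and reviewer judgement below are invented toy stand-ins for illustration, not any court-approved methodology:

```python
import random

def keyword_hits(corpus, keywords):
    """Return the documents that contain any of the proposed keywords."""
    return [doc for doc in corpus
            if any(k.lower() in doc.lower() for k in keywords)]

def estimated_relevance(hits, reviewer, sample_size=50, seed=1):
    """Manually review a random sample of the hits and estimate the
    percentage of relevant documents the search brings back."""
    sample = random.Random(seed).sample(hits, min(sample_size, len(hits)))
    return sum(reviewer(doc) for doc in sample) / len(sample)

corpus = [
    "delay claim on the courthouse project",
    "invoice for catering services",
    "courthouse steelwork defect report",
    "staff holiday schedule",
]
reviewer = lambda doc: "courthouse" in doc   # stand-in for the manual review
hits = keyword_hits(corpus, ["courthouse", "defect"])
rate = estimated_relevance(hits, reviewer)
```

A low rate would prompt narrowing or replacing the disputed term; a high rate supports it. Either way, the figure gives the court something quantitative rather than keyword searches designed in the dark.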
Grimm J wrote:

[W]hile it is universally acknowledged that keyword searches are useful tools for search and retrieval of ESI, all keyword searches are not created equal; and there is a growing body of literature that highlights the risks associated with conducting an unreliable or inadequate keyword search or relying exclusively on such searches for privilege review.42

Keywords are often negotiated very early in the case. But as the particular case evolves, new issues and keywords inevitably will be identified. As this will involve more expense for the responding party, going back and negotiating more keywords may be difficult. Before allowing this without consent, a court may want to know why the proposed keyword additions were not identified earlier. This supports the importance of the iterative process of sampling and an early case assessment to understand the case issues and potential case keywords as early as possible.

[F]or lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels

40  D Lender and A Peck, ‘10 Key E-Discovery Issues in 2011: Expert Insight to Manage Successfully’ (Huron Legal Institute) (2011) 19(4) The Metropolitan Corporate Counsel 1 at 5.
41  Above n 39 at 134, 136.
42  Victor Stanley Inc v Creative Pipe Inc 250 FRD 251 at 256–57 (D Md 2008).



fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence.43

This thinking suggests that there may need to be greater use of experts in the area of keyword validation, and the resulting expert evidence foundation challenges, as part of cases involving disputed e-discovery methodology. Thus we have a situation where, rather than reducing disputed hearings, the construction of a keyword search itself becomes a specific form of dispute. This may result in court-ordered keyword search parameters based on expert evidence of a highly technological nature.

Even a properly designed and executed keyword search may prove to be over-inclusive or under-inclusive, resulting in the identification as privileged of documents which are not, and as non-privileged of documents which, in fact, are. The only prudent way to test the reliability of the keyword search is to perform some appropriate sampling of the documents determined to be privileged and those determined not to be, in order to arrive at a comfort level that the categories are neither over-inclusive nor under-inclusive.44

The Victor Stanley case demonstrated the difficulties associated with keyword searches, and the essence of the case is that when parties decide to use a particular search methodology they need to be aware of the strengths and weaknesses of the various tools and at least be aware of the literature that is available.45 The only input from the parties is from an adversarial position, and although the result may well be a compromise it may still not achieve the ultimate goal of locating relevant documents within the reasonable and proportional requirements. In my view resort to the court must be considered an in extremis approach. A proper assessment of available technologies, their strengths and limitations, together with consultation with technical experts, should resolve the matter rather than going before a judge with affidavits and expert evidence.
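The sampling exercise just described can be made concrete. In this illustrative sketch (the keyword screen, the document set and the reviewer's judgements are all invented), random samples are drawn from both sides of a privilege screen: the flagged sample measures over-inclusiveness, and the unflagged sample measures leakage of genuinely privileged material:

```python
import random

def privilege_screen(doc):
    """A deliberately crude keyword screen for privileged documents."""
    return "legal advice" in doc or "counsel" in doc

docs = (
    [f"memo of legal advice on claim {i}" for i in range(30)]
    + ["note to counsel re strategy"] * 5
    + ["draft advice from the solicitors (no screen keyword)"] * 5
    + [f"site inspection report {i}" for i in range(60)]
)
truly_privileged = lambda d: "advice" in d or "counsel" in d  # reviewer's call

flagged = [d for d in docs if privilege_screen(d)]
unflagged = [d for d in docs if not privilege_screen(d)]

rng = random.Random(0)
flagged_sample = rng.sample(flagged, 20)
unflagged_sample = rng.sample(unflagged, 20)

# Over-inclusiveness: share of flagged documents that are not privileged.
over_inclusive = 1 - sum(map(truly_privileged, flagged_sample)) / 20
# Under-inclusiveness: share of unflagged documents that are privileged.
leakage = sum(map(truly_privileged, unflagged_sample)) / 20
```

Here every flagged document really is privileged, but the unflagged sample may reveal the ‘draft advice’ documents the screen misses, which is precisely the comfort-level check described above.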
If a reasonable approach had been taken in Victor Stanley the critique by Judge Grimm would not have been necessary. Indeed, one wonders whether, in light of other more defined search techniques such as Technology Assisted Review or TAR,46 an evidence-based dispute about


43  United States v O’Keefe 537 F Supp 2d 14 at 24 (DDC 2008).
44  Victor Stanley Inc v Creative Pipe Inc above n 42.
45  Such as The Sedona Conference Best Practices Commentary on Search & Retrieval Methods (August 2007); Jason R Baron (ed), ‘The Sedona Conference Best Practices and Commentary on the Use of Search and Information Retrieval Methods in E-Discovery’ (August 2007) 8 The Sedona Conference Journal 189. There are a number of resources online in the form of blogs which deal with developments in the e-discovery field. Chris Dale’s eDisclosure Information Project is an excellent resource. An excellent piece on the strengths and weaknesses of various technologies can be found in Ralph Losey’s commentary on the 2012 Georgetown Advanced eDiscovery Conference.
46  Which will be discussed below at s IV.B.ii.



the parameters or definitions of a keyword search may indeed be reasonable and proportionate. The rules require a co-operative approach by counsel, and this should be encouraged at conference level rather than allowing the matter to escalate to a hearing about the parameters of a keyword search.

Judge Grimm’s concerns in Victor Stanley were echoed by Judge Andrew Peck who, in 2009, took the bold step of instructing the Bar that a ‘wake-up’ call was needed for ‘careful thought, quality control, testing and cooperation with opposing counsel in designing search terms or “keywords” to be used to produce email or other electronically stored information’.47 Any proposed methodology for keywords should be quality control tested to assure accuracy in retrieval and elimination of ‘false positives’.48

ii.  Technology Assisted Review (TAR)

Predictive coding and document prioritisation are technological solutions that come under the umbrella of technology assisted review or TAR. The definition of document prioritisation in the New Zealand High Court Rules Glossary provides a useful example:

Document prioritisation means technology that analyses the decisions of a human review of a sample set of documents. The software then prioritises/ranks the remainder of documents based on the decisions made on the sample documents, which allows the most relevant documents to be identified first.

TAR has been defined as:

A process for prioritizing or coding a collection of documents using a computerized system that harnesses human judgments of one or more subject matter expert(s) on a smaller set of documents and then extrapolates those judgments to the remaining document collection. Some TAR methods use machine learning algorithms to distinguish relevant from non-relevant documents, based on training examples coded as relevant or non-relevant by the subject matter expert(s), while other TAR methods derive systematic rules that emulate the expert(s)’ decision-making process. TAR processes generally incorporate statistical models and/or sampling techniques to guide the process and to measure overall system effectiveness.49
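A minimal, standard-library-only sketch of the document prioritisation idea defined above: per-term weights (smoothed log-odds of a term appearing in relevant versus non-relevant sample documents) are learned from a human-coded sample, and the unreviewed remainder is then ranked so the likeliest-relevant documents surface first. Commercial TAR tools use far more sophisticated models; every name and document here is invented for illustration:

```python
import math
from collections import Counter

def train(sample):
    """sample: list of (text, is_relevant) pairs coded by the reviewer."""
    rel, irr = Counter(), Counter()
    for text, is_relevant in sample:
        (rel if is_relevant else irr).update(set(text.lower().split()))
    n_rel = sum(1 for _, r in sample if r) or 1
    n_irr = (len(sample) - n_rel) or 1
    # Laplace-smoothed log-odds weight for each term seen in the sample.
    return {t: math.log(((rel[t] + 1) / (n_rel + 2)) /
                        ((irr[t] + 1) / (n_irr + 2)))
            for t in set(rel) | set(irr)}

def score(weights, text):
    return sum(weights.get(t, 0.0) for t in set(text.lower().split()))

def prioritise(weights, remainder):
    """Rank the unreviewed documents, likeliest-relevant first."""
    return sorted(remainder, key=lambda d: score(weights, d), reverse=True)

sample = [
    ("steel delivery delay on courthouse project", True),
    ("courthouse foundation defect claim", True),
    ("office christmas party invitation", False),
    ("catering invoice for christmas party", False),
]
remainder = [
    "party planning checklist",
    "delay and defect notice for courthouse works",
]
ranked = prioritise(train(sample), remainder)
```

The point of the exercise is the ordering: the senior lawyer's judgements on the sample are extrapolated so that review effort is spent on the most promising documents first.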

The use of data analytics assists in managing large datasets and reduces the need for time-consuming and expensive manual review,50 which often involves reliance upon junior lawyers who have little detailed knowledge or understanding of the case and who have to handle and read every document in a long and tedious process fraught with error occasioned by human failings.

47  William A Gross Construction Associates, Inc v American Manufacturers Mutual Insurance Co, above n 39; see also Hon Andrew Peck, ‘Search, Forward—Will Manual Document Review and Keyword Searches be Replaced by Computer-assisted Coding?’ Law Technology News (online), 1 October 2011.
48  ibid, Peck.
49  Grossman-Cormack Glossary of Technology-Assisted Review (2013) 7 Fed Courts L Rev 1 at 32.
50  The Sedona Conference Best Practices Commentary, above n 45, 195.



Data analytics programs are not subject to fatigue, eye-strain, interruptions and lapses in concentration. Data analytics tools use every word in every document to assign relevance, the criteria for which are determined by a senior lawyer. Data analytics or TAR programs can include a number of different ‘clever’ technologies, and this is an area in which research is ongoing in order to find even more clever ways of finding what lawyers seek in a repository of documents. These technologies include ‘clustering’, ‘concept searching’, ‘email threading’, ‘near de-duplication’ and ‘predictive coding’. In-built features, such as predictive coding, are being celebrated as the answer to help curtail ever-increasing litigation costs for both in-house and external counsel.

The scope of TAR has been a somewhat contentious area. Some commentators consider TAR has a broad scope that includes the technologies referred to in the previous paragraph. Others use the term Computer Assisted Review—again a widely scoped definition. Others prefer the more limited term ‘predictive coding’ because it refers unambiguously to a specific class of technology and is the name used by most of the software developers in the field.51 The ‘Computer Assisted Review’ process, however, is common to the technologies that fall under its umbrella. The Electronic Discovery Review Model diagram depicted above now includes a standard for Computer Assisted Review.

Figure 2:  Computer Assisted Review Reference Model

There are three types of computer-based or TAR tools—Continuous Active Learning (‘CAL’), Simple Active Learning (‘SAL’) and Simple Passive Learning (‘SPL’).52 All three involve ‘training’ the system to find documents that have been pre-defined by the legal team as relevant. A training dataset of a fixed number of documents is coded as relevant or irrelevant and provides the basis for the software to determine the relevance of documents in a larger dataset. The process is repeated until the review team is satisfied that a sufficient number of relevant documents have been located.

51  Chris Dale, ‘Ralph Losey on the Georgetown TAR/CAR/Predictive Coding Panels’ (11 December 2012) The eDisclosure Information Project, ralph-losey-on-the-georgetown-tar-car-predictive-coding-panels/.
52  Gordon V Cormack and Maura R Grossman, ‘Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery’, at: WLRK.23339.14.pdf.



The difference between the three processes lies in whether randomly selected documents are used, or whether the set of documents has been located via a non-random method such as basic keyword searching. In the CAL method, the initial training set (1,000 documents in Cormack and Grossman’s evaluation) is selected using keyword searches, and the documents that are coded by the lawyer are used to train a learning algorithm, which scores each document in the collection by the likelihood of it being relevant. In SAL, the initial set of documents can be selected randomly or non-randomly, but subsequent document sets for coding by the reviewer are selected based on those about which the learning algorithm is least certain. With SPL, the document set is selected randomly and relies on the review team to work on an iterative basis until there is some certainty that the review set is ‘adequate’.

Predictive coding as a unique technological e-discovery solution is described by Chris Dale as follows:

There are perhaps three levels at which one can describe what predictive coding is and does. One level relies heavily on statistics, with detailed discussion about seed sets, precision, recall and the F-Measure, supported by equations demonstrating the validity of the process and its defensibility. This is fundamental to the acceptability of the process, but not necessarily the most easily-assimilated level for those new to the subject. At the other extreme, one can describe the process in broad, high-level terms without descending into the detail, making it clear that the statistical underpinning exists and that the better applications have tools which allow the lawyers to check their results and, if necessary, to prove their validity. That is my own approach, consistent with my own position as a) a translator for lay audiences and b) a mathematical dunce, in common with many of the lawyers who should be interested in predictive coding.53
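The practical difference between the three protocols is how the next batch of documents is chosen for human coding. A schematic sketch, where `scores` stands in for the learner's current estimate of each document's relevance probability (all names and figures are invented):

```python
import random

def next_batch(protocol, scores, batch_size=2, seed=0):
    """Choose which documents the reviewer codes next under each protocol."""
    docs = list(scores)
    if protocol == "CAL":   # continuous active learning: likeliest-relevant first
        return sorted(docs, key=lambda d: scores[d], reverse=True)[:batch_size]
    if protocol == "SAL":   # simple active learning: least-certain (nearest 0.5) first
        return sorted(docs, key=lambda d: abs(scores[d] - 0.5))[:batch_size]
    if protocol == "SPL":   # simple passive learning: random selection
        return random.Random(seed).sample(docs, batch_size)
    raise ValueError(f"unknown protocol: {protocol}")

scores = {"doc1": 0.95, "doc2": 0.52, "doc3": 0.10, "doc4": 0.48}
cal = next_batch("CAL", scores)
sal = next_batch("SAL", scores)
spl = next_batch("SPL", scores)
```

CAL keeps feeding the reviewer the best candidates (so relevant documents surface early), SAL spends the reviewer's time where the model is most uncertain, and SPL simply samples at random and iterates.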

Interestingly enough, the approach of the courts was initially to focus upon the term ‘predictive coding’,54 but that has moved, especially in the case of Judge Andrew Peck, towards the use of the term ‘technology assisted review’.55 One of the advantages of the term TAR is that it embraces broader concepts and moves away from specific software providers.56 Predictive coding has a better track record in the production of responsive documents than human review.57 Although both predictive coding and human review fell short of identifying for production all of the documents the parties in

53 ibid.

54  Da Silva Moore v Publicis Groupe 11-civ-1279 (ALC) (AJP), US Dist LEXIS 23350 (SDNY 24 Feb 2012); Federal Housing Finance Agency v HSBC North America Holdings Inc et al 2014 WL 584300.
55  Rio Tinto plc v Vale SA et al (2015) No 14, Civ 3042 (SDNY).
56  Chris Dale, ‘TAR-red with the same brush in the US and Ireland’ (11 March 2015) eDisclosure Information Project.
57  Federal Housing Finance Agency v HSBC North America Holdings Inc et al, above n 54.



litigation might wish to see, ‘no one should expect perfection for this process’.58 Judge Cote observed that parties in litigation are required to act in good faith during discovery and that production of documents can be a herculean undertaking, often requiring clients to pay vast sums of money. All that can be expected, said her Honour, was a ‘good faith, diligent commitment to produce all responsive documents uncovered when following the protocols to which the parties have agreed, or which a court has ordered’.59

Federal Housing Finance Agency was decided in 2014 and demonstrates that the use of technology such as predictive coding is becoming an accepted method of review during discovery and indeed can be more accurate than human review. But the starting point for any consideration of the approach of the courts to predictive coding must be the decision of Judge Andrew Peck in Da Silva Moore.60 In this case the parties agreed to use predictive coding but could not agree on how it should be used. There are a number of significant points that emerge from Judge Peck’s decision.

First, the Judge actively encouraged the use of predictive coding, stating that ‘it may save the producing party (or both parties) significant amounts of legal fees in document review’.61 Thus there is judicial approval of the use of technology.

Second, Judge Peck stated ‘the Court determined that the use of predictive coding was appropriate considering … the superiority of computer-assisted review to the available alternatives (ie, linear manual review or keyword searches)’.62 This takes technology approval a step further and is judicial approval of a particular technology.

Third, the Judge observed ‘the idea is not to make this perfect, it’s not going to be perfect. The idea is to make it significantly better than the alternatives without nearly as much cost.’63 Thus there is a concession that whilst predictive coding is not perfect, it is better than the alternatives.

Fourth, the Court focused on statistically valid results rather than the mathematics of the algorithms. Quoting his article on predictive coding, Judge Peck states: ‘I may be less interested in the science behind the “black box” of the vendor’s software than in whether it produced responsive documents with reasonably high recall and high precision.’64 In this respect the outcomes were more persuasive than the ‘nuts and bolts’ or scientific validity of the technology.

58 ibid. 59 ibid.

60  Above n 54. Da Silva Moore was a case involving allegations of discrimination in the context of an employment dispute. The pre-trial supervision of the case was assigned to Judge Peck and his decision was in the context of resolving discovery issues. Judge Peck’s decision went on appeal and Judge Carter upheld his approach. USDC SDNY 11 Civ 1279 (ALC) (AJP). 61  Above n 54. 62 ibid. 63 ibid. 64 ibid.



Fifth, Judge Peck made clear the need to ‘design an appropriate process, including use of available technology, with appropriate quality control testing, to review and produce relevant ESI’.65

Finally, Judge Peck provided a ‘take away’ for the legal profession:

What the Bar should take away from this Opinion is that computer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review. Counsel no longer have to worry about being the ‘first’ or ‘guinea pig’ for judicial acceptance of computer-assisted review. As with keywords or any other technological solution to e-discovery, counsel must design an appropriate process, including use of available technology, with appropriate quality control testing, to review and produce relevant ESI while adhering to Rule 1 and Rule 26(b)(2)(C) proportionality. Computer-assisted review now can be considered judicially-approved for use in appropriate cases.66

As will be immediately apparent, Judge Peck’s view on the use of technology in e-discovery was not new, nor was it unexpected. There had been a number of earlier signals. The William Gross case was one.67 His article co-written with David Lender was another,68 and a further article, ‘Search, Forward: Will manual document review and keyword searches be replaced by computer-assisted coding?’,69 provided a third signpost, to which the Judge referred at the beginning of and throughout Da Silva Moore.

Judge Peck’s decision was a significant one and was seen as a major shift in the approach that courts might adopt to technology use in discovery. It was followed by similar decisions in Kleen Products70 and Global Aerospace,71 where the Judge not only approved the plaintiff’s use of predictive coding but went on to approve its results. As a result, it became incumbent upon counsel to understand the implications of discovery technologies to properly advise their clients, properly fulfil discovery obligations and properly argue the matter before the court if that became necessary. However, the decision did not necessarily mean that predictive coding became the ‘gold standard’ for e-discovery, and Chris Dale suggested that there would not necessarily be a general judicial view on the use of predictive coding or of any other tools or techniques.72 The principles of reasonableness and proportionality

65  ibid.
66  ibid.
67  Above n 39.
68  Lender and Peck, above n 40.
69  Legal Technology News, Oct 2011, at 25, 29.
70  Kleen Products LLC v Packaging Corporation of America, 10 C 5711, 2012 WL 4498465 (ND Ill 28 Sept 2012).
71  Global Aerospace Inc et al v Landow Aviation, LP dba Dulles Jet Center, et al No CL 61040, 2012 WL 1431215 (Va Cir Ct 23 Apr 2012).
72  Chris Dale, ‘Predictive Coding = Proportionality’ Society for Computers and Law Foundation of IT Law Programme.



which drive the English rules focus more on outcomes than methods, and the English Practice Direction is more in the nature of a road map than a definitive requirement. Furthermore, Dale suggests:

However close becomes the alignment of the procedural rules in the US and in England and Wales, I suspect that we in the UK will never really get our heads round the US idea that one needs judicial blessing (from some court, somewhere) before doing anything new or different. We are not talking here of formal precedent, but of day-to-day case management of the kind which is rarely reported in England and Wales anyway.73

In EORHB Inc et al v HOA Holdings LLC74 the Judge directed that both sides to the dispute use predictive coding despite the fact that neither had asked for it, and in addition the Judge specified a vendor and a software package. This decision was made despite Sedona Principle 6, which states:

[R]esponding parties are best situated to evaluate the procedures, methodologies, and techniques appropriate for preserving and producing their own electronically stored information.75

In March 2015 two decisions were delivered—one from the United States,76 again from Judge Peck, and one from the High Court of Ireland.77 Both decisions resulted in court approval of the use of the technology proposed by one party, subject to protocols which included descriptions of the tools and the processes to be used and the methods for validating the results. In Rio Tinto Judge Peck observed that the case law had developed to the point where it was black letter law that where a producing party wants to utilise TAR for document review, the Court will permit it,78 and he spent a considerable part of the decision analysing the developing case law over the three years since Da Silva Moore. Another interesting matter that emerges from Rio Tinto is the use of the term TAR and the reference to the three methodologies discussed above. Furthermore, the parties in Rio Tinto agreed on a protocol but Judge Peck wrote a

73  Chris Dale, ‘UK Judges and Predictive Coding—open to any proportionate suggestion’ (1 May 2013) The eDisclosure Information Project.
74  EORHB Inc et al v HOA Holdings LLC CA No 7409-VCL (Del Ch 15 Oct 2012).
75  ‘The Sedona Conference Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery’ (2007) 8 Sedona Conference Journal 189, 193.
76  Rio Tinto PLC v Vale SA et al 14 Civ 3042 (RMB) (AJP) 3 March 2015.
77  Irish Bank Resolution Corporation Ltd v Sean Quinn & Ors [2015] IEHC 175 Judgments.nsf/09859e7a3f34669680256ef3004a27de/b40ea52f90e274f380257e18003d15fa?OpenDocument.
78  He also observed that in contrast, where the requesting party has sought to force the producing party to use TAR, the courts have refused. See, eg, In re Biomet M2A Magnum Hip Implant Products Liability Litigation No 3:12-MD-2391, 2013 WL 1729682 & 2013 WL 6405156 (ND Ind Apr 18 & Aug 21, 2013); Kleen Products LLC v Packaging Corporation of America, 10 C 5711, 2012 WL 4498465 (ND Ill 28 Sept 2012). The Court notes, however, that in these cases, the producing parties had spent over $1 million using keyword search (in Kleen) or keyword culling followed by TAR (in Biomet), so it is not clear what a court might do if the issue were raised before the producing party had spent any money on document review. Rio Tinto above n 76 at fn 1.



reasoned decision ‘because of the interest within the ediscovery community about TAR cases and protocols’, although it does seem that the decision also underscores the courts’ acceptance of the use of technology in e-discovery cases.

In Irish Bank Resolution there was a dispute over the assertion by the plaintiff that TAR would save time and be more cost-effective than the traditional manual/linear method of discovery. The defendant argued that the use of TAR would not capture all relevant documents and was therefore not compatible with the obligation of the party making discovery, which has as its objective target 100 per cent of relevant documents. The issue boiled down to the way in which discovery obligations should be interpreted in light of the Irish rules.79 The Judge found, amongst other things, that the discovery duty was qualified by considerations of proportionality, and went through a number of authorities which related to discovery specifically but also through those which pointed to ‘the inherent power of the courts to adapt in the absence of a specific rule’. After summarising the ‘limited Irish jurisprudence on that topic’ the Judge expressed himself satisfied that: ‘Provided the process has sufficient transparency, technology assisted review using predictive coding discharges a party’s discovery obligations under O 31 r 12’.80 He then considered the plaintiff’s proposed protocol and, crucially, the efforts which had been made by the plaintiff’s lawyers to persuade the defendants of the merits of using TAR, and of the safeguards to be built into the process, concluding that the proposed protocol would be more efficient than manual review in terms of saving costs and time.

The important thing about these decisions is that they rely on principles, including the balance of proportionality and existing duties such as cooperation, which are of much wider significance than the use of any new technology. These principles predated the use of TAR.
Furthermore, the courts have a considerable degree of flexibility in crafting discovery orders. The English rules contain a provision that the court ‘may take any other step or make any other order for the purpose of managing the case and furthering the overriding objective’81 and a principle of ‘overriding objective’—echoed by the statement in Irish Bank referring to the inherent power of the Court to adapt—which supersedes everything else. The significant thing is that TAR is seen as facilitating or fulfilling those objectives.

iii.  TAR in England

There have been two recent English cases which have considered the use of TAR. In Pyrrho Investments v MWB Property82 Master Matthews was asked to approve a proposal by the parties that predictive coding be deployed to avoid

79  Order 31 rule 12 of the Rules of the Supreme Court. This rule, on its face, requires a rather stricter test of completeness than, say, the UK rules.
80  Above n 77.
81  Rule 3.1(2)(m).
82  Pyrrho Investments v MWB Property [2016] EWHC 256 (Ch).



the expense of manually reviewing some three million electronic documents. Although predictive coding had been approved in the United States and Ireland, the approval of the Court was sought as a counsel of caution. The emphasis was upon the savings in costs83 and Master Matthews observed that experience in other jurisdictions suggested that predictive coding can be useful and provide greater consistency in document review. In the course of his decision, Master Matthews considered earlier authorities from England,84 the United States85 and Ireland86 and developed a list of 10 factors in favour of approving the use of predictive coding:

(1) Experience in other jurisdictions, whilst so far limited, has been that predictive coding software can be useful in appropriate cases.
(2) There is no evidence to show that the use of predictive coding software leads to less accurate disclosure being given than, say, manual review alone or keyword searches and manual review combined, and indeed there is some evidence (referred to in the US and Irish cases to which I referred above) to the contrary.
(3) Moreover, there will be greater consistency in using the computer to apply the approach of a senior lawyer towards the initial sample (as refined) to the whole document set, than in using dozens, perhaps hundreds, of lower-grade fee-earners, each seeking independently to apply the relevant criteria in relation to individual documents.
(4) There is nothing in the CPR or Practice Directions to prohibit the use of such software.
(5) The number of electronic documents which must be considered for relevance and possible disclosure in the present case is huge, over 3 million.
(6) The cost of manually searching these documents would be enormous, amounting to several million pounds at least. In my judgment, therefore, a full manual review of each document would be ‘unreasonable’ within paragraph 25 of Practice Direction B to Part 31, at least where a suitable automated alternative exists at lower cost.
(7) The costs of using predictive coding software would depend on various factors, including importantly whether the number of documents is reduced by keyword searches, but the estimates given in this case vary between £181,988 plus monthly hosting costs of £15,717, to £469,049 plus monthly hosting costs of £20,820. This is obviously far less expensive than the full manual alternative, though of course there may be additional costs if manual reviews still need to be carried out when the software has done its best.
(8) The ‘value’ of the claims made in this litigation is in the tens of millions of pounds. In my judgment the estimated costs of using the software are proportionate.
(9) The trial in the present case is not until June 2017, so there would be plenty of time to consider other disclosure methods if for any reason the predictive software route turned out to be unsatisfactory.

83  Manual review would involve estimated costs of at least several million pounds whereas the use of predictive coding was estimated to cost between £197,705 and £489,869.
84  Goodale v Ministry of Justice above n 35.
85  Da Silva Moore v Publicis Group above n 54.
86  Irish Bank Resolution Corporation v Quinn above n 77.

(10) The parties have agreed on the use of the software, and also how to use it, subject only to the approval of the Court.87
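The proportionality reasoning in factors (6) to (8) is, at bottom, arithmetic: the cost of each review method is set against the size of the document pool and the value of the claim. The following sketch shows the kind of back-of-envelope comparison involved. The TAR figures are the low and high estimates quoted in factor (7); the manual figure of £4 million is an illustrative assumption standing in for the judgment's 'several million pounds'.

```python
# Back-of-envelope proportionality comparison based loosely on the figures
# discussed in Pyrrho. The manual-review figure is an assumption; the TAR
# estimates are the low and high figures quoted in factor (7).

def cost_per_document(total_cost_gbp: float, documents: int) -> float:
    """Average cost of reviewing a single document."""
    return total_cost_gbp / documents

documents = 3_000_000              # 'over 3 million' documents (factor (5))
manual_estimate = 4_000_000        # assumed figure for 'several million pounds'
tar_low, tar_high = 181_988, 469_049

print(f"Manual review: £{cost_per_document(manual_estimate, documents):.2f} per document")
print(f"TAR (low):     £{cost_per_document(tar_low, documents):.4f} per document")
print(f"Manual review costs {manual_estimate / tar_high:.1f} times the highest TAR estimate")
```

Even on that conservative assumption, manual review comes out at more than eight times the most expensive TAR estimate, which is the gap the court weighed against the tens of millions of pounds at stake.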

Brown v BCA Trading88 was the first contested application to use predictive coding as part of a document review exercise in England. The respondent held the significant majority of potentially relevant documents. In the face of the desire of the petitioner’s solicitors for a process involving the filtration of documents using an agreed list of search terms, followed by a manual review of responsive documents by a paralegal, the respondent successfully argued that superior results could be achieved at a more proportionate cost using predictive coding. The argument was dominated by the issue of proportionate cost.89 The traditional linear approach proposed by the petitioner would cost more than two and a half times as much as predictive coding. In addition, the respondent’s solicitors could run the predictive coding work in-house, rather than paying an external technology provider for daily-rate access, support, hosting fees and so forth. In the course of the decision the factors developed by Master Matthews in Pyrrho were considered and all but two were found to be applicable. On this basis the Court ordered that predictive coding be used by the respondents.

A number of lessons can be drawn from this and other cases. The first is that it is simply not possible to use conventional means to search through very large volumes of documents within either the time limits imposed by the most tolerant court or within the bounds of proportionality. The lawyers must establish at an early stage how much potentially disclosable material they have and set that against the available time and resources. That is the essence of proportionality.

TAR is not a process that should be held to a higher standard than manual review. It is a way of filtering an excessively large dataset down to something that is manageable. Once that point is reached, a human eye is applied to the filtered data; TAR is no substitute for that final review.
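The filtering workflow these cases describe, in which a senior lawyer codes a seed sample, the software extrapolates that coding across the full set, and human reviewers then work down the ranked results, can be sketched as a simple text classifier. Everything here is an illustrative assumption: the choice of library (scikit-learn), the toy documents and the labels. Real predictive-coding systems add iterative training rounds, sampling protocols and statistical validation.

```python
# A minimal sketch of the predictive-coding workflow described in Pyrrho:
# a senior reviewer labels a small seed set, a classifier learns from it,
# and the remaining corpus is ranked by predicted relevance so that human
# review concentrates on the highest-scoring documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set coded by the senior reviewer: 1 = relevant, 0 = not relevant.
seed_docs = [
    "board minutes discussing the disputed lease terms",
    "email chain negotiating the management agreement",
    "canteen menu for the staff cafeteria",
    "newsletter about the office charity bake sale",
]
seed_labels = [1, 1, 0, 0]

# Unreviewed corpus (in practice, millions of documents).
corpus = [
    "draft management agreement with tracked changes",
    "invitation to the christmas bake sale",
]

vectoriser = TfidfVectorizer()
X_seed = vectoriser.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank the corpus by predicted probability of relevance; reviewers work
# down the list until further review becomes disproportionate.
scores = model.predict_proba(vectoriser.transform(corpus))[:, 1]
ranked = sorted(zip(scores, corpus), reverse=True)
for score, doc in ranked:
    print(f"{score:.2f}  {doc}")
```

This mirrors factor (3) in Pyrrho: a single senior lawyer's judgment, applied consistently by the software across the whole set, rather than dozens of junior reviewers applying the criteria independently.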
It may be permissible to overlook documents if it would be disproportionate to find everything. In this regard the observation of Morgan J in Digicel (St Lucia) Ltd & Ors v Cable & Wireless Plc & Ors is apposite:

[I]t must be remembered that what is generally required by an order for standard disclosure is ‘a reasonable search’ for relevant documents. Thus, the rules do not require that no stone should be left unturned. This may mean that a relevant document, even ‘a smoking gun’ is not found. This attitude is justified by considerations of proportionality.90

87  Pyrrho above n 82 para [33].
88  Brown v BCA Trading [2016] EWHC 1464 (Ch).
89  The issue of cost and entitlement to costs following a predictive coding exercise arose in Auckland Waterfront Development Agency Ltd v Mobil Oil New Zealand Ltd [2015] NZHC 470. Katz J decided that a producing party was entitled to costs related to the implementation of TAR because ‘the overall exercise appears to have been undertaken in a relatively efficient and cost effective way’ [86]. The parties invested considerable work to test and substantiate their TAR approach and even met with the opposing solicitors to ensure that the search complied with the High Court Rules [69]. These cases show it is imperative that parties cooperate and understand what they need before seeking judicial approval. If these steps are taken, a court can rule more effectively and will likely affirm the agreement.
90  Above n 32 at [46]. See also Nichia Corp v Argos Ltd [2007] EWCA Civ 741 per Jacob J.



The courts, as well as the lawyers, must make themselves aware of the availability of technology as an aid to proportionality. If courts are to set timetables in cases like Pyrrho, judges are going to have to understand the use of technology like predictive coding. Certainly in England Pyrrho is likely to increase the chance of finding a case-managing judge who will question reliance on other and older ways of conducting e-discovery.

The important thing about the decisions discussed is that they rely on principles, including the balance of proportionality and existing duties such as cooperation, which are of much wider significance than the use of any new technology. These principles predated the use of TAR. Furthermore, the Courts have a considerable degree of flexibility in crafting discovery orders. The English rules contain a provision that the court ‘may take any other step or make any other order for the purpose of managing the case and furthering the overriding objective’91 and a principle of ‘overriding objective’—echoed by the statement in Irish Bank referring to the inherent power of the Court to adapt—which supersedes everything else. The significant thing is that TAR is seen as facilitating or fulfilling those objectives.

V. Conclusion

The development of the law dealing with discovery in the Digital Paradigm demonstrates the way in which rule systems and courts can, in the face of a new and disruptive technology, adapt. This evolutionary approach has been characterised in the United States by the gradual and progressive amending of the Federal Rules of Civil Procedure. The numerous amendments that have become necessary seem to have been the result of slow adaptation by the legal profession and the judiciary to the new environment, notwithstanding the innovative and farsighted approach of the Sedona Conference, where the problems of digital discovery were recognised at an early stage. Perhaps the most recent iteration of the FRCP will at least crystallise, from a regulatory point of view, what is required. However, it seems to be the way in which Judges Andrew Peck, Paul Grimm, John Facciola and Elizabeth LaPorte have validated the use of technology solutions that has driven the acceptance of change in the United States.

The reliance on court directions in England is not as great as in the United States. Judges and masters have flexibility and discretions to deal with the cases before them and in many respects the cases are fact-specific and of little precedential assistance. The Discovery Rules allow that level of flexibility and the provision of checklists provides an excellent roadmap. Although the wholesale rewrite of the New Zealand Rules has introduced a relevance-based test rather than the old Peruvian Guano approach, judges and associate justices have the same sort of flexibility


91  Rule 3.1(2)(m).



and discretion possessed by the English judges. Similarly, in broad terms the discovery rules in Singapore, Hong Kong and Australia allow flexibility of approach, although the thresholds in Singapore and Hong Kong restrict e-discovery to high-value, document-intensive cases. One wonders why this was necessary, given that reasonableness and proportionality would inevitably restrict the use of TAR to these cases in any event.

But the collision has not only been with the established rules relating to discovery. The way in which the various rule systems have developed, aided by the Sedona Conference, has been innovative even if, in the case of the United States, the product of long development. There is another collision and that is a cultural one. It is perhaps no accident that a limited number of judicial names appear with some regularity in discussions about e-discovery. These judges have made it their business to become au fait with the way in which the law and technology work together. For the rules to be effective, for the promise of e-discovery to be realised, there needs to be wider understanding by judges and lawyers of the ramifications of the new paradigm in this important area of procedure. As was stated by the Sedona Conference, the legal profession by and large remains stuck at a crossroads:

[T]he choice is between continuing to conduct discovery as it has ‘always been practiced’ in a paper world—before the advent of computers [and] the Internet … or alternatively, embracing new ways of thinking in today’s digital world.92

Lawyers, generally speaking, are responsible for the slow uptake of TAR in discovery. This has happened for at least two reasons. The first is a preference for operating within a comfort zone, which has suppressed more widespread adoption of TAR, coupled with a failure to appreciate the significance of what is to be discovered and who the document custodians are. The other reason is a form of cultural resistance to change.93 By and large the legal profession is a very conservative one, usually born out of significant risk aversion. Lawyers either simply do not appreciate the value of TAR,94 or are overly protective of the type of work that has to be handled by people who are qualified to be lawyers. Professor Susskind warns of the danger of complacency and the fossilisation of old practices befalling the legal profession.95 Disruptive legal technologies, of which TAR is just one, should not be viewed as the enemy, but as an opportunity for transformative change. The rules and the judiciary are leading the charge and the legal profession must inevitably follow.

92  The Sedona Conference, ‘The Sedona Conference Commentary on Achieving Quality in the E-Discovery Process’ (2009) 10 Sedona Conference Journal 229, 302.
93  Sally Slater, ‘Corporate Counsel Slow to Embrace E-Discovery Technology Advances Survey by BDO’ (21 October 2015).
94  Geoffrey A Vance, ‘Confessions of an E-Discovery Lawyer: We’re Light Years Behind’ LegalTech News (23 June 2015).
95  Richard Susskind, Tomorrow’s Lawyers: An Introduction to Your Future (Oxford, Oxford University Press, 2013).

8

Evidence, Trials, Courts and Technology

I. Introduction

The trial is the law’s high theatre. The courtroom is a stage and the participants are the players. Some, such as witnesses, have bit parts. Some are major players—on stage throughout the whole performance. It is little wonder that trials—especially criminal trials—feature so frequently in literature and in entertainment. The trial scene in Shakespeare’s Merchant of Venice is gripping drama as well as being a showpiece for a number of jurisprudential theories. The trial is a set piece in Harper Lee’s To Kill a Mockingbird and the film Witness for the Prosecution is, in essence, the trial itself. The trial dynamic brings all the players into the one place, with the classic dramatic formulae of human interactions, conflict and denouement. Television is replete with lawyer shows in which trials feature—Rumpole of the Bailey and Silk provide two examples.

Of course the trial is more than that. It is a critical part of a state-provided dispute resolution process that has evolved over the centuries and has become an elegantly moderated reasoned argument supported by specialised information which lawyers call evidence. In the same way that the practice of law involves the acquisition, processing, sharing and communication of information, likewise court proceedings are all about information. Information takes certain forms, be it by way of pleadings which inform the court what the dispute is about, evidence which informs the court as to the strength of the assertions contained in the pleadings, submissions by which the court is informed as to the possible approaches that it may adopt in determining the outcome, and from the court to the lawyers and the parties when it delivers a decision.
In the course of processing the decision the judge or judges will embark upon their own information acquisition activities, looking up the law, checking the assertions or alternatively having recourse to an internal information exchange involving judges’ clerks. A court is not only a place of adjudication, but also an information hub. Information is assembled, sorted and brought to the courtroom for presentation. Once presented, various theories of interpretation are put before the fact-finder, who then analyses the data according to prescribed rules, and determines a verdict



and result. That result, often with collateral consequences, is then transmitted throughout the legal system as required either by law reports, academic comment or online legal information systems. The court is thus the centre of a complex system of information exchange and management.1

In this chapter I consider aspects of the trial process—both civil and criminal—and suggest that although communications technologies such as video-conferencing are being used, the procedures of the court can go much further. If information is a vital factor in the trial process, information communication technologies (ICT) should be deployed in the quest for better, more accurate and just outcomes. I will consider certain aspects of the trial process that are claimed and accepted as essential and I will challenge such assertions. I will consider the deployment of other information communications technologies within the context of a taxonomy of technologies that reflect aspects of the trial process. I will also discuss proposals for the use of information communications technologies as a transformative element in the adjudicative process, focusing on some of the fundamental changes to civil dispute resolution that have been proposed by Professor Richard Susskind.2

II. Orality and Physical Presence of Witnesses

One important matter that needs to be considered is the focus upon oral evidence and the need for physical presence in the trial arena. Although this has been challenged to a certain degree by the use of video-conferencing, as will be seen when I discuss this technology later, there are constraints imposed upon the use of this technology that suggest that the starting point for a witness giving evidence must be physical presence.

An enormous amount of value is placed upon the giving of oral evidence in a courtroom. Indeed this is the primary means by which information that informs the fact-finder is conveyed. For reasons which I shall develop, the focus of attention upon the courtroom and upon the physical presence of human witnesses is misplaced as the primary and most effective means of information gathering. Yet to tamper with this aspect of the criminal trial results in outraged howls, especially from that most conservative of the lawyer classes—the criminal defence bar. What is advanced in defence of the system are a number of myths or constructed justifications, treating the importance of presence and evidence-giving almost as holy writ. These ‘constructed justifications’ include the so-called confrontation right, which

1  F Lederer, ‘The Courtroom as a Stop on the Information Superhighway’ (1997) 4 Australian Journal of Law Reform 71.
2  Civil Justice Council, ‘Online Dispute Resolution for Low Value Civil Claims’ (February 2015). For the report see wp-content/uploads/2015/02/Online-Dispute-Resolution-Final-Web-Version1.pdf.



was a reaction to secret informers and unidentified accusers that developed from the continental system, the inquisition, and was associated with Star Chamber in the English experience; the importance of cross-examination and the testing of oral evidence, which does not require physical presence; and the demeanour of the witness as a guide to truth-telling or reliability.

It is probably the last characteristic that needs primary consideration. The issue of demeanour as a guide to truth-telling and the reliance upon non-verbal cues as an aid to assessing credibility has been the subject of a considerable body of literature from the field of the behavioural sciences, and the overwhelming conclusion is that demeanour is not a useful guide to veracity. As Fisher puts it:

There are no observational advantages when assessing the honesty of a witness’s evidence. Those confined to reading the transcript will do just as well. Of particular concern is the fact that so many sincerely believe that they are capable of assessing veracity through demeanour. The widespread belief that they can is a popular fallacy.3

And it is a fallacy that is still perpetuated by the legal profession and the judiciary. Directions about the importance of seeing and hearing the witness and assessing their behaviour in the witness box (and sometimes, in the case of an accused or a witness who has given evidence and remained in court,4 out of it) are still common. Although some progress is being made,5 despite recognition of scientific reality the fallacy is still promoted even by courts at the highest level.6

As an indication of judicial unwillingness to put demeanour to one side, the need for demeanour assessment has featured in considerations of whether a witness should give evidence by way of video-conferencing or Skype.7 For example, in Australia judges commend video-conferencing because it allows the judge to observe demeanour.8 As recently as 2013 an Australian study of 61 judges, lawyers,

3  Robert Fisher QC, ‘The Demeanour Fallacy’ [2014] New Zealand Law Review 575 at 582. See also Chris Gallavin, ‘Demeanour Evidence as the Backbone of the Adversarial Process’ LawTalk Issue 834 (14 March 2014); Professor Ian R Coyle, ‘How Do Decision Makers Decide When Witnesses Are Telling The Truth And What Can Be Done To Improve Their Accuracy In Making Assessments Of Witness Credibility?’ Report to the Criminal Lawyers Association of Australia and New Zealand (3 April 2013) 8; on the subject of demeanour generally see Professor Coyle’s extensive bibliography. See also Lindsley Smith, ‘Juror Assessment of Veracity, Deception, and Credibility’ (2002) 4 Communication Law Review 45.
4  Catalano v Managing Australia Destinations Pty Ltd (No 2) [2013] FCA 672 at [64].
5  Brian MacKenna, ‘Discretion’ (1974) 9 The Irish Jurist 1 at 10 (Ireland); AM Gleeson, ‘Judging the Judges’ (1979) 53 Australian Law Journal 344 (Australia); Patrick Browne, ‘Judicial Reflections’ (1982) 35 Current Legal Problems 1 at 5 (England); Fox v Percy [2003] 214 CLR 118 at [30]–[31] per Kirby J; CSR Ltd v Della Maddalena (2006) 224 ALR 1 at [19] and [23]; R v Munro [2008] 2 NZLR 87 at [82].
6  Austin, Nichols & Co Inc v Stitching Lodestar [2008] 2 NZLR 1412; R v Matenga [2009] 3 NZLR 18; Brown v Murdoch & Or (No 2) [2014] FAMCA 618 at [23] where there was an application to use Skype to take evidence and Cronin J said ‘it would make my task so much more difficult if I was not able to assess their demeanour where credit is the issue’.
7  Video-conferencing operates over a dedicated ISDN line whereas Skype uses a Voice over Internet Protocol (VOIP) connection. The differences between the two are discussed below.
8  Sunstate Airlines (Qld) Pty Ltd v First Chicago Australia Securities Ltd (unreported, 11 March 1997); R v Wilkie, R v Burroughs, R v Mainprize [2005] NSWSC 794, [32].



court staff, expert witnesses and others found that participants had different views regarding whether it is possible to assess a witness’s credibility by video-conference. Some believed that it was possible, whereas others were doubtful. The use of Skype becomes even more problematic because of differences and perceived shortcomings in the technology. In Brown v Murdoch the use of Skype for video-conferencing evidence was not allowed because the judge considered he could not assess demeanour through Skype, although he did not give reasons explaining why.9

The demeanour issue becomes even more significant when judges are called upon to consider the use of Skype as an alternative to other technologies because of lower picture resolution and delays in connection. Krawitz and Howard state that ‘Skype’s resolution is critical for assessing demeanour and credibility.’10 Curiously enough, the authors accept the ‘demeanour fallacy’ uncritically whilst referring to the literature on the fallacy and dismissing a consideration of the issue as beyond the scope of their article. Given the weight that they place upon the importance of demeanour as an issue in the use of video-conferencing technologies, that is a serious oversight.

Fisher suggests that the courts and the Bar should respond positively to developments in the understanding of demeanour, and focus upon other methods of veracity assessment such as internal inconsistencies in evidence, consistency with what a witness has said on an earlier occasion, comparison with records contemporary to the events in question, the response of the witness when challenged with inconsistencies, and the plausibility of the account (is it likely that people would act in the way suggested?).11 However, Fisher is not suggesting that recognition of the demeanour fallacy dispenses with the need for physical presence or oral evidence.
But the recognition of the fallacy may remove a significant objection to the use of communications technologies that allow a witness to give evidence from a location remote from the Court.

A. The Problems of Presence and the Confrontation Right

In the United States the provisions of the Sixth Amendment12 have formed the basis for the ‘physical presence’ trial. The manner of

9  Above n 6.
10  Marilyn Krawitz and Justice Howard, ‘Should Australian Courts Give More Witnesses the Right to Skype?’ (2015) 25 Journal of Judicial Administration 44, 60. However, cases in other jurisdictions have accepted Skype, referring to its reliability and its wide use: People v Novack 41 Misc 3d 733, 736, 971 NY 2d 197 (2013). See also Re ML (Use of Skype Technology) [2013] EWHC 2091 (Fam); S (Relocation: Parental Responsibility) [2013] 2 FLR 1453, [2013] EWHC 1295 (Fam).
11  Fisher above n 3, 577–78.
12  The Sixth Amendment provides ‘In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favour, and to have the assistance of Counsel for his defence’ (emphasis added).



its interpretation also gives rise to some faulty premises about the historical background to the ‘confrontation right’. This erroneous foundation has permeated our thinking about the importance of the confrontation right to the point where, in New Zealand, the presence of an accused and witnesses is statutorily enshrined both in the New Zealand Bill of Rights Act 1990 and in the Evidence Act 2006.13 These provisions are subject to two major exceptions to which reference will be made shortly.

Although the historical justification for the confrontation right or the presence right is debatable, there is a modern rationale comprising seven purposes identified by Friedman.14 I have summarised these with a brief critique as follows:

—— Openness. Confrontation guarantees openness of procedure, which among other benefits ensures that the witness’s testimony is not the product of torture or of milder forms of coercion or intimidation. This is particularly important given the contrast to early continental systems, in which coercion of witnesses examined privately was very common. This has resonances of the Inquisition or the worst practice of Star Chamber. One would have expected in the twenty-first century that the use of torture would have been at least an anachronism and at worst a war crime practised by ruthless totalitarian governments and inimical to the values of Western democracies, although the treatment of terror suspects following 9/11 and the provision by law for secret trials challenge those assumptions.
—— Adversarial Procedure. Confrontation provides a chance for the defendant, personally or through counsel, to dispute and explore the weaknesses in the witness’s testimony. In an earlier day that chance came in the form of a wide-open altercation in court. Today it comes in the form of cross-examination, although the rationale for the involvement of the defence lawyers had little to do with confrontation—quite the contrary.15
—— Discouragement of Falsehood. Confrontation discourages falsehood as well as assisting in its detection. The prospect of testifying under oath, subject to cross-examination, in the presence of the accused, makes false accusation much more difficult than it would be otherwise, or so at least is the well-settled belief. But does cross-examination require the physical presence of the witness or of counsel?

13  s 25(e) of the New Zealand Bill of Rights Act guarantees the right to be present at the trial and to present a defence. s 83(1) of the Evidence Act 2006 (NZ) defines ‘the ordinary way’ of giving evidence as orally in a courtroom in the presence of the judge or, if there is a jury, the judge and jury, the parties to the proceeding and their counsel and any member of the public who wishes to be present, unless excluded by order of the judge.
14  Richard D Friedman, ‘The Confrontation Clause Re-Rooted and Transformed’ (2004–05) Cato Supreme Court Review 439, 442–43.
15  See John H Langbein, The Origins of the Adversary Criminal Trial (Oxford, Oxford University Press, 2003).



—— Demeanour as Evidence. If, as is usually the case, the confrontation occurs at trial or (in modern times) in a videotaped proceeding, the trier of fact has an opportunity to assess the demeanour of the witness. I have discussed the issue of demeanour above but it should be emphasised that modern assessments of credibility rely less and less on demeanour, which has largely been discounted as an indicium of truth-telling.
—— Elimination of Intermediaries. Confrontation eliminates the need for intermediaries, and along with it any doubt about what the witness’s testimony is. This is not so much an issue with presence or the confrontation right but rather is answered by the best evidence rule. Exceptions are recognised by law and evidence-giving by an intermediary is absent from the modern trial system.
—— Symbolic Purposes. Beyond these instrumental purposes, confrontation of prosecution witnesses serves a ‘strong symbolic purpose’ that has been recognised in the United States. Even if confrontation had no impact on the quality of the prosecution’s evidence, it would be important to protect because ‘there is something deep in human nature that regards face-to-face confrontation between accused and accuser as essential to a fair trial in a criminal prosecution’.16 It speaks to the community abhorrence of the hidden accusation. In the twenty-first century does this require or mandate physical presence? One has to reflect on Levi-Strauss’s comments on ritual in considering whether this form of symbolism is mere anachronism.17
—— The Weight of History. The symbolic value of confrontation is enhanced by the fact that it has been part of the trial process for some centuries. Indeed, the very fact that for many centuries accused persons have had the right to confront the witnesses against them makes it especially important to continue to honour that right. This, of all the purposes, is the most debatable.

In its original state the criminal trial was a contest or a debate between accuser and accused. There were no lawyers. In this ‘accused speaks’ process presence was critical, especially that of the accused. Over time the ‘accused speaks’ process developed into the ‘lawyer speaks’ trial. In the course of this development, an accused was prohibited from giving any evidence at all until 1896.18 Apart from the history, mere tradition or adherence to ritual—this is the way we have always done it—without a rational or practical underpinning is a very poor justification.



It is important to note that there is no suggestion that the confrontation right had anything to do with the issue of whether or not testimony was reliable. It was a rule that underpinned the manner in which testimony was taken. A witness may not be heard for the prosecution unless the accused has an opportunity to be confronted by him or her—the witness must speak in the presence of the accused and be subject to cross-examination.19

Today the confrontation right is associated with the so-called adversarial process, and adversarialism began with the increased role of the defence lawyers in the criminal trial. Yet there is no suggestion of the development of a confrontation right that went hand in hand with adversarialism. Indeed, the reasons for the development of the adversarial trial seem more tied up with inequality of arms and concerns about the reliability of evidence and subsequent convictions than with the ‘right’ of an accused to confront his accuser.

Notwithstanding the ‘confrontation right’, the trial ritual with its emphasis on orality and reliance on the testimony of individual witnesses has its problems. Although the medieval mentality may have preferred oral testimony it was then, as it is now, subject to a number of shortcomings which must be recognised. It may well be that cross-examination is the greatest engine for determining truth but it, too, is flawed and is dependent upon the forensic skills of the advocate for its effectiveness. Even then, as a truth-seeking device, it falls short.

In light of new technologies that enhance information exchange, the justification for the ‘physical presence’ trial with oral, presence-based evidence-giving processes is no longer valid and indeed lacks anything other than a deep atavistic basis for its continuation. Yet some of the other important aspects of criminal trial procedure may remain, including adversarialism. But the focus of adversarialism should be upon information testing.
Technology provides some of the solutions to effectively placing information before the fact-finder. However, it has been observed that technology leads to a disenchantment with and trivialisation of ritual. Ritual, particularly through its symbolic aspect, contributes to the social order. The challenge for justice in the Digital Paradigm is to re-invent rituals that are based on those of the past or adapt rituals to a new technology so that the concurrence and authority that they cast on the thing that they adorn appear consubstantial with the exercise of justice.20

III.  Facing Up to Change

In the main, lawyers are slow adopters of new technologies, and the criminal defence lawyer is perhaps one of the most conservative, especially if proposals are

19 Friedman above n 14, 445.
20 Karim Benyakhlef and Fabian Gelinas, ‘On-line Dispute Resolution’ (2005) 10(2) Lex Electronica.



put in place that make evidence clear and, as is often the case with technological systems, almost irrefutable. The technology is neutral. It cannot be cross-examined to the point where it acknowledges that it ‘cannot be sure’. As the US Supreme Court said in Scott v Harris, the technology speaks for itself.21

Resistance to change does the trial process little good. If anything, it compromises its effectiveness and its credibility in the minds of the public. There are certain imperatives that are driven by technology. Those who were born in 1996 have grown up in a world of the Internet, computers, smartphones and digital devices. They are children of the digital paradigm. They are Marc Prensky’s ‘digital natives’.22 Prensky was writing about students and their use of technology, but the university students of whom he wrote in 2001 are now adults and available for jury service:

They have spent their entire lives surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age. Today’s average college grads have spent less than 5,000 hours of their lives reading, but over 10,000 hours playing video games (not to mention 20,000 hours watching TV). Computer games, email, the Internet, cell phones and instant messaging are integral parts of their lives. It is now clear that as a result of this ubiquitous environment and the sheer volume of their interaction with it, today’s students think and process information fundamentally differently from their predecessors. These differences go far further and deeper than most educators suspect or realize.23

Prensky’s ‘digital natives’ are ‘native speakers’ of the digital language of computers, video games and the Internet. Those who were not born into the digital world but have, at some later point in life, become fascinated by and adopted many or most aspects of the new technology are ‘digital immigrants’. Prensky suggests that the difference is important because, like it or not, digital immigrants speak with a different ‘accent’ from digital natives.24

Lord Chief Justice Judge recognised the digital native issue in 2010, observing that his grandchildren sourced information at school from machines, used the Internet and were provided with information in written form rather than listening. He was concerned that this form of education lacked training in the ability


21 Scott v Harris (2007) 550 US 372.
22 Marc Prensky, ‘Digital Natives, Digital Immigrants’ (2001) 9 On the Horizon 1; www.…/prensky%20-%20digital%20natives,%20digital%20immigrants%20-%20part1.pdf. For a brief introduction to the development of Prensky’s theory see Wikipedia, ‘Digital Native’; see also Sylvia Hsieh, ‘“Digital Natives” Change Dynamic of Jury Trials’ Mass Law Weekly (7 November 2010).
23 ibid, Prensky.
24 ibid.



to sit still and listen and think simultaneously for lengthy periods—an essential requirement for jurors.25 His Lordship describes a trial system that depends upon orality as its focus, and there is an underlying assumption that the ‘oral presence’ trial is the only available system. With respect, what he perhaps fails to recognise is that digital natives find such a means of absorbing information incompatible with the way in which their learning systems are adapting, precisely as a result of the technological proficiency to which His Lordship refers. The means of information gathering is radically different from that acquired, say, from a book, as Sven Birkerts observed in a passage written in 1994:

Information and contents do not simply move from one private space to another, but they travel along a network. Engagement is intrinsically public, taking place within a circuit of larger connectedness. The vast resources of the network are always there, potential, even if they do not impinge on the immediate communication. Electronic communication can be passive, as with television watching, or interactive, as with computers. Contents, unless they are printed out (at which point they become part of the static order of print) are felt to be evanescent. They can be changed or deleted with the stroke of a key. With visual media (television, projected graphs, highlighted ‘bullets’) impression and image take precedence over logic and concept and detail and linear sequentiality are sacrificed. The pace is rapid, driven by jump-cut increments, and the basic movement is laterally associative rather than vertically cumulative. The presentation structures the reception and, in time, the expectation about how information is organised. Further, the visual and non-visual technology in every way encourages in the user a heightened and ever-changing awareness of the present. It works against historical perception, which must depend on the inimical notions of logic and sequential succession. If the print medium exalts the word, fixing it into permanence, the electronic counterpart reduces it to a signal, a means to an end.26

This is the information ecosystem within which the digital natives who are beginning to make up today’s lawyers and juries—and, in the future, judges—dwell. They have been brought up in an information-rich, technologically-based environment. Their expectation is that the information processing that leads to the decision of a jury will use the information-gathering, presentation and analytical tools to which they have become accustomed. To expect them to do otherwise is to allow archaic systems of information exchange to prevail for no reason other than ritualistic process and ‘this is the way it has always been done’. It is time to consider a dramatic, possibly revolutionary, change.

25 Rt Hon The Lord Judge, ‘Jury Trials’ (Judicial Studies Board Lecture, Belfast, 16 November 2010). See ch 3.
26 Sven Birkerts, The Gutenberg Elegies: The Fate of Reading in an Electronic Age (Winchester MA, Faber, 1994) 122–23. For a discussion of Birkerts’ other misgivings see ch 3.



IV.  Technology in Court

Professor Fred Lederer, Director of the Center for Legal and Court Technology at the College of William and Mary Law School, made the following comment:

Most evidence is and will be digital in nature, largely eliminating any need to show the ‘original’ physical exhibit in evidence. Indeed, as most people are visually and data oriented, jurors and even judges will expect to see as much information as possible on screens in front of them. The trial lawyer will continue to be essential, but the underlying evidence will become even more important—and it will need to be visually presented. The advent of the smartphone with camera foreshadowed what we think the short-term future will bring—a huge increase in recorded incident video. It’s hard now to have something happen in the world without recorded video from phones and tablets.27

Much information that is not in digital format can be digitised, and information in digital form requires digital systems to present it. In addition, digital and communications systems enable evidence to be pre-recorded and played at trial, or a witness to be ‘present’ in court by way of a video link. The prospect of pre-recorded and remote testimony means that the supporting technologies may be categorised as follows.

A.  Spatial Technologies

Spatial technologies allow contemporaneous communication of information over a distance. The communicator may be in one physical location; the recipient may be in another. Technologies that provide examples of this class are not new, and range from the signal fire, semaphore, telegraph and wireless to radio, television and Skype. In terms of the application of spatial technologies in the court process, the New Zealand Courts (Remote Participation) Act 2010 is a perfect example. The underlying theme of the legislation is to enable participation in court proceedings from a distance through the use of communication technologies. There are two concerning features about this legislation. The first is its underutilisation by courts and participants: most of the time the Act is employed in remand hearings in criminal cases. The other is its restrictions on use in the criminal trial process.

27 Fred Lederer, ‘Some Thoughts on Technology and the Practice of Law’ (2014) The Bencher (a bi-monthly publication of the American Inns of Court)



B.  Temporal Technologies

Temporal technologies are those that allow information to be gathered, collated, stored and used at a later time. Again, there are examples of temporal technologies that predate digital ICT: the written question-and-answer record of an interview between a police officer and a suspect, the tape recording of such an interview, and the video recording of such an interview. Digital technologies now present us with a wider range of temporal evidence-gathering and presentation techniques. One example is the use of recordings from static CCTV cameras in buildings or on city streets. Another may be found in recordings derived from body-worn cameras used by police officers in London, which were the subject of a 2014 trial.28

In the United States the US Supreme Court accepted the presentation of video evidence of a high-speed pursuit. Such a procedure is quite uncommon in the Supreme Court and was viewed as part of an interesting relationship between the Supreme Court and technology. The video had a strong effect on the court and is viewed as a major factor in how the court made its decision.29 Recorded interviews of the accused, and recordings of intercepted conversations, whether by telephone or a remote listening device, are further examples of temporal technologies. The use of a recorded interview of the complainant played as evidence-in-chief, with contemporaneous cross-examination taking place with the witness located in another room in the courthouse,30 provides an example of the use of both the temporal and spatial classes of technology. This is not the place to debate the legal requirements surrounding the use of spatial or temporal technologies.
There has been a degree of judicial caution in allowing temporal technologies, and there remain certain issues about the use of spatial technologies where a participant is located outside the jurisdiction.31 How can these technologies assist in improving the communication of testimony? What are some of the challenges that may be addressed?

28 Josh Halliday, ‘Met police trial of body-worn cameras backed by David Davis’ The Guardian (8 May 2014).
29 Scott v Harris above n 21. The video may be found on YouTube at watch?v=qrVKSgRZ2GY. For a critique of Scott v Harris—not as to outcome but as to reasoning—see Dan M Kahan, David A Hoffman and Donald Braman, ‘Whose Eyes Are You Going to Believe: Scott v Harris and the Perils of Cognitive Illiberalism’ (2009) Harvard Law Review 838.
30 Why it is necessary for the witness to be transported to the courthouse for cross-examination when spatial technology would enable him/her to be cross-examined from any other location may be premised only on the basis that the court may need to exercise some supervisory function over the witness, but for no other purpose.
31 However, for a detailed New Zealand consideration of the use of spatial technologies for offshore participants in the context, not of the Courts (Remote Participation) Act 2010, but of an application pursuant to s 103 of the Evidence Act 2006, see the decision of Stevens J in Deutsche Finance NZ Ltd v CIR (2007) 18 PRNZ 710, where he provides a detailed list of procedural requirements accompanying the participation of the witnesses involved. It is suggested that these criteria could and should be applied in cases involving remote offshore participation under the Courts (Remote Participation) Act 2010.



C.  The Translation Problem

Difficulties in comprehension are increased when a translator is present. The story is necessarily delivered in a stuttering fashion, and the true meaning that the speaker seeks to convey may be lost through an absence of nuance. The problem is further complicated not only for the witness who does not have English as a first language but for people who do and are participating in a trial where the accused does not. As a result of the New Zealand Supreme Court decision in Abdula v R, every word must be translated line by line for the benefit of the accused.32 This heightens the stuttering way in which the story is told, interferes with the sequentiality of the account and impairs concentration and comprehension. In its quest to ensure that the accused comprehends what is being said, the Supreme Court has thrown the importance of communicating information between witness and fact-finder to the wind, and has done potential damage to the proper assessment of the information being conveyed—not only by the witness who requires a translator, but by all the witnesses who do not, yet whose evidence is laboriously translated line by line for the accused.

D.  The Court Environment

The court environment is an intimidating one—a recognised problem that can result in nervousness and inhibition in all but the most experienced witnesses (who are usually police officers). The atmosphere, the unusual garb worn by the participants and the ritualised procedure are all impediments to proper and coherent storytelling. Most witnesses manage, but the whole focus for the witness should be on the story being told rather than being distracted by nervousness and inhibition. The problem is that rather than being a forum for ascertaining fact, the court itself inhibits the communication of the information upon which fact finding depends.

E.  The Ability to Recount

This may be seen as an aspect of demeanour, which I have discussed above, but in terms of communication and information flow it is in a category of its own. Demeanour goes to the assessment of the credibility of the person communicating the information. The ability to recount goes to the act of communication itself, and has an impact upon the assessment and quality of the information being given.


32 Abdula v R [2012] 1 NZLR 534 (SC).



An eloquent and verbally skilled witness—one who is comfortable with the subtleties and nuance of language—is going to be able to tell the story more convincingly than the person with a limited vocabulary, unskilled in the nuance of language. Such a person is easy game for the articulate and skilled lawyer in cross-examination, yet may still be a witness of truth, unable properly to tell his story. Associated with problems of articulation may be those that a person may have in being comprehended. Such a person may have the ability to converse in, say, English, but a problem may arise when the speaker’s accent impairs the auditor’s comprehension of what is being said. Subconsciously the auditor may attribute less weight to this witness’s story, simply because she had difficulty fully understanding what was being said. In such a case any empathy that might naturally occur between speaker and auditor is reduced, diminished or lost completely. Further problems may arise in terms of tone of voice, accent, speech impediments and the like—associated aspects of articulation that bear on the ability to speak or enunciate.

F.  Intellectual Ability and Suggestibility

In some respects these aspects of communication are related to those of articulation and vocal and verbal skills. There can be no doubt that a person who suffers an intellectual disability affecting recall or articulation is going to have difficulty telling a story, let alone a convincing one. And one has to be careful to ensure that empathy with a witness does not overflow into a sympathy which clouds objectivity. It is suggested that new digital and communications technologies may be deployed to assist. They may not provide a complete solution, but they may mitigate some of the problems that I have described.

G.  Dealing with the Translation Problem

There is no reason why the fact-finder should be distracted by the ‘line-by-line’ approach suggested by the New Zealand Supreme Court33 when technology can solve the problem and still afford the accused his right to, and participation in, a fair trial. Simultaneous translation employing a remote translator and a set of headphones for the accused—a facility which was used in 1946 at the Nuremberg trials34—will resolve this issue without interrupting or compromising narrative flow. In addition, the devices used are now more compact and discreet than those of 1946.

As an adjunct to the discussion of translation, where possible—and in my view it should be a rule—a witness statement from a person who speaks a language other than English as a first language should be taken, and the interview conducted, in his or her first language. An interpretation transcript can later be provided. The possibility of inaccuracy arising from a translated police interview—even when recorded on video—can lead to problems where questions or answers are mistranslated and the interview pursues a different direction as a result. This has happened in more than one trial over which I have presided.

33 ibid.
34 See ‘The History of Simultaneous Interpretation’ United Nations Interpret/COV/Simultaneous/default.aspx; Jesus Baigorri Jalon, From Paris to Nuremberg: The Birth of Conference Interpreting (Holly Mikkelson and Barry Slaughter trans, Amsterdam, John Benjamins Publishing, 2014) 211 et seq; Christina Anna Korak, ‘Remote Interpreting via Skype—a Viable Alternative to In Situ Interpreting?’

H.  Dealing with Environment and the Ability to Recount

Environmental factors, along with challenges to the articulation of evidence and intellectual problems, may be addressed by technology. That said, it is recognised that there will be no immediate solution to communication problems involving vocal or articulation skills or intellectual ability. But there is a case for reducing any aspects of procedure that may aggravate these problems. The most obvious aspect of the trial that might do so, and create impediments to the communication of information, is the court environment itself.

A solution may be found in the employment of both temporal and spatial technologies. Using closed-circuit TV or video-conferencing technology, a witness could give evidence outside the courtroom—either in a special suite in the courthouse or elsewhere. Alternatively, testimony could be pre-recorded—an example of the use of a temporal form of technology employing a preservational evidence-retention system. Once video tapes were used; DVDs are now the preferred preservational medium, although with the development of hard-drive cameras it may well be that pre-recorded evidence could be retained on flash drives or small in-camera hard drives. The usual way that evidence is given in such cases is for the pre-recorded statement to be played in the courtroom, after which the witness, who is present in another location (usually in the courthouse, although this is not required by the Act), is cross-examined via CCTV—an example of a mixed use of temporal and spatial technologies.

I.  What of the ‘Confrontation Right’?

Many of the obstructions to the proper evaluation of the information needed by a fact-finder to arrive at a conclusion arise from practices rooted in the ritualised oral procedures of evidence-giving that have surrounded the criminal jury trial. These procedures were perfectly satisfactory in an era where communication imperatives, and the absence of the range of communication technologies present today, mandated the ‘physical presence participation’ model of the criminal jury trial. It is my argument that the essential elements of the confrontation right may be maintained through the use of information technologies whilst dispensing with the inconveniences and costs of the ‘physical presence participation’ model.

The justification for witnesses to be physically present in the court for examination is no longer compelling when ‘virtual presence’ by means of a high-definition screen can enable a better and clearer view of a witness than is possible from a jury box across a courtroom to the witness stand. The questionable value of demeanour suggests that this justification for presence is at best arguable and in reality a fallacy.35 One could go so far as to suggest that video-conferencing technology may make it possible for witnesses to give evidence from remote locations and for the accused to be ‘virtually present’ without compromising rights. ‘Visual presence’ may replace ‘physical presence’. Audio-visual (or video-conferencing) technology dispenses with the need for physical presence because it maintains the essential aspects of the confrontation right: the accused is able to hear the evidence that is given, and the ability to cross-examine remains. The availability of high-definition screens means that there will be little if any image distortion for the accused or other participants located elsewhere.

In addition, the provision of technology should pose little difficulty. There are a number of video-conferencing technologies available. At the moment New Zealand courts use a dedicated Voice/Video over Internet Protocol (VOIP) system that is effective but expensive, and is not widely available. In late May 2014 I participated in a test of video-conferencing software and electronic bundle software in a mock international trial.36 All the participants were scattered—Auckland, Washington DC, London, Croydon and Edinburgh.
The communications software used was Microsoft Lync—now Skype for Business—and the electronic bundle was provided by Caselines, a product of Netmaster Solutions, an English company. The trial rapidly established the feasibility of the software tools, both of which are reasonably priced and browser-based, which meant that no additional software needed to be installed on a user’s computer. In addition, the software meant that place did not matter—a classic example of the application of spatial technologies. From a technological and practical point of view, a remote hearing is possible and feasible.

The use of video-conferencing or audio-visual technology is relatively common throughout the English and Commonwealth courts. Section 32 of the Criminal Justice Act 1988 (UK) allows evidence to be given by a witness (other than the accused) by way of ‘live television link’. Leave is required if the witness is overseas. There is provision for pre-recorded testimony pursuant to section 27 of the Youth Justice and Criminal Evidence Act 1999, and rule 32 of the Civil Procedure Rules allows evidence in civil proceedings to be given via video link.37

The Criminal Code of Canada contains provisions governing the reception of evidence by video and audio. The Canada Evidence Act applies to non-criminal matters under federal law, and if provincial statutes are silent, federal law is adopted. Individual courts may have rules relating to the reception of evidence.38 Australian courts have deployed video-conferencing for court proceedings, the taking of evidence and pre-trial matters—unsurprising given the vast distances in that country.

However, despite what is clearly widespread use of video-conferencing in a number of jurisdictions, there is still hesitancy, even among legislators. In the debate about the introduction of the Courts (Remote Participation) Act 2010, objections to Audio Visual Links (AVL) use had two major themes. The first, as may be expected, related to the confrontation right and the ‘physical presence’ rule implied by section 25(e) of the New Zealand Bill of Rights Act 1990. The other related to some of the technological shortcomings surrounding the use of AVL. There was little opposition to AVL being used for procedural hearings, but there was considerable objection to its use for a substantive hearing. One suggestion was that without physical presence an accused could not keep tabs on ‘cosy’ conversations between counsel, the inattentive or snoozing juror or, worse still, the sleeping judge, or that the camera might not be playing on the key participants at a vital stage.

35 Fisher above n 3.
36 For reports see email&utm_campaign=GAZ020614; and for an interview with Judge Simon Brown QC on the effectiveness of the trial see =7r8RUwORvkc&.
Such a suggestion ignores split-screen and multi-camera technology, along with voice-activated and swivelling cameras. The days of a single static camera are long gone. At no stage in the debate did there seem to be any consideration of the advantages or shortcomings of the use of technology to fulfil the purposes of the Bill of Rights Act or the Evidence Act. Rather, the visceral reaction was based upon the outrageous suggestion that a trial could take place other than in the physical presence of the accused.39 It can only be a matter of dismay that technologically ignorant legislators are debating the use of technology in the court system.

37 Information on video-conferencing in the English courts can be found at courts/video-conferences.
38 For example, the court in Ontario may order that a hearing be conducted in whole or in part by means of a telephone conference call, video-conference or any other form of electronic communication, and ‘The Court may give directions to facilitate the conduct of a hearing by the use of any electronic or digital means of communication or storage or retrieval of information, or any other technology it considers appropriate’.
39 For the debates see Hansard Vol 664, 12266 HansD_20100629_00001172/courts-remote-participation-bill-%E2%80%94-second-reading; Hansard Vol 664, 12349 courts-remote-participation-bill-%E2%80%94-in-committee.



I have spent some time discussing the issue of video-conferencing in the context of communicative/evidential technologies because oral testimony from witnesses plays such a significant part in the trial process.

J.  Presentational Technologies

For digital evidence to be produced and presented, technology is required. One of the problems in the current criminal trial process involves the use of photographs. The jurors are provided with a booklet of photos, and the witness demonstrates matters of interest on a hard-copy photograph that he or she is holding. Problems of distance between jurors and witness can create communication problems, and the marking of the photo with a pen may not be the most accurate way of preserving a reference to a matter of interest.

The projection of photos onto screens resolves the problem of scale. A 50 or 60 inch high-definition screen can display more detail than is apparent on a 5 × 7 inch photo. The identification of matters of interest can be done with a laser pointer, and markings can be retained on a photo-responsive copy of the image that can later be printed out. We are wedded to hard copy because of the apparent preservational qualities of paper. However, the communication of images and illustrations can be at least as effectively achieved using digital technologies. In addition to the large screen, jurors could be provided with their own screens in the jury box or, alternatively, a tablet computer linked to a wireless system to which the illustrative exhibits are transmitted. Software tools could also be provided so that jurors could make their own annotations to an exhibit.

Presentational technologies can be used for any of the illustrative or demonstrative requirements during the course of a trial. In addition, use can be made of the wide range of publicly and freely available sources of information that can properly inform the jury of the context of events. Utilities such as Google Maps, Google Earth and Street View can be used for scene-setting and may avoid the necessity of a scene visit.
Street layouts, the intersection where the accident took place, the relative location of buildings or commercial premises to the location of the scene of the crime may all be presented using publicly available resources for illustrative purposes. It is acknowledged that these sources of information are primarily illustrative and may not depict the scene at the moment at which events took place. However, as long as there is reasonable contemporaneity with the events in question, their use can be considered and could well be helpful.

K.  Documents—Digitisation, Searchability and Analysis

From time to time trials will involve documentary evidence, often of considerable volume. The usual means of document presentation has been by way of hard copy, often contained in the ubiquitous lever-arch folder. However, there has been progress in the use of digital technologies for document presentation in the course of a hearing or trial. In addition, there are occasions where documents have been created for the purposes of a trial—in particular transcripts of intercepted conversations, or streams of text messages or emails, that are similarly voluminous.

While the presentation of these items of evidence may be enhanced by the use of digital technologies, their use by the jury may be compromised by the volumes of paper through which the jury must sift to locate and analyse aspects of the evidence. The fact-finder—be it judge or jury—should be presented with documentary evidence in digital format so that they can properly search for and locate matters of evidence or information that may assist in their determination. Using document analytics tools such as concept searching or email threading, as well as ‘blunt force’ keyword searching, the jury can more efficiently go about the task of analysing the information before them. Using analytical tools the jury may, for example, identify common threads in recorded conversations, or frequently used modes of expression in text messages and the like. The tools that are employed in document isolation and analysis in e-discovery can be made available to the jury to assist in the analysis of documentary evidence.40
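By way of illustration only, a ‘blunt force’ keyword search across a digital bundle of exhibits might operate along the following lines. This is a minimal sketch, not a description of any e-discovery product; the exhibit identifiers, texts and keywords are entirely hypothetical, and real tools add indexing, stemming and concept analysis on top of this basic idea.

```python
# Minimal sketch of 'blunt force' keyword searching over a digital
# bundle of trial documents. The bundle maps a (hypothetical) exhibit
# identifier to the text of that exhibit; the search reports, for each
# keyword, which exhibits contain it, using simple case-insensitive
# substring matching.

def keyword_search(bundle, keywords):
    """Return a mapping {keyword: [exhibit ids whose text contains it]}."""
    hits = {kw: [] for kw in keywords}
    for exhibit_id, text in bundle.items():
        lowered = text.lower()
        for kw in keywords:
            if kw.lower() in lowered:
                hits[kw].append(exhibit_id)
    return hits

# Hypothetical bundle: exhibit id -> transcript text.
bundle = {
    "EXH-001": "Meet me at the warehouse at nine.",
    "EXH-002": "The shipment arrives at the warehouse tomorrow.",
    "EXH-003": "Nothing of interest here.",
}

results = keyword_search(bundle, ["warehouse", "shipment"])
print(results)
# {'warehouse': ['EXH-001', 'EXH-002'], 'shipment': ['EXH-002']}
```

Even a sketch this simple shows why digital format matters: a jury searching three exhibits gains little, but the same operation over thousands of pages of intercepted conversations would be impractical on paper.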

L.  3D Use

The potential for presentational technologies is expanding as a result of the development of 3D modelling. The Future Crime Scene Project of the New Zealand ESR provides juries with detailed 3D virtual tours of crime scenes. The data that forms the basis of the modelling is collected as the crime scene is investigated, including the location of items of interest such as blood spatter.41 The use of such technology has the advantage of presenting an illustrative, realistic, contemporaneous view of the crime scene uncluttered by floor plans, and without the necessity of having to explain the position of a photographer or the like. The ‘walk-through’ nature of the technology, and the capability for 360-degree views, allows the viewer to be ‘virtually present’ in a way that would be difficult to capture orally or by conventional means.


40  Proof Finder by Nuix is a relatively inexpensive option.
41  ESR Media Release, 18 December 2012, ‘ESR working with Academy Award winning 3D artist on new CSI technology trials’; ESR Annual Report 2013 esp at 17 and 20. 3D mapping of crime scenes is also used in Queensland, Australia—see Michelle Starr, ‘Queensland Mapping Crime Scenes With 3D Scanner’ (17 February 2014) CNET news/queensland-police-mapping-crime-scenes-with-3d-scanner/; Rohan Pearce, ‘Queensland Police to map crime scenes with 3D scanner’ Computerworld (13 February 2014) au/article/538192/queensland_police_map_crime_scenes_3d_scanner/.

Technology in Court


i.  3D Projection

The use of 3D projection may mean that a sensitive piece of real evidence need not be handled by the jury, but may be projected on a screen and viewed using 3D glasses, obviating the need for the exhibit to be handed around the jury box. In a demonstration at the Courts Technology Conference 2013 in Baltimore, Maryland, a brick which had fragments of hair and skin adhering to it was presented using 3D technology. This was done because the fragments of hair and skin could easily be dislodged, compromising the value and integrity of the evidence. The exhibit could be rotated, so that it could be viewed from all angles. Professor Fred Lederer outlined the experiment as follows:

With the help of Wolf Vision and Panasonic, CLCT demonstrated the first known courtroom use of 3D evidence at the 2013 Court Technology Conference, showing a bat and jagged brick in 3D in a simulated road rage trial. (Yes, we have reached the point at which the judge’s instructions include, ‘Jurors should now put on their 3D glasses’.) It’s unclear whether 3D evidence should be admissible or, if so, should await 3D monitors without glasses.42

Professor Lederer’s caution is not unexpected, but as 3D technology develops it is very likely that this option for evidence presentation will have to be considered. There seems to be little reason why such a means of presenting evidence should be excluded. The real evidence—the brick—is available and present in the courtroom. It has to be for the 3D presentation to take place. It is not as though the item is a reconstruction.

ii.  3D Printing

Having said that, 3D technology does allow for the reconstruction of an item of real evidence by way of 3D printing. The use of 3D printing for investigative or court purposes is still relatively new. This may be in part because of a perception of a complex technology, cost, or simply a lack of understanding of what can be done with 3D printing. Eugene Liscio observes:

It’s a wonder why more investigators, lawyers and expert witnesses haven’t seen the benefit of 3D printing for use in court. For anyone who has been following the trends in 3D printing, it comes as no surprise that there has been significant growth in this area in the past several years. New companies have formed providing small, at-home 3D printers for ready-made parts while larger and more professional printers allow for a variety of materials to be used with colour, tight tolerances, and improved surface finishes. Materials and technologies range from powder-based materials, liquid resins, metals, and ceramics. Traditionally, these 3D printing systems have been used by engineers to create new or replacement parts while hobbyists and artists have the ability to create ready-made pieces to their own specifications. However, in the


42  Lederer above n 27.


Evidence, Trials, Courts and Technology

case of the criminal investigator or forensic scientist, only a few have actually used this technology in court.

Perhaps the greatest reason for the 3D printing boom has to do with the availability of 3D digitising systems such as laser scanners, structured light scanners, photogrammetry and similar technologies. The cost of hardware has become affordable and the ease of use of photogrammetry software has made these technologies available to the average consumer. A quality laser scanning system for smaller parts can be purchased for less than a few thousand dollars and in the case of photogrammetry, there are several low cost and even freely available programs and services offered to create highly detailed 3D models of everyday objects.

The first step to creating a 3D printed object is to be able to digitise the object into a 3D model. Although terrestrial laser scanners have seen some increased use by law enforcement agencies, close range scanners that accurately record smaller pieces of evidence like skulls, bones and shoes are not commonly owned or used by police departments. This is one reason why 3D printing for forensic use is not such a common practice. Fortunately, a local service provider with equipment capable of digitising a particular piece of evidence should not be too far away.

The second step after an object has been documented in 3D is to ensure that the model is made into a continuous volume without any ‘missing pieces’. 3D printing is a process of combining materials, one layer at a time, to make objects from 3D model data. This is opposite to a subtractive manufacturing method such as machining. The benefit is that 3D printing allows for very complex parts to be made that would be impossible with other manufacturing methods. However, much like stacking layers of blocks on top of each other, there must be at least a partial block underneath to support the next layer on top.
Therefore, the 3D model usually requires some preparation to fill any gaps and ‘solidify’ the object into a water tight mesh.

The final step is the actual printing process itself. Similar to a regular inkjet printer, there are different quality settings that can be chosen for most 3D printers that define the surface finish and step increment of the part. Depending on the shape and size of the part, print times can range from just a few minutes to several hours for more complex parts.

3D printing can be used for footprints which previously have been modelled using plaster casts. Fingerprints can be captured and rendered in 3D. Presently they are captured by the use of powder and tape. While in court, a fingerprint examiner could use a large replica of a suspect’s fingerprint to make identifications and comparisons by colour coding certain ridge features (such as islands, crossovers and bifurcations) and matching them to a found print at a crime scene. Jurors benefit by being able to easily visualise the 3D replica and they have the benefit of haptic perception. Fingerprints are a good example of where we take something small and create it at a much larger scale to bring out specific details which would normally not be easily visible by the naked eye. Fingerprint examiners in training benefit similarly from having the ability to easily visualise and ‘feel’ what an enlarged 3D replica of a person’s finger looks like before making a flat print comparison.



Other forensic uses of 3D printing are extensive and are open to creativity. Some of these might include:
—— Printing a scale model of the first floor in a home where a crime was committed.
—— Recreating a physical copy of a weapon found at a crime scene.
—— Displaying bullet trajectories through a 3D scanned article of clothing.
—— Creating a model of a suspect’s dentition and showing how well a bite mark aligns.
—— Printing a scaled model of a collapsed building due to a bombing.
—— Creating test pieces of a piece of evidence that might be used in an experiment.
Although there are few cases where 3D printing has been adopted for investigative or court purposes, the ability to physically recreate a piece of evidence is an interesting approach. The range of objects can be as small as a fingerprint or can be an entire crime scene that is scaled down to just a few feet. As investigators and scientists start to see the benefit of replicating evidence, they will need to begin looking at digitising technologies such as close range laser scanners, structured light scanners and photogrammetry. Once these technologies have been adopted and more evidence is captured in 3D, there will very likely be many more cases where 3D printing will be applied.43

There are continually developing methods of evidence presentation. Although the use of a monitor will always require some form of interface such as a projector, laptops are being replaced with tablet-style devices. Professor Lederer observes that

lawyers and judges who were uncomfortable with computers in the courtroom have far fewer concerns about iPads in particular. We hear constantly about lawyer interest in jury selection, evidence presentation, and even jury use of iPads. That iPads in particular were not designed for easy secure courtroom use matters not. Evidence presentation via iPad seems to be increasingly popular, notwithstanding security and display connection issues.44
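The ‘water tight mesh’ preparation step described in the passage quoted above has a simple formal core: a triangle mesh is closed when every edge is shared by exactly two faces. A minimal sketch of that check follows; the tetrahedron data is invented for illustration, and real scanned meshes run to millions of faces and are repaired with dedicated software.

```python
from collections import Counter

def is_watertight(faces):
    """A triangle mesh is closed ('watertight') when every edge is
    shared by exactly two faces; an edge on the rim of a hole
    belongs to only one face."""
    edge_counts = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            edge_counts[frozenset(edge)] += 1
    return all(count == 2 for count in edge_counts.values())

# A tetrahedron over vertices 0-3 is the smallest closed mesh.
tetrahedron = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
# Dropping one face opens a hole, so the object could not be printed as-is.
open_mesh = tetrahedron[:3]
```

A repair pass—‘filling the gaps’ in Liscio’s terms—amounts to finding the edges counted only once and triangulating the holes they bound.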

M.  Other Possibilities

The discussion so far has focused upon issues surrounding the information that a factfinder needs to reach a decision—evidence. However, there are other ways in which technology can be employed in the trial process. Technology can also be

43  Eugene Liscio, ‘Forensic Uses of 3D Printing’ Forensic Magazine (4 June 2013); for recent legal scholarship on 3D printing and its uses see Nora Freeman Engstrom, ‘3-D Printing and Product Liability: Identifying the Obstacles’ (2013) 162 University of Pennsylvania Law Review Online 35; Peter Jensen-Haxel, ‘3D Printers, Obsolete Firearm Supply Controls, and the Right to Build Self-Defense Weapons Under Heller’ (2012) 42 Golden Gate University Law Review 447; LC Ebert, MJ Thali and R Ross, ‘Getting in Touch—3D Printing in Forensic Imaging’ (10 September 2011) 211(1–3) Forensic Science International.
44  Lederer above n 27.



used by lawyers for presenting arguments, openings or closings to the jury using presentation software, or in cases where the case is ‘document heavy’. The courts in Queensland have developed processes for electronic trials in cases where the number of documents exceeds 500.45 In New Zealand the High Court Electronic Bundle Protocol is recommended in cases involving a similar volume of documents. Andrew Sinclair has set out some of the advantages of the E-Trial process:
—— Each instructing solicitor and Counsel can have their own tablet or computer instantaneously accessing other documents within the database.
—— Parties can keyword search documents.
—— Transcripts can be added to the database and are also keyword searchable.
—— With appropriate security in the form of password protected access to their file, the parties may login to the server from home or Chambers and thus have access to all the material wherever they are.
—— Screens can be placed in the public gallery so that the public and media can also follow cross-examination on documents.
—— All this material remains available to the judge in court, in chambers and at home for the purpose of writing the judgement, again all indexed and searchable.
—— If there is an appeal, the e-trial becomes an e-appeal with minimal document management issues.46
In England the use of the Caselines Electronic Bundles, referred to above, has eliminated paper from the Crown Court and the criminal jury trial. The suggested and actual uses of technology in the more effective provision of information necessary to assist the fact-finder can only be seen to be beneficial.

V.  The Next Phase

The trial, like all aspects of legal practice, is an exercise in information exchange. The objective is to come to a conclusion by determining the facts that are available and whether these facts fit within certain requirements defined by law. The consequences of that decision may be far reaching and often involve the life or liberty of the subject, financial well-being, corporate health or reputation, as well as having a significant impact upon an individual’s immediate and wider family. In such circumstances, the conclusions that are reached should be based on the best quality of information available. Technology can assist in that objective. However, it is suggested that technology can be employed in creative ways to drive changes in process, and it is to this aspect that I shall now turn.

46  Andrew Sinclair, Electronic Practice Management: The Tools for Managing the Preparation and Presentation of a Trial Brief (Copy on file with the author).

Using Technology to Change Process Models


VI.  Using Technology to Change Process Models

In February 2015 two reports about changing the civil dispute resolution process in England were released. The first was the report of the Civil Justice Council’s advisory group, chaired by Professor Richard Susskind, about Online Dispute Resolution (ODR) for low value civil claims.47 The second was released by the organisation JUSTICE, entitled ‘Delivering Justice in an Age of Austerity’.48 It too proposed a new model for dispute resolution in civil courts and tribunals.49 The JUSTICE report built on Professor Susskind’s proposals in three ways. First, it was considered that the scope of the dispute resolution model could extend to most first instance proceedings across the civil courts and tribunals. Professor Susskind’s ODR proposal was restricted to certain civil claims up to a certain value. Second, the JUSTICE model further refined and clarified the scope of online facilitation and online adjudication. Third, the JUSTICE proposal considered an integrated online and telephone platform offering a first port of call for individuals with potential legal problems and offering information, advice and assistance as the case proceeded. What is significant about the two proposals, which share many features including a significant use of communications technologies, is that it is proposed that the Dispute Resolution Service should be state sponsored and be part of the overall suite of options offered by the state for the resolution of legal disputes. The matter was further advanced by a report by Lord Justice Sir Michael Briggs.50 In January 2016 he released a Civil Courts Structure Review Interim Report which recognised a pressing need to create an Online Court for claims up to £25,000. The advantage of such a proposal was that litigants would have access to justice without the necessity of incurring the cost of using lawyers. Sir Michael built on the Susskind proposals and developed a three stage process.
The first stage would be largely automated—an interactive online process for the identification of the issues and the provision of documentary evidence. The second phase would involve attempts at dispute resolution and case management by trained case officers. The third and final phase would involve determination of the matter by a judge using documents on screen, telephone, video or face to face meetings to meet the needs of each case. According to Sir Michael, transfer to the ‘normal’ court process would be available for complex cases.

47  The report is available at www.
48  A copy of the report is available at http://2bquk8cdew6192tsu41lay8t.wpengine.netdna-cdn.com/wp-content/uploads/2015/04/JUSTICE-working-party-report-Delivering-Justice-in-an-Age-of-Austerity.pdf.
49  Interestingly enough Professor Susskind was a member of the Working Party which produced the report.
50  Lord Justice Briggs, Civil Courts Structure Review: Interim Report.



Sir Michael’s final report was released late in July 2016. He observed that the reaction to his Interim Report ranged from straight condemnation: ‘it will just be an expensive disaster’ (from the Young Bar) to the warmest of welcomes: ‘I am the happiest man in England’ (from Prof Richard Susskind), with every shade of approval, scepticism and disapproval in between.51

Sir Michael observed that the underlying rationale was that, whereas the traditional courts are only truly accessible by, and intelligible to, lawyers, the new court should as far as possible be equally accessible to both lawyers and litigants in person. In essence the proposals originally made by Lord Justice Briggs remained in place. One thing that did change was the name ‘Online Court’, which Lord Justice Briggs considered unfortunate. He preferred the name ‘Online Solutions Court’, which would tell would-be users where the new court is to be found and accessed. And it would be a part of Her Majesty’s Courts. In this section I examine the nature of the Online Dispute Resolution which underpinned the recommendations of Professor Susskind, the JUSTICE Group and Sir Michael Briggs. The utilisation of technology as an aspect of dispute resolution, and indeed ODR, is not new and has been offered in many areas as the use of online services and e-commerce platforms has developed. The Civil Resolution Tribunal52 in British Columbia, Canada, is an online tribunal that was launched in 2015. It is regulated under the Civil Resolution Tribunal Act 2012 and is directed to the resolution of small claims such as debts, damages, recovery of personal property, and certain types of condominium disputes up to $25,000 CAD. It aims at early resolution of disputes using an online negotiation process. If that is not successful, a mediation is conducted online or via telephone. Adjudication is seen as a last resort and is conducted via an online platform, over the telephone or by videoconferencing. The steps in the process are set out in figure 1.53

Figure 1:  Civil Resolution Tribunal British Columbia Process Diagram

51  Lord Justice Briggs, Civil Courts Structure Review: Final Report, para 6.1.



The technology behind ODR may be exemplified by Rechtwijzer 2.0.54 This was developed by the Hague Institute for the Internationalisation of Law. The application was initially designed to support people with divorce-related issues in The Netherlands; a landlord and tenant module and an employment module were scheduled to go live in 2015. The Rechtwijzer website suggests that the software will support the British Columbia proposal as well as the English one.55

A.  Online Dispute Resolution

In the late 1990s the proliferation of Internet-based communications and the development of cross-border e-commerce made traditional face-to-face dispute resolution processes impractical. Courts and traditional alternative dispute resolution (ADR) processes were difficult to access due to the high costs associated with face-to-face processes that required long distance travel and legal representation for dealing with what were often small-scale conflicts (in monetary terms). ODR originally referred to processes for dispute resolution that relied on ICT or were offered via the Internet for addressing conflicts that arose in the online space, such as e-commerce or online social fora, or that related to the digital environment, such as copyright abuse. Over time, use of such processes has expanded, and these mechanisms are increasingly being offered for the resolution of offline disputes (although interestingly some of the very early ideas for using ODR targeted offline conflicts, offering online processes for addressing family disputes).56 If one views the technology supporting ODR as a toolset that opens up options for the parties, ODR is not a distinct field but a support system for the arbitrators and mediators who address individual disputes. In seeking out these tools or software applications, practitioners are seeking technological overlays for existing practices or procedures. Examples of these tools or software applications may be found in the development of processes for ‘automated negotiation’, where a human third party such as a mediator or arbitrator is substituted with software-based decision making.57 Negotiation support systems are another toolset. This software assists negotiating parties to determine their own interests as well as to reach a mutually acceptable


55  Rechtwijzer: Divorce and separation.
56  Orna Rabinovich-Einy and Ethan Katsh, ‘Lessons from Online Dispute Resolution for Dispute Systems Design’ in Mohamed S Abdel Wahab, Ethan Katsh and Daniel Rainey (eds), Online Dispute Resolution: Theory and Practice (The Hague, Eleven International Publishing, 2013) 40.
57  Examples may be found in Electronic Consumer Dispute Resolution (ECODIR)’s preliminary stage of dispute resolution and Cybersettle’s double-blind bidding process.



solution that benefits both parties—the ‘win–win’ solution.58 Other tools allow mediators and arbitrators to exchange documents and communicate with parties without meeting face-to-face. What is interesting is that these tools are being used to facilitate a mediation even while the parties are in the same room.

The advantages of using ODR tools for ‘offline’ disputes have been mainly in the areas of efficiency, low cost and speed of communication. ‘Over the years, additional advantages have been recognized, which extend beyond efficiency-related considerations, and relate to the potential of new technologies to overcome disputant biases and facilitate parties in reaching better, pareto-optimal resolutions’.59

ODR systems involve the use of these tools in a larger framework. The tools are used in a co-ordinated way within a closed setting by a limited (but potentially very large) number of users who are engaged in ongoing relationships with other users and may experience similar problems over time. The classic example of an ODR system is the eBay dispute resolution mechanism. Sixty million disagreements amongst traders on eBay are resolved every year using ODR. There are two main processes involved. For disputes over non-payment by buyers, or complaints by buyers that items delivered did not match the description, the parties are initially encouraged to resolve the matter themselves by online negotiation. They are assisted in this by clearly structured, practical advice on how to avoid misunderstandings and reach a resolution. Guidance is also given on the standards by which eBay assesses the merit of complaints. If the dispute cannot be resolved by negotiation, then eBay offers a resolution service in which, after the parties enter a discussion area to present their argument, a member of eBay’s staff determines a binding outcome under its Money Back Guarantee.
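The ‘automated negotiation’ tools mentioned earlier can be illustrated by a sketch of double-blind bidding of the kind Cybersettle popularised. The 30 per cent threshold and midpoint settlement rule used here are one commonly described variant, not any vendor’s actual formula.

```python
def double_blind_round(demand, offer, threshold=0.30):
    """One round of double-blind bidding: neither figure is ever
    revealed to the other side. If the defendant's offer comes
    within `threshold` of the claimant's demand (or exceeds it),
    the software settles at the midpoint; otherwise it reports
    failure without disclosing either number."""
    if offer >= demand * (1 - threshold):
        return round((demand + offer) / 2, 2)  # settled amount
    return None  # no match this round; the parties may bid again

# A demand of 10,000 against an offer of 8,000 is within the band
# and settles at the midpoint; an offer of 5,000 is not, and the
# round simply fails.
```

Because neither figure is disclosed on a failed round, parties can bid candidly without weakening their position in any later negotiation—the feature that lets software replace the human intermediary for simple monetary disputes.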
The eBay system developed as a result of studying patterns of disputes and designing a process that can handle large numbers of conflicts of a repetitive nature. As a result the eBay system resolves disputes at low cost and in a timely fashion. Wikipedia provides another example of a developed ODR system. This offers a number of parallels to a traditional ADR system, such as negotiation, mediation and arbitration, as well as some uniquely online features such as online polling. Wikipedia also focuses upon dispute prevention, using technology to study dispute patterns and effective resolutions and to detect automatically problems such as unauthorised content editing, dealing with such a problem before a user reports it. What both Wikipedia and eBay have done is to provide effective dispute resolution processes onsite, as part of the overall function of the site, thereby enhancing trust in the site and improving its reputation and use. Wikipedia, eBay and other organisations that offer a dispute resolution system cater for the online environment, but Rabinovich-Einy and Katsh are of the view



that technology has challenged some of the underlying assumptions of dispute systems design (DSD) but has also demonstrated new means for addressing and preventing disputes. DSD, especially when enabled by technology, considers dispute resolution in a holistic and systematic way, emphasising the prevention of disputes as much as their resolution, rather than addressing individual disputes on an ad hoc basis with discrete tools.60 Thus technology becomes an essential part of the overall system rather than just a discrete tool for a particular purpose.

Dispute Systems Design developed in the late 1980s.61 The major premise was that patterns of disputes can be found in closed settings and that, therefore, by institutionalising avenues for addressing disputes, conflict will be handled more effectively and satisfactorily than through ex-post measures. The shift was from the individual dispute resolution model which characterised ADR to a structural one. The structural approach involved identifying patterns of disputes. Where patterns could be identified, the dispute resolution system could move beyond resolution of individual disputes and enhance prevention on a system-wide basis. Since the publication of the Costantino and Merchant book, the DSD field has generated a substantial literature on the topic and conflict management systems have been established in many different organisations and institutions.62 However, it must be noted that these systems have been developed within discrete organisations and are available within the context of the organisation.

[T]he digital environment requires familiarity with the new opportunities and dangers that are associated with digital communication and the use of digital tools for locating, addressing and preventing conflicts. As digital technology becomes an inherent part of the way people interact and organizations function, it will have to be incorporated into the way people communicate about their differences.63

Like the practice of law itself, ODR is focused on the management of communication and information. The introduction of software-based processes, modes of analysis and presentation will join and, in the case of a complete online-based remote adjudicative system, replace ‘in person’ processes. In fact, digital technology can enhance participation by parties by allowing a wider array of voices to


60  Katsh and Rabinovich-Einy above n 56, 39.
61  William Ury, Jeanne M Brett and Stephen B Goldberg, Getting Disputes Resolved: Designing Systems to Cut the Costs of Conflict (San Francisco, Jossey-Bass Management Series, 1988); C Costantino and C Merchant, Designing Conflict Management Systems: A Guide to Creating Productive and Healthy Organizations (San Francisco, Jossey-Bass Publishers, 1996).
62  Katsh and Rabinovich-Einy above n 56, 46; for examples of the literature see JP Conbere, ‘Theory Building for Conflict Management System Design’ (2001) 19 Conflict Resolution Quarterly 215–36; CA Costantino, ‘Using Interest-Based Techniques to Design Conflict Management Systems’ (1996) 12 Negotiation Journal 207–14; DM Kolb and SS Silbey, ‘Enhancing the Capacity of Organizations to Deal with Disputes’ (1990) 6 Negotiation Journal 297–304; MP Rowe, ‘The Ombudsman’s Role in a Dispute Resolution System’ (1991) 7 Negotiation Journal 353–60. See also Lipsky, Seeber and Fincher’s comprehensive book on conflict management systems: DB Lipsky, RL Seeber and Richard Fincher, Emerging Systems for Managing Workplace Conflict: Lessons from American Corporations for Managers and Dispute Resolution Professionals (San Francisco, Jossey-Bass, 2003).
63  Katsh and Rabinovich-Einy above n 56, 48.



be heard. In-person hearings impose their own limitations upon participation by the very requirement of attendance of parties and witnesses at court, disrupting daily schedules and often involving the expense of travel from a distance. Remote or ‘virtual’ presence will address these difficulties although, I acknowledge, it will not completely eliminate a reluctance on the part of some citizens to participate. The digital environment allows for greater capture of information in digital form which may be used as evidence in a dispute. Attitudes to privacy, including the disclosure of digital information within the context of a dispute, are changing dramatically as millennials seem to be more willing to disclose personal, sensitive information online. A further aspect of DSD is that the participants must be comfortable with the technology. Adjudicators in particular will have to adjust to technology-based processes which may modify ‘traditional’ court processes, and many, as a result, may find such developments threatening. These fears may go deeper than mere discomfort with new systems or the requirement to upskill. Some tools are based on automated negotiation, which displaces the third party; for repetitive, simple disputes these tools can be very effective in their resolution.64 The use of technology tends to lead both to the emergence of more complex processes and to technological resources to manage those processes. Information machines should be particularly adept at preventing disputes by tracking cases and identifying causes of problems. Technology can not only reinforce processes but change them, and this is inevitable as the field of DSD itself is transformed by information and communications technologies.65

B.  From Online ADR to Online Court

This potential for process change through technological utilisation is realised when the online courts proposals are considered. What the literature on the use of digital systems within the courts and dispute resolution process makes clear is that there are two distinctive avenues where technology can be deployed. The first is to utilise technology within the existing court process as an alternative to paper-based/print-based systems, with the objective of increasing efficiency and decreasing costs. In this respect the utilisation of technology becomes a reflection or a mirror of existing processes and, in this author’s opinion, fails to properly realise the opportunities that digital technologies present for a complete revitalisation and renewal of systems. The underlying properties or affordances of digital systems which I have discussed in chapter two present the justice system with increased opportunities for wide ranging reforms that nevertheless maintain the various values of justice,

64  ibid 51.
65  ibid 59.



fairness, participation, equality, promotion of stability and predictability, truth, legitimacy and efficiency which underpin all justice systems. Admittedly, some of these values compete with one another. For example, the focus upon efficiency tends to overshadow other values, and what may be considered a means to an end becomes an end in itself at the expense of those competing values. On occasions choices have to be made and prioritisation of values may have to take place in determining the manner and extent of the utilisation of technology. It is within this second area of the utilisation of technology—not to mirror but to reform—that the Civil Courts Structure Review, the Civil Justice Council report and that of JUSTICE fall into sharper focus. These reports suggest significant changes not only in the utilisation of online systems, and effectively the utilisation of an online court, but also in terms of the process through which a dispute passes before it reaches the point, if indeed it does reach that point, of final adjudication by a judge. In this respect some of the ADR/ODR models have been transplanted into the proposals, but it must be emphasised that this does not mean that the proposals are an alternative to the existing court process; in fact they are part of it. In this respect the justice system is adapting some of the developed models of ADR/ODR. Lord Thomas in his 10 November 2015 speech pointed out that reform to the court infrastructure is essential and that the present system could not continue. Such reform is predicated upon the better use of technology to provide the basis of a modernised infrastructure. At the same time he pointed out that technology must be used as a means to enhance rather than to undermine. He also described the reform process as radical, because the technology-based reforms, particularly to administrative processes, would lead to procedural reform.
He gave the example that if the default position becomes one where claims are issued and served online, procedural rules must be made to underpin that. The rules, however, must enable the procedure to be properly simplified in a large number of areas. Lord Justice Richards in his speech ‘Civil Litigation: Should the Rules be Simpler’ of 25 June 2015 at Gresham College66 made some observations about the Civil Justice Council report, stating that it had proposed a fundamental change in the way the court system handles low value civil claims, by the introduction of an internet-based service known as Her Majesty’s Online Court. The idea is a service with a three-tier structure. The first tier is that of online evaluation, involving a suite of online systems to guide users who think they may have a legal problem and to help them if possible to avoid a dispute. The second tier applies where a dispute has arisen and involves trained facilitators working online to review papers and statements from the parties, using a mix of alternative dispute resolution and advisory techniques to try to get an agreed settlement. Only if that fails are judges brought into

66 This link contains video of Richards LJ’s address which is also available at watch?v=kwD_F-JJC5E.


Evidence, Trials, Courts and Technology

play at the third tier, for judicial dispute resolution, deciding suitable cases online, largely on the basis of papers submitted to them electronically, within a structured system of online pleading and argument. An online system of that sort, at least in its basic form, may be suitable only for a subset of cases involving relatively straightforward problems, though its full potential has yet to be assessed. The kinds of case for which it is suitable are likely, however, to involve a high proportion of people for whom the procedural complexities of ordinary litigation are a particular problem. This new online court will need its own body of rules. The intention is to keep them simple, clear and compact, consistent with the aspiration for a service that is intelligible, accessible, speedy and proportionate in cost. There may be scope for embedding the rules in the online system itself so that, for example, when users complete forms online the system will require that this be done in a compliant way and the matter will proceed only when the appropriate formalities have been met. The idea of the online court has been received with great enthusiasm and work is underway to take it forward. It offers exciting prospects though it remains to be seen how easy or difficult it is to get such a scheme up and running and how effective it proves to be.

Lord Justice Richards suggested that the best hope for simplification was to start again with a new way of conducting litigation, and offered the online court as an example. The process as well as the rules would have to be simple from the outset. It must be emphasised again that the online court is part of the court system. It is not court-sanctioned ODR; rather, it is a court that utilises electronic communication tools exclusively. Some examples of the utilisation of technology within the court system may be noted. The Michigan Cyber Court received legislative endorsement in 2002 and was intended to be the first courtroom in the United States to operate fully over the internet using electronic document filing, Web-based conferencing and virtual courtrooms, but sadly it never came into effect because the necessary infrastructure was not funded. However it is understood that Michigan has obtained software utilised by the West Australian courts to at least put in place the necessary infrastructure for the proposal. By contrast the Federal Court of Australia’s eCourt is a virtual courtroom that enables submissions to be exchanged and directions and other orders to be made online. In essence the federal e-courtroom utilises online services to mirror existing systems and is basically an email service within a secure environment combined with a message board system. In the United Kingdom, Money Claim Online and Possession Claim Online, although both Internet-based services, are e-filing systems that default to the normal ‘in person’ system once a defence to a claim has been filed. The Subordinate Courts of Singapore’s e-alternative dispute resolution process is available for parties to an e-commerce transaction to resolve their disputes on the Internet. This is properly described as a state-based ODR system which requires the consent of both parties, in which case a court appointed mediator



will be assigned to the case. All communications and correspondence thereafter will be done by email for mediation, although the mediator may ask the parties to meet face-to-face or produce and exchange documents and exhibits. As noted, the system is limited to e-commerce transactions and matters, but if a settlement is not reached, the claim reverts to the normal court process. The three proposals for the online court allow for a full adjudicative process. The model for the online court, as described in the Civil Justice Council report, suggests that under the current system the focus of the provision of justice services has as its objective bringing a dispute to court. Matters such as dispute avoidance and dispute containment are not given significant priority, although in many civil courts there is provision for a judicial settlement conference that could be seen as a form of dispute containment. The Civil Justice Council report suggested that improving access to justice required more than ensuring that judges and courts are affordable and available to resolve disputes, and that it was vital to prevent disagreements that have arisen from escalating excessively. It is suggested that the court should extend its scope beyond the resolution or adjudication of the dispute to include dispute containment and dispute avoidance, which will thus reduce the number of disputes that need to be resolved by judges. Thus the three-tier approach was developed, directing resources more towards dispute avoidance and dispute containment than to dispute resolution, although the dispute resolution phase would still involve the online element which, unlike other systems, would remain throughout. The first tier would assist users with their grievances: to evaluate them, categorise the difficulties, understand entitlements and consider the options that would be available.
It is described as a form of information and diagnostic service available at no cost to court users. This would work alongside other online legal services that are currently available to help users with their legal problems, such as those developed by charitable bodies or provided by law firms on a pro bono basis, which would either sit within the online court or be linked to the service. The idea of online evaluation is that the first port of call for users should be a suite of systems that would guide them. By being better informed, users may be assisted in avoiding legal problems and disputes in the first place, or may be helped to resolve difficulties and complaints before they develop too far into substantial legal problems. The second tier, which is online facilitation, would be engaged if the online evaluation did not dispose of the matter. Trained and experienced facilitators working online would review papers and statements from the parties and help them by mediating, advising or encouraging them to negotiate. A mix of ADR and advisory techniques would be used and would be actively led by the facilitators using an inquisitorial rather than adversarial approach. The facilitation is in the spirit both of ADR and EDR and was inspired by the work of adjudicators in the financial ombudsman service. These individuals managed to dispose of 90 per cent of the service’s workload so that only 10 per cent of cases reached the ombudsman. A similar filtering process is anticipated with the online court.



Facilitators would be supported by telephone conferencing facilities, although there would also be some automated negotiation. The third tier would provide for a new and more efficient way for judges to work. It is proposed that online judges would be full time and part time members of the judiciary who decide suitable cases on an online basis, largely on the papers, which would be submitted electronically as part of a structured but adversarial system of online pleading and argument. The process would be supported where necessary by telephone conferencing facilities, and the decisions of online judges would be binding and enforceable in the same way that decisions made by judges in traditional courtrooms are. There should be a procedure, the report suggests, where online judges have discretion to refer cases to the conventional court system where there is an important issue of legal principle involved or where it is considered that the credibility of witnesses or evidence would better be judged in the physical courtroom. In this regard, in my opinion, one of the shortcomings of the online court as proposed becomes apparent. In tiers two and three in the Civil Justice Council report there is an emphasis upon telephone conferencing facilities. The physical presence model is suggested if there may be issues relating to credibility, and it seems curious that the utilisation of videoconferencing facilities or some other form of VOIP/video technology is not proposed to eliminate the rather disembodied utilisation of the telephone and to enable virtual ‘presence’ in the online court. Videoconferencing was recognised in the Civil Justice Council report as part of what were referred to as second generation ODR systems. The report states: [B]y the time this is fully introduced in HMOC we expect this to be a form of tele-presence (systems whose sound and video quality is so high that users feel as though they are in the same room as those with whom they are engaging).
In crude terms, this would be like adding a very high quality Skype video call to the ODR service, and this will replace the telephony that will be used in the first generation. In this way users will have videoconferences in tier one, perhaps with advice counsellors or pro bono advisers; in tier two with specialists who are, for example, mediating or offering neutral views on the legal merits; and in tier three directly with judges in a suitably designed online environment.67

It seems to me that there is no reason why video technology could not be deployed upon the introduction of the system, which would allow for other future systems suggested in the Civil Justice Council report, such as analytical systems for legal problems, the development of systems to assist in negotiation, analytical systems for evidence presentation and the development of artificial intelligence systems throughout all tiers, although it is not anticipated that AI-based systems would replace human online judges. The JUSTICE report went further, containing more detailed provisions for the early case evaluation process and the utilisation of specially trained facilitators.


67 Online Dispute Resolution for Low Value Civil Claims above n 47 para 8.4.



In addition, it identified software tools that are currently deployed in the Netherlands, British Columbia and New South Wales and, as has been noted, considers that the jurisdiction of the online court should be extended to all civil claims. It will be seen that while the general shape of the proposed online court is recognisable, with an adjudicative process as the final step on the journey, the utilisation of online systems allows for a significant shift in focus in terms of the various steps that are taken before the adjudicative process is engaged. Under the current system, unless parties are prepared to sit down and talk or negotiate through their solicitors, there is little opportunity for litigants actually to engage with the process apart from the various steps of filing a claim, filing a defence, engaging in interlocutory procedures and discovery and taking the matter to a hearing. The facilitation and evaluation tiers require party engagement and also involve active engagement on the part of the court system, particularly at the tier two level where the process becomes an inquisitorial one. It will also be seen that the proposal for the online court is not simply the deployment of technology to enhance or make more efficient an existing system. While it still involves the filing of a claim and answer online, the underlying process and focus are different, enabled as they are by technology.

VII. Conclusion

The deployment and use of technology challenge some of our assumptions about the trial process—the importance of orality, presence and ritual—and can significantly alter the way in which information is put before a fact finder using more effective communication tools. These changes occur, of course, within an existing adjudicative structure and act as overlays or alternatives to an existing system. The use of technology can significantly challenge that structure and can transform a clumsy, creaking, archaic process into a streamlined, inexpensive, efficient system less prone to error. The Digital Paradigm offers all these opportunities to radically reform our trial processes. The presence of unused digital tools that hold so much promise to effect this reform is the gauntlet challenging the Court system to move into the Digital Paradigm. What has been discussed in this chapter is all in the realm of the actual—that which is happening now—or the probable—where the technology is available and where the law or legal processes are playing catch-up. But continuing disruptive change is a reality of the Digital Paradigm, and developing new technologies and processes could radically drive procedural change. Online alternative dispute resolution could well herald challenges to fundamental processes such as the jury trial. Drawing on the model of the court process from ancient Athens, sourcing information from disparate sources which is fact checked and laying the case before a crowd-sourced jury—all of which would be done online, utilising block chain technology to ensure process integrity—an



interdisciplinary team with expertise in economics, legal philosophy, block chain, coding, design and startups has developed a system called Crowdjury to reimagine the judicial system for the collaboration era. It may or may not succeed.68 But it is an example of what the technologies that form a part of the Digital Paradigm enable.

68  Frederico Ast, ‘The Crowdjury: a Crowdsourced Judicial System for the Collaboration Era’ https:// See also ‘CrowdJury: Justice for All – A Judicial System for the Internet Era’

9

Social Media

I. Introduction

The rise of social media and the multitude of social media platforms have presented some interesting legal challenges. Exponential dissemination means that content distributed on social media potentially has a worldwide audience, and the effect of and damage from publication, often of confidential information, may have occurred and be irremediable by the time the legal process is activated and addresses the issue. In England in 2011 the publication on social media of details about celebrities and some of their more embarrassing activities resulted in the use of injunctions to prevent or prohibit publications and the development of the super-injunction, which forbade publication or disclosure of the fact that an injunction had been made. Although such orders would have received compliance from mainstream media, individuals using social media felt free to ignore them. By flouting such orders those individuals demonstrated the power of social media, the difficulty that may be encountered in enforcing an order ‘against the world’ and the ease with which such orders could be ignored, with a consequential erosion of confidence in the ability of the courts to enforce their orders and a certain disrespect for the law. A case in point is that of Ryan Giggs, an English football star who was engaged in an extra-marital affair. He obtained a super-injunction to prevent publication of the affair and publication of the fact that an injunction had been made in the first place. Mainstream media complied. Facebook, Twitter and LinkedIn subscribers did not. Giggs’ name was revealed under the protection of Parliamentary privilege, but only after the information was circulated on Twitter.1 Twitter’s UK audience jumped by a third between April and May 2011 as thousands of users tweeted about Giggs’ affair.
Facebook overtook MSN in May 2011 and became the second most popular site in the UK with 26.8 million users. In May 2011 LinkedIn registered 3.6 million visitors, up 57 per cent from a year earlier.2

1 Patrick Wintour and Dan Sabbagh, ‘Ryan Giggs named by MP over injunction’ The Guardian (23 May 2011)
2 Josh Halliday, ‘Ryan Giggs helps Facebook, Twitter and LinkedIn’ The Guardian (27 June 2011)



Grave concern was expressed by the Lord Chief Justice, who stated that modern technology was out of control and called for action against those who defied court injunctions and told lies on social media and websites.3 The development of superinjunctions arose as a result of the intrusive activities of mainstream media, particularly the tabloid press. Indeed, the media’s lack of respect for ordinary injunctions gave rise to the superinjunction.4 Although the superinjunctions could be enforced against the likes of The Sun, the disseminatory qualities of Internet-based social media platforms and the difficulty in isolating a single perpetrator out of the ‘flock of birds’ who may tweet and retweet protected information over Twitter posed very real problems for the enforcement and effectiveness of court orders. Once the information was available on the Internet it was impossible to stop. Another example may be found in the case of PJS v News Group Newspapers Ltd, which involved the question whether an injunction preventing publication in English newspapers of relationship activities and the identity of those involved should be permitted.5 Although the English press had not identified the participants, their identity had been published in the United States, Canada and Scotland. Online publication in England and Wales had been ‘geo-blocked’ although details had been made available across social media sites. At first instance the injunction was refused. Nothing would be served by granting it. The Court of Appeal upheld the injunction but the newspaper publishers applied to set it aside on the basis that the information was in the public domain. The Supreme Court upheld the injunction. There was no or limited public interest in the story and the identification of the various participants would have a detrimental effect upon the children.6 The way in which the Supreme Court dealt with the ‘public domain’ issue was interesting.
It identified a difference between the press and the Internet as media of publication. It observed that there was a qualitative difference in intrusiveness and distress likely to be involved by way of unrestricted publication in the hard copy media and on their own Internet sites, referring to a ‘media storm’.7 Furthermore the geoblocking of sites via search engines would serve to mitigate any harm and justified the retention of the injunction, notwithstanding that the information was available on other sites. As Lord Mance said: Unlike Canute, the courts can take steps to enforce its injunction pending trial. As to the Mail online’s portrayal of the law as an ass, if that is the price of applying the law, it is one which must be paid … It is unlikely that the heavens will fall at our decision.

3 Owen Bowcott, ‘Superinjunctions: Modern Technology out of control, says Lord Chief Justice’ The Guardian (20 May 2011)
4 Stephen Sedley, ‘The Gordon and Giggs Show’ London Review of Books (16 June 2011) 3 www.lrb.
5 PJS v News Group Newspapers Ltd [2016] UKSC 26.
6 For observations on the issue of privacy see Marion Oswald, ‘Have “Generation Tagged” Lost Their Privacy’ SCL Foundations of IT Law Programme 4 June 2016
7 Above n 5 [35].



It will simply give the appellant, his partner and their young children a measure of temporary protection against further and repeated invasions of privacy pending a full trial which will not have been rendered substantially irrelevant by disclosure of relatively ancient sexual history.8

The focus seemed to be more upon maintaining the integrity of the injunction within the territorial jurisdiction of the Court than upon a recognition of the more global problems surrounding the dissemination of information via the Internet. The use of geoblocking, and its recognition as a technological means of mitigating the damage of dissemination and giving the injunction some ‘technological’ support, is another example of the ‘answer to the machine’ adage, but it is a partial one only. Indeed, Lord Mance’s comment seems to recognise that the law does not provide a complete answer to the problem. Bloggers are an easier target when there has been a breach of court orders. In 2009, 10 charges were brought against a New Zealand blogger, Cameron Slater, alleging breaches of non-publication orders made in the New Zealand courts. At the time Slater was carrying out a campaign against the use by the courts of non-publication orders. He developed methods of publishing the particulars that would identify a suppressed name that were a little less obvious than using plain text. He provided particulars of the names by way of pictograms or, in one case, by using binary code. In each case there were sufficient pieces of information to identify a person by way of a process of elimination. The fact that the information was in code mattered little, and the Judge held that to say that encoding information in binary does not constitute particulars was a distinction without a difference. Similarly with the pictogram. The information could be decoded in the same way that an aggregation of information may lead to the identification of a person by way of a process of elimination—another form of interpreting a particular code or solving a puzzle.
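The triviality of such encoding can be illustrated with a short sketch. The name and the helper functions below are invented for illustration only; they have no connection with the material actually published in the Slater case. The point is simply that encoding text as binary is a mechanical, fully reversible transformation, which is why the Judge could treat it as a distinction without a difference.

```python
# Illustrative sketch only: "John Doe" is a hypothetical name, not the
# suppressed name from the case. Encoding a name in binary does not
# conceal it, because decoding is entirely mechanical.

def to_binary(text: str) -> str:
    """Encode each character as an 8-bit binary group."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_binary(bits: str) -> str:
    """Decode space-separated 8-bit groups back to text."""
    return "".join(chr(int(group, 2)) for group in bits.split())

encoded = to_binary("John Doe")          # hypothetical name
assert from_binary(encoded) == "John Doe"  # round-trips exactly
print(encoded)
```

Anyone who recognises the pattern can reverse it in a line of code or with pen and paper, so the ‘code’ functions as particulars identifying the person just as plain text would.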
Of the 10 charges laid, nine were held to be proven and Slater was convicted and fined.9 These examples demonstrate the problems that the publication and dissemination of information via social media can create for established rules and legal processes, and indeed for the rule of law itself. The legal problems arising from social media would justify a separate text. In this chapter I shall consider the challenges of social media within the framework that I have set out in chapter two. What is social media? Only once we have defined the phenomenon and the technology behind it can we understand the nature of the problem and how it can be addressed. The issue of definition is a somewhat nuanced one. It could be said that we know social media when we see it, but that is definition by example or by platform, and that is limited and not entirely adequate. For that reason some consideration of the matter of definition is necessary. Associated with the issue of definition is that of ordering social media platforms into a general taxonomy, identifying the general characteristics of the various

8 ibid, para 3.
9 Police v Slater [2011] DCR 6.



platforms and then determining where they fit within the overall ‘social media ecosystem’. I then move on to consider the challenges posed by social media to the jury trial. This has been an issue that attracted considerable attention in England as a result of publicity surrounding juror misbehaviour in a complex and expensive trial10 and another case where a juror conducted Internet-based research about an accused person and communicated that information to fellow jurors.11 The discussion identifies the nature of the problem and suggests a nuanced approach to its solution that recognises the way in which information communication via the Internet may demand a variety of responses. Then follow two examples of the way in which social media use has attracted the interest of the authorities and resulted in prosecutions. Once again the nature of the information communicated and, importantly, the context and background to its communication must be taken into account. Both cases demonstrate the ‘contextlessness’ of some Internet-based communications and the care that must be taken in initiating prosecutorial activity. In both cases there were plenty of opportunities to pause and ‘take a deep breath’. In both cases, the initiation of prosecutions was somewhat heavy handed and had the potential to bring the law into disrepute. The case of Chambers v DPP was itself widely publicised on social media and attracted the Twitter hashtag #twitterjoketrial.12

II. What is Social Media?

Social media and social networking are phenomena that have developed on the Internet and are best understood when compared with pre-digital forms of media communication. When we think of media we generally think of mainstream news media such as radio, television or newspapers—basically using print or broadcast technologies. Communication using these technologies is generally in the hands of large conglomerates, centrally located, with a ‘one to many’ distribution model. In the case of broadcast technologies—putting to one side the recording of broadcast content—engagement with the content provided is on an appointment basis, where the viewer/listener must be in the proximity of a receiver to view or listen to the content. Feedback, if any, is generally limited, in the case of newspapers, to letters to the editor. The ability to engage and participate in this form of communication is very limited indeed. Social media presents an entirely different form of engagement. Rather than a ‘one to many’ model, social media presents a ‘many to many’ model where anyone


10 Attorney-General v Fraill [2011] EWCA Crim 1570, [2011] 2 Cr App R 21.
11 Attorney-General v Dallas [2012] EWHC 156 (Admin), [2012] 1 WLR 991.
12 Chambers v DPP [2012] EWHC 2157 (QB).



using a social media platform can engage in the ‘conversation’ and share a point of view, pictures, video or lengthier comment, thus democratising the information and communication space. Definitions of social media seem to converge around digital technologies emphasising user-generated content or interaction.13 Some definitions focus upon the nature of message construction in social media, defining social media as ‘those that facilitate online communication, networking and/or collaboration’.14 Kaplan and Haenlein briefly define social media as ‘a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of User Generated Content.’15 The problem with a ‘Web 2.0’ characterisation is that it ignores the movement towards mobile handheld devices, which carry social media tools as individual applications or ‘apps’ that are not web-based. Lewis suggests that the term social media serves as a ‘label for digital technologies that allow people to connect, interact, produce and share content’.16 One of the difficulties with these definitions is that they can encompass other technologies such as email and overlook the unique technological and social qualities that distinguish social media. A more complex definition suggests that social media can be divided into three parts: a) the information infrastructure and tools used to produce and distribute content; b) the content that takes the digital form of personal messages, news, ideas and cultural products; and c) the people, organisations and industries that produce and consume digital content.17 This interesting definition identifies a transport layer, a content layer and a form of user interface. However Howard and Parks use specific platforms as exemplars. This focus upon tools overlooks their actual and potential social impacts. The ‘platform exemplar’ approach may widen or narrow the scope of the definition.
Under this approach social media—the media for online communication—means online sites and tools that enable and facilitate online interaction and collaboration as well as the sharing and distribution of content.

13 AM Kaplan and M Haenlein, ‘Users of the World, Unite! The Challenges and Opportunities of Social Media’ (2010) 53 Business Horizons 59 S0007681309001232.
14 A Russo, J Watkins, L Kelly and S Chan, ‘Participatory Communication with Social Media’ (2008) 51 Curator: The Museum Journal 21.
15 Kaplan and Haenlein above n 13, 61.
16 BK Lewis, ‘Social Media and Strategic Communication: Attitudes and Perceptions among College Students’ (2010) 4 Public Relations Journal 1.
17 PN Howard and MR Parks, ‘Social Media and Political Change: Capacity, Constraint, and Consequence’ (2012) 62 Journal of Communication 359, 362.



This wide definition includes blogs, wikis, on-line fora, social networking sites such as Facebook, LinkedIn, Twitter and Google+, content communities such as YouTube, Flickr and Vimeo, social bookmarking and pinboard sites like Delicious, Pinboard and Pinterest, RSS and web feeds, web manipulation and parsing tools, web creation tools and embeddable multimedia. The Oxford English Dictionary defines social media as ‘websites and applications which enable users to create and share content or to participate in social networking’. Social networking is defined as ‘the use or establishment of social networks or connections; (now esp) the use of websites which enable users to interact with one another, find and contact people with common interests etc’. The focus upon the content layer and the very broad scope of some of the definitions either casts the net too wide or leads to uncertainty and imprecision. There tends to be a general consensus on the tools that may be considered social media but a lack of consensus on what defines these tools as social media. The definitional approach using exemplars is the one that has been adopted by most commentators.18 Social media tools are recognisable, but defining social media in this way limits the opportunity to develop a broad and robust theory of social media. An interaction on Twitter is useful as an exemplar of social media only for as long as Twitter remains stable both in technology and in how users communicate through tweets. This model cannot be extended beyond Twitter.19 Carr and Hayes suggest that there must be a common understanding of social media that is applicable across disciplines, and only then can we theorise social media processes and effects.20 Although it is important to understand the technology, it is more important to understand how the technology affects user behaviours.
Thus to adopt a technocentric approach to social media based on specific devices or tool affordances, often considered to be synonymous with Web 2.0 or the collaborative web,21 is unhelpful because it tells us little about the development of behaviour. An additional difficulty is experienced when social media and social networking are conflated.22 Social network sites have been defined as ‘web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by

18 ibid.

19 Caleb T Carr and Rebecca A Hayes, ‘Social Media: Defining, Developing, and Divining’ (2015) 23 Atlantic Journal of Communication 46, 47.
20 ibid.
21 For example see E Agichtein, C Castillo, D Donato, A Gionis and G Mishne (11 February 2008) ‘Finding High-quality Content in Social Media’, Paper presented at The International Conference on Web Search and Web Data Mining, Palo Alto, CA; Tim O’Reilly, ‘What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software’ (2005) O’Reilly Media. html.
22 For a discussion of social networking within the context of issues of privacy see ch 10.



others within the system’.23 Although social network sites are usually social media tools, not all social media are inherently social network sites. Thus it can be seen that social media have sometimes been considered as amalgamations of site features and at other times defined by specific features or technological affordances, minimising their unique communicative properties. Carr and Hayes propose a new definition that recognises social media as a distinct subset of media tools that share a common set of traits and characteristics. This is based on the proposition that the content that individuals create and consume provides an intrinsic value that is far greater than the individual site provides. The definition that they suggest is as follows:

Social media are Internet-based channels that allow users to opportunistically interact and selectively self-present, either in real-time or asynchronously, with both broad and narrow audiences who derive value from user-generated content and the perception of interaction with others.24

This definition recognises that social media are a phenomenon of the Internet, that there is user autonomy as to the level of participation and that the creation or use of content in one form or another is essential. Thus the only technological aspect of the definition lies with the Internet as the basis for the communication channels. Carr and Hayes observe that earlier attempts at definition were hampered by the following problems:
a) an excessive focus upon emerging trends in technology, media and users, thus limiting their temporal applicability;
b) being so broad that they could apply to other forms of communication technology such as e-mail; and
c) being so ‘discipline specific’ that they were too limited to be applicable for the development of theory.25

On the basis of their definition, and without necessarily becoming too ‘platform’ or ‘exemplar’ specific, Carr and Hayes have divided a number of Internet-based communications technologies into those that are social media and those that are not.26

Social Medium: social network sites (Facebook, Google+, YouTube, Yelp, Pheed); professional network sites (LinkedIn, IBM’s Beehive); chatboards and discussion fora; social/casual games (Farmville, Second Life); wiki ‘Talk’ pages; Tinder; Instagram; Wanelo; Yik Yak.

Not a Social Medium: online news services (NY Times Online); Wikipedia; Skype; Netflix; e-mail; SMS and texts; ooVoo; Tumblr; Whisper.

23  DM boyd and NB Ellison, ‘Social Network Sites: Definition, History, and Scholarship’ (2007) 13 Journal of Computer-Mediated Communication 210, 211.
24  Carr and Hayes, above n 19, 50.
25  ibid, 52.
26  ibid, 53.

A.  The Social Media Taxonomy

The various platforms identified as social media have certain characteristics that serve to assist in identifying precisely how engagement via a social medium takes place. In this regard it is helpful to consider developing a social media taxonomy. Rather like the definition of social media itself, no universally accepted classification system exists. Scholarly and business research studies analyse social media usage behaviours and draw upon past studies to come to an understanding of how business can use social media to market products and services.27

There are differing approaches to the development of a social media taxonomy. Some are based upon the data types that are generated in social media use, such as service provider data types and user-related data types.28 Some taxonomies may be centred upon users and their activities.29 Others have been based on Bloom’s taxonomy of learning domains, as developed in the 1990s by Lorin Anderson, with its levels of remembering, understanding, applying, analysing, evaluating and creating.30 These categories, which suggest active engagement as a part of the thinking process, include various social media tools within each category, as shown in figure 1.

Most of the attempts at taxonomies run contrary to the definitional approach. Whereas the definitional approach attempts to exclude definition by example, the development of a taxonomy almost demands the identification of platforms which exemplify the various classes within the taxonomy. Thus, within the classification

27  Rosa Lemel, ‘A Framework for Developing a Taxonomy of Social Media’ (2014) 6 Business Studies Journal 67.
28  Christian Richthammer, Michael Netter, Moritz Reisner, Johannes Sanger and Gunther Pernul, ‘Taxonomy of Social Network Data Types’ (2014) EURASIP Journal on Information Security 11. Service provider data types include login data, connection data and application data. User-related data may include profile data and associated information such as ratings and interests, and communication data.
29  Min-Sook Park, Jong-Kuk Shin and Yong Ju, ‘A Taxonomy of Social Networking Sites Users: Social Surveillance and Self-Surveillance Perspective’ (2015) 32 Psychology and Marketing 601.
30  In 1956 Benjamin Bloom led a group of educational psychologists who developed a classification of levels of intellectual behaviour important in learning. During the 1990s a new group of cognitive psychologists, led by Lorin Anderson, a former student of Bloom’s, updated the taxonomy reflecting its relevance to work in the twenty-first century.



Figure 1:  Web Tools and Social Media Platforms in the Context of Bloom’s Taxonomy Levels

Figure 2:  Fred Cavazza’s Social Media Landscape 2008



applying the modified Bloom approach, platforms are identified which are illustrative of the various intellectual activities undertaken. The platforms themselves are free and are available on the Internet or as Apple or Android apps. Different platforms have different methods of developing revenue streams, primarily from advertising but also from the collection and aggregation of user data which can then be used for social media analytics. The most common use of analytics information is to develop a picture of customer sentiment for marketing and customer service activity.

French social media researcher Frederic Cavazza unashamedly approaches the development of a social media taxonomy on the basis that social media are places, tools and services allowing individuals to express themselves in order to meet, communicate and share. His classification system is based on first identifying social media tools and then classifying them broadly under headings based on the particular aspects of Internet social activity they fulfil. His first and perhaps best-known classification, developed in 2008 and entitled ‘The Social Media Landscape’, is presented in the form of a diagram which reflects subsets of activity in which users may engage.31 These include publishing, primarily by means of blogs but also by collaborative systems known as wikis, of which Wikipedia is the best example; platforms devoted primarily to sharing; discussion fora; social networking; microblogging, of which Twitter is the best-known example; lifestyle activities, which include lifestreaming and livecasting; and gaming forms of interaction such as virtual worlds (of which Second Life is an example), social games and MMOs or Massive Multiplayer Online Games, which have a significant social component associated with the game. Cavazza himself acknowledges the dynamic and disruptive nature of Internet social media platforms.
Each year he has updated his landscape, which now represents the changing face of social media. Online social media is an evolving field with new platforms and features, and access technology has evolved as well. The rise of mobile or handheld devices such as the tablet and the smartphone has changed user access habits. Indeed, smartphones are now the first devices used for communication.

The fate of Google+ provides an example of the volatility of Internet social media/networking platforms. Although Google+ had featured in Cavazza’s classification for some time, it was a platform that had not achieved widespread acceptance, and in early 2015 Google developed products known as Photos and Streams which were elements of Google+ but are now distinct from the social network.32 At the same time the profile links began to disappear. As may be seen from the earlier discussion, profiles form a fundamental part of a social media/network platform.33 On the


31  See figure 2.
32  Rich McCormick and Thomas Ricker, ‘Google+ officially splits into Photos and Streams’ The Verge (2 March 2015).
33  Casey Newton, ‘Google+ profile links have started disappearing from Google’ The Verge (1 June 2015).



other hand, new social media/networking platforms have become available, some of them capitalising on live video streaming services such as Meerkat and Periscope.

Not only have social media platforms changed and evolved, but so has Cavazza’s classification system. In 2012 he based his classification not only on a reduced number of activity classes34 but also surrounded those activities with the various types of device that could be used to access social media. The latest iteration of Cavazza’s classification system is further simplified. Cavazza describes it as one large ecosystem with four major usages.35 At the centre of social media activity are Facebook and Twitter, which allow users to fulfil the four major usages or activities in the social media/networking ecosystem, namely publishing, sharing, networking and discussing. Mobile applications such as WeChat, Hangouts and Snapchat occupy a central position based primarily upon their multi-functionality within the social media ecosystem.

Cavazza has recognised the importance of the various devices that might be used to interact with social media platforms. The proliferation of connected devices such as tablets, desktops, laptops and particularly handheld devices such as smartphones demonstrates the actual and potential ubiquity of social media platforms.

Figure 3:  Fred Cavazza’s Social Media Landscape 2015

34  Cavazza’s 2012 categories of activity were described primarily as conversations and interactions and were further defined as Buying, Localisation, Publishing, Sharing, Playing and Networking. Central to all these activities were the three social media platforms of Facebook, Twitter and Google+.
35  Frederic Cavazza, ‘Social Media Landscape 2015’.



Why is this classification important? Primarily, a form of classification locates a particular platform within a certain sphere of Internet-based social interaction. Although the platforms all share the characteristics of social media, such as profiles, sharing, communities and the like, they have sometimes subtle, sometimes significant differences in the way in which they work. Whilst the law regulates behaviour rather than a technology, within the field of Internet-based communications it is my contention that there must be an understanding of what the technology does and how it works.36 In chapter three I have already discussed the difficulties that the law may experience in attempting to use analogy to ascertain the applicability of a rule. A proper understanding of the technology and its purpose will lead to sound decision-making that locates a behaviour within its correct technological context. Given that observation, I shall now proceed to consider some examples of the interaction between social media and the law.

III.  Social Media Meets the Law

Social media as a new step in communications will be subject to the laws which have always applied to communications.

Laws regulating communications have existed as long as communications technology. All forms of communication, from personal conversations to mass media broadcasts, are covered by one form of law or another but conventionally the level of legal regulation increased with the reach of the communication medium. The law has long recognised that broadcast communications carry with them significant power, both because of the size of their prospective audience and the fact that, due to the considerable economic barriers to entry, the proprietors of mass communication mediums have always been relatively few in number.37

As a general proposition this statement is probably correct, but it rather avoids the paradigmatic difference between what could be described as ‘mainstream media’ and ‘new media’ or social media. This may flow from identifying social media simply as another communication medium rather than taking stock of the social medium itself. The technology itself must be examined, especially given that social media platforms to varying extents exhibit the underlying qualities of digital communications technologies, such as exponential dissemination, information persistence, dynamic information, anonymity (to a certain degree), continued development of platforms via permissionless innovation and the associated continuing disruptive change.

Commentators focus upon messages posted on social media websites, or the various activities conducted on social media, rather than examining the medium itself and trying to make some sense of that. The focus, as is so often the case, is on the message rather than on the medium and how it affects or drives communication behaviours. Michael L Kent states the issue in this way:

If we take McLuhan’s premise from 1964 that media are ‘extensions of humans’, then a reasonable question might be, how do social media extend our senses and experiences, not simply how are social media used, which is akin to a study of newspaper readership or Nielsen ratings. I believe that most scholars, professionals, and social media users would agree that social media are different in many ways than the traditional print and broadcast media.38

36  There has been critical comment on the use of analogy to address problems posed by new technologies, illustrated by the observation of Matt Collins QC in ‘Paddling in the Backwater: Australian Courts and Online Defamation’ The International Forum for Responsible Media Blog (27 November 2015): ‘The New Zealand Court of Appeal recently spilled litres of ink analysing whether defamatory comments posted on a Facebook profile were best compared with poems tacked onto a golf club notice board, graffiti on a wall, or statements shouted out at a public meeting’. See Murray v Wishart [2014] NZCA 461. For a discussion of Murray v Wishart see ch 3 in the context of analogies and ch 11 as an example of ‘platform defamation’.
37  Joseph Collins, ‘Social Media and the Law’ in Patrick George, Monica Allen, Stefanie Benson, Joseph Collins, James Mattson, Justine Munsie, Gabriella Rubagotti and Gavin Stuart (eds), Social Media and the Law (Australia, LexisNexis Butterworths, 2014) 3–4.

The fact of the matter is that social media exemplifies the paradigmatic nature of the change in the way we deal with and communicate information within the digital space. Social media is a complex phenomenon. It has, as Petra Theunissen suggests, co-existing multiple states and potentialities, rather than being simply the sender-to-receiver information dissemination tool that most social media commentators have assumed. Theunissen concentrates not upon the content of communication, which has been the focus of most studies, but upon the logic and potential of the medium and the technology.39

A.  Social Media and the Courts

In 2010 the committee of the Conference of Court Public Information Officers in the United States issued a report on the impact that the new media is having on the court system.40 The findings of that study were interesting. It observed that there are emerging interactive social media technologies that are powerfully multimedia in nature; that there are fundamental continuing changes in the economics, operation and vitality of the news industry that courts have relied upon to connect with the public; and that there are broader cultural changes in how the public receives and processes information and understands the world.

These ‘new media’ pose a number of challenges to courts and their culture: new media are decentralised and multi-directional whilst the courts are institutional and largely unidirectional; new media are personal and intimate whereas courts are separate, sometimes cloistered and by definition independent; and new media are multimedia, incorporating video and still images, audio and text, whilst courts are highly textual. Into this cloistered and highly textual environment come jurors whose perceptions have been formed by the media to which they have been exposed.

The report identified seven categories of new media technology that could impact upon the courts. These are:
—— Social media profile sites (Facebook, Myspace, LinkedIn, Ning), which allow users to join, create profiles, share information and view still and video images with a defined network of ‘friends’.
—— Microblogging (Twitter, Tumblr, Plurk). A form of multimedia blogging that allows users to send and follow brief text updates on micromedia such as photos or audio clips and publish them on a website for viewing by everyone who visits the website or by a restricted group. Microbloggers can submit messages in a variety of ways, including text messaging, instant messaging, email or digital audio.
—— Smart phones, tablets and notebooks (iPhone, iPad, Droid and Blackberry)—defined as those mobile devices that can capture audio, as well as still and video images, and post them directly to the Internet. These devices also enable users to access the Internet, send and receive emails and instant messages, and otherwise connect with online networks and communities through broadband or Wifi access.
—— Monitoring and metrics (Addictomatic, Social Seek, Social Mention, Google Social Search, Quantcast), which includes the large and increasing body of sites that aggregate information about Internet traffic patterns and what is posted on social media sites. They display analysis of how a particular entity is portrayed or understood by the public.
—— News categorising, sharing and syndication (blogs, RSS, Digg, Reddit, Delicious)—a broad category that includes websites and technology that enable the easy sharing of information, photos and video, and the categorisation and ranking of news stories, posts to blogs and other news items.
—— Visual media sharing (YouTube, Vimeo and Flickr), allowing users to upload still and video images that are stored in searchable databases, are easily shared, and can be emailed, posted or embedded into nearly any website.
—— Wikis. A wiki is a website that allows for the easy creation and editing of multiple interlinked web pages via a web browser using a simplified mark-up language or a WYSIWYG (what you see is what you get) text editor. Among the uses for wikis are the creation of collaborative information resource websites, community websites and corporate intranets. The most widely recognised and used wiki is the collaborative encyclopedia Wikipedia. Another much lesser-known wiki that has an impact on the judicial system and is the subject of study in the new media project is Judgepedia.41

All of these categories of new media involve the creation, assembly and dissemination of information. Many of these utilities have been adopted by mainstream media on the Internet to the extent that there is a significant element of media convergence.42 Not only may information about cases be disseminated in a multitude of ways by mainstream media, but it may also be the subject of commentary, discussion and opinion on blogs and Twitter. In addition, modern technology means that the Internet is accessible virtually anywhere—the quality of permanent connectedness. Portable wireless devices mean that an individual may blog or tweet from anywhere, including inside a courtroom. Miniaturised devices such as smartphones mean that such activity may be carried out discreetly. The ubiquity of handheld devices like tablets and smartphones means that anyone can be a court reporter.

38  Michael L Kent, ‘Introduction—Social Media Circa 2035: Directions in Social Media Theory’ (2015) 23 Atlantic Journal of Communication 1, 2.
39  Petra Theunissen, ‘The Quantum Entanglement of Dialogue and Persuasion in Social Media: Introducing the Per-Di Principle’ (2015) 23 Atlantic Journal of Communication 5,
40  ‘New Media and the Courts: The Current Status and a Look at the Future’. The third survey, carried out in 2012, is interesting. It revealed: The participation of judges in the survey continued to climb, as did their use of the technologies surveyed. The percentage of judges who strongly agree that their own use of the technologies in the survey poses no threat to professional ethics has doubled since the first year of the survey. This applies whether the technologies are used in personal or professional lives. The percentage of judges who strongly agree that courts as institutions can use the technology without compromising ethics has also doubled since 2010. The percentage of judges who strongly agree that new media are necessary for public outreach has doubled since 2010. The 2012 report at p 3 pointed to developments that had occurred since the 2011 report, indicating increased court social media use and further use of communications systems to inform the public in varying levels of detail and sophistication of court processes.
‘Live tweeting’ from court presents challenges for the integrity of the trial process, especially if the identity of a confidential witness is revealed or a reference is made to evidence which may be challenged or ruled inadmissible. The phone camera can be easily and surreptitiously deployed to photograph participants in a trial, including jurors and witnesses. I recall one incident where a witness was photographed by a member of the public who belonged to a criminal gang—obviously to intimidate. The action took a matter of seconds, and the intervention of courtroom security staff meant that the phone was secured and the person was dealt with. Nevertheless, the disruptive effect upon the proceedings was considerable.

41  Judgepedia provides information on the Federal and State Judiciaries. It was absorbed into Ballotpedia—the encyclopaedia of American Politics—in early 2015.
42  NZ Law Commission, The News Media Meets ‘New Media’—Rights, Responsibilities and Regulation in the Digital Age (Law Commission, Wellington, December 2011, Issues Paper 27) 20–29.



It is sometimes difficult for a judge to keep an eye on everything that is going on in court, so surreptitious photography—be it still or video—can take place. The problem is then exacerbated when the photo or the video is distributed via social media or, in the case of video, live streamed.

In New Zealand there is a regime in place which allows for the presence of cameras in court after application is made to the judge by a recognised news media representative. New In-Court Media Guidelines were recently approved by the Chief Justice.43 There are certain occasions when the proceedings of the court may be livestreamed,44 but in most cases immediate communication of court proceedings is subject to a delay period of 10 minutes in the event that there may be a challenge to the admissibility of evidence or there may be a requirement for confidentiality.

A case where the defendant was charged with breach of non-publication orders was steeped in aspects of Internet culture.45 The defendant himself, represented by counsel, sought leave to ‘live blog’ the case, although that was not pursued once he realised that his attention would be more profitably directed to the matter in hand. A news media organisation wanted to report the case in a ‘blow-by-blow’ fashion and post the story to its website as it developed. The judge directed that the ‘10 minute’ rule would be applicable and the reporters ran a system of ‘relay reporting’. Once the 10-minute period had expired the reporter in court would leave to post content on the website and be replaced by another reporter, and so it proceeded over the hearing. The result was an excellent example of court reporting.

The 2014 Conference of Court Public Information Officers (CCPIO) New Media Survey46 observed that in the United States nine out of 10 adults carry a mobile device and 75 per cent of those between 25 and 29 admitted to sleeping with their mobile phones.
The question was posed: why would they leave them at the courthouse door? Although most of the challenges posed by new media and handheld devices can be dealt with by way of established procedures such as contempt of court, these devices have become such an integral part of society that judges and courts are increasingly focusing on containing the use of electronic devices rather than banning them from the courthouse, and are becoming more comfortable with allowing court proceedings to be shared through social channels. However, although courts recognise the expectation of the general public to take a cell phone or mobile device into the courtroom, judges and officials generally prefer that the devices not be used during court proceedings.

43  In-Court Media Coverage Guidelines 2016, media-centre/INCOURTMEDIACOVERAGEGUIDELINES2016.pdf.
44  The appeal by Kim Dotcom and others against an extradition decision and an application for judicial review was the subject of an application to livestream the proceedings. The order was granted subject to conditions. See Ortmann, Dotcom, van der Kolk and Batato v USA (High Court Auckland CRI 2015-404-429, Ruling of Gilbert J, 30 August 2016).
45  Police v Slater, above n 9.
46  2014 CCPIO New Media Survey, Conference of Court Public Information Officers, 6 August 2014.



The 2014 New Media Survey also points out that courts are recognising that social media can be used for court communication purposes. In 2014, 37 per cent of the courts surveyed had a social media policy, and Facebook, Twitter and YouTube use by court administration is increasing. Twitter may be used to notify the release of decisions. There is an increasing recognition of the fact that social media is a necessary tool to enable the courts to connect with the public.

There is, however, a particular problem that arises when decision-makers become involved in the use of social media and social networking in the case of a trial. There have been occasions where judges have used social media either in the course of a trial or in circumstances where there may be a perception of bias. The use of social media by the judiciary is a contentious issue. Some Judicial Conduct Guidelines prohibit judicial engagement with social media. Others offer cautionary advice.47 Some support the judicial use of certain social media platforms.48 Difficulties occur when a Facebook friend or a Twitter follower may subsequently be a litigant. The mere existence of social network relationships between a judge and one of the parties appearing before him creates an appearance or perception of bias and raises possible concerns regarding the risk of ex parte communications.49

Law practitioners must exercise a considerable degree of restraint in the use of social media, bound as they are by the requirements of professional conduct and ethics standards. The Law Society of England and Wales released a Practice Note in June 2015 addressing best practice for social media use for practitioners, emphasising the benefits in terms of commercial opportunities and enhanced engagement with clients along with professional networking and the opportunity to debate issues, as well as the risks involved in blurring boundaries between personal and

47  For example, Kentucky Judicial Ethics Opinion JE-119, Judges’ Membership on Internet-Based Social Networking Sites (January 2010): the Ethics Committee concluded that the current answer is a qualified yes; Ohio Judicial Ethics Advisory Opinion 2010-7, Supreme Court of Ohio: a judge may be a ‘friend’ on a social networking site with a lawyer who appears as counsel in a case before the judge, but the opinion cautions, ‘As with any other action a judge takes, a judge’s participation on a social networking site must be done carefully in order to comply with the ethical rules in the Code of Judicial Conduct’,; South Carolina Advisory Committee on Standards of Judicial Conduct Opinion No 17-2009, Re: Propriety of a magistrate judge being a member of a social networking site such as Facebook: the Committee concluded that ‘Allowing a Magistrate to be a member of a social networking site allows the community to see how the judge communicates and gives the community a better understanding of the judge. Thus, a judge may be a member of a social networking site such as Facebook’. For further examples from the United States of America see ‘Implications of Judges and Attorneys Using Social Media’, National Center for State Courts Social Media and the Courts Resource Guide.
48  David Lat, ‘Judges on Twitter: Is This a Problem?’ Above the Law (30 September 2014).
49  David Lat, ‘A Federal Judge and His Twitter Account: A Cautionary Tale’ Above the Law (18 November 2015); David Lat, ‘An Update on the Federal Judge and “His” Twitter Account’ Above the Law (30 November 2015).



professional use and recognising that ethical and professional standards apply as much online as they do in the real world.50

IV.  The Googling Juror

The use of social media and access to the Internet by jurors poses some very real problems for the integrity of the trial process. The phenomenon has been described as ‘The Googling Juror’.51 The real issue is the approach that should be adopted when jurors have engaged in social media use during the course of the trial and have disclosed views about the case or jury deliberations, have engaged in a social media relationship with an accused or other participant, or have gone outside the evidence presented in court and undertaken private research. Some detailed research into the extent of juror use of the Internet has been carried out by Professor Thaddeus Hoffmeister,52 who conducted one of the first surveys on jury service in the Digital Age, and in England by Professor Cheryl Thomas.53

A.  Professor Thomas’ Research

The study by Professor Thomas was carried out in Nottingham, Winchester and London and included 62 cases and 668 jurors. It covered both extended, high-profile cases and standard cases lasting less than two weeks that had attracted little media coverage. Professor Thomas concluded that all jurors who looked for information about their case during the trial looked on the Internet.54 Further, she found that:
—— More jurors said they saw information on the internet than admitted looking for it on the internet. In high-profile cases 26 per cent said they saw information on the internet compared to 12 per cent who said they looked. In

50  The Law Society, Social Media Practice Note (18 June 2015). For some observations from 2007 see Laurence Eastham, ‘Editorial—Web 2.0 and its Impact on Society and Legal Practice’ SCL Journal (2 July 2007).
51  David Harvey, ‘The Googling Juror: The Fate of the Jury Trial in the Digital Paradigm’ (2014) New Zealand Law Review 203. This section draws on my earlier article.
52  Thaddeus Hoffmeister, ‘Google Gadgets and Guilt: Juror Misconduct in the Digital Age’ (2012) 83 University of Colorado Law Review 409. Professor Hoffmeister’s article, although focusing upon American practice, contains an interesting and informative discussion of the problem and poses a number of solutions which are common to most American writers on this subject.
53  Cheryl Thomas, Are Juries Fair? (Ministry of Justice Research Series 1/10, February 2010) vii–viii.
54  ibid, viii.

The Googling Juror


standard cases 13 per cent said they saw information compared to 5 per cent who said they looked. —— In the study jurors were admitting to doing something they should have been told by the judge not to do. This may explain why more jurors said they saw reports on the internet than said they looked on the internet. —— Among all jurors who said they looked for information on the internet, most (68 per cent) were over 30 years old. Among jurors on high profile cases, an even higher percentage (81 per cent) of those who looked for information on the internet were over 30.55 Thus the problem was not limited to younger jurors. However, this must be viewed within the context of the demographics. 67 per cent of the jurors were between the ages of 30 and 59 whereas 17 per cent were within the 18–29 age bracket for the Nottingham Crown Court. The figures were 59 per cent and 18 per cent respectively for the Winchester Crown Court. Thus, the majority of jurors comprised the >30 age bracket. That they also comprised the majority of Internet researchers cannot be surprising in itself.56 Professor Thomas continued: The findings raise a number of questions that should be examined further: do jurors realise they are not supposed to use the internet? How do they use the internet: do they just look for information or do they also discuss the case on social networking sites? What type of judicial instruction would be most effective in preventing jurors from looking for information about their case on the internet?57

The results of the survey show that in high-profile cases almost three-quarters of jurors will be aware of media coverage of their case. As Professor Thomas says, it would be helpful to know how these jurors perceive this media coverage, what particular type of pre-trial coverage jurors recall and what type of coverage some jurors find difficult to put out of their minds.58 These findings must be read alongside the fact that new technologies have allowed for media convergence and an availability of material and information via the Internet that challenges the ‘fade factor’ or ‘practical obscurity’.

In a subsequent article Professor Thomas considered that a specific combination of factors could arise in jury trials, creating a ‘perfect storm’ of juror misconduct. Those factors were:
1. when jurors do not understand that they should not look for information (via the internet or elsewhere) about their case during the trial;
2. when jurors do find such information and share it with other members of the jury; and


55  ibid, viii.
56  ibid, 57.
57  ibid, viii.
58  ibid, 44.



3. where, even if other jurors know this behaviour is wrong, they are unwilling or do not know what to do to ensure that any verdict they return is fair.59

Professor Thomas used these factors to identify steps that may be taken to avoid juror misconduct in the future.

B.  Information Flows

In considering the issue of juror Internet use the nature of the information communication must be considered. This can be viewed in terms of the direction of the ‘information flow’.60 ‘Information in’ arises where untested information comes into the jury room. ‘Information out’ may occur where a juror communicates updates on jury room experiences, including deliberations, or blogs about the experience of jury service. ‘Information in’ provides the most significant challenge in that it may include untested evidence, opinions, incorrect definitions of law or even information about the past criminal history of the accused. One of the best examples of ‘information in’ is the case of Attorney-General v Dallas.61 In that case Theodora Dallas, a juror in a trial, told fellow jurors that a man on an assault charge who was in their charge had previously been accused of rape. She had conducted her research at home and had clearly deliberately disobeyed the trial judge’s instructions not to search the Internet. She was sentenced to six months’ imprisonment for contempt of court. In Australia there is a specific statute that addresses the ‘information in’ phenomenon. In June 2011, after a lengthy investigation by Victorian Police, a juror who went online and sought information during a high profile trial which ended in deadlock pleaded guilty and was fined $1,200. He was the first person to be prosecuted under laws introduced to crack down on ‘do it yourself’ jurors who could imperil trials.62

59  Cheryl Thomas, ‘Avoiding the Perfect Storm of Juror Contempt’ (2013) Criminal Law Review 483 at 484.
60  The ‘information flows’ approach was developed by Professor Ian Cram. See Ian Cram, ‘Twitt(er)ing Open Justice? (or threats to fair trials in 140 characters)—A Comparative Perspective and A Common Problem’ (Unpublished paper delivered at Justice Wide Open Conference, City University London, 29 February 2012). I am indebted to Professor Cram for providing me with the paper that he presented at the conference and for his analysis of information flows.
61  Attorney-General v Dallas above n 11.
62  Editorial, ‘Protect our jury system’ Herald Sun (online edn, Australia, 9 May 2010); Editorial, ‘Juror in hot water for online research’ Herald Sun (online edn, Australia, 19 June 2011).

‘Information in’ problems have also arisen in the United States. In Allan Jake Clark v State of Maryland No 0953/08 (Md Ct Special App Dec 3 2009), a juror, confronted with the fact that he had disobeyed judicial instructions by carrying out his own research and bringing the results into the jury room, said ‘to me that wasn’t research. It was a definition.’ Steve Lash, ‘Md jury’s Wikipedia search voids murder conviction’ The Daily Record (online edn, Baltimore, 7 December 2009).



‘Information out’ may challenge the confidential nature of jury deliberations, may inhibit robust and free-flowing discussion and may have an adverse effect upon the deliberative process. In England, New Zealand and Australia there are strict prohibitions against jurors revealing the nature of jury room discussions post trial.63 In the United States, by contrast, there are no restrictions on jurors talking about the case post-trial, yet there have been a number of instances of ‘information out’ during the course of a trial.64 In England, the case of Joanne Fraill illustrates the problems of Facebook and ‘information out’ as well as demonstrating the dangers that use of social media may pose to the conduct of a fair trial. Fraill, a juror in a long-running trial, was sentenced to eight months’ imprisonment for contempt of court for communicating with a defendant, Jamie Sewart, who had already been acquitted in a multimillion pound drug trial in Manchester. After Sewart had been acquitted on all counts with which she was charged, contact was made by Fraill who sent an email to Sewart’s Facebook account. The two continued contact by means of Facebook and some of the communications related to jury deliberations. Fraill had actively conducted Internet searches for defendants and others who had been of importance during the trial, thus introducing an ‘information in’ component to the mix. The court accepted that she had no oblique motive, nor was her Internet activity intended to influence the verdict. Although the court recognised that she may have wanted to commiserate with Sewart over her personal problems, the communications went beyond expressions of compassionate concern.65

In Pennsylvania a juror in a shaken baby murder case was indicted for contempt after she conducted Internet research on the symptoms that the child had, including the term ‘retinal detachment’. She offered to share the research with her fellow jurors. A mistrial was declared. Brian Grow, ‘Juror could face charges for online research’ Reuters Legal (online edn, Atlanta, 19 January 2011).
63  The restrictions are not so strict in the United States as a result of constitutional free speech considerations.
64  Among these are a judge who updated his colleagues on the course of a trial while he was sitting as a juror: Debra Cassens Weiss, ‘Lawyer May Cite Judge-Juror’s “Livin’ the Dream” E-Mails in New Trial Bid’ ABA Journal (online edn, Chicago, 16 April 2010); Editorial, ‘Fresno judge’s jokey jury chatter ruled immaterial’ Associated Press (online edn, California, 10 August 2010); People v Ortiz No F060792 (Cal App 5th Dist Appeal filed 11 August 2010); for recent observations on the propriety of a judge using social media to comment on a trial see State v Thomas New Mexico Supreme Court No 34,042, 20 June 2016, noted by Emil J Khiehne, ‘NM Supreme Court restricts judges’ use of social media. Did it go too far?’ New Mexico Appellate Law Blog (21 June 2016); a juror who tweeted the outcome of a civil case in circumstances where it was argued that the tweets were indicative of bias: see Ebony Nicolas, ‘A Practical Framework for Preventing “Mistrial by Twitter”’ (2010) 28 Cardozo Arts & Entertainment Law Journal 385 at 391; Peter Mychalcewycz, ‘Man’s Improper Tweeting Could Cause Mistrial’ (18 March 2009); see also ‘What a Twit! Twitter-using juror may cause $12.6 million mistrial’ NY Daily News (online edn, New York, 13 March 2009). One tweet read as follows: ‘So, Johnathan, what did you do today? Oh, nothing really. I just gave away TWELVE MILLION DOLLARS of somebody else’s money!’ and another: ‘Oh, and nobody buy Stoam. Its bad mojo, and they’ll probably cease to exist, now that their wallet is $12M lighter.’ And a juror who tweeted that jury deliberations were in train: United States v Fumo 639 F Supp 2d 544 (ED Pa 2009). The message read ‘This is it … no looking back now!’ The comment disclosed no discernible prejudice because it was vague and unclear. The juror said that he used Twitter as a brief stream of consciousness diary of his thoughts and stated that while it is possible to respond to Twitter postings and to read other users’ responses to tweets, he did not use such functions during the trial. Nothing in the comment referred to the trial or indicated any predisposition toward any party in the proceedings, and there was no evidence of any discussion of such matters with fellow jurors.

C.  Dealing with Juror Misconduct

There are a number of steps that can be taken to address and deal with ‘The Googling Juror’. The first is the greater deployment of technology for information communication within the courtroom, which I have discussed in chapter eight.66 The second is a proper educative process for jurors and the judiciary.67 The third addresses steps that lawyers, judges and courts can take during the trial process to enhance juror engagement within the courtroom.68 The fourth is a more nuanced approach to juror misconduct based upon the nature of the information sought and the impact that it may have had on the outcome of the trial.

The measures that are currently used to deal with juror misconduct fall into three main categories. The first is the deterrent category, involving sanctions. These may vary from court to court, although to be effective they should be enforced consistently and vigorously. At the same time they have a difficulty in that they fail to recognise changing societal attitudes to information technology use. The second category may be described as preventative and is the area upon which most commentary has focused. Within this category are the educational approaches of continued reasoned and informative jury instruction, explaining why jurors should desist from trial-related queries or communications on the Internet, which I have already discussed. The third category is remedial and focuses upon the preservation of the right to a fair trial. It will be manifested in such actions as a juror’s removal, or a retrial where it is found that jurors’ actions in researching or communicating with others about the trial may have compromised the verdict.


65  Attorney-General v Fraill above n 10.
66  I have used the term ‘information communication’ rather than evidence, because I believe that the trial process is an exercise in information communication at all stages. Openings, evidence, closings and summings up all involve the communication and processing of information. In the wider context, the whole practice of law is about the communication, sharing and processing of information. Information technology may, and should, be used to assist or enhance this communicative process.
67  See Dennis M Sweeney, ‘Worlds Collide: The Digital Native Enters the Jury Box’ (2011) 1 Reynolds Courts and Media Law Journal 121 at 134 et seq; Nancy S Marder, ‘Jurors and Social Media: Is a Fair Trial Still Possible’ (2014) 67 SMU Law Review 617 especially at 649 and following; Gareth S Lacy, ‘Untangling the Web: How Courts should respond to Juries using the Internet for Research’ (2011) 1 Reynolds Court and Media Law Journal 169 at 189; for an example of a jury instruction see Chief Judge Donald E Shelton, ‘No Googling- No Texting’ Jury Instruction Video (2010) Jury-Selection-Trial-and-Deliberations/Resource-Guide.aspx; Thomas, above n 59, 501; Antoinette Plogstedt, ‘E-Jurors: A View from the Bench’ (2013) 61 Cleveland State Law Review 597 at 640.
68  ibid, Sweeney, 138; ibid, Lacy, 189.



I suggest a fourth, more nuanced pathway, one that recognises the changing nature of society’s expectations of information technology use while preserving the jury trial. It has aspects of the remedial about it, as well as deterrent and preventative elements, but focuses upon the circumstances and context of technology use by jurors. I propose a three-stage inquiry that will allow the judge to gauge the appropriate judicial response to the use of technology.

D.  The Nuanced Approach

The first stage in the nuanced approach is to identify the nature of the information flow. Is it ‘information in’ or ‘information out’? ‘Information in’ is likely to pose more risks, especially if it is shared with other jurors, but much will depend on the nature of the information. ‘Information out’—unless it solicits a response, in which case there is a mixed ‘information out/in’ scenario—is probably less harmful unless, of course, it discloses the jury verdict before it is given or invites input into the deliberation process. In the latter case there could be wider implications for the integrity of the justice system. For example, the ramifications in a case involving a business could be considerable if early information about a verdict were to have an impact upon a share price.

The second stage requires an evaluation of the nature of the communication. Differing Internet protocols—Google, Facebook, Twitter and whatever else may be around the corner69—will involve differing levels of communication, interaction and content. This enquiry emphasises the importance of a judiciary educated in social media and communications technology developments. Once that evaluation has been concluded a consideration of the likely impact of the communication can be made.

The third stage involves a consideration of whether or not information has been communicated to the jury and the impact that this may have had. Cases in the past suggest that some jurors are not averse to drawing a court’s attention to misconduct on the part of their fellow jurors, and indeed part of a jury instruction may be that jurors should not be afraid to be ‘whistleblowers’ in the interests of the integrity of the trial process.70


69  Thanks to ‘permissionless innovation’.
70  See R v Atkinson DC Auckland CRI-2010-004-014676, 30 October 2012, where a juror made enquiries, probably on the Internet, and brought them into the jury room. He was described as marginalised from the rest of the jury and although the document was made available to other jurors, it had not received consideration and there was therefore no jury contamination. The juror was discharged. In the case of Aiono v R [2013] NZCA 280 the jury had been sworn and had chosen a foreman, but had not heard the Crown opening. While pre-trial argument was proceeding it was discovered that one of the jurors had researched the case on the Internet and told four other jurors about the findings, and the Judge discharged the five of them. 36 members of the original jury panel returned to Court and five new jurors were empanelled. It was argued that the Judge should have postponed the start of the trial and empanelled a new jury. The Court of Appeal considered replacing the five jurors was unusual but



Once these inquiries have been completed a judge will be able to consider whether there has been a mistrial or whether the trial may continue. A similar approach may be adopted by an appeal court when considering an appeal against a verdict where there has been an allegation of juror misconduct involving Internet use. Just as the nature of the misconduct is assessed, so the judicial response may be tailored proportionately to the level of misconduct. While the rhetoric of the judges in Fraill and Dallas makes an important point, it is debatable whether a tweet about the fact that the jury is about to consider its verdict should attract a sentence of imprisonment. A sliding scale of culpability, again depending upon the nature of the breach of the instructions, is desirable, with imprisonment reserved for the case where egregious misconduct has clearly prejudiced the fair trial.

This section began with a consideration of the research of Professor Thomas and it will end with some of her conclusions. She focuses upon the behavioural changes that are driven by new technologies. If these behavioural changes were understood, ‘then less emphasis would need to be placed on legal restrictions over online content. Some legal restrictions would need to remain, but this should lead to a reduced need for overly severe (and probably ineffective) external controls’.71 Thus Professor Thomas’ view is that the effects of juror access to online content may well be mitigated in the long term by a clearer understanding of why behavioural change takes place and whether new methods of conveying information to juries may be adopted to counter the effects of such behavioural change. The problem, according to Professor Thomas, is not just about Internet use; it is part of a wider issue that illustrates and highlights a lack of understanding about contempt rules among jurors.

V.  Lost in Translation—Interpreting Social Media Messages

There may be occasions where online communications appear at first glance to fall foul of the law but which, when carefully examined, are exercises in humour, satire or frustration. The case of Chambers v Director of Public Prosecutions is one such example.72 It has become a flag-bearer case for freedom of speech on the Internet, partly for its demonstration of the unwillingness of legal institutions to understand the nature of humour, but more importantly for the collision it presents between content using a mass distribution system, where the traditional ‘one to many’ model utilised by monolithic media organisations has been usurped by a ‘many to many’ model in which user-generated content is potentially available to all, and the question of how that content should be interpreted in the context of law.

Mr Chambers, in a fit of frustration about the closure of an airport as a result of bad weather, tweeted this message to his 600 followers:

Crap! Robin Hood Airport is closed. You’ve got a week and a bit to get your shit together otherwise I am blowing the airport sky high!!

satisfied herself that none of the other jurors had conducted Internet research nor had been influenced by the research that had been carried out. There being no risk of jury contamination, there was no risk of a miscarriage of justice.
71  Thomas, above n 59, 501.
72  Chambers v Director of Public Prosecutions [2012] EWHC 2157.

No action was taken by the Airport Police and when he was interviewed about the matter by the South Yorkshire Police they observed that there was no evidence to suggest that the tweet was anything other than a foolish comment posted as a joke for only his close friends to see. It was only when the Crown Prosecution Service became involved that Mr Chambers was charged. He was convicted in the Magistrates’ Court and his appeal to the Crown Court was dismissed. He appealed further to the Divisional Court, which allowed his appeal and in doing so undertook a careful analysis of the way in which Twitter worked. The way in which it defined the medium had an impact upon the way in which it considered the message. The Divisional Court also considered the important issue of context, observing:

In short, a message which does not create fear or apprehension in those to whom it is communicated, or who may reasonably be expected to see it, falls outside this provision, for the very simple reason that the message lacks menace.73

There were a number of other contextual features identified by the Court, including the following:
—— The message was posted on ‘Twitter’ for widespread reading by his followers, drawing attention to himself and his predicament.
—— It was not sent to anyone at the airport or anyone responsible for airport security, or any form of public security, but rather was an expression of frustration that the airport was closed.
—— The language and punctuation were inconsistent with the writer intending it to be a serious warning. The double exclamation marks provided an example.
—— The sender of the message identified himself—something that was unusual in terrorist messages.
—— There was ample time for the threat to be reported and extinguished given the large number of followers who were recipients of the tweet.
—— None of those who read the message during the first few days thought anything of it. This included Airport Security and the Police. It was when the matter came into the hands of the Crown Prosecution Service that it was given a serious interpretation.74

73  ibid at [30].
74  It has subsequently been reported that, notwithstanding advice from the Crown Prosecution Service that prosecuting the appeal may no longer be in the public interest, the Director of Public Prosecutions decided to proceed. ‘The CPS even sent Chambers and his solicitor, free-speech campaigner



—— No weight appeared to have been given by the Crown Court to the lack of urgency which characterised the approach of the authorities.

What the decision does address in the sub-textual sense is the way in which new technologies may challenge established lines of thought. The Communications Act 2003 and the use of the term ‘public electronic communications network’ bring the legislation into the digital paradigm. Yet it was necessary to examine the operation of Twitter carefully to ascertain whether it fell within the scope of the Act and whether there was any difference between what the author has suggested may be dynamic as opposed to latent content. This analysis demonstrates that the medium is just as important as the message.

Mr Chambers’ comment in the pre-Internet age would perhaps have been made at work, around the dinner table or perhaps even gratuitously to a group gathered at a bus stop or a train platform. As such it would probably have gained little attention and would have been recognised for what it was—a statement born out of frustration with a heavy dose of hyperbole. The fact that it was made via an Internet platform means that some of the underlying properties of the Internet come into play—a worldwide audience beyond ‘friends’ or ‘followers’ and exponential dissemination are the crucial ones in this case.

While threats of terrorism must be taken seriously, the threats must be serious and verifiable ones. Given the sophisticated use of Internet-based communications protocols by terrorist organisations, Twitter is hardly going to be the preferred platform. Law enforcement officials are going to have to be discerning and careful in evaluating what truly amounts to a communication that falls within the offence provisions of legislation in future.

In New Zealand there was a similar case. Police v Joseph involved a message communicated by means of a video posted to YouTube purportedly from the hacktivist organisation Anonymous.75 Joseph was charged with a breach of section 307A(1)(b) of the Crimes Act 1961,76 in that he,

David Allen Green, papers stating that it now agreed that the case should end. However, at the last minute the DPP, former human rights lawyer Keir Starmer, overruled his subordinates, it is alleged’. N Cohen, ‘“Twitter joke” case only went ahead at insistence of DPP’ The Guardian (28 July 2012).
75  Police v Joseph [2013] DCR 482.
76  The convoluted provisions of s 307A Crimes Act 1961 read as follows:
307A  Threats of harm to people or property
(1) Every one is liable to imprisonment for a term not exceeding 7 years if, without lawful justification or reasonable excuse, and intending to achieve the effect stated in subsection (2), he or she—
(a) threatens to do an act likely to have 1 or more of the results described in subsection (3); or
(b) communicates information—
(i) that purports to be about an act likely to have 1 or more of the results described in subsection (3); and
(ii) that he or she believes to be false.



without lawful justification or reasonable excuse, and intending to cause a significant disruption to something that forms part of an infrastructure facility in New Zealand, namely New Zealand Government buildings, did communicate information that he believed to be about an act, namely causing explosions, likely to cause major property damage. Mr Joseph, a secondary school student at the time, created on his laptop a video clip lasting a little over three minutes which, by means of voice software he had created, featured messages of threats to the New Zealand Government. The clip was not available to the public by means of a search. It was unlisted and could only be located by a person who was aware of the link to the particular clip. The defendant provided the link to news organisations and to a ‘fake’ New Zealand Prime Minister John Key Facebook page that he created. The clip came to the attention of the Government Communications Security Bureau (GCSB), who dismissed it as a ‘crackpot random threat’ and confirmed that its communication was ‘completely outside the Anonymous MO’.77 However, the matter was referred to the police, who decided to prosecute.

The issue was one of intention. The Judge accepted that the investigators did not consider any responses were necessary, and that this would be consistent with the defendant’s own assertions that the extreme statements that had such violent connotations were only meant to be a ‘joke’. The Judge also noted that the intention had to be a specific one. He found that the intention of the defendant was to have his message seen and observed on the Internet and, although his behaviour in uploading the clip to YouTube in an Internet café and using an alias could be seen as pointing to an awareness of unlawful conduct, it did not point to proof of the intention to cause disruption of the level anticipated by the statute.
It transpired that the defendant was aware that the clip would probably be seen by the authorities and also that he expected that it would be ‘taken down’. The charge was dismissed.

(2) The effect is causing a significant disruption of 1 or more of the following things:
(a) the activities of the civilian population of New Zealand:
(b) something that is or forms part of an infrastructure facility in New Zealand:
(c) civil administration in New Zealand (whether administration undertaken by the Government of New Zealand or by institutions such as local authorities, District Health Boards, or boards of trustees of schools):
(d) commercial activity in New Zealand (whether commercial activity in general or commercial activity of a particular kind).
(3) The results are—
(a) creating a risk to the health of 1 or more people:
(b) causing major property damage:
(c) causing major economic loss to 1 or more persons:
(d) causing major damage to the national economy of New Zealand.
(4) To avoid doubt, the fact that a person engages in any protest, advocacy, or dissent, or engages in any strike, lockout, or other industrial action, is not, by itself, a sufficient basis for inferring that a person has committed an offence against subsection (1).

77  Police v Joseph above n 75.



Police v Joseph presents some interesting aspects of the use of the medium. Perhaps most significantly, the video that was placed on YouTube was not made available for public searching. It was available only to those who were aware of the link. This means that the ability to distribute the message is in the hands of the person uploading the material to YouTube. A person may therefore post something to YouTube that is truly frightening or menacing, but which may never be available to the public. In such a situation the ‘communication of information’ may be far removed from the mischief that the statute seeks to address. This demonstrates the care with which one must approach the issue of dissemination of information on the various platforms available on the Internet. The utilisation of the medium may have an aggravating or mitigating effect upon the message.

Mr Joseph went a step further. Instead of making the link available to a select few close friends, he distributed the message to a number of ‘public’ organisations, news media among them. Interestingly enough, his threat was not published in the mainstream media and it was left to those who assess communications threats—the GCSB—to do something about the message. However, by making the link available to news media and other organisations it was clear that Mr Joseph wanted to get his message widely published.

Both Chambers and Joseph are illustrative of the sometimes hyperbolic communication that characterises some content on platforms on the Internet. In this respect it may be useful for those investigating and considering laying charges which seem to amount to ‘harmful digital communication’ to consider carefully the overall context of the communication. In Chambers the Court did just that, observing that Mr Chambers was easily identifiable, used double exclamation marks, and provided his own explanation.
Within a wider framework, the two cases demonstrate some of the underlying enabling qualities of digital technologies and the Internet. The development of Web 2.0 and the rise of citizen journalism by bloggers, and the ways in which user-created content can become available to a worldwide network via social media such as Twitter, Facebook, blogs and YouTube, pose fresh challenges for those who have to assess threats. Some of these qualities have been described elsewhere.78 Those of participatory information creation and sharing, dynamic information, persistence of information, dissociative enablement and permanent connectedness seem to be applicable in this case. These are qualities underlying the Internet and digital communications systems that are going to pose problems for those upon whom harmful digital communications have an impact, whether as recipients, investigators or decision makers. What is of concern is that the opportunities afforded by the Internet in terms of giving effect to freedom of expression run up against fear and misinterpretation. Chambers and Joseph demonstrate the care that must be taken.


78  See ch 2.



VI.  Other Aspects of Social Media The cases discussed demonstrate some general observations about the use of social media. The first is that social media enables comment. It enables comment and observation that might otherwise have been restricted to the private arena or might be made within a limited public one. The second point is that social media is in the nature of a conversation but the quality of dissociative enablement or online disinhibition means that users are far more likely to be more acerbic in their criticism than they may be face to face. This can result in the phenomenon known as ‘flame wars’79 which are lengthy exchanges of angry or abusive messages between users of the Internet on message boards or social media. The phenomenon of ‘trolling’ or the purposeful instigation of argument by the posting of outrageous or abusive messages are both behaviours which create more heat than light in a communication. In addition there is another aspect of Internet disinhibition which involves the ability to respond immediately and often instinctively to a message which results in an ill-considered and often emotionally charged or driven response rather than a considered or measured one and often is exemplified by pushing ‘send’ with the inability to retract or withdraw the message once it has gone out. These aspects of social media can cause a number of difficulties in many spheres of life including employment, on-going relationships as well as to reputation. One such example is that of Justine Sacco who on 20 December 2013, before she left London to travel by air to South Africa, tweeted to her small 170 person following ‘Going to Africa. Hope I don’t get AIDS. Just kidding. 
I’m white.’80 She landed in South Africa 11 hours later to find her reputation in tatters: her message had been retweeted worldwide and the ‘twitterverse’ had judged her and found her guilty in a flood of highly judgmental and insulting expressions, an example of very public Internet shaming. As a result Ms Sacco, a senior director of communications with a public relations company, lost her job.81 Ms Sacco’s case demonstrates the problem of free speech in the social media space. Her ill-considered message had ramifications far beyond the content of the message itself, which was a wry and probably innocuous attempt at humour if expressed in a private forum but which became ruinous as a result of


79  See the definition and discussion of ‘Flaming’.
80  For a discussion of Justine Sacco’s case study see Jon Ronson, So You’ve Been Publicly Shamed (London, Picador, 2015) 63 and following. See also Jon Ronson, ‘How One Stupid Tweet Blew Up Justine Sacco’s Life’ New York Times Magazine (12 February 2015) magazine/how-one-stupid-tweet-ruined-justine-saccos-life.html?_r=0; Lucy Waterlow, ‘“I lost my job, my reputation and I’m not able to date anymore”: Former PR worker reveals how she destroyed her life one year after sending “racist” tweet before trip to Africa’ Mailonline (16 February 2015) www. html#ixzz3uccMeRJG.
81  I shall deal in more detail with aspects of reputational damage and shaming in ch 11.



the comments and responses which followed. Certainly social media provides a wonderful opportunity for the former silent majority to have a voice, and could be characterised as democracy on steroids. But the potential harms that can follow are significant, and the way in which the law may address such harms will be considered in chapter 11.

VII. Conclusion

In 2015 a prosecution was brought against Bahar Mustafa, alleging that she had sent a communication conveying a threatening message and had sent a grossly offensive message via a public communications network. The police discontinued the prosecution in October 2015 on the grounds that there was insufficient evidence to provide a realistic prospect of conviction, although, at the time of writing, that decision could be reversed by the Crown Prosecution Service. Once again, Twitter was involved, and an interesting observation was made by Robert Sharp of the writers’ association PEN:

It’s a shame this investigation took so long to conclude, but the police are working with laws that are no longer fit for purpose. These charges were brought under communications legislation that was written for fax machines, not social media. The law needs an urgent update.82

To address the suggestion that the law is no longer appropriate to the technologies of modern communication in general and social media in particular, following the case of Chambers v DPP the Director of Public Prosecutions in England adopted Interim Guidelines on prosecuting cases involving communications sent via social media.83 The Guidelines recognise that, given the enormous volume of material communicated via social media, the wide language of the Malicious Communications Act 1988 and the Communications Act 2003, and the often vituperative content that is communicated, a very large number of cases could come before the courts. Thus, the first problem identified was a potentially significant increase in workload. The Guidelines also recognise the balance that must be struck between free speech and the right not to be harassed. This requires a prosecuting authority to consider whether there is clear evidence of an intention to cause distress and anxiety, or whether the platform84 was merely ‘[t]he expression of

82  Jessica Elgot, ‘#killallwhitemen row: charges dropped against student diversity officer’ The Guardian (3 November 2015).
83  Director of Public Prosecutions, Interim guidelines on prosecuting cases involving communications sent via social media (19 December 2012).
84  The Guidelines specify Twitter, Facebook and LinkedIn.



unpopular or unfashionable opinion … even if distasteful to some or painful to those subjected to it’. There are conflicting aspects of the public interest to be considered as well: the interest in prosecuting individual cases, and the more general public interest in being able to say potentially upsetting things without fear of prosecution. This means that prosecutors must consider more than the mere content of the offending message, beyond cases of obvious and direct threats: did the suspect show remorse, was the target intended to see the message, was any distress caused intentional?85 The Guidelines demonstrate a response by a prosecuting authority to an identifiable problem created by social media, and to the question of how law and law enforcement should respond in a democracy where, as a result of social media, conversations and expressions of opinion that might once have been restricted to a few are now conducted before a potentially worldwide audience. The problem is exacerbated by the quality of exponential dissemination and by information persistence, which allows the conversation to continue, as indeed it does, for example, on Twitter under the hashtag #twitterjoketrial. In light of these qualities, which must govern the approach of lawmakers and judges, the nuanced approach suggested for dealing with the phenomenon of the googling juror must, in my opinion, be applied in all situations. If new rules are to be made in regard to social media communications, the definitional problems which I have demonstrated mean that such rule systems must be of a generalised nature based on broad principles—a form of platform neutrality, if you will. Efforts to target a particular platform will have limited applicability. However, the intricacies of the technology must be understood by both legislators and judges. Skilled Internet users are very adept at finding loopholes and developing workarounds. 
Although existing laws may be available to deal with some of the problems that arise with social media, the continuing tension between free speech and acceptable speech will pose significant problems, especially as the demographic of the Digital Native grows and brings with it a changing system of values and expectations about the communication and use of information. In a democratic society the problem is summed up by the title of a book by Anthony Lewis,86 which, although it deals with tensions arising under the First Amendment to the US Constitution, demonstrates that a legal response to information communication, irrespective of method, must be a measured one.87

85  For a discussion of the Guidelines see Sarah Ditum, ‘These CPS guidelines make the law around social media vastly more clear’ The Guardian (19 December 2012) commentisfree/2012/dec/19/cps-guidelines-social-media.
86  Anthony Lewis, Freedom for the Thought That We Hate: A Biography of the First Amendment (New York, Basic Books, 2010).
87  However, in New Zealand a separate standard for online or digital communications has been the subject of legislation in the Harmful Digital Communications Act, which will be dealt with in ch 11.

10 Information Persistence, Privacy and the Right to be Forgotten

I. Introduction

In his book Delete,1 Viktor Mayer-Schönberger tracks what could be termed ‘the end of forgetting’ that arises from the quality of information persistence that underpins digital systems. In stating his case, he demonstrates the nature of the particular collision that I wish to examine. Mayer-Schönberger locates his discussion in the way in which old and often outdated information may come back to haunt us, preventing us from escaping earlier incidents of misbehaviour or stupidity, putting the past behind us and effectively re-inventing ourselves. The technical reality, however, means that perhaps we need to reassess the virtue—if virtue there is—in ‘forgetting the past’, should we accept that the digital storage of information presents, if not a truth, then evidence from which friends, prospective employers and others may make an assessment about us that is uncoloured by forgetfulness about an unhappy or unwise act, be that forgetfulness real or contrived. The fact of the matter is that we all try to present as positive an image as possible to those around us, and we have a tendency to ‘overlook’ those ‘embarrassing’ details which may require explaining if disclosed. The problem, of course, is that if such ‘embarrassments’ are wilfully overlooked there is a form of dishonesty by omission, and if they are denied outright, the denial amounts to a lie. Another aspect of the issue of ‘forgetting’ and its collision with ‘digital memory’ lies within the large and complex area of privacy. Mayer-Schönberger does not concern himself with the wider complexities of privacy; rather he presents forgetting as a human attribute that is challenged by digital systems. 
Nevertheless, the so-called ‘right to be forgotten’ has developed in a way that entwines it to a certain degree with aspects of the development of privacy theory; underlying the European Data Directive, upon which the Google Spain decision is based,2 are aspects of privacy and the manner in which data should be handled.

1  Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton, Princeton University Press, 2011).
2  Google Spain SL, Google Inc v Agencia Española de Protección de Datos (AEPD), Mario Costeja González European Court of Justice 13 May 2014 C-131/12. document_print.jsf?doclang=EN&docid=152065.



This chapter engages with Mayer-Schönberger’s thesis and deals with privacy as an aspect of individual autonomy and identity. It is about who we are and how we present ourselves to the world in the new spaces created by technology, which both enables and threatens individual privacy and control of the self. This chapter, therefore, will commence with a consideration of some of the themes underlying privacy theory, and especially the development of privacy theory within a context of technological development. Like Mayer-Schönberger I focus upon privacy not in the context of the state and the individual, but rather upon an individual’s assertion of a private space. Before moving on to discuss the ‘right to be forgotten’ in detail, I will consider a possible approach that the courts could take in determining privacy issues, based upon normative standards rather than upon whether particular technologies engage particular privacy expectations. A discussion will follow about the right to be forgotten, including a consideration of Mayer-Schönberger’s basis for the right. I shall then discuss the Google Spain case and a possible compromise outcome that preserves the integrity and reliability of search engines while also providing an opportunity for an individual to assert autonomy over information provided in a search.

II.  Privacy Themes

The starting point for any discussion about privacy is the articulation of the right by Warren and Brandeis in 1890.3 It is not my intention to analyse this article nor to summarise its argument. What is important is the context within which it was written. What prompted the concerns that the authors expressed? The answer is developing technology. Warren and Brandeis were concerned about news media that were increasingly interested in gossip and were revealing personal things about individuals without their consent. They wrote:

The press is overstepping in every direction the obvious bounds of propriety and decency. Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery.4 Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual … the right ‘to be let alone’ … Numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops’.5


3  Samuel D Warren and Louis D Brandeis, ‘The Right to Privacy’ (1890) 4 Harvard Law Review 193.
4  Erwin Chemerinsky, ‘Rediscovering Brandeis’s Right to Privacy’ (2006) 45 Brandeis Law Journal 643.
5  Warren and Brandeis above n 3, 195 (emphasis added).



What prompted these concerns? In an exchange of correspondence 15 years after The Right to Privacy was published, Warren and Brandeis agreed that it was ‘a specific suggestion of [Warren’s], as well as [Warren’s] deep-seated abhorrence of the invasions of social privacy, which led to our taking up the inquiry’.6 Glancy suggests that:

The immediate catalyst for the article was apparently Warren’s pique at finding intimate details of the Warren family’s home life spread out on the society pages of such newspapers as The Saturday Evening Gazette.7

Thus, a driving force was the public disclosure of what could be considered private facts or behaviour in the mass media, accompanied by the technology of photographs. In their article, Warren and Brandeis urged tort law protection for the public disclosure of private facts, and their crystallisation of a right to privacy formed the foundation for the tort. It is important to note, however, that their focus was entirely on what could be termed informational privacy. Alan Westin, in a seminal book on the right to privacy, focused on the information in our lives. ‘He analogised it to a series of circles within circles. The innermost circle contains the things we tell no one about ourselves. The next inner-most circle contains the things about us that are known only by those with whom we are most intimate. The circles continue until one reaches the information that is known by all’.8 Warren and Brandeis sought a remedy for the invasion of the private space or the next innermost circle. Since their article the concept of privacy infringement has developed with varying strengths in a number of jurisdictions. England does not recognise a tort of breach of privacy. New Zealand9 and Canada10 do. Tipping J in Hosking v Runting11 defined privacy as, among other things, ‘the right to have people leave you alone if you do not want some aspect of your personal life to become public property’.12 But what of information that falls within the scope of Westin’s outermost circle—information that is known by all? This challenge is recognised by Chemerinsky in calling for greater recognition of a remedy for information privacy in the United States. He points out:

First, there is unprecedented ability to learn the most intimate and personal things about individuals. For example, the human genome project offers the prospect of genetic

6  Letter from Brandeis to Warren (8 April 1905). Warren agreed: ‘You are right of course about the genesis of the article.’ Letter from Warren to Brandeis (10 April 1905). Both letters are quoted in Urofsky and Levy (eds), I Letters Louis D Brandeis, 1870–1907: Urban Reformer 303 (New York, SUNY Press, 1971).
7  Dorothy J Glancy, ‘The Invention of the Right to Privacy’ (1979) 21 Arizona Law Review 1, 6.
8  Alan F Westin, Privacy and Freedom (New York, Athenaeum, 1964) 33; Chemerinsky, above n 4, 649.
9  Hosking v Runting [2005] 1 NZLR 1.
10  Jones v Tsige (2012) OR (3d) 241, 2012 ONCA 32; Doe v D 2016 ONSC 541. com/file/d/0B_bUaJvZ9k_BQk5RRlNsS1Z6ZW4yMTQ1WUMzSGNMZGZ6QkpN/view.
11  Above n 9.
12  ibid para [238].



analysis that can discover a wide array of information about individuals, including their propensity for diseases, addictions, and personal characteristics. Second, there is unprecedented access to information about individuals. Computerized records and databases store information in a way that it can be accessed by others. The Internet makes it potentially available to many.13

Thus, like Warren and Brandeis, Chemerinsky locates his call for protection of informational privacy within the context of new technologies. But does this, and should this, include protection against material that is within Westin’s outer circle; information that may have been provided by the individual him or herself; information that is a matter of public record? Or should the technology underlying information availability and retention in the Digital Paradigm dictate what should or should not be available?

III.  Privacy Taxonomies

The Warren and Brandeis approach did more than define privacy as a basis for redress for those who had suffered unwarranted intrusions or unsanctioned public disclosure. Underlying their approach was the concept that the zone of privacy was an aspect of human dignity worthy of protection as an element of inviolate personality. Thus it was not viewed as a proprietary interest.14 Privacy has a wide scope and has attracted a considerable degree of attention from scholars and legal commentators.15 Issues surrounding human dignity lie behind many definitions or concepts. Other definitions focus upon particular aspects of privacy infringement, such as access to information (or people),16 control of information, or the private space.17


13  Chemerinsky above n 4, 656.
14  See for example in New Zealand Tucker v News Media Ownership Ltd [1986] 2 NZLR 716 per Jeffries and McGechan JJ where it was considered that the tort of breach of privacy (if recognised) might be an adaptation of the tort of the intentional infliction of emotional distress.
15  For an excellent introduction see D Solove, Understanding Privacy (Cambridge, Harvard University Press, 2008).
16  For example see Westin above n 8; J Reiman, ‘Privacy, Intimacy and Personhood’ (1976) 6 Philosophy and Public Affairs 26; S Bok, Secrets: On the Ethics of Concealment and Revelation (New York, Pantheon Books, 1982); F Schoeman, ‘Gossip and Privacy’ in R Goodman and A Ze’ev (eds), Good Gossip (Kansas, University of Kansas Press, 1994); H Nissenbaum, ‘Protecting Privacy in an Information Age: The Problem of Privacy in Public’ (1998) 17 Law and Philosophy 559; N Moreham, ‘Privacy and the Common Law: A Doctrinal and Theoretic Analysis’ (2005) 121 LQR 628.
17  See J Inness, Privacy, Intimacy and Isolation (New York, Oxford University Press, 1992); Nissenbaum above n 16, 592; R Parker, ‘A Definition of Privacy’ 27 Rutgers Law Review 275; Reiman above n 16; J Rachels, ‘Why Privacy is Important’ (1975) 4 Philosophy and Public Affairs 323; J DeCew, In Pursuit of Privacy: Law, Ethics and the Rise of Technology (Ithaca, Cornell University Press, 1997); S Todd (ed), The Law of Torts in New Zealand 5th edn (Wellington, Brookers, 2009) 846.



Control over information about oneself was an important theme in the Warren and Brandeis discussion. This is a narrow aspect of the overall concept of privacy, but it comes into sharp focus when one considers the elements of privacy that attach to a worldwide communications system like the Internet, where the exchange of information is a fundamental activity. Yet information forms the basis of many rule systems about privacy.18 Daniel Solove’s approach to privacy is a comprehensive one. He sees privacy as ‘a web of interconnected types of disruptions’.19 These disruptions include disclosure of personal information, surveillance and other forms of disruption. Solove is of the view that other commentators have not identified a common denominator for conceptualising privacy and that the various approaches to privacy can be considered under six headings:

a) the right to be left alone;
b) limited access to the self;
c) secrecy;
d) control of personal information;
e) personhood; and
f) intimacy.20

These often overlap and each category has a distinctive perspective on privacy. Yet collectively all are valid and in any discussion about privacy may assume more or less importance depending upon the context of the discussion. Essentially, Solove argues for a ‘bottom-up’ approach, beginning in each case with identifying the problem itself rather than trying to fit the problem within a pre-determined category.21 But the privacy problem with new communications technologies goes beyond that. Information becomes permanent. Once information is available on the Internet it is persistent—it may be recovered. It may be copied. It may be distributed and continue to be distributed. It may even have been placed on the Internet by the person who later complains of an invasion of privacy. These aspects of the Digital Paradigm challenge aspects of obscurity that once may have surrounded information.

18  Examples may be found in the Privacy Act 1993 (NZ), which is concerned only with information or data privacy and in which there is no definition of privacy. See also the EU Data Protection Directive, the objective of which is to give citizens control of their personal data and simplify the regulatory environment for business: European Commission, ‘Protection of Personal Data’ http://ec.europa.eu/justice/data-protection/.
19  Solove above n 15, 1130.
20  ibid 1094.
21  ibid 1154.



IV.  Obscurity of Information—Practical and Partial Obscurity

Obscurity of information has two aspects. These aspects demonstrate how the technologies of searchability and retrievability have altered our understanding and expectations of access to information, and until now have had important implications for privacy. Their continued validity as a foundation for privacy protection is challenged by the digital paradigm. The terms, which are interrelated, are practical obscurity and partial obscurity, both descriptive of information accessibility and recollection in the pre-digital paradigm. Practical obscurity refers to the quality of availability of information, which may be of a private or public nature.22 Such information is most often in hard copy format; it may be indexed; it is in a central location or locations; it is frequently location-dependent, in that the information in a particular location will refer only to the particular area served by that location; it requires interaction with officials, bureaucrats or other individuals to locate; and, finally, accessing it requires some knowledge of the particular file within which the information source lies. Practical obscurity means that information is not indexed on key words or key concepts but generally on the basis of individual files, or in relation to a named individual or named location. Thus it is necessary to have some prior knowledge of the information to enable a search for the appropriate file to be made. 
Partial obscurity addresses information of a private nature which may earlier have been in the public arena, whether in a newspaper, television or radio broadcast or some other form of mass media communication, and which is, at a later date, recalled only in part: because memory cannot retain all the detail of all the information an individual receives, the detail has become subsumed. Thus a broad sketch of the information renders the details obscure, leaving only the major heads of the information available in memory, hence the term partial obscurity. To recover particulars of the information will require resort to film, video, radio or newspaper archives, thus bringing into play the concept of practical obscurity. Partial obscurity may enable information which is subject to practical obscurity to be obtained more readily, because some of the informational references enabling the location of the practically obscure information can be provided. Peter Winn has made the comment:

When the same rules that have been worked out for the world of paper records are applied to electronic records, the result does not preserve the balance worked out

22  The term ‘practical obscurity’ was used in the case of US Department of Justice v Reporters Committee for Freedom of the Press 489 US 749 (1989).



between the competing policies in the world of paper records, but dramatically alters that balance.23

Is technology going to dictate privacy expectations? Or is there a place for a normative approach to privacy? Or is the strength of a privacy expectation dependent not just upon new information technologies but upon other contextual factors such as the actions of the person claiming privacy protection? And is there room for a concept of individual autonomy over information gathered by digital systems that may, when aggregated, present a digital persona or shadow that bears little resemblance to the ‘real life’ person?

V.  Judicial Approaches

Justice Renee Pomerance of Canada considers that rather than having privacy entitlements and invasions determined from the standpoint of technology, the issue may be defined by a return to basic principles—a normative approach—that is aspirational, based upon what privacy should be rather than upon whether a particular technology in particular circumstances may or may not have infringed a privacy interest. This considers how the law approaches privacy on the basis of societal interest. The normative approach views individual cases through the broader lens of how we want to live as a society. It transcends the factual minutiae of a given case. It reminds us that just as there is a societal interest in effective law enforcement, so too is there a societal interest in the right to be left alone—that aspect of privacy that allows self-definition. It tells us that we must recognise the importance of privacy despite, or perhaps because of, the ubiquity of technology.24 The alternative to this normative approach, and one that gained traction in Canada in the 1990s, was that privacy had to be measured against the subjective expectations of the suspect and whether his or her behaviour was consistent with the expectation of a privacy interest.25 The recent cases of R v Ward26 and R v Spencer27 suggest a return to a normative approach. Both cases dealt with privacy in subscriber information held by Internet Service Providers. In both cases,

23  Peter A Winn, ‘Online Court Records: Balancing Judicial Accountability and Privacy in an Age of Electronic Information’ (2004) 79 Washington Law Review 307, 315.
24  Renee Pomerance, ‘Flirting With Frankenstein: The Battle Between Privacy and our Technological Monsters’ (2016) 20 Canadian Criminal Law Review 149. Justice Pomerance draws as examples the cases of R v Wong [1990] 3 SCR 36, 60 CCC (3d) 460, [1990] SCJ No 118; R v Ward (2012) ONCA 660, [2012] OJ No 4587; and R v Spencer (2014) SCC 43, [2014] 2 SCR 212, [2014] SCJ No 43.
25  Although it should be acknowledged that the normative approach was adopted in R v Tessling 2004 SCC 67, [2004] 3 SCR 432 and R v Patrick 2009 SCC 17, [2009] SCJ No 17.
26  Above n 24.
27  ibid.



the courts endorsed a normative view of privacy. Justice Pomerance considers these two cases are important for the following reasons:

First, they instruct us that, in defining privacy, we must not ask overly narrow questions. Where subscriber information is concerned, the question is not whether there is a privacy interest in one’s name and address. Rather, the question is whether there is a privacy interest in information from which it can be inferred that the suspect accessed certain websites. Privacy is defined, not only by the information sought by law enforcement, but also by the inferences and uses that flow from that information. Secondly, these cases recognize that a vital component of privacy is the right to anonymity, the right ‘to merge into the situational landscape’. This has implications for, among other things, privacy in public places.28

The important thing about the normative approach is that it gives us a way to measure societal expectations. Privacy should not be defined by the subjective predilections of the presiding judge. Because it is difficult to describe privacy in concrete terms, it has traditionally had an ‘I know it when I see it’ quality. Police often come to court asking for forgiveness rather than permission, in part because it is difficult to predict in advance when a warrant is required. To some extent privacy will always lie in the eye of the beholder, but judges have a duty to strive toward objectivity. The judicial role is expressed by Aharon Barak in the following way:

The consensus within which judges usually ought to operate should be a consensus grounded in the fundamental values of the legal system. Judges should not act according to a consensus formed by transient trends that are inconsistent with the society’s fundamental values. Judges’ social framework must be central and basic, not temporary and fleeting. When society is not being true to itself, judges are not required to give expression to its passing trends. They must stand firm against these trends while giving expression to the social consensus that reflects their society’s fundamental principles and tenets.29

Societal values should be based upon principle and not whim or perceived transient societal trends. If we adopt this approach to defining privacy we avoid the ‘technological trap’. The difficulty is that in the Digital Paradigm, technology and privacy expectations are inextricably entwined and it is to the collision between the Internet and privacy that I now turn.

VI.  The Internet and Privacy

Given the concerns of many about the rise of computers and the gathering and retention of data on citizens by the state, there is an irony that the development

28  Pomerance above n 24.
29  Aharon Barak, The Judge in a Democracy (Princeton, Princeton University Press, 2006).



of the Internet has realised some of these worst fears, especially following the Snowden revelations.30 Yet at the same time, the development of the Internet and especially the rise of social networking31 has given rise to the ‘look-at-me’ Internet users who seem to have no inhibition about diarising their every move with tweets, Facebook entries or the ubiquitous ‘selfie’.32 One wonders whether users are aware of the fact that by placing personal information on a social networking site via social networking applications, they are compromising not only privacy but also any expectation of control that they might have had over the information. Thus the difficulty that arises with social networking once again involves a failure, primarily on the part of participants, to understand the nature or properties of digital technologies.33 The apparent carelessness, even recklessness, in divulging personal information challenges traditional or conventional approaches to privacy and indeed can potentially put users at personal and physical risk. At the same time, it may be that social network sites will present a future challenge to expectations of privacy altogether. Perhaps one of the ultimate ends of online systems and the digital paradigm will be a recasting of the law of privacy as we know it. Those last four words demonstrate the nature of the problem. Those of us whose value systems were formed in the reaction to state interference and totalitarianism, and who saw the threat that computer systems posed to privacy, derive privacy expectations from a different value background to those who today willingly share personal information and images as a part of a requirement for a proper level of acceptance within the social groups to which they belong. The enhanced qualities of digital technologies all underpin these changes in attitude. 
30  In June 2013, Edward Snowden, who worked for a company contracted to the NSA, released thousands of classified NSA documents to journalists. The documents contained information about the extent of surveillance activities carried out by the NSA, particularly over the Internet.
31  I have discussed social media in the preceding chapter. This chapter focuses upon some of the privacy implications of social media.
32  The problems of the selfie are summed up as follows:

The selfie: that omnipresent form of portraiture, apparently a sign of our culture's mass narcissism. It has been more than a decade since we first started snapping ourselves but self-confidence is still, apparently, not okay. It's not a coincidence that many of the undesirable personality traits associated with selfies—superficiality, vanity, desperation—are also used as misogynistic traits. Selfie-shaming reveals a lot about society's turbulent relationship with feminism. With selfies you have control. There's no pressure. They are an instant confidence boost—as opposed to the frantic scramble the morning after a party to untag unflattering photos on Facebook. Finally nailed wing eyeliner? Selfie. Smiled for the first time since you got dumped? Selfie.

Jessica McAllen, 'Haters Gonna Hate' Sunday Star Times Sunday Magazine (24 January 2016).
33  Much of this section appeared in another form in Judge David Harvey, 'Privacy and New Technologies' in S Penk and R Tobin (eds), Privacy Law in New Zealand 2nd edn (Wellington, Thomson Reuters, 2016).

The Internet and Privacy


A.  Social Networking and Privacy

The diagrams illustrating social networking sites in the previous chapter demonstrate some of the key characteristics of social networking sites and applications. Sharing information among groups is a principal feature. The act of sharing may be limited to a number of 'friends', followers or other subscribers, but the quality of exponential dissemination enabled by networked systems, together with the ease with which copying takes place, demonstrates large risks for privacy. The enormous number of social media sites and applications provides equally enormous opportunities for individuals to provide information about themselves which, when aggregated, can define an online persona or give rise to the concept of the 'digital shadow',34 which may develop from the information that is provided by individuals themselves or by others. Furthermore, the information will contribute to the enormous datasets known as Big Data—collections of disparate information awaiting the application of data analytics.

Grimmelmann35 suggests that people have social reasons to participate on social networking sites and value this aspect of social media, despite the risks to privacy and the fact that those risks are underestimated. The drivers that prompt these risks are the same elements that arise from the use of social networking sites in the first place—identity, relationship and community. Identity allows users to say who they are and what they are about—thus it allows users to define themselves. Much of the information that is a part of a profile may simply express issues that are important to a user. The problem is that providing this information within a social networking context allows contacts to make comments about some of the elements that are a part of a profile or the issues that underlie them.36 This means that the comments become an integral part of the profile, something over which a user has no control.
The perceptions of others become factored into a user's profile and are shared with others, thereby modifying the original self-definition. It is this sharing of information that underlies the issue of the quality of online relationships. Relationships with friends and acquaintances are often defined by the extent to which one reposes confidences and the extent of those confidences. A level of trust arises from the level of intimacy that has developed. In a 'real-world' relationship, trust and intimacy develop over a period of time. One is hardly likely to acquire a large number of pieces of personal information on a first meeting in the real world, and yet the addition of a person to one's contact list does just that.


34  For discussion see below in this section and also in s VIII.A.iii.
35  J Grimmelmann, 'Saving Facebook' (2009) 94 Iowa Law Review 1137 at 1140. Grimmelmann's article deals specifically with Facebook but there are some general observations that can be distilled from it that are applicable to other social media applications and sites.
36  For some of the consequences of this aspect of social networking see the example of Justine Sacco discussed in ch 9 and generally ch 11.


Information Persistence and Privacy

A level of intimacy is immediately assumed. As such, it is an element of online relationships that the development of intimacy levels takes place at a greater pace than in the 'real world'.

There are two associated threats in using social networking sites that demonstrate some of the problems posed by the new technology. In the same sense as the quality that enables the concept of the 'document that does not die', social networking sites allow and enable the perpetuation of a 'digital footprint'.37 The Internet not only enables a user's every activity to be traced—such as the time one logs in, the IP addresses visited during a session and the websites downloaded—but it also preserves that information. Social networking information is no exception. The difficulty arises when information posted on a social networking profile is stale or the site has not been used for some time, but information posted remains. This information will be available to anyone who wishes to carry out a search, and may prove to be embarrassing at a later date—for example at a job interview.38 The 'practical obscurity' that allows one to engage in infantile or indecent behaviour at a university party and that is forgotten over time may come back to haunt one, especially if the social network profile includes compromising photos taken at such an event. And Facebook information, once posted, becomes rather difficult to remove.

Social networking sites (along with other Internet platforms) also enable the 'digital shadow' to which reference has already been made. A social networking application not only allows a user to post information about him or herself; it also allows others to post information about the user. This information from a 'third party' or contact about a person will remain as part of their profile or the 'third party' profile.
If the remark or the information is of an embarrassing or, even worse, untrue nature, it may remain as part of the person’s profile to return at a later date to haunt them. It is information over which they have little, if any, control.

B.  Why Compromise Privacy?

So why is it that social networking users are so casual about disclosing private information that, in many cases and in other circumstances, would be protected by privacy principles? Some explanations have already been offered: the assumption

37  As distinct from the 'digital shadow'. The footprint is created by the individual him or herself. The shadow, discussed below, is created or arises from the activities of others.
38  Social networking sites have become so pervasive and so widely used that they provide a valuable resource for employers who may obtain some 'background' information about a job-seeker. Journalists regularly use social networking sites to obtain details about 'people of interest' or to locate friends, associates or 'contacts' who may be able to provide some 'deep background'. What may have taken a journalist days using conventional journalistic investigation may now take only minutes. Author's interview with Lee Chisholm, Operations Manager, Netsafe, 29 October 2008.

of risk without full awareness of how the site works or the distributive powers of new information and communications technology (ICT); a wish to be part of a group already participating; the 'fish school' 'it-won't-happen-to-me' mentality; and many of the behavioural expectations of the real world that do not necessarily exist online—the expectation of confidence among friends, an expectation that people we have met online share our values and will not 'betray' us.

Are most users unaware of the privacy risks and implications of social network sites and, more especially, unaware of the ramifications of online communication and information sharing that arise from the unique and enhanced properties of digital systems? This may be the case with young people who believe that their information will be shared within a limited group, or who have an undeveloped sense of the long-term consequences of placing unguarded information on a social network site.39

There are some users who simply do not care about privacy implications or do not mind if their information is shared. In some respects, this may be a form of narcissistic 'look at me' behaviour or an aspect of Andy Warhol's suggestion that everyone should have their 15 minutes of fame.40 Some users even film themselves in provocative poses and make those images or videos available online so that those inclined can view such images on the user's profile. The risks associated with this are enormous, especially if the profile contains sufficient information to identify the user and his or her location.

On the other hand, there may be users who, aware of the privacy implications of social media, are prepared to make a compromise—to cede elements of privacy in return for the convenience or necessity of using online tools in their daily lives.
This is a reality of the move to the digital space and an example of the way that the tools that we use become essential to fundamental aspects of our lives—our socialisation—and thereafter dictate our attitudes and cause us to revisit elements of our value system.

One way of making this compromise is to utilise what is referred to as temporary social media, where users may share information or pictures but only for a short time. One example of such a social media site is Snapchat, which lets users take photos or short videos and then decide how long they will be visible to the recipient. After 10 seconds or less, the images disappear forever. Interactions in temporary social media can be brief and transitory and, unless the recipient saves the image or the file,41 will not become part of one's digital footprint.

39 ibid.

40  A Warhol, 'Warhol Photo Exhibition, Stockholm, 1968' in J Kaplan (ed), Bartlett's Familiar Quotations 16th edn (New York, Little, Brown & Co, 1992) 758.
41  Images that were meant to vanish can still be saved if the recipient uses a screen-capture feature to take a picture of the message during the seconds it appears. If the recipient does this, Snapchat notifies the sender, but by then it is too late to stop the image from being preserved and shared.



C.  Social Networking and the Future of Privacy

It can be seen from the preceding discussion that, whatever the basis for using social networking—compromise, ignorance, carelessness or narcissism—it challenges our preconceptions of privacy from two perspectives. The first is the challenge that the sites themselves pose in allowing the free sharing of information. This is often countered by steps that the sites themselves have put in place to provide privacy protections, but those protections are limited and do not extend to prohibitions on contacts disseminating information among their own contacts. The technology can only go so far and cannot prevent people 'being human'.

The second lies in the attitude of users who are prepared, for whatever reason, to make otherwise private information available. It may well be that there is an attitude of indifference to the availability of private information. Is it possible that such attitudes by Generation Y users on social networks are, in the future, going to redefine fundamental privacy values? Are the values of privacy that derived from concerns about monolithic information systems—which gather information and then make it available for purposes other than the primary purpose of acquisition—going to fall by the wayside? Will the reasonably stringent rules surrounding the protection of private details be eroded by a willingness on the part of ICT users to make such information available regardless of consequences? Social network sites seem to be at the forefront of these potential challenges to traditional approaches to privacy and the handling of personal information.42

Privacy in the Digital Paradigm becomes a complex issue when individuals cede privacy about aspects of their lives via social media. In many respects, it would be difficult to maintain a claim for invasion of privacy in the light of such disclosures.
But even without such voluntary disclosures via social networks, the ever-changing technological landscape makes privacy regulation increasingly difficult. A further problem arises from the ability to locate information via a search engine. I have observed that the searchability and retrievability of information are fundamental characteristics or qualities of the Digital Paradigm. This easy location of information creates a further challenge to the pre-digital concepts of partial and practical obscurity, and search engines have become the battleground for a form of privacy protection.

42  There is another aspect of social networks that is developing and that is their contribution to ‘Big Data’. ‘Big Data’ is an all-encompassing term for any collection or sets of data so large and complex that it becomes difficult to process them using traditional data processing applications. Information comprised in a ‘Big Data’ dataset may be derived from a number of sources. In the past large datasets in the fields of meteorology, genomics, connectomics, complex physics solutions and biological and environmental research have typified ‘Big Data’. However, the gathering of Internet-based data and its analysis and use for purposes other than science by both government and private organisations is becoming common. Such data acquisition and analysis can be used to determine market and social trends but may have limited utility, given that it may not comprise a representative sample and is largely historical rather than current. This is the basis for a different discussion around information data analysis.

The Right to be Forgotten


VII.  Search Engines and Information Retrievability

The development of the World Wide Web was, in the vision of Tim Berners-Lee, to assist in making information available and to create a method of accessing stored information and sharing it.43 Yet it had already become clear, ev