Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech? 9781138949317, 9781138949331, 9781315669144


English Pages 197 [229] Year 2019


Table of contents:
Cover
Half Title
Title Page
Copyright Page
Table of Contents
List of figures
Foreword
Acknowledgments
Preface
1. Science Fiction, Technology, and Policy
What Science Fiction Can Do
The Law in Science Fiction
Writing About Future Technology
On Communication
Conclusion
Notes
2. The Future of Copyright Law, Both Real and Virtual
The March to Maximalism
Near-Term Dystopia
Possibilities for Reform
Laboratories of Copyright Law
Notes
3. Privacy in the Perpetual Surveillance State
Privacy Law During the Rise of Invasive Technology
Privacy in Private – Invasive Surveillance
Privacy in Public When Cameras Are Everywhere
Wearable, Implantable, and Biometric Technologies
Future Ways of Thinking About Privacy
Notes
4. Do Androids Dream of Electric Free Speech?
Freedom of Expression for Science Fiction Robots
What is Robot Speech?
Copyrights for Non-Human Creators
Conclusion
Notes
5. Vanishing Speech and Destroying Works
Government Destruction of Private Speech
Private Censorship
Government Destruction of Its Own Records
When Destruction of Speech May Be Necessary
Conclusion
Notes
6. Law, the Universe, and Everything
The Future of the First
Journalists . . . in . . . Space!
Conclusion
Notes
Index


MEDIA LAW THROUGH SCIENCE FICTION

Attorney and legal scholar Daxton Stewart examines the intersection of media law and science fiction, exploring the past, present, and future of communication technology and policy debates. Science fiction offers a vast array of possibilities anticipating future communication technologies and their implications on human affairs. In this book, Stewart looks at potential legal challenges presented by plausible communication technologies that may arise 20 or 50 or 100 years from today. Performing what he calls “speculative legal research,” Stewart identifies the kinds of topics we should be talking about relating to speech, privacy, surveillance, and more, and considers the debates that would be likely to arise if such technologies become a reality. Featuring interviews with prominent science fiction authors and legal scholars, and a foreword by Malka Older, this book considers the speculative solutions of science fiction and their implications in law and policy scholarship. Chapters feature specific literary examples to examine how cultural awareness and policy creation are informed by fictional technology, future societies, and legal disputes. Looking forward, beyond traditional legal research and scholarship to the possible and even very likely future of communication technology, this fascinating work of speculative legal research will give students and scholars of media law, science fiction, and technology much to discuss and debate.

Daxton R. “Chip” Stewart, Ph.D., J.D., LL.M., is a professor of Journalism in the Bob Schieffer College of Communication at Texas Christian University. He has more than 15 years of professional experience in news media and public relations and has been a licensed attorney since 1998. His recent scholarship has focused on the intersection of social media and the law, including the book Social Media and the Law (2nd ed., Routledge, 2017).

MEDIA LAW THROUGH SCIENCE FICTION Do Androids Dream of Electric Free Speech?

Daxton R. Stewart
FOREWORD BY MALKA OLDER, AUTHOR OF INFOMOCRACY

First published 2020
by Routledge
52 Vanderbilt Avenue, New York, NY 10017
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2020 Taylor & Francis

The right of Daxton R. Stewart to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published 2020

Library of Congress Cataloging-in-Publication Data
A catalog record has been requested for this book

ISBN: 978-1-138-94931-7 (hbk)
ISBN: 978-1-138-94933-1 (pbk)
ISBN: 978-1-315-66914-4 (ebk)

Typeset in Bembo by Taylor & Francis Books

CONTENTS

List of figures vi
Foreword vii
Acknowledgments xi
Preface xiii
1 Science Fiction, Technology, and Policy 1
2 The Future of Copyright Law, Both Real and Virtual 34
3 Privacy in the Perpetual Surveillance State 60
4 Do Androids Dream of Electric Free Speech? 107
5 Vanishing Speech and Destroying Works 144
6 Law, the Universe, and Everything 178
Index 190

FIGURES

0.1 Malka Older vii
0.2 Ernest Cline in the tricked-out DeLorean he calls “Ecto-88,” at South by Southwest Interactive in Austin, Texas, 2015 xx
1.1 Annalee Newitz 4
1.2 Cory Doctorow 6
1.3 Robin Sloan 8
1.4 Katie Williams 13
1.5 Louisa Hall 14
1.6 Daniel Wilson 17

FOREWORD

FIGURE 0.1 Malka Older. Photo by Allana Taranto.

One summer when I was in high school, I had a fellowship that put me in the offices of a district court judge. This was high school, so I didn’t do much of substance, but I was able to observe both in court and in chambers, and talk to the clerks a lot and the judge sometimes. The image I carried away from this incomplete experience of the law was that of the graphs we drew in math class to follow a curve using only straight lines. The
lines would describe blocks awkwardly filling in the space around the curve. More points, refine the pixels, get closer and closer to the curve; but there will always be those jagged spaces where the right angles gap away from the sinuous line. The law, to my impressionable teenaged brain, tried to bound human behavior with lines and right angles. People, with their weirdness, would always find ways to act in that grey area along the curve not properly demarcated within or without. Those were the cases I saw argued in the courtroom: this specific instance doesn’t fit neatly into our rules because [circumstances, weirdness]. That case would become one more data point, a precedent, letting the imperfect human description of ourselves get a little closer to reality, but never quite match it. (Now, in a different era, the analogy that leaps to my mind is not hand-drawn calculus approximations but machine learning: a process that laboriously refines itself into a better algorithm.)

Fiction, on the other hand, is freehanding. We might get the curve too high or low, too steep or flipped around or otherwise totally wrong, particularly as we move into speculating about the future. But if we are lucky and hard working, if we pay attention and observe carefully, we might get a detached, free-floating echo of the elusive line we’re after.

As many science fiction writers have noted before me, we are living in the future. Every new day takes us into unknown territory. As William Gibson has famously said, this future we inhabit is unevenly distributed; no matter where you live or how frenetically you travel, someone is inventing something while your back is turned, and someone else will reach that part of the future before you do. The pace of change is faster and stranger and more diverse than we can assimilate. We can try to keep up, but I’m not sure anyone really does. Still, we have to live in this future.
We need to keep remaking society’s rules to include new technologies, and we are compelled as we always seem to have been to describe our lives. We attempt these objectives – ordering society and describing it – through a lot of different tools. We use economics and anthropology, journalism and film, statistics and Instagram. Law serves both purposes, seeking to draw the boundaries on new technologies while also documenting our society and its discontents.

I approach this as a science fiction writer. I use a future – an imaginary, often impossible future – to describe the moment on the cusp of the present and the future that we live in. One of the more entertaining aspects of being a science fiction writer is that I can make up any new technology I want. I can invent technologies to close plot holes (faster than lightspeed space travel!), create plot tension (if it’s not dealt with immediately this bit of tech will explode! starting a viral pandemic! and creating a portal to aliens!), or simply add amusement (micro-garden hairstyles!). I don’t need to worry about engineering or materials science unless I feel like it, although trying to work out the details can be part of the fun. And nobody can tell me I’m wrong, because I’m writing about the future.
But while I can be as wild and imaginative about technology as I want, I know that as a writer there is one area in which I must achieve verisimilitude. I have to make the characters act like human beings (or, if the characters are not human beings, I have to invent some internal logic for their society that makes sense to human beings). Whether I’m writing in the past, in an alternate present, fifty or five hundred or five thousand years in the future, the characters and their relationships must be recognizable. Otherwise the whole artifice falls apart, the way poorly done pointillism will fail to resolve into an identifiable image. But write characters that behave, think, hate, love like real people, and readers will happily accept anything from cold fusion to dragons.

My job, then, is essentially to think up some difference in the world – technological or magical or historical, something that is startling or absurd or subtly insightful or pointed – and make sure that the human reactions to it, the changes society has built around it, feel right. (There also has to be an entertaining story in there somewhere.) At a minimum, I need to show the direct effects of the change and how it impacts my story. But if I stop there, I risk a book that feels didactic or one-dimensional. To do my job well I need to think through the unintended and unexpected consequences, the second- and third- and fourth-order ripples. I need to consider the impacts of the technology on as many different people as can fit in: on disability, on introverts and extroverts, on low education, on the rich, on the precariously poor, on the elderly, on kids. I need to figure out if some people hate and resent the technology, and others hold it up as unassailably positive. I need to imagine what other applications have come up, both formal and unauthorized.
In the best of cases, I will come up with some other, not directly related technological changes too, because as interesting as it can be in a thought experiment, change typically doesn’t happen as a single variable while everything else stays constant.

Legal practice and theory use the opposite approach to describe society. Unlike the science fiction writer, the law theorist or documenter must hew to what exists in the world, without imagining hypothetical future technologies. Rather than charting out the range of human reactions, the law attempts to circumscribe that range, finding the outside margins of what might be expected – or what can be allowed by society. Where I search for the idiosyncrasies and quirks of humanity to make my future worlds feel lived in and real, law tries to set broad strokes to encompass or prohibit all of that weirdness, and then gradually refines itself through practice and precedent, carving out exceptions or introducing unexpected compromises.

It’s helpful for me to have a sense of how law works – now and in the past – so that I can adjust legal systems appropriately for the future I’m writing about (or, in some cases, create an entirely new one). But, as this volume argues, science fiction can have even more to offer the law. Science fiction works as a test case, a virtual experiment to explore the possibilities of the near and distant future.
Fiction is particularly valuable because of this secret sense we all have of knowing when a story works. (In the future of my Centenal Cycle books this is a recognized syndrome, called narrative disorder, that almost all of us suffer from, if at different points on the spectrum.) We can feel, to return to the initial analogy, when the curve is too sharp or too angular to be real. This is a learned sense, and partially culturally determined, and therefore it can lead us astray if we depend too much on our local tropes. Believing what Hollywood tells us about how love or science or friendship works could quickly lead to disaster. But there is a more universal aspect to it as well. We can find characters to identify with and familiar relationships in stories from the distant past or in cultures far from our own. If a plot doesn’t make sense to us after surviving centuries of retellings, it probably means we need to understand something that has fundamentally changed between that culture and ours, and once we do the motivations fall into place.

If we combine law and fiction (and ideally some sociology or economics or anthropology as well, some feminist theory and Marxism and deconstruction and whatever else we can get our hands on), we can pull together that sense of the curve with the practiced, meticulous effort to block out the area under it. We can both imagine how people might act – in the future, under different circumstances, with new technologies – and connect that to the long and carefully documented history of how people have acted so far. That is what this volume proposes, and it offers us a synergistically powerful set of tools for approaching the legal and moral quandaries of the future that is still beyond our reach.

ACKNOWLEDGMENTS

A project like this, quite outside the norm of legal research and scholarship, needed a push and a lot of support. I have to start with Erica Wetter at Routledge, who encouraged me to work on this. I was casting about for ideas for potential research, each one duller than the last, but really hoping someone would be interested in the science fiction and future technology one, and Erica was immediately enthusiastic about it.

Also, I have to thank Steve Levering, my professor colleague here at Texas Christian University, who told me I should read Ready Player One and then gave me some other ideas as I started thinking about it as a research project. It’s good to have a friend who loves to hang out and talk about science fiction and other fun stuff. Likewise, Woodrow Hartzog, instead of telling me this was crazy, gave me some more reading suggestions, and let me bounce a lot of ideas about future privacy law off of him. And I have to say a word of thanks to Derigan Silver for pitching the special issue on the future of communication law for Communication Law & Policy, and to Wat Hopkins, the editor of the journal and a scholar I have long admired, who dismissed my fears about submitting the original essay version of this and, as always, was kind and thoughtful through the editing process.

A special thanks goes out to the authors who enthusiastically responded to my interview requests and were so gracious with their time: Cory Doctorow, Louisa Hall, Annalee Newitz, Robin Sloan, Katie Williams, and Daniel Wilson. I also thank the scholars – Andrea Guzman, Jeremy Littau, and Nicholson Price – who took time to respond to my questions about their work. And, of course, I’m indebted to the brilliant Malka Older, who not only gave time for an interview, but also said “yes” when I asked if she would write the foreword, even though she was in the middle of completing her doctoral work and publishing a new serialized story.
Finally, none of this would be possible without my family. Kara, thanks for supporting me in my academic career, and for tolerating all of the times I wanted to watch Star Trek or Blade Runner instead of something a little lighter and foodier in the evenings. And to my daughters, Katie, Elise, and Sabrina, I have loved reading Adams, Bradbury, Orwell, Rowling, Tolkien, and so many others with you, and I hope you keep exploring stories with dragons and wizards and robots and spaceships.

PREFACE

Legal research and science fiction are often maligned, though for different reasons. Legal research is criticized as being too academic to be practical, while science fiction has been painted as too lowbrow to be taken seriously. Nevertheless, science fiction authors – and the visions they depict of the future of communication technology – have much to contribute to media law scholarship.

The critique of legal research is not new, but it is persistent, arising again when The New York Times published Adam Liptak’s broadside against law reviews, writing that they “are such a target-rich environment for ridicule that it is barely sporting to make fun of them.”1 Liptak cited no less than the Chief Justice of the United States, who quipped in 2011:

Pick up a copy of any law review that you see and the first article is likely to be, you know, the influence of Immanuel Kant on evidentiary approaches in 18th-century Bulgaria, or something, which I’m sure was of great interest to the academic that wrote it, but isn’t of much help to the bar.2

Such criticism seems to be part of a popular trend in the public perception of scholarship, which apparently involves an aversion to multisyllabic words and abstract thought. The Times persisted in publishing takedowns of the academy in early 2014, when columnist Nicholas Kristof called for the return of the public intellectual while belittling the modern scholar as often irrelevant, part of an academic culture that “glorifies arcane unintelligibility while disdaining impact and audience” and that requires academics to “encode their insights into turgid prose” and “gobbledygook” that is then “hidden in obscure journals.”3

Science fiction writers have also long battled critiques of their writing. Michael Crichton, the esteemed author of The Andromeda Strain, Jurassic Park, and other
classic works, shared some of these common refrains in his review of a contemporary’s work that would become a classic. Writing about Kurt Vonnegut’s Slaughterhouse-Five in 1969, Crichton asserted:

[I]t has traditionally been true that one cannot acceptably admit to a taste for science fiction, except among scientists or teenagers. And those two groups share a strikingly low standard of literary attainment, and a correspondingly high tolerance for mangled prose.4

While Vonnegut’s work may have been a “notable success” in transitioning him away from science fiction’s roots, Crichton still found the field to be “as pulpy, and awful, as ever.”5 Vonnegut himself acknowledged the fairness of some of this criticism,6 though he simultaneously lamented that he wrote for the “file drawer labeled ‘science fiction,’” but that “so many serious critics regularly mistake the drawer for a urinal.”7 More recently, cyberpunk author Neal Stephenson acknowledged the “long-standing tendency of so-called literary writers and critics to say mean things about science fiction,” but also noted, “A lot of science fiction writers don’t care, but the ones who do care feel wounded by that and get defensive.”8

Such disdain for science fiction may be a remnant of what the scientist and author C.P.
Snow saw in 1959 as two distinct cultures emerging in western society between scientists and what he termed “literary intellectuals,” between them a “gulf of mutual incomprehension – sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding.”9 The same intellectuals lampooned by Kristof and Liptak more than half a century later have a tradition themselves of looking down their collective noses at scientists who could explain the Second Law of Thermodynamics but hadn’t read anything by William Shakespeare or Charles Dickens.10 But Snow pointed out how much each field had to lose through such mutual disregard, if not outright disgust: “The clashing point of two subjects, two disciplines, two cultures – of two galaxies, so far as that goes – ought to produce creative chances. In the history of mental activity that is where some of the breakthroughs come.”11

In this book, I aim to serve as a bridge between these two cultures, particularly between legal scholars and science fiction writers. Science fiction offers a vast array of possibilities anticipating future communication technologies and their implications on human affairs. As the author Ursula Le Guin said, “The future is a safe, sterile laboratory for trying out ideas in” without “fear of contradiction from a native.”12 The science fiction author’s prime directive13 is to discuss the idea in the context of the story, not necessarily to flesh out the legal and policy implications such technologies would likely involve. Author Gwyneth Jones suggests that the “science” in science fiction is more about method than subject matter:
[W]hatever phenomenon or speculation is treated in the fiction, there is a claim that it is going to be studied to some extent scientifically – that is objectively, rigorously; in a controlled environment. The business of the writer is to set up equipment in a laboratory of the mind such that the “what if” in question is at once isolated and provided with the exact nutrients it needs . . . the essence of sf is the experiment.14

Additionally, while science fiction may not exist primarily to be predictive, it is difficult to ignore the effectiveness of science fiction writers in revealing technologies years if not decades before they become reality.15 What is The Hitchhiker’s Guide to the Galaxy, really, besides an intergalactic e-reader with a data plan and a Wikipedia app?16 Science fiction has inspired scientists to push ahead, with companies such as Microsoft, Google, and Apple sponsoring “lecture series in which science fiction writers give talks to employees and then meet privately with developers and research departments.”17 Crichton, while pointing out science fiction’s faults, nevertheless found that the ideas of some early practitioners – Jules Verne and H.G. Wells among them – were “to an astonishing degree . . . correct.”18 One of the godfathers of modern science fiction, Isaac Asimov, said that one of the “certain satisfactions peculiar to” science fiction was the opportunity to be “hailed as a sort of minor prophet” when predictions were somewhat accurate.19 Asimov in particular expressed pleasure in his foresight on robotics from a story written in 1939, spacewalking in a 1952 story, and pocket computers in a 1957 story.20

Such predictions appeared in a relatively recent legal dispute, when Samsung used language and still frames from the film 2001: A Space Odyssey showing astronauts reading the news on tablet-like devices to challenge one of Apple’s patents on the iPad.21 Arthur C.
Clarke’s vision of the “Newspad,” revealed in 1968, predated the release of the iPad by nearly half a century.22 The tablet foreseen by Clarke was as much plot device as prediction. Indeed, he did not delve into the policy implications of such a technological marvel, beyond noting:

The more wonderful the means of communication, the more trivial, tawdry, or depressing its contents seemed to be. Accidents, crimes, natural and manmade disasters, threats of conflict, gloomy editorials – these still seemed to be the main concern of the millions of words being sprayed into the ether.23

The law, unlike science fiction, is necessarily more reactive and historical. Rather than looking forward to anticipate potential future disputes, courts and lawyers must deal with active, ripe controversies.24 Resolving present disputes with an eye on the past is, of course, exactly what courts should be doing. Courts exist primarily to be functional rather than philosophical; even decisions of the Supreme Court of the United States that delve into broad matters of policy and modern existence do so only to resolve existing dilemmas of the human
condition in a democratic state. This is the heart of stare decisis and the common law – using the decisions of the past to inform the present, helping to resolve active controversies.25 Lawyers, judges, and legislators are not in the futurism business, and this is reflected somewhat in the law’s sometimes plodding adaptation to revolutionary technology. Judges and legislators can certainly be “forward-looking” when making sense of new technologies, as legal scholar Meg Leta Jones points out. The history of advancement of the law in its response to technology – from photography to computers to drones – is not one of sea changes in immediate response to new technologies, but rather is less linear, involving long-term exchanges between innovations and users and the law as the practical reality of these tools becomes evident.26

But as critics assail legal research for being overly theoretical and pointless, legal scholars are able to do things that judges and lawyers do not have to do, and that often legislators do not have the incentive to do: look forward to future disputes, ones that don’t exist yet but are almost certainly going to arise, not just in the short term but also even further ahead. Jones called such scholars “legal futurists,” engaging in research “that is explicitly anticipatory,” looking to future technological innovations and “consider(ing) how the law will be able or unable to handle social ramifications.”27 We can inform the debates of tomorrow – the legislative proposals, the regulations, the judicial opinions – by looking at legal dilemmas that are almost certain to arise based on the rapid advance of technology. In one entertaining example, professor Yvette Joy Liebesman suggested a “wait and see” approach for legislation and regulation when anticipating future technology, in a thoughtful study on whether copyright law could handle “magically animated photographs” from the wizarding world created by J.K.
Rowling in her Harry Potter series of books.28 With science fiction laying the groundwork, we can inform future debates by looking at them before they arise, applying classical legal principles in a modern framework.

In this book, I examine themes from my years of consuming works of science fiction, and I undertake to look at the legal and policy implications of each as a way of opening a portal to thinking about the future of media law scholarship. In this preface, I begin with a brief exploration of science fiction as a concept and how the genre has been approached by courts and legal scholars, before offering a brief overview of the topics to come in the next six chapters of the book.

Science Fiction and Legal Scholarship

Science fiction scholars have struggled to find a working answer to the question “What is science fiction?”29 At its broadest, science fiction is fiction that looks ahead. Ray Bradbury, famed author of The Martian Chronicles and Fahrenheit 451,
said the genre “is really sociological studies of the future, things that the writer believes are going to happen by putting two and two together.”30 As a genre, science fiction shares much in common with myth, fantasy, folk tales, and horror – realms sometimes referred to collectively with science fiction as “speculative fiction.”31 Science fiction scholar Darko Suvin distinguishes science fiction based not “in terms of science, the future, or any other element of its potentially unlimited thematic field,” but rather as an expression of “cognitive estrangement” – that is, fiction that puts readers in another world different than our own but still plausible within the bounds of our experience.32 Philip K. Dick, who authored short stories adapted into the popular films Blade Runner33 and Minority Report,34 largely appears to agree with this conceptualization, separating science fiction from “space adventure” and other fiction that looks into the future using “super-advanced technology.”35 Instead, Dick wrote, science fiction should involve “a society that does not in fact exist, but is predicated on our known society – that is, our known society acts as a jumping-off point for it.”36

A broader conceptualization of science fiction than Dick or Suvin offer seems more appropriate, one that embraces both alternate worlds and future situations based on our own, with narratives driven by plausible advances in technology. The genre is, as science fiction author Cory Doctorow has put it, “a toolkit for thinking about the relationship between technological change and humanity.”37 While science fiction may involve much more than mere “technological forecasting,”38 technological plausibility is essential if science fiction is to inform the future of media law and policy.
As science fiction scholar John Griffiths outlined, “A science fiction story is one in which the suspension of disbelief depends on the plausible development of a central technical or scientific idea or ideas.”39 Likewise, without plausibility, science fiction would be lessened in its ability to serve as a mediator between the two cultures of intellectuals and scientists.40 To illustrate, when asked about plausibility and predictive technology of widespread solar power, science fiction author Neal Stephenson offered an analogy to a piece he was writing that included a tower 20 kilometers tall:

In my story, the tower is going to be made out of steel because we have steel. I could write a story about a much taller tower made out of buckytubes, but we don’t have buckytubes. Everyone would say, “Oh well, we need to invent buckytubes.” And the tower would be dismissed as an unattainable goal.41

The law has had little involvement in science fiction to date. When courts have referenced science fiction authors in their opinions, it is typically because of a copyright dispute involving the author’s work,42 the banning of the author’s work from a public institution,43 or from some other odd circumstance, such as a 2008 federal lawsuit against an airline accused of losing “an artistically and scientifically valuable robotic head modeled after famous science fiction author Philip K. Dick.”44
But occasionally, courts have reflected on making modern judicial decisions in a world in which technology outpaces the law. For example, a New York state court in 1980, considering whether to allow a hospital to remove a terminally ill, comatose patient from a respirator, acknowledged this dilemma:

Ultimately, we must face the fact that technological advances in medicine have generally outpaced the ability of the judicial system to deal comprehensively with them in a manner consistent with the fulfillment of social policy objectives. Subjects that only 15 years ago were within the exclusive domain of such visionaries as Ray Bradbury, Arthur C. Clarke and Isaac Asimov – genetic recombination, microsurgery, transplantation of organs and tissues – are now very real, straining the traditional boundaries of the law. Indeed, even the venerable doctrine of stare decisis becomes ineffectual in that it suggests institutional reliance on old answers at a time when the questions themselves have passed beyond the imagination of the judicial sages who formulated the precedents.45

The Supreme Court has used the words “science fiction” exactly once in an opinion, in a footnote in Justice Harry Blackmun’s dissenting opinion in Webster v. Reproductive Health Services in 1989, dismissing claims about medical technology changing the threshold of fetal viability as “pure science fiction.”46 This is, of course, as expected for the reasons stated earlier. Courts examine present controversies in the context of what exists now, not what may plausibly exist in the near and distant future.

Legal scholars, on the other hand, have looked forward on occasion. Law professor George J. Alexander received a grant from NASA to research legal issues in space exploration, leading to the publication of his article “The Legal Frontier in the Space Program” in 1969.
Alexander understood the challenges this kind of research can present, and how it might spoil some of the wonder that comes with the visionary achievements of space exploration. "It is a sign of the good fortune of the space program that N.A.S.A. has not been forced fully to explicate its legal obligations," Alexander wrote. "In discussing briefly what I expect those obligations to be, let me not be understood to be a prophet of doom, but merely a professional in a field which unfortunately is trained to look at the dim side."47

Science fiction has also made occasional appearances in legal scholarship, most often in journals focusing on legal studies48 or law and literature.49 Some scholars have addressed contemporary matters, such as Mitchell Travis examining how themes from the science fiction film Species could inform debate on admixed embryos (that is, embryos containing both animal and human DNA) under Britain's Human Fertilisation and Embryology Act.50 Others have gone to the more distant future, anticipating how U.S. naval regulations may shape interstellar law as humans come into contact with intelligent alien life, as Thomas Wingfield did in 2001 with his exploration into possibilities raised by the television series Star Trek.51


It remains a niche area, though there is tremendous potential to use the visions of science fiction authors to look forward and get ahead of the legal issues they raise, well before those issues become immediate problems. This approach can also help inform the development of technology, and the policies that may arise surrounding its implementation and acceptance, in a more productive manner.

Imagine legal research in the 1990s that anticipated the rise of online social networks and worldwide private data gathering, and how such discussions might have eased the struggles we now face with companies such as Facebook and Google, which have largely dodged effective regulation while leaving users and governments in the difficult position of, metaphorically speaking, putting toothpaste back into the tube. Facebook's corporate motto of "move fast and break things" helped its founders become some of the wealthiest and most influential people on the planet,52 but the company's growth went unchecked by data and privacy regulations that might have protected society more effectively.

The purpose of this book is not to come up with solutions to that conundrum. Plenty of scholars from multiple disciplines are working on that already, as are legislators and courts around the world. Rather, in this book, I set out to look at potential legal challenges presented by plausible communication technologies that may arise 20 or 50 or 100 years from today, in the hopes of both identifying the kinds of topics we should be talking about and informing the debates that would be likely to arise if such technologies become a reality. This book is an exercise in what I have come to think of as "speculative legal research."

Media Law Through Science Fiction

It was a brief, easily overlooked passage in Ready Player One by Ernest Cline that gave me the nudge to start this project. A friend had recommended the book to me, knowing that I enjoyed techie stuff and old-school computer and video games, and I flew through it. "My only disappointment," I told him shortly after I finished reading it in 2012, "was that it didn't mention Tron enough."

As both an attorney and a journalism professor, I'd spent the previous few years looking at the intersection between new communication technologies and the law, specifically social media tools such as Facebook and Twitter. The previous year, I'd looked at the copyright and fair use implications of news media using photos shared on social media, requiring a deep dive into copyright law. So when I came across a scene in Ready Player One that mentioned copyright, my media law brain perked up. Cline wrote that the OASIS, the virtual reality world where the main action of the book takes place, was basically "the world's largest public library," where every bit of creative work imaginable could be accessed. Under the book's curious future copyright laws or licensing system, for any work more than 40 years old, "free digital copies of them could be downloaded from the OASIS."53

This is, of course, not how copyright law works at all today, and it is almost unimaginable that the slow crawl of copyright reform efforts would change this


drastically in the three decades or so between the book's publication and its dystopian future setting of 2044. How, I wondered, did Cline arrive at a system of copyright terms shortened to this extent, so that anything more than 40 years old is essentially in the public domain when it comes to downloading and sharing online?

Copyright reform advocates such as Lawrence Lessig would be delighted by the idea that their efforts to reduce the length of copyright terms – an approach long opposed by corporate intellectual property interests such as the entertainment industry, which had successfully worked to lengthen copyright terms in the 1990s – had finally succeeded, if Ready Player One were to be believed. The massively multiplayer virtual world envisioned by Cline could be an engine for practical reform of outdated copyright policies that have struggled to adapt to the online world. It is a future copyright utopia. I was intrigued enough to dog-ear that page as I thought about possible future research projects.

One thing I've learned in my two decades as an attorney is that we can take the fun out of anything, and science fiction is no exception. While I had been catching up on classic science fiction for several years, I asked around for recommendations for more recent sci-fi that portrayed tech and future media law

FIGURE 0.2 Ernest Cline in the tricked-out DeLorean he calls "Ecto-88," at South by Southwest Interactive in Austin, Texas, 2015. Photo by Chip Stewart.


issues, as well as virtual worlds and intellectual property matters. "I think the authors you're looking for here are Cory Doctorow and Neal Stephenson, right?" responded my friend Woody Hartzog, a law professor and expert on modern privacy law, recommending books such as Doctorow's Pirate Cinema and Stephenson's Snow Crash and The Diamond Age. My TCU professor friend who suggested Ready Player One, Steve Levering, said I should check out Doctorow's Little Brother and pointed me to Twelve Tomorrows, the collection of short stories published by MIT's Technology Review starting in 2011. Erica Wetter, the editor I worked with at Routledge on my book Social Media and the Law, seemed about to zone out over coffee when I was talking about some of the duller projects I was working on, but immediately brightened up when I mentioned what I thought was an off-the-wall look at law and science fiction; she recommended that I revisit Star Trek: The Next Generation and encouraged me to pursue this as a potential book.

From there, I dove in. In 2014, the journal Communication Law & Policy announced a special issue on the next 20 years of communication law and technology, and I figured it might be the right home for a project that was admittedly unlike what we as legal scholars usually do. But when I saw the call, I thought, why stop at 20 years? Science fiction offers us future technologies that may be plausible, if not immediately so, and the law will be challenged when those technologies arrive. I wrote an article focusing on three main topics – copyright, privacy and surveillance, and artificial intelligence – that served as a launching point for this book. I was thrilled when the journal's editor, W. Wat Hopkins at Virginia Tech, let me know the paper had been accepted. When I presented it at the annual Association for Education in Journalism and Mass Communication conference in Montreal that year, I was perhaps a bit too enthusiastic for an 8 a.m.
session on the first day of the conference. In that moment, I described it as a passion project. It was, and it still is.

Over the past five years, I have almost exclusively watched and read science fiction, looking for potential future communication technologies and the legal issues they may present. When I have done research projects outside of the book, I have done them with an eye on the future. For example, I puzzled over gaps in privacy law between an almost absolute right to record people in public and the ability to be sued for anything you may record, a project that I undertook with my research colleague Jeremy Littau when the livestreaming tools Meerkat and Periscope launched in 2015.54 Or, when messaging tools such as Signal and Confide emerged that allowed messages to self-destruct after reading – an obvious problem for government transparency laws that typically require such messages to be preserved and disclosed – I looked at the thorny legal issues raised by vanishing speech tools, which were not foreseen when right-to-know laws were enacted many decades ago.55 Some of the analyses from these articles, as well as the original essay for Communication Law & Policy,56 are contained in the chapters ahead.


In terms of the scope of this project, I cannot pretend that it is comprehensive. I looked for every work of science fiction that I could get my hands on that might touch on future communication technology tools or future ways of thinking about the law as it relates to communication technology. I went back to the classics – Fahrenheit 451, Nineteen Eighty-Four, Stranger in a Strange Land, Foundation, The Handmaid's Tale, Star Trek, and the short stories of Philip K. Dick, to name a few. I asked friends and colleagues about other things I should be reading. And, as I talked to science fiction authors, I asked them about works I may have missed as well. Every time I thought I was done reading or watching films, another recommendation would pop up, or a new book would come out, with new tech or new ideas to be explored. I kept reading until the moment I sent off the manuscript to the publisher, and I assure you, I will find things I missed in the time it takes the book to go to press. My goal was to capture as much as possible that could both inspire and illustrate the future legal analysis that is the point of this book.

A brief glance at the subjects of each chapter:

Chapter 1 – Science Fiction, Technology, and Policy

Science fiction authors create plausible worlds in the near and distant future as laboratories to study humanity. While the genre has long had a popular and sometimes low-brow appeal, it has also generated some of the great works of literature, including Mary Shelley's Frankenstein, Kurt Vonnegut's Slaughterhouse-Five, Margaret Atwood's The Handmaid's Tale, Ray Bradbury's Fahrenheit 451, and George Orwell's Nineteen Eighty-Four. Contemporary authors examine the human condition in their works, many of which touch upon the future of communication, looking at technology, language, artificial intelligence, alien life, and more.

This chapter includes a series of semi-structured interviews with science fiction authors Cory Doctorow (Little Brother, Pirate Cinema, Walkaway), Annalee Newitz (Autonomous), Malka Older (Infomocracy), Katie Williams (Tell the Machine Goodnight), Louisa Hall (Speak), Robin Sloan (Mr. Penumbra's 24-Hour Bookstore), and Daniel Wilson (Robopocalypse), as well as conversations with scholars who have written about science fiction or have used it in their research and teaching, to help explore the role it can play in understanding technology law and policy matters. How does the plausibility of technology influence the worlds and tools these authors create? How realistic is it supposed to be? How do they build future legal systems around the technology and societies they write about? This chapter examines these questions to build understanding of the craft of writing science fiction and how it can be used to help us understand the present and the future.


Chapter 2 – The Future of Copyright Law, Both Real and Virtual

In Ready Player One, Ernest Cline tells a story largely set in a virtual reality world 25 years from now. In this world, characters are able to download, share, and host – for free – copyrighted materials such as music, television shows, movies, and videogames from the 1980s. That couldn't happen today, but it could be a possible future for copyright law in line with what intellectual property scholars and advocates prefer. Also plausible, considering the march of stricter copyright restrictions over the past half century, are more draconian copyright laws, such as a future in which illegal downloading of videos can cost a family its Internet access and lead to serious jail time, as Cory Doctorow depicts in his novel Pirate Cinema. Potential intergalactic issues, such as widespread copyright infringement of Earth music by aliens in Rob Reid's Year Zero, or rebroadcast of alien music by humans in Mary Doria Russell's The Sparrow, are examined as well. This chapter reviews these and other science fiction portrayals of the immediate and distant future of intellectual property law, describing some of the challenges creators and policymakers are certain to face about what is copyrightable, how long copyright terms should last, how penalties and remedies should be addressed, the role of the public domain, and how virtual reality and other imminent technology may change the way we think about copyright.

Chapter 3 – Privacy in the Perpetual Surveillance State

Science fiction writers have long portrayed humanity under constant surveillance – "Big Brother is watching" – in dystopian future totalitarian states. This chapter examines the challenges technology presents to humans as our world becomes more and more recordable and archivable. When the government has the ability to read every email and listen to every phone conversation, and citizens are almost universally armed with cameras, ready to film any event at any time, anywhere, there are consequences for our legal understanding of privacy. Today, courts grapple with privacy in the context of warrantless wiretapping, unlocking smartphones, or accessing encrypted emails, while states and citizens argue over the balance between personal privacy and maintaining order and safety. But Orwell's Nineteen Eighty-Four, Alan Moore's V for Vendetta, Suzanne Collins' Hunger Games trilogy, Scott Westerfeld's Uglies books, Malka Older's Infomocracy, and more present future technological challenges we are already seeing or may face soon – cameras on every street corner and traffic light, walls and watches that can see and hear every move, and genetically modified animals able to spy on dissidents, to name a few. The modern dilemma of shifting notions of public and private spaces, and the rights of citizens and powers of the state attaching to them, is explored in the context of what may be even more pervasive technology to come – drones with cameras that are able to access any airspace or wearable/implantable computing that allows citizens and the state other


ways of recording and instantly spreading anything within their range. Neal Stephenson (Snow Crash, The Diamond Age) has regularly included characters with implanted computing technologies over the past 20 years, and with wearable computing and smartwatches already upon us, courts and policymakers will soon have to deal with the legal issues these tools present. This chapter considers the meaning of privacy in a world of constant technological enhancement, building on the work of legal scholars to examine ways of dealing with the privacy-threatening technologies to come.

Chapter 4 – Do Androids Dream of Electric Free Speech?

Futurists such as Ray Kurzweil predict that human-created intelligence will surpass humans' natural abilities (an event referred to as "the singularity") within 20 years. While actual artificial intelligence technology advances slowly, science fiction authors have long presented the challenges we will face as a society when we are able to create thinking machines greater than ourselves. What kinds of rights and freedoms should be extended to such creations? It was a topic pondered by Philip K. Dick in Do Androids Dream of Electric Sheep? and Louisa Hall in Speak, as well as in the famous Star Trek: The Next Generation episode in which Lt. Commander Data faces a trial over his humanity. Meanwhile, contemporary courts are starting to address how we regulate human creations that use algorithms to create new content – consider the output of a search engine, which scholar Eugene Volokh, writing on behalf of Google, argued should be protected as free speech under the First Amendment. Additionally, creative robots such as Med, the scientist in Annalee Newitz's Autonomous, represent the possibility that artificially intelligent beings may be entitled to intellectual property rights in their works. Should robots have free speech rights? Should our creations have the same freedoms as their creators? And what happens when they become creators?

Chapter 5 – Vanishing Speech and Destroying Works

A common theme in science fiction is the destruction of speech, often in a dystopia where the powers that be look to erase dissenters or contributors to an inconvenient past that threatens their control. The theme emerges in several different contexts. The state may be erasing texts that it finds threatening to the status quo, perhaps most famously in Fahrenheit 451 by Ray Bradbury. Or it may be erasing its own records in an effort to prevent oversight or change perceptions of the past, as in Nineteen Eighty-Four and The Giver. Things get even more complicated, from a legal perspective, when private citizens destroy each other's speech, a trope not confined to science fiction – consider Amy burning her sister Jo's book in Little Women – but reflective of the current and future abilities of citizens to delete works as a way of silencing or punishing their authors, not just in print but also in digital formats. Finally, the chapter explores


situations when permanent erasure of works may appear necessary to avoid greater societal harm, such as "killer speech" like the video in The Ring or the deadly film in David Foster Wallace's Infinite Jest.

Chapter 6 – Law, the Universe, and Everything

If science fiction authors have taught us anything about the future, it's that the way we think about freedom of speech and press and the role of communication in democratic society – core elements of the First Amendment – may be drastically different in the not-too-distant future. These freedoms have vanished in dystopias such as Suzanne Collins' The Hunger Games, Margaret Atwood's The Handmaid's Tale, and Ray Bradbury's Fahrenheit 451, as have parallel free speech concepts in Britain in works like George Orwell's Nineteen Eighty-Four and V for Vendetta by Alan Moore and David Lloyd. But not all futures are quite so dim, as technology may provide widespread benefits in countering disinformation and enabling better democratic governance, as Malka Older explores in her Centenal Cycle trilogy. Additionally, future journalists and citizens retain robust freedoms in other books, such as The Expanse series by James S.A. Corey, resting on modern liberties embedded in First Amendment principles.

How might the concepts of freedom of speech and press evolve in a future where technology has drastically altered human relationships? This chapter considers the future of communication, how it may be enabled, how it may be restricted, and what that may mean for current approaches to regulating technology, speech, and press. It concludes with a push into the even more distant future, in which communication and the legal consequences for it may be pushed further by travel across time, space, and dimensions.

***

Imagine if, 20 years ago, a handful of scholars had begun developing policy statements about civilian use of drones armed with video recorders – an entirely plausible technology. We'd have had a huge head start on developing meaningful state laws and Federal Aviation Administration regulations addressing the effects of these devices on privacy and security.
Instead, legislators and regulators have only just started to figure out how to draft coherent policy, while the technology becomes stronger and more accessible.

This book is about looking forward, beyond traditional legal research and scholarship, to the possible and even very likely future of communication technology. It is impossible to imagine a future in which virtual reality does not play an increasing role, in which wearable/implantable computing does not become more prevalent, in which the machines humans build do not become smarter and smarter to the point that they may be reasonable simulacra of humans themselves. While we may not be able to foresee the timing or development path of these technologies, we can predict that they will unavoidably have legal


and policy effects and consequences. With the volume of legal research and scholarship being generated through law schools and media programs, it makes sense for at least some of this body of work to be pointed toward the future, rather than aiming only at the present and past. Just as science fiction writers explore human challenges and issues in speculative worlds, we as scholars can help to build a framework for understanding the media law issues on the horizon.

As I was nearing completion of this manuscript, Sen. Kamala Harris launched her 2020 presidential campaign and, speaking about climate change, said we needed policy based on "science fact, not science fiction." The comment provoked some pushback from science fiction authors, including Charlie Jane Anders, who took to the opinion page of The Washington Post to show the value of science fiction in contributing to our understanding of policy and "rescuing the future from the huge challenges we're facing." Science fiction authors have long written about climate change and how it may affect living creatures, both here and on other planets, as allegories for modern times. There is much to be learned as visionary authors explore the plausible consequences of human behavior, through innovation or inaction, as a way to prepare ourselves today for the world of tomorrow. Anders wrote:

Fact-checking and spreading the truth are a never-ending battle of vital importance — but they're not enough to inspire people to do the hard work of rescuing the future. And because science fiction is the literature of problem-solving, our made-up stories about science and innovation can play an important role in helping us to regain our faith in our own ability to create change. So as Harris goes out and campaigns for decision-making based on science facts, she might also consider how we can harness the awesome power of science fiction.57

Anders' argument was a perfect summation of the purpose of this volume, and I hope it contributes to our understanding not only of the future challenges of the law and policy of communication, but also of the decisions we are making today about these topics that are central to human existence.

Notes

1 Adam Liptak, The Lackluster Reviews That Lawyers Love to Hate, N.Y. Times, Oct. 21, 2013, A15.
2 Id. (quoting Chief Justice John Roberts' talk at the Fourth Circuit Court of Appeals Judicial Conference). See also A Conversation with John Roberts, C-SPAN, June 25, 2011, www.c-span.org/video/?300203-1/conversation-chief-justice-roberts.
3 Nicholas Kristof, Smart Minds, Slim Impact, N.Y. Times, Feb. 15, 2004, SR11.
4 Michael Crichton, Sci-Fi and Vonnegut, New Republic, Apr. 26, 1969, 33.
5 Id. at 34.
6 Vonnegut wrote: "[I]f you write stories that are weak on dialogue and motivation and characterization and common sense, you could do worse than throw in a little chemistry or physics, or even witchcraft, and mail them off to the science-fiction magazines." Kurt Vonnegut Jr., Science Fiction, in Kurt Vonnegut Jr., Wampeters, Foma & Granfalloons 1 (1974). The essay was originally published in The New York Times Book Review in 1965.
7 Id. at 5.
8 Laura Miller, The Salon Interview: Neal Stephenson, Salon, Apr. 21, 2004, www.salon.com/2004/04/21/stephenson_4/.
9 C.P. Snow, The Two Cultures: The Rede Lecture 4 (1959).
10 Id. at 13, 16.
11 Id. at 17.
12 Eileen Gunn, How America's Leading Science Fiction Authors Are Shaping Your Future, Smithsonian, May 2014, www.smithsonianmag.com/arts-culture/how-americas-leading-science-fiction-authors-are-shaping-your-future-180951169.
13 As it were. Apologies to Gene Roddenberry. See, e.g., Star Trek: The Return of the Archons (NBC television broadcast July 27, 1967). (Roddenberry, the creator of the Star Trek series, established the "prime directive" as the interstellar explorers' first rule, which is, in short, an order not to interfere with the development of other cultures and civilizations.)
14 Gwyneth Jones, Deconstructing the Starships: Science, Fiction and Reality 4 (1999).
15 See Thomas A. Easton & Judith K. Dial (eds.), Visions of Tomorrow: Science Fiction Predictions That Came True (2010) (a collection of short stories predicting atomic bombs, the Internet, biological weapons, and three-dimensional printing).
16 See Douglas Adams, The Hitchhiker's Guide to the Galaxy 53–54 (Pocket Books 1981) (1979) ("It's sort of an electronic book. It tells you everything you need to know about anything. That's its job"). However, even the guide, envisioned by Adams in the 1970s, didn't project instant updates – the character Ford Prefect is writing a revision for a new version because the one he hands to Arthur Dent is "out of date." Id. at 54.
17 Gunn, supra note 12.
18 Crichton, supra note 4, at 33.
19 Isaac Asimov, Introduction, in Isaac Asimov, Robot Dreams: Masterworks of Science Fiction and Fantasy 7 (1986). He also noted that while science fiction writers may sometimes be accurate in their predictions, "Science fiction offers its writers chances of embarrassment that no other form of fiction does . . . we may prove inaccurate as well, sometimes ludicrously so." Id. at 9.
20 Id. at 8–9. Asimov wrote that his "first robots appeared in 1939." Id. at 8. This likely refers to a short story he began writing in 1939 that was first published in 1940. See Isaac Asimov, Robbie, in Isaac Asimov, I, Robot 5 (1950). Space walking was described in his 1952 story "The Martian Way." Isaac Asimov, The Martian Way, in Asimov, supra note 19, at 152. Pocket computers were included in his 1957 story "The Feeling of Power." Isaac Asimov, The Feeling of Power, in Asimov, supra note 19, at 241.
21 See Apple, Inc. v. Samsung Electronics Co., 2012 U.S. Dist. LEXIS 108648 (N.D. Cal. 2012).
22 Brad Stone, With Its Tablet, Apple Blurs Lines Between Devices, N.Y. Times, Jan. 27, 2010, A1.
23 Arthur C. Clarke, 2001: A Space Odyssey 52 (1968).
24 See Abbott Laboratories v. Gardner, 387 U.S. 136, 148–49 (1967), in which Justice John Marshall Harlan outlined what is recognized as the modern interpretation of ripeness: [T]he basic rationale is to prevent the courts, through avoidance of premature adjudication, from entangling themselves in abstract disagreements over administrative policies . . . The problem is best seen in a two-fold aspect, requiring us to evaluate both the fitness of the issues for judicial decision and the hardship to the parties of withholding court consideration.


See also Gene R. Nichol Jr., Ripeness and the Constitution, 54 U. Chi. L. Rev. 153 (1987).
25 As Chief Justice John Roberts wrote in 2010: "Fidelity to precedent – the policy of stare decisis – is vital to the proper exercise of the judicial function. Stare decisis is the preferred course because it promotes the evenhanded, predictable, and consistent development of legal principles, fosters reliance on judicial decisions, and contributes to the actual and perceived integrity of the judicial process." Citizens United v. Fed. Election Comm'n, 558 U.S. 310, 377 (2010) (quoting Payne v. Tennessee, 501 U.S. 808, 827 (1991)) (Roberts, C.J., concurring).
26 Meg Leta Jones, Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw, 2018:2 J. Law, Tech. & Pol'y 249, 278 (2018).
27 Id. at 283.
28 Yvette Joy Liebesman, The Wisdom of Legislating for Anticipated Technological Advancements, 10 J. Marshall Rev. Intell. Prop. L. 153, 156 (2010).
29 See M. Keith Booker & Anne-Marie Thomas, The Science Fiction Handbook 3–4 (2009).
30 Harvey Breit, Talk with Mr. Bradbury, N.Y. Times, Aug. 5, 1951, 182.
31 Id. at 2.
32 Darko Suvin, Metamorphoses of Science Fiction: On the Poetics and History of a Literary Genre viii (1979).
33 Philip K. Dick, Do Androids Dream of Electric Sheep? (1968).
34 Philip K. Dick, The Minority Report, in The Philip K. Dick Reader 323–54 (Philip K. Dick, 1997).
35 Philip K. Dick, My Definition of Science Fiction, in The Shifting Realities of Philip K. Dick: Selected Literary and Philosophical Writings 100 (Philip K. Dick, 1996).
36 Id.
37 Cory Doctorow, Cold Equations and Moral Hazard, Locus Online, Mar. 2, 2014, www.locusmag.com/Perspectives/2014/03/cory-doctorow-cold-equations-and-moral-hazard/.
38 See John Griffiths, Three Tomorrows: American, British and Soviet Science Fiction 13 (1980).
39 Id. at 25.
40 See Bradford Lyau, Science Fiction, Mediating Agent between C.P. Snow's Two Cultures: A Historical Interpretation, in Science Fiction and the Two Cultures 22 (Gary Westfahl & George Slusser eds., 2009).
41 Q+A with Neal Stephenson, in Twelve Tomorrows 12 (2013).
42 See MRC II Distribution Co. v. Coelho, 2012 U.S. Dist. LEXIS 125463 (C.D. Cal. 2012) (finding an active controversy and thus denying a motion to dismiss in a dispute over the copyright in Philip K. Dick's short story "The Adjustment Bureau" and whether it was in the public domain).
43 See Couch v. Jabe, 737 F. Supp. 2d 561, 568 (W.D. Va. 2010) (prison library policy banning Margaret Atwood's The Handmaid's Tale, Kurt Vonnegut's Slaughterhouse-Five, Aldous Huxley's Brave New World, and other books struck down as a violation of the First Amendment).
44 See Hanson v. America West Airlines, Inc., 544 F. Supp. 2d 1038, 1039 (C.D. Cal. 2008).
45 Matter of Eichner (Fox), 73 A.D.2d 431, 447 (Supreme Court of N.Y., App. Div., 2nd Dept., 1980).
46 492 U.S. 490, 554 n.9 (1989) (Blackmun, J., dissenting).
47 George J. Alexander, The Legal Frontier in the Space Program, 20 Syracuse L. Rev. 841, 844 (1969).


48 In 1999, for example, The Legal Studies Forum published a symposium issue on science fiction. See Bruce L. Rockwood, Symposium: Law, Literature, and Science Fiction, 23 Legal Stud. Forum 267 (1999).
49 See Kieran Tranter, "Frakking Toasters" and Jurisprudence of Technology: The Exception, the Subject, and the Techne in Battlestar Galactica, 19 Law & Lit. 45 (2007).
50 See Mitchell Travis, Making Space: Law and Science Fiction, 23 Cardozo Stud. L. & Lit. 241 (2011).
51 See Thomas C. Wingfield, Lillich on Interstellar Law: U.S. Naval Regulations, Star Trek, and the Use of Force in Space, 46 S.D. L. Rev. 72 (2001).
52 Samantha Murphy, Facebook Changes Its 'Move Fast and Break Things' Motto, Mashable, April 30, 2014, mashable.com/2014/04/30/facebooks-new-mantra-move-fast-withstability/#k.i1HA8ATPqj.
53 Ernest Cline, Ready Player One 16, 62 (2011).
54 Daxton R. "Chip" Stewart & Jeremy Littau, Up, Periscope: Mobile Streaming Video Technologies, Privacy in Public, and the Right to Record, 93 Journalism & Mass Comm. Q. 312 (2016).
55 Daxton R. Stewart, Killer Apps: Vanishing Messages, Encrypted Communications, and Challenges to Freedom of Information Laws When Public Officials "Go Dark", 10 Case W. Res. J. L. Tech. & Internet ___ (2019).
56 See Chip Stewart, Do Androids Dream of Electric Free Speech? Visions of the Future of Copyright, Privacy and the First Amendment in Science Fiction, 19 Comm. L. & Pol'y 433 (2014).
57 Charlie Jane Anders, Kamala Harris is Wrong, We Need Both Science Facts and Science Fiction, Wash. Post, Jan. 30, 2019, www.washingtonpost.com/opinions/kamala-harris-is-wrong-we-need-both-science-facts–and-science-fiction/2019/01/30/5440db74-2498-11e9-81fd-b7b05d5bed90_story.html?noredirect=on&utm_term=.f39f77546af7.

1 SCIENCE FICTION, TECHNOLOGY, AND POLICY

Science fiction tells us about the future, but it also tells us about the present. It is a place where we can experience alternate realities, rooted in different approaches for ordering society or affected by the development of new technologies. When science fiction authors take us to other worlds, they are not just pondering the potential of alien life and culture – they are telling us stories about ourselves. Science fiction can also be a place for advocacy, where authors build new worlds while pushing for change in our current experience. It is a laboratory in which we can see where we are, where we have been, and where we may be going. Often, it’s really fun, too.

While long popular as a genre, the cultural relevance and effect of science fiction have boomed over the past decade. In 2009, the top grossing film was James Cameron’s Avatar, which was set on the alien planet Pandora in the 22nd century and features humans interacting with the Na’vi as they come in conflict over extracting minerals from the planet. As of 2019, the film had grossed more than $2.7 billion worldwide, the most of any film in history, and was recognized with nine Academy Award nominations. In the decade since, 15 of the 20 films with the highest domestic gross have been science fiction or had strong science fiction elements, including Marvel’s Avengers series, the return of the Star Wars and Jurassic Park franchises, films based on the Hunger Games books, as well as superhero movies with futuristic technology embedded such as The Dark Knight Rises and Incredibles 2.1

Beyond box office hauls, book sales, and online binge-watching, science fiction has emerged as a significant influencer on matters of technology and society. In May 2014, Smithsonian magazine published an article titled “How America’s Leading Science Fiction Authors Are Shaping Your Future,” describing how authors like Neal Stephenson are also acting as technology consultants and are being called upon to write “design fiction” that helps companies develop new ideas and products.2 Author Robin Sloan pointed out Stephenson’s contribution to modern technology, not through prediction but influence, as he noted how the engineers who designed Amazon’s e-reader Kindle were inspired by Stephenson’s “super-book” in the novel The Diamond Age.3 In 2016, the online magazine Slate, in conjunction with think tank New America and Arizona State University, launched “Future Tense,” which it described as “the citizen’s guide to the future,” examining the potential of new technologies to change society, as well as how “technology and its development can be governed democratically and ethically.”4 The project had commissioned and published 15 short stories by early 2019, including authors such as Paolo Bacigalupi, Emily St. John Mandel, Annalee Newitz, and Ken Liu. In 2018, the Electronic Frontier Foundation, advocating for changes in U.S. copyright law, began publishing short stories by science fiction authors such as John Scalzi, Mur Lafferty, and Cory Doctorow, illustrating problems about the way the Copyright Office interprets the right to repair, digital rights management, and jailbreaking.5 The X Prize Foundation established the Science Fiction Advisory Council in 2017, with 64 authors named as a group to help imagine a positive future and roadmaps to make it a reality by helping inform future X Prize competitions on matters such as the environment, health, education, and new frontiers in space.6

“Today, science fiction is the most important artistic genre,” said author Yuval Noah Harari, a history professor and author of Homo Deus: A Brief History of Tomorrow and 21 Lessons for the 21st Century, commenting on how science fiction shapes our understanding of issues such as artificial intelligence and biotechnology.
“If you want to raise public awareness of such issues, a good science fiction movie could be worth not one, but a hundred articles in Science or Nature, or even a hundred articles in The New York Times.”7

In this chapter, I explore what science fiction is, how it is created, and what it can be good for. I do that by talking both to people who create works of science fiction and to people who use those creations in helping us to understand communication technology and law matters. Before diving into the portrayals of future communication and media law matters in the chapters ahead, I wanted to get a sense of how authors go about creating the tools and worlds in which these issues are in the forefront. My goal is to gain a better understanding of the process of creating new communication technological tools, of building worlds around those tools, and of drafting laws and legal systems that respond to advances in technology in both the near and distant future.

This exercise in “speculative legal research” comes from two distinct approaches I have come across in works of science fiction. The first kind involves the arrival of a new communication tool or method that alters the way people relate to one another and, sometimes, the way the law adjusts to manage these emerging issues. These are more about the ramifications and consequences of an innovative technology that may help us understand both modern communication and potential future issues that would arise if technologies like this were to become a reality. In this track of science fiction, examples would include the creation of virtual worlds in works such as Snow Crash and Ready Player One, or the advance of robotics and artificial intelligence in the stories of Isaac Asimov, or implanted recording devices in the Black Mirror episode “The Entire History of You.” The technology is at the forefront, with communication law and policy matters less explicitly part of the story but able to be fleshed out with an eye on those matters.

The second track of science fiction consists of works that are built largely upon already existing tools and methods, or very plausible near-future advances, that more explicitly game out how current approaches to law and policy may respond to technological or social matters. These stories are more about the policy and struggles of laws and systems to manage technological advances, sometimes very directly referencing proposed changes to the law that may drastically alter the landscape of human communication. Some examples include Fahrenheit 451, which is explicitly about government destruction of speech and some of its consequences, though the technology in place is no more than printed books and flamethrowers, as well as Cory Doctorow’s Pirate Cinema and Little Brother, which explore widespread social consequences that follow the passage of heavy-handed copyright laws and privacy-threatening homeland security powers, respectively.

To explore these topics, I needed to talk to people in both tracks, as well as some who have done work in both areas. I started by making a list of ideal interviewees based on my reading, and I added to it as I came across a new book or author, sometimes recommended by the people I had been interviewing.
As mentioned in the Preface, after I wrote the initial paper that launched this project, I began expanding it as a book in 2015, and I continued to read and watch new science fiction works that have been published or aired since then. As a result, I have had the very good fortune to interview contemporary authors about their process in developing new works and new ways of thinking about technology and law and policy, several of which have been published in the years since I started looking into this area.

From a social science perspective, it is best to categorize my methodological approach as a series of semi-structured interviews of participants who were kind enough to make the time to respond to me and engage in either a phone conversation or an email interview. The purpose was not to seek a large sample of science fiction writers and thinkers to establish broadly generalizable responses to questions; rather, I hoped to identify people with expertise in the particular area I set out to study, using their responses to identify themes and provide depth to understanding of how science fiction authors approach technology, law, and communication, with a goal of informing the analysis in the chapters ahead. Some of the questions I asked included:

 Why do you write science fiction or use it in your work?
 How does science fiction contribute to our understanding of tech and law and policy matters?
 What do you rely on to create new laws or regulations or systems that emerge in response to new technologies?
 Which comes first in your process, the new tech or the story and characters that require or employ the new tech?
 How important is scientific plausibility to your tech advances?
 What can science fiction tell us about the present and future of communication?

Ultimately, I spoke to Cory Doctorow, Louisa Hall, Annalee Newitz, Malka Older, Robin Sloan, Katie Williams, and Daniel Wilson about their approach to writing science fiction that incorporates law and policy and technology matters. I also spoke to law professor Nicholson Price, journalism and media professor Jeremy Littau, and communication professor Andrea Guzman, who have seen how science fiction can affect understanding of their field or otherwise influence their teaching and scholarship.8

What Science Fiction Can Do

Annalee Newitz did not start her career as a science fiction author. She was an academic first, earning a doctorate in English and American Studies before going to MIT as a Knight Science Journalism Fellow, then worked as an editor at Wired and founded the io9 blog at Gawker Media. She also worked at the Electronic Frontier Foundation as a policy analyst, and she recalls sleeping outside of the Supreme Court while awaiting oral arguments in MGM v. Grokster in 2005. This background, she says, all influenced her development as a science fiction author interested in matters such as artificial intelligence and intellectual property. Her 2017 novel Autonomous was

FIGURE 1.1 Annalee Newitz. Photo by Sarah Deragon.

widely praised, earning a Nebula Award nomination for best novel, and as I spoke with her, had been optioned for a television series by AMC.

“I’m a science journalist, and I was covering recent developments in biotech. That’s what spawned the beginning of the novel,” Newitz said. “It initially started out as thinking about robot consciousness and grew from there.”

Science fiction provided her a platform to explore these issues, starting with short stories and advancing to a full-length novel about a pharmaceutical pirate battling a repressive international intellectual property system who is pursued by an officer and a robot that become partners, in more ways than one. She said:

    I have for a long time been interested in property, both as understood by law and by common sense. I’m interested in how it affects human life and human consciousness. Mental labor drives us nuts, literally, divorcing us from our own thoughts. That’s the reification of consciousness under capitalism.

The result, as will be discussed later in this chapter, is a book that delves deeply into matters of consciousness, communication, freedom, and property rights, all of which are relevant to current policy while also providing a platform for exploring the potential future of law and regulation in these areas.

Making the connection between emerging technology and law and regulation has long been a feature of Cory Doctorow’s work, both as an activist and as a science fiction author. Doctorow began publishing science fiction as a teenager, and in the two decades since has become extremely influential in law and policy circles, particularly in matters of digital privacy and copyright reform while working with organizations such as the Electronic Frontier Foundation.
Little Brother, his young adult novel set in San Francisco after a terrorist attack that leads to widespread surveillance and privacy incursions by government authorities, was nominated for a Hugo Award for best novel in 2009, and spawned the sequel Homeland in 2013. He took on the advance of increasingly draconian copyright laws in Pirate Cinema in 2012, telling a story of resisters who push back against laws that strip them and their families of Internet access for improperly downloading and remixing copyrighted materials online, reflecting some of the real-world worries about the potential harms these kinds of laws, which were and to some extent still are being considered by governments around the world, would present.

Doctorow said science fiction allows authors to bridge some of the time and space problems that can make it hard for people to make good judgments about policy, not just about technology, but also matters such as climate change. He said:

FIGURE 1.2 Cory Doctorow. Photo by Jonathan Worth.

    People hyperbolically discount the distant future consequences of using Facebook and assign an exaggerated premium on the present-day convenience of using it. We rarely think too hard about Facebook now, and when we do we assume it won’t turn out that bad, but a moment’s rational inquiry makes it obvious the future consequences of our Facebook use are catastrophic. That’s what science fiction is good at.

He gave the example of George Orwell in Nineteen Eighty-Four, a project starting in 1948 that considered the advancement of technology and privacy norms and what we may be surrendering with the political trajectory of the time. “George Orwell was saying that was a bargain worth considering in depth,” Doctorow said. “That turned out to be a pretty enduring good. Maybe the vividness has worn off, but today, we still have debates saying something is an Orwellian idea, without having to abstract hypothetically.”

Science fiction has value as an abstract, a kind of shorthand for understanding technology. That said, the shorthand sometimes has some limits, especially when it is applied in ways that do not advance understanding of actual technology or the challenges it may realistically pose. Andrea Guzman, a communication professor at Northern Illinois University who has studied the rise of communications between humans and machines, said science fiction may help us understand new technology in its earliest days.9 For example, Guzman mentioned how people approached Siri when Apple launched the service in 2011, looking to understand how humans approached the technology and came to make sense of the way we interact with it. She said:


    When tech is introduced for the first time, we don’t have a frame of reference for it, so science fiction can give us that frame of reference. There wasn’t a frame of reference for when Siri was introduced except for a car navigation system. Using it makes sense to a point, but now, we’re eight years out, and we’re still drawing those references. We have words to talk about Siri or voice assistants now. People have engaged with them enough now to engage with them as their own entity. There’s the myth of what artificial intelligence is, and there’s the computer science reality of artificial intelligence. They’re intertwined, but for sure, they’re very different. If we’re going to have quality, good discussion about what is going on now with artificial intelligence, then we need to make sure that we’re referring to and dealing with what we have now.

Making references that are comparable is also useful in informing the way we interact with machines, Guzman emphasized. Automated journalism is, for example, not comparable to artificial intelligence portrayals in Terminator or 2001: A Space Odyssey. “If I see one more reference to HAL 9000 when you’re writing about automated journalism I’m going to lose it,” she said.

When appropriate connections and references are made, though, the vividness of plausible future worlds and scenarios allows for broad considerations of ideas and potential societal consequences when technology advances to a point we are not, in the present time, ready to deal with in depth. The boundless creativity does create some challenges, though. Louisa Hall talked about some of these challenges that she came across while writing Speak, her 2015 novel experienced through various characters living through the distant past, present, and future of artificial intelligence.
While the book incorporates many science fiction elements, she said she felt the need to focus Speak more on real scientific developments with near-future realistic consequences of strong AI, particularly those arising in the past decade. Hall said:

    Working in science helps me as a way to limit form, the way a sonnet is a limiting form. I find the form of real science to be a helpful compressing tool in the same way. The difference between science and science fiction is that science fiction doesn’t provide that compressing form. You’re allowed to make up your own rules, and you have the freedom to make the rules up as you go.

That is not always a weakness, of course. Robin Sloan, the author of Mr. Penumbra’s 24-Hour Bookstore, says he considers himself more of an author of literary fiction but a fan and avid reader of science fiction who sometimes incorporates those elements into his stories. Sloan said:

FIGURE 1.3 Robin Sloan. Photo by Robin Sloan.

    The distinction between literary fiction and science fiction doesn’t matter that much. I happen to think that what science fiction is really good for is delivering new senses of scale to readers, to kind of help people imagine and maybe even get comfortable with, I think, particularly large scales, big swaths of space, big swaths of time. You can tell stories about those things, and I find it completely dizzying . . . I happen to think it’s very useful, I think we live in a world that is in fact enormous, I think those big scales are real. I think it’s both fun and worthwhile to kind of get more comfortable with them.

Sloan was more skeptical, though, of what might be called a “functional view” of science fiction, as a vessel for prediction or working through futuristic problems.

    Personally, I think that’s not what science fiction is for, or I don’t think that’s number one on the list of things that science fiction can do. And sometimes when I hear it get compressed into that very functional (view) – I guess now to be good citizens we need to think about the future, so here’s a science fiction short story – well, that’s fine, but there’s more to it than that.

That said, there is undoubtedly value to the functional approach, particularly as it relates to influencing policy about present technology. When I first traded messages with Doctorow in 2015, he mentioned that he would be partnering with the Electronic Frontier Foundation (EFF) to tackle problems with digital rights management (DRM) under current copyright law. They called it the Apollo 1201 Project, with a mission to “eradicate DRM in our lifetime.”10 Their advocacy has taken place on several fronts of lobbying and public information, including publishing science fiction short stories in 2018 that illustrate several nightmare scenarios that are not only possible but likely outcomes of the U.S. Copyright Office continuing on the current path of regulation, particularly in the face of a well-funded lobby against such reform.11 Doctorow said:

    It’s not just a problem that we unwisely discount future consequences, but there are constituencies trying to discount those consequences. Just like the smoking industry pushing the argument that cigarettes are not giving you cancer, or like climate change today, there’s a big group of people, in the same way, trying to convince us not to talk about DRM and policy. With science fiction, you can take otherwise very dry and complicated policy issues and make them real, make them vivid, make them present. That’s really the key to the whole exercise. We can do it before DRM puts us in a situation where it’s too late to do anything about it. If we wait until that happens, it becomes much harder to do.

The stories, Doctorow said, have so far been effective in drawing attention to their copyright reform efforts:

    Everybody has been really delighted. The stories travel pretty far. As posts, they went a lot further than typical posts. It’s still an obscure policy question, about how under the (Digital Millennium Copyright Act), a non-legislative body can make a ruling on a law nobody has heard of, and how they’re going to rule on questions nobody is paying attention to. It’s an issue of concern to a tiny minority of people, and in a category of policy where the cause is separated from the effect by a lot of time and space.
    It’s in a pernicious category of things that are very important and very boring – the debate on DRM sits at that intersection. It’s boring, it’s got a long fuse, and it’s super important. We’ve made some incremental gains, but it’s a long-range project, so we need to eke out incremental gains.

The project scored a victory in 2018 when the Library of Congress and the Copyright Office expanded exemptions to the Digital Millennium Copyright Act (DMCA) that had restricted the ability of users, repairers, and security researchers to break digital locks on devices allowing access to copyrights contained in items such as smartphones, cars, and some home appliances.12 While the reforms were not as comprehensive as Doctorow and EFF had pushed for, it was at least a step in the right direction on one front in a multi-prong approach to digital copyright


reform that also includes litigation trying to declare the current DRM regime unconstitutional on First Amendment grounds.13 Of Green v. Department of Justice, the lawsuit filed by EFF in the federal district court for the District of Columbia in 2016, Doctorow said:

    We’ve been waiting two years for a judge to rule on a government motion to have the claim dismissed. By the time we get through the lower courts and higher courts and the Supreme Court, the people who have experience of DRM getting in the way of their lives may have increased. And there will be a number of people who will recognize through fiction what the present-day annoyances will turn into in the future. They will infuse the debate, as we try to get courts to see our points of view.

Doctorow pointed out that science fiction has been influential in policy discussions in universities and law schools. He serves as a volunteer advisor at Arizona State University’s Center for Science and Imagination, and he pointed to the work of Casey Fiesler at the University of Colorado, who has used episodes of Black Mirror to teach technology ethics to students in information science.14 Jeremy Littau, a journalism professor at Lehigh University, has also built courses around Black Mirror as a way to spur discussion about the future of communication and technology. Littau said:

    My particular courses are built around people thinking thoughtfully about technology and its influence on them. One of the unique things in our field is this kind of implicit idea that you’re a slave to it, that you can’t do careers in our field without using technology. We hear that a lot about Twitter, which can be such a horrible place. We hear that about Facebook, which I don’t trust as a company, but we have to teach it. Good luck getting a job in community promotion or social engagement without using those. But still, I have to think, what am I unleashing on these students?
Littau gave the example of the Black Mirror episode “Nosedive,” which features widespread adoption of a social measurement tool that gives users a score that affects nearly every aspect of their daily lives and is responsive to feedback provided by other users about their interactions with the person and their posts. He said:

    One of the questions I always start with is, Why would we agree to this? In ‘Nosedive,’ somebody invents this thing and seduces society into accepting it. Are we shaping tech or is tech shaping us, and desensitizing us to privacy, and bullying, and ideas about justice?


Professors at the University of Kentucky and the University of Illinois at Chicago have taught courses on “Science Fiction and Computer Ethics” that help “reflect the near future (or possible futures) in which computer professionals work” by offering “students a way to engage with ethical questions that helps them cultivate their capacity for moral imagination; science fiction in particular can make the ethical stakes of blue-sky projects vivid, pressing, and immediate.”15 And Nicholson Price, a professor at the University of Michigan School of Law, has taught a course on science fiction and legal analysis as a way to help think through the implications of modern policy on future technology. Price said:

    It can make possibilities real—or at least, real to imagine—in a way that can be otherwise quite hard to grasp. The movie Gattaca, for instance, drives home the concept of genetic haves and have-nots in the way that just talking about potential benefits to only some folks from being able to use current genetic editing technology really doesn’t. Sci-fi can also just prompt shifts in worldview when we’re thinking about current law; one of the very first law pieces I wrote was inspired by a bit in a book by Lois McMaster Bujold where she matter-of-factly discusses who the legal parents of a clone are under different planets’ legal systems. And I thought, huh – interesting – those different conceptions actually map really nicely onto a couple of hypothetical examples that are often used to argue about potential benefits of human cloning today, and in fact they can also help us think about what the legal definitions of parenthood really can and should mean.

The Law in Science Fiction

As I mentioned in the Preface, a brief glimpse of future copyright law in Ready Player One sparked this project, though that does not mean that the reason it was imagined was to make a grand statement about intellectual property reform and the oppressive length of modern copyright terms. It is every bit as likely that the shift in copyright law Ernie Cline wrote into the book was there to help keep the plot plausibly moving forward, without having to ask too many questions about rights and intellectual property ownership and licensing matters in the virtual reality world Cline built.16

Indeed, by probing too deeply into the ways science fiction authors build legal processes and principles into their worlds, I run the risk of deserving the response that William Shatner yelled at a roomful of questioning Star Trek fans in a Saturday Night Live sketch in 1986: “For crying out loud, it’s just a TV show!”17 As much as I tried to keep that in mind, like an attorney shouting objections at the television screen during a courtroom drama, I couldn’t help but see the law pop up in science fiction works. I noticed it even more in the past few years while re-reading classic works with an eye toward finding and exploring the law and policy topics embedded in them related to communication technology.


Sometimes, they were more obvious, with explicit reference to very realistic potential legislation of the kind Doctorow puts at the heart of some of his stories. Other times, it was more offhand references to statutes or legislation, of the Ready Player One copyright variety mentioned earlier. And sometimes, it was reference to judicial opinions or proceedings in futuristic fictional court cases that were specifically dealing with the communication technology that drew my interest. Sometimes, even, technology and the law interact in a way that builds entire societies and government structures, as in Malka Older’s Centenal Cycle trilogy.

Science fiction authors have a lot to say about the law and use it in their storytelling. This makes sense because the law is one of society’s responses to problems that arise in human interactions, and technological advances in science fiction often trigger these problems. Indeed, sometimes the issue the science fiction work is addressing is how we approach law, rather than how we approach technology. I remember this most vividly from the Star Trek: The Next Generation episode “Justice,” which I saw as a teenager in 1987. Fellow teenager Wesley Crusher stepped on some flowers in a “punishment zone,” and the Edo people enforce all “punishment zone” violations with the death penalty. While the episode itself is, to put it generously, not recalled as one of the series’ stronger efforts, it is built around an important legal topic with both modern and future social ordering questions at its center.18 Is the death penalty a morally defensible deterrent to maintain social order? Should people who are unaware of serious punishments for minor infractions suffer those consequences?

That is a broader topic than I wanted to focus on for this book. So when I talked to science fiction authors about how they worked the law and legal systems into their works, I focused on technology and communication specifically.
As several noted, often they need to have laws in place to help create bounds for the system, or otherwise to support the plausibility of the plot. Hall said this was the case in Speak when she came up with a legal definition limiting the lifelikeness for bots that required artificial intelligence to be more than 10 percent deviant from regular human consciousness and expression. Hall said:

    This was more, sort of, I have to come up with a law and this is it. It was serving the idea of having a vested interest in maintaining a line between artificial intelligence and humanity. I’m not sure any line of deviation is functional for things like life, souls, healing, humanity. Any number would be silly in this context, so I didn’t think too much about it.19

Similarly, Katie Williams described the necessity of coming up with boundaries to manage the consequences of new technology in her stories. In Williams’ debut novel Tell the Machine Goodnight, she created a device called Apricity that can tell people the things they need to do to achieve happiness. DNA is gathered by a


simple cheek swab, inserted into the machine, and it gives out advice with a 99.7 percent accuracy rate. Williams said:

    It was only as I wrote further into the novel that it occurred to me that there would likely be some ‘rules’ around the Apricity technology that went beyond the company’s own policies and into the realm of law. Apricity tells people how to be happy. The law, to reduce it terribly, tells people how to be in society. Where those two ways of being clash, some decisions must be made.

At one point, Williams references a Supreme Court decision in 2035, “Grover v. Illinois,” as a way to address some of the privacy rights that would necessarily flow from a technology like Apricity, which she expected would almost certainly be used by the government in policing and would present issues with being admitted in trials of criminal defendants. She built the case and its outcome on currently existing technology and the way the law deals with it. Williams said:

    The Grover v. Illinois decision and the use of Apricity by the penal system to predict and prevent recidivism deal with issues of individual privacy rights.

FIGURE 1.4 Katie Williams. Photo by Athena Delene.

    Polygraph tests were my closest real-world technology reference. That the same use of the Apricity technology that’s restricted in trial would be applied to parolees seemed in keeping with how in our real world people convicted of crimes are stripped of many of their rights even after release.

Using text from a fictional future judicial opinion was a way to use “found documents,” which she said are common ways to add credibility to imaginary worlds in speculative fiction.

    For example, Mary Shelley’s novel Frankenstein is presented as Dr. Frankenstein’s actual diary. Because the chapter where Grover v. Ill. appears, “Means, Motive, Opportunity,” is Rhett’s detective’s notebook and because Rhett is in high school and so learning the basics of research and source citation, I figured he’d be likely to paste the actual text of the decision right on in there.

The “found documents” approach was also used by Hall in Speak, just one of many different approaches to tell her story of the advance of artificial intelligence, and some of its consequences. Some chapters are diaries or memoirs, others are letters from computer scientist Alan Turing to the mother of his late friend, and

FIGURE 1.5 Louisa Hall. Photo by Alex Trebus.

some are transcripts of conversations between an AI being and a child, recorded as evidence admitted in the trial against the creator of the AI. Hall said it wasn’t her intention to mirror trial transcripts exactly, but she did some research to make sure she wasn’t straying too far from the possible formats that judicial documents might take. Hall said:

There’s something in me that finds something compelling about narrative as testimony. It’s a way we can go back later and understand our guilt and other people’s guilt and understand our role in uncomfortable situations. So much of my own internal thought patterns kind of take the form of a trial I’m holding in my own head. Did I do this wrong? Did somebody else do this wrong? I find the testimony aspect of narrative to be really compelling.

The narrative in the found documents, especially in the evidence transcripts admitted in the trial of Dr. Stephen Chinn, the inventor of the “babybots” that caused mental and emotional harm to children, was a place to explore some of the future legal and social issues Hall anticipates would emerge as artificial intelligence technology advances. She said:

I guess what I was imagining is, what are going to be some of the issues with artificial intelligence going into the future? So much of the economy now is invested in pre-artificial intelligence work. I was thinking of how major corporations and how people may see and use artificial intelligence. And I thought we would have a vested interest in keeping a strict line between artificial intelligence and humanity, and therefore in resisting dolls that cause people to question that line or suggest the spectrum is blurrier.

Fictional future lawsuits can also be a place for authors to bridge our present, which becomes their fictional past, to the distant future they envision.
Malka Older, in her novels Infomocracy, Null States, and State Tectonics, establishes a future world order largely governed through “centenals,” or microdemocracies made up of 100,000 people, with a managing superstructure called Information that runs elections and serves as a worldwide source of verified, accurate, instantly updated facts for the public to use, both for voting and in their daily lives. Getting to this future, envisioned sometime in the late 21st century, requires collapse of the current world political order, and both social and financial support for the rise of the new order. Lawsuits were at the heart of that change. Older said:

I was creating something based on frustrations I have in the real world. A lot of the legal issues are determined by the end-state that I needed for my story (or my test-case). If Information’s role was to keep publicly asserted data or claims of any kind hewing as much as possible to “the truth,” there needed
to be some legal basis for it. On the other hand, some of my frustrations were directly with the legal side of things.

The funding for Information largely came from lawsuits in which courts allowed punishment of corporations for their contributions to disinformation. In Infomocracy, she mentions a landmark settlement in “People v. Coca-Cola et al.,” as well as other lawsuits against cable news companies that caused harm worldwide. Older said:

The aside about the People v Coca-Cola case . . . came from a long-held thought about what would happen if people could hold corporations accountable for their advertising claims. We like to think that we’re protected from the worst of advertising malfeasance – we’re familiar with ‘cheese-flavored product’ – but at the same time we’re aware that we’re not, and there’s very little trust. In so many cases, as with our democracy, we have the superficial form of something but we’re missing a lot of the function. A big part of the book is imagining that we follow through at least a little better on some of the things we claim.

Building legal systems into fictional worlds, whether the future is on Earth or elsewhere, can be a challenge for authors. Newitz said she often looks to both ancient and more recent history to get an understanding of how humans may respond to technological advances with policies and procedures. For example, for self-aware robots such as Paladin in Autonomous that are built to serve military or police functions, she turned to the history of indentured servitude and slavery. Newitz said:

If you look at the history of the United States, there was a very robust legal system for handling slaves. It’s really quite shocking when you think about how short a time it’s been that we haven’t had legal slavery. The prison system still looks like slavery, so in some ways, it’s still in the law today. So [indentured servitude] didn’t seem far-fetched at all. . . . Almost every civilization has had some kind of slavery.
Some of the earliest laws we have specifically deal with slavery. If you look at fragments of ancient writing we have from ancient cultures, you’ll often find financial agreements relating to trading slaves or people signing contracts to be a slave. These were very common documents.

In Newitz’s world, the indentured servitude of robots expires after ten years, allowing them to become free and autonomous; the legal system that evolved around that servitude expands into the return of indentured servitude for humans as a societal norm in the world of Autonomous. Making reference to the work of robot law scholar Ryan Calo,20 Newitz said:

[In my novel,] people would say it’s a choice, so human rights could tolerate indentured laws. I also felt like people would accept it because of the
ways our current laws are developing around robots. It’s easy for people to accept the idea that robots are slaves, so the problem arises when robots become human equivalent. To us, it seems outrageous, but the law is always eager to codify even the most heinous kinds of human relationships. We use the law to liberate people and create better rights, but we also use it to limit rights and crush people under our feet.

This theme, that human history tends to be unkind toward creatures that humans see as lesser beings, is echoed by Daniel Wilson in his books Robopocalypse and Robogenesis. Wilson, who has a doctorate in robotics and years of study that help to inform his works of fiction, said:

One of the main takeaways from Robopocalypse was that human beings don’t naturally extend human rights to each other (at least, not without a fight). Given our history, it’s almost impossible to imagine we will extend that olive branch to robots. Thinking back through the (brief) history of just the United States, with emancipation, suffragettes, and all the myriad equal rights movements focused on religion, sexuality, and gender—it seems clear that if a machine wants to be afforded any human right— including free speech—it will probably have to fight tooth and nail for it, like everybody else. In other words, I doubt rights will be “extended” to robots so much as they will be “seized.”

FIGURE 1.6 Daniel Wilson. Photo by Daniel Wilson.

Newitz envisioned that elsewhere in the world she built, there were lawyers from places like the American Civil Liberties Union working on expanding robot rights. Whether robots are treated like property, or like slaves, or like children, or like something else, in the end, she said, it would take humans admitting that machines are human enough to deserve similar rights. “There would be a future version of abolitionists,” she said. “That’s what it took in the United States to end slavery. Basically slaveowners had to admit that the people they lived with were obviously human and had to be recognized as human.”

Rights such as free speech, which futurists looking at the law around communication technology must almost inevitably address, may prove to be more malleable. Information, the global purveyor of accurate news and data in Older’s Centenal books, is not shy about removing false advertising and correcting misinformation in real time, nor about silencing sedition or what the company may deem “hate speech,” all of which would be troublesome under modern notions of freedom of speech, especially in the United States. This, she said, was part of the exercise in presenting a future micro-democratic system that arises in response to the failures of the early 21st century experience. Older said:

By writing some 60 years in the future, and 20 years into this new system, I felt that I was able to jump past some of the classic tropes of today. Also, the book is very global, and the free speech yelp, while not unknown in other places, is a very U.S. angle . . . Information’s approach, in general, is to rebut and explain and annotate. They do eventually delete things, but mostly with the assumption that they’ve already been documented. They’re much less about expunging or preventing than about transparency and complete data to refute. Of course this does not always work out well for them.
When writing about technology and law and policy in his near-future worlds, Doctorow has at times drafted legislation reflecting the kind that policymakers may be realistically considering based on some of the trends of the moment. This was certainly evident in Pirate Cinema, when he came up with the “Theft of Intellectual Property Act,” which includes statutory language imposing prison sentences for “anyone caught with more than five pirated films or twenty pirated songs.”21 It is important, he said, to situate future laws so that they are both understandable to us at the moment and would result in realistic outcomes if actually enacted. Doctorow said:

I tend to think of tech policy questions as sitting between the peak of indifference and the point of no return. The obviousness of the policy only grows, as does the urgency, but we’re losing freedom of motion as time goes by, and more and more people are economically invested in the status quo. There are laws to protect the way things are now, but there will
come a point where the technology debt we’ve accumulated through bad policy reaches the point where it’s almost inconceivable to have any action on it in our lifetimes. So we have to get to that point before the point where people demand action on it, and we have to do that before the evidence is so strong that it’s incontrovertible because by then it may be too late.

Again, while other authors may be building worlds for purposes other than advocating immediate reforms to tech law and policy, Doctorow is very focused on this kind of near-term outcome. He said:

I like to think of policy in a Lawrence Lessig framework, with code and norms and laws all interacting.22 Fiction is a normative intervention, outside of legal and tech interventions. Think about the VCR – by the time the Supreme Court had to make a decision about the VCR in 1984, judges drove past video rental stores, and all of the judges had watched movies on VCRs with their grandkids. Judges are consequentialists, and they knew there was no way for them to ban VCRs.

Wilson also wrote about pending legislation in Robopocalypse, with a “Robot Defense Act” that was on its way to being voted on by the U.S. Congress before the malicious artificial intelligence of Archos helped to torpedo the bill. The law seemed intended to halt some of the problems with malfunctioning or hostile robots. With the caveat that Robopocalypse and its sequel were intended to be “fun, outlandish fiction,” Wilson said that the legislation might represent a path for future policy:

I think the most likely law-related outcomes in AI and robotics will be simple, specific laws designed to promote public safety. Rather than being a far-out question of consciousness or autonomy, it becomes a simple consumer safety issue.
For example, one of the fictional laws discussed in the novel is the introduction of a “fitch switch,” which is required by the FAA on all aircraft to quickly and simply turn off all autopilot functionality (invoked after an AI nearly crashes an airplane).

Wilson pointed out a circumstance mirroring that situation in real life, when the autopilot of an Indonesian Boeing 737 was receiving incorrect sensor readings and repeatedly forced the plane’s nose down, leading to a battle between human pilots and an errant automated system that resulted in a crash into the ocean, killing 189 people.23 The safety regulations he proposed in Robopocalypse may be the future for preventing similar tragedies. “Here, the parallel seems clear between the book and reality,” he said.

The reality may be rooted in the author’s own experience. When Williams was writing about Apricity, one of the challenges was balancing not only the law that
had emerged from the technology but also the practice of internally managing its potential dangers through corporate policies. While some of the processes – the “asterisks” that are redacted from Apricity’s output because they include destructive or violent items – were based more on content moderation policies from companies such as Facebook, others came from her time in an entirely different field. Williams said:

Within the novel, the company lore is that Apricity’s boundaries exist because of its CEO’s good moral compass, but my personal opinion is that the real motivation is to avoid getting sued or shut down. I based the Apricity company policies on treatment protocols, in particular those followed by psychologists and therapists. I used to work as a crisis hotline counselor. We would keep the content of the calls private unless the caller threatened to hurt self or someone else, and then we were legally bound to report it.

Overall, I got the sense from authors that when they build a new communication technology into their worlds, they often feel the need to sketch out legal systems, write court documents and opinions, and discuss legislative policy approaches and consequences that emerge alongside the technology. Sometimes, as in the near-future legal hellscapes in Doctorow’s works, it is the law rather than the technology that serves as the source of conflict driving the narrative. But authors showed a keen sense of how the law has emerged over time, often looking to history to guide them in understanding the social and cultural ramifications of communication innovations such as AI, robots, global information infrastructure, and surveillance technology. Sometimes, it is as explicit as language from judicial opinions, putting characters on trial, or drafting legislation that shapes future human rights and obligations. But talking about policy can be subtle at times as well.
Older said that her editor is fond of mentioning that the word “privacy” does not appear in Infomocracy, the first book in her trilogy in which a global surveillance and information system is a major player in world affairs. She said:

I was not aiming for either a dystopia or a utopia in Infomocracy, but rather trying to do an extrapolation from today with a few intentional twists to allow me to look at the questions I wanted to examine. The levels of surveillance in that world are where I realistically think we will be in the near future, but what I changed was that rather than the data being held and used by the government, or – as is increasingly the case in our world – owned and sold by companies, the vast majority of it is public and accessible to all. It’s perfectly fair that people are still scared of that scenario, but in that case we really need to think about where we are now, because as I said, it’s not that far away.

Writing About Future Technology

One process matter I talked to authors about was how they came up with technological innovations, especially devices that would involve new ways of thinking about communication and privacy and journalism. Doctorow stressed the importance of getting the technology right, especially with near-future plausible innovations he envisioned in Little Brother regarding computing, encryption, and surveillance. Doctorow said:

Apropos Little Brother, the most remarkable thing is that it was written 12 years ago, but it’s still seen as very relevant and current in how it talks about policy. The key was, I didn’t take any shortcuts on the tech. If you want to write science fiction and have a future-proof story, you should keep making assumptions about how computer science actually works, and assume lawmakers will fail to understand that.24

Plausibility is an important aspect for technology, at least when it comes to gaming out the policy that may be implicated by that technology. Doctorow gave the example of his 2017 novel Walkaway, which featured both more plausible future technologies such as widespread affordable three-dimensional printing and more hypothetical technology such as the ability to upload one’s consciousness to a cloud. Doctorow said:

There’s an important distinction – 3D printers exist, and consciousness uploading does not. Some think it’s real, and some think it’s more of a philosophical exercise. For the 3D printers, I take getting them right very seriously. I take a lot of care to make them plausible in reality. That plausibility is the most important test. . . . For the consciousness uploading in Walkaway, that was specifically designed to plumb philosophical questions, but using computers in ways that hew to as much real computer science as possible, even if the final product is not as real in terms of the computer science.
He mentioned talking to screenwriters trying to adapt a work of his, and how they were considering adding terrorists who plant explosives disguised as 3D-printed statues. They weren’t workable because they were implausible.

That would look super cool, but it didn’t make any sense. You could 3D print a lamppost, a belt buckle, a license plate. Any regular device could be a 3D-printed bomb, think of how scary and amazing that would be, in terms of a thriller.

The reality of currently available technology may only need a little tweak to make it more interesting for storytelling purposes. Sloan used real-world
technology from Google Books and homebrew scanning kits, but put them together to create some complex copying and rights issues in Mr. Penumbra’s 24-Hour Bookstore. When I asked if those technologies were realistically portrayed when he wrote the book, he said that they were almost entirely true, but that he had “playfully exaggerated” them for story purposes. Sloan said:

For me, of course, living in the Bay Area, there’s more access to more and deeper pockets of that futureness because I had been exposed to them. So those are real things in the real world. I thought it was really fun to take some of those just verbatim from my own experiences, my own eyes and ears, but to crank up the exaggeration a little bit, or make them a little more magical or a little more playful, just for the sake of the fiction, and put them into the book.

Sometimes, what may seem like a futuristic technology may actually be almost entirely present. Sloan gave the example of what he put in the book as “GrumbleGear,” the do-it-yourself homemade scanning technology made out of cardboard that could be smuggled past detectors into libraries. Sloan said:

If I had to pinpoint the thing that most people think is invented that is actually like the least invented, it’s that. The real thing isn’t exactly cut from cardboard, but it’s pretty close. You can make it on a laser cutter and get the pieces and set it up. There’s this genius guy named Dan Reetz who designed it and released the plans to the world. That was probably the most journalistic part of the whole novel.25

Present and near-term scientific reality helps keep the policy discussion relevant and grounded for the current moment. In Speak, Hall may be writing from 2040, but the prediction is not too far removed from what is currently projectable as possible advancements in artificial intelligence.
Hall said:

What started me down the road of writing Speak was the history of artificial intelligence and the people creating various kinds of artificial intelligence, even up to today. When I was doing that, I thought it would be neat to keep following this timeline into the future. I don’t feel very good about predicting what will happen, so I tried to keep it in the near future. There are some creeping global warming concerns, problems with the coastlines. I was imagining a future that continued trends already happening here and now.

These turned out to be eerily prescient, as scenes in the book are set in areas on the Texas coast ravaged by hurricanes that forced many residents to move further
inland, in a book written just a couple of years before Hurricane Harvey devastated Houston and surrounding areas.

The further into the future the authors push, the more difficult it is to have an idea of how plausible the technology will be. Newitz said this was an obvious lesson of the history of communication technology – from telephones to the Internet, these technologies were neither inevitable nor even expected in the decades prior to their emergence. In Autonomous, set in 2144, she instead imagines a future with a broad array of communication technologies, including widespread use of goggles as wearable computing devices and windshield displays on vehicles showing news. Newitz said:

It’s inevitable that we’ll have a diversity of ways for people to access their information technology. Somebody has goggles, somebody has an implant, somebody prefers to write things down. The question about implants, I don’t know that it’s inevitable that it goes mainstream, but some people will certainly have brain-computer interfaces. It’s already happening for people who are paralyzed. It’s one of the principal ways they’re able to communicate. It’s only going to become more common. I think we’re on that road.

For Older, the innovations were often less about the technology and more about rethinking the way government and society work. Much of Information could be accomplished with surveillance technology that already exists, just improved, better organized, and more widely accepted. Older said:

I started with the ideas of micro-democracy and Information, but I think of those as social and political tech. They require very little of digital/electronic technology beyond what already exists. Yes, there are some tech toys, like wearables, that facilitate Information’s work, but none of it is strictly necessary. . . . Information is largely a bureaucratic effort, based on lots and lots of person-hours.
That’s the core of the books; all the other technology is either incidental stuff I thought would be fun or stuff I needed for plot reasons.

For example, international travel is much more convenient and faster for the characters in the books, which served both the plot and her own wishes. Agents for Information often travel by “crow,” which is a bit like a flying apartment. “At the time I was working for an international humanitarian agency and flying long-haul very regularly and I wanted a flying RV instead,” she said. Likewise, to keep guns from sidelining the plot, she created the “lumper,” which disables firearms within a certain distance from the device. The result was more exciting action sequences featuring blades and martial arts, and, she said, “it helped make the Pax Democratica seem more feasible.”26

Admittedly, when I set out to ask science fiction authors about their process for writing about technology and policy in their works, I expected they might start
by imagining a new technology, then building a story and characters around that item as it enters the world, a kind of fictional technological determinism. But this was not always the case. When Doctorow wrote Little Brother, he didn’t explicitly have the tech or the privacy issues surrounding it in mind at first. Rather, he said he was inspired by the shoddy portrayal of computers in a film. Doctorow said:

The thing that spurred me to action was I went to see a technothriller with my wife, I don’t even remember the movie, but I was infuriated with how computers were depicted in the movie. For something that was entirely made using computers – the script was written on a computer, the communications about producing it were done on computers, every part of it took place using computers – none of them sat down and described computers as they were. They made narrative conveniences where none needed to be made. For me, I thought, there’s an unmined, rich seam of dramatic potential in computers in what they are, so I set myself on the exercise of writing a technothriller with computers as they are, not as some screenwriter sees them for the sake of getting a story maneuvered into place.

When coming up with the idea for Apricity, the machine that can give people the direction they need to achieve happiness, Williams said the tech itself was not solely the spark of the novel. She said:

Mostly I work from character. I started with the idea of a machine that could predict happiness and a woman working for that machine with a person in her life who was fundamentally unhappy. Because the Apricity technology reveals the characters’ desires, I could really just use it as a mirror (or megaphone or pick-your-device) of the characters themselves. Largely, I believe technology is us. We invent it and make it. It can amplify, hush, and even distort aspects of who we are, but it’s ultimately our own reflection.

On Communication

Communication is at the heart of any work of fiction, but it is especially prominent in the works of the science fiction authors I talked to, all of whom envisioned technology and policy that influences the way we interact with one another, or the way we interact with other beings. For example, Hall said she did not explicitly begin Speak as a place to explore the tech of advanced artificial intelligence and the disruption it may cause in the world. Rather, she said, Speak is about communication issues and how we struggle to address them, sometimes relying on technology to solve a problem that it is incapable of solving. “I think I started with the desire to write about difficulties in communication and the loneliness that will result, and developing language to use so we’re no
longer alone,” Hall said. “I wanted to try to find a vessel that will carry those questions.”

This is evident in the chat transcripts between a teenage girl and an AI system as they talked about how babybots had influenced children and their relationships with each other and the world, with the unfortunate outcome that children largely became incapable of communicating with each other or anyone else. The advance of AI communication, strong but not quite human, made that future world possible.

Much of the story in Autonomous includes communications between Paladin, a robot agent, and her surroundings. Sometimes, this is with her human partner, Eliasz. Other times, it is with the machines around her, whether at the training center, or the places she has gone undercover to investigate. There are similarities and differences in the ways humans communicate with machines and machines communicate with one another, and Newitz explored these in several scenes. She drew parallels to the way computers talk to one another today, but also drew inspiration from thinking about alien consciousness, such as the famous Star Trek: The Next Generation episode “Darmok,” in which the universal translators of the Enterprise crew are unable to decipher the context-heavy language of the Children of Tama, a people that speak largely in abstractions, heavy in metaphor and cultural history. When Tamarians say “Shaka, when the walls fell,” they are speaking of failure.27

“I’ve always been a fan of Octavia Butler, Ursula Le Guin, and Vernor Vinge, and I’ve always really liked the trope of meeting an alien that has truly alien consciousness,” Newitz said. “The super famous Next Generation episode ‘Darmok,’ I love stuff like that. The human Picard can understand what the alien is saying, but culturally, he can’t understand.” She used this as a basis for exploring how AI machines might communicate with one another, and how this might help them understand the world around them.
She asked:

What kind of consciousness would grow out of a computer network? What would be the context of that creature’s thought patterns, its relationships? How would robots talk to each other, if we presume they’re evolving out of UNIX boxes? That was the thought experiment for me. How would having a mech body change consciousness? If you’re fully mech and your systems are debugging themselves, your body and mind are a lot less mysterious.

Newitz said that when she had Paladin interact with other systems, she deliberately tried to make the communication as non-human as possible without alienating readers. This involved emulating how computers send information to each other wirelessly over networks, with security protocols including encryption, which is particularly challenging when a robot agent like Paladin is operating undercover and therefore trying to shield her identity and intentions. Newitz said:
I was thinking, how would robots build a style of communication out of the way computers talk now when they communicate? They would say things like, “I have some data, structured like this, I’m trying to talk to you from this port, here’s the key we can use to encrypt and decrypt, with packets and packet headers.” What if you developed a kind of etiquette out of those packets and packet headers, like handshakes? I was trying to imagine an intelligence that has evolved out of these kinds of communication protocols that we’ve already built into our computers.

Andrea Guzman, in her research on human-machine communication, studies what happens when human-made creations communicate with us in a number of different circumstances, such as industrial machines exchanging information with human operators, automated journalism that tells us news stories, and voice assistants such as Siri or Alexa. It is a process of “creating meaning among humans and machines” that is unlike human-to-human communication, even if it sometimes mirrors those processes imperfectly.28 She explained that the 2013 film Her, in which a man falls in love with a personal artificially intelligent voice assistant, may not accurately reflect the ways humans and machines communicate, but it does illustrate an important point. Guzman said:

The thing I do find interesting about Her is toward the end, when he finds out she’s talking to other people because she’s a service. Even if we anthropomorphize the tech, it is still an object without similar human emotions or background or experience. I like that because it’s very easy to think when you’re interacting with them that they are only talking to you, such as when you’re talking to Siri or to the Amazon Echo. In reality, it’s not just you talking to the machine. It’s talking back to Amazon, or it’s talking back to Apple, and it’s talking to other people.
Thinking about the future of communication also involves the future of surveillance and privacy and how it alters the way people relate to one another. It’s a theme that runs throughout Older’s books, as surveillance technology advances in a plausible and recognizable way, but one that is neither overly dark nor completely utopian. Older said:

Since Infomocracy came out and started getting referred to frequently as a dystopia and terrifying, I’ve really noticed how closely we associate “surveillance” and “dystopia.” I can understand why people react viscerally to the idea of being constantly observed, but the level of Infomocracy is really not that much farther along from where we are now – particularly in highly CCTV’d cities like some in Britain, for

Science Fiction, Technology, and Policy 27

example. For me the fundamental difference is what is done with the data, or more precisely, whom it belongs to. She said she recognized how notions of privacy have shifted over the past century, and how it is, as a concept, socially constructed in different places around the world. Rather than making her books about privacy invasions, she instead looked at how advances in surveillance and communication technology, with transparency as a key operating principle, may change the way we relate to one another and govern ourselves. Referring to one of the parties that has some success during elections, Older said: Interestingly some people read the book as very libertarian, maybe because of all the choice in government, maybe because of the Free2B government. Privacy still exists in the book (there are no feeds in private spaces, for example), even it is has shifted. Similarly in terms of free speech: people can say whatever they want, but they will be annotated when they make claims to the truth. The communications enabled by Information in the future world Older imagines instead focus more on notions of “fairness and neutrality,” she said, “particularly in the purveyance of data and information, but also more generally in sciences, government, and so on.” This aspect of future communications implicates the future of journalism as well. 
Before Sloan wrote his first novel, he was a journalist, and he and fellow graduate student Matt Thompson used a science fiction approach to tell the story of journalism in the Internet age in a 9-minute video called EPIC 2014.29 The video, produced in 2004, describes a future in which online companies have built universal platforms for media creation and consumption; news bots and participatory journalism essentially replace traditional news sources, and Google eventually merges with Amazon to form “GoogleZon,” overtaking journalism organizations in the “News Wars of 2010.” A failed copyright lawsuit by The New York Times leads to the end of the Fourth Estate as an institution, replaced by EPIC, a worldwide media platform. By 2014, the Times is reduced to a print-only newsletter “for the elite and the elderly.”

The exercise was a way for Sloan and Thompson to overcome the dry slideshow presentations they had been showing, with limited success, to audiences of journalists and news media executives. Sloan said:

We wanted to make the case that print newsrooms had to take some of these opportunities that the Internet presented more seriously. There were graphs of growth over time, and quotes from people. It was a good presentation, it
was very substantive, we made the case, but it totally put people to sleep. One time, literally, one of these newspaper editors was gently snoring in the back of the room.

EPIC 2014 was assembled largely over a weekend in a computer lab at the Poynter Institute in St. Petersburg, Florida, as what Sloan called a “dark, science fiction fairy tale.” And it worked. The video received worldwide attention. Sloan said:

It was truly using the techniques of story and drama, and in this case, totally, that dark, 3D science fiction to get a point across. In a way, there were a lot of ideas bouncing around, especially at that time, and we kind of threw them all in. I think that’s one of the things that helped its longevity, or the appearance of some amount of accuracy or prophecy. There’s just actually a lot in it. And you can easily watch it again and look for things that look ridiculous in retrospect.

As a journalist, Newitz said she obviously thought about what news might look like in the future in Autonomous, though in many ways, it was reflective of the variety of present-day journalism and technology. She said:

How much does it really change in 150 years? There were publications then that we still read now, such as Atlantic and Harper’s Magazine. That kind of gave me license to think about the idea that maybe the way we access media changes, but media itself might be recognizable. It would be coming to us in a feed, as a combination of writing and video and audio. For some of it, we have holograms, VR-type stuff. But really I was just extrapolating, with a similar kind of ownership landscape, with some government media, some corporate, some indie, some user-generated.

Ownership of media presents issues with intellectual property. One of the main targets of Autonomous is the patent system, but scientific publishing also draws attention, including one company in particular that the publisher’s attorneys advised against mentioning by name.
Newitz critiqued a system in which only a few large companies own academic journals and for which access by the general public is largely restricted or cost-prohibitive. She said: One of the big issues in science publishing is intellectual property, the whole battle over open access. What does it mean to own scientific data? What is our moral obligation? All of it should be open, and it makes me quite weirdly angry when it isn’t. I kind of imagined it being the media landscape would be somewhat similar but accessed differently.


In all, the future of communication, as envisioned by the authors, requires thinking about different kinds of actors – not just humans, but also alien consciousnesses or machines – as well as different technologies for communicating, with implications for free speech rights, notions of privacy, intellectual property, and even the future of journalism. These topics are all explored in the chapters ahead.

Conclusion

Through these interviews, we have seen science fiction approached from many angles, and for many purposes. As a genre, it allows exploring possible futures that may await us so we can prepare ourselves for the issues we may face. It also allows telling stories in different times and different worlds that tell us as much about our present as our future. It can embrace our hopes and our fears. Science fiction may be used for advocacy, or to help us understand our relationships with others, or even just to tell a fun story about robots and aliens. It is both immensely popular and influential. And, for the purpose of this book, science fiction allows exploration of plausible communication technologies and what they may mean for the future of media law and policy.

To Doctorow, science fiction is a normative statement, a way to tell us how things should be, especially as a counter to present real-world narratives. He said:

Normative statements are demands. When Mark Zuckerberg says privacy is dead, he’s really saying, I would be richer if you didn’t care about privacy. And that’s a pretty ninja move.30 Science fiction is a speculative exercise, declaring that the future is here already, and the antidote to speculation is speculation. It gives us a counter narrative. As much as you may think climate change is not real, allow me to ask for a while what your grandchildren’s lives will be like if they have to scrounge through rubble and drink their own urine because of what we’re doing today about climate change.

Of the authors I interviewed, only a few seemed to be explicitly focused on development of law and policy matters in their works, though they all saw the potential for science fiction to influence discussions about new technologies and understandings of human relationships. Instead, they seemed to be much more driven by the story and the characters, for obvious reasons. Readers want a good story, above all.
Getting the technology right – at least when the tech was plausible in the first place – was very important for them. Making the technology realistic was part of building a believable, realistic world. Though that is not always the point, as Doctorow noted: in Walkaway, he balanced realistic future tech, such as advanced 3D printing, with deeper explorations of humanity and existence through a far less likely ability to upload consciousness
into a machine. It reminded me of an interview with Daniel Abraham and Ty Franck, who together write as James S.A. Corey, author of The Expanse series that includes, as I write this, eight novels and a television series. The series is set in a future distant enough that humans have settled on Mars and in the asteroid belt beyond, and the authors obviously worked to treat a lot of the science, such as the effects of gravity and issues with oxygen and water, quite realistically. They said it was not hard science fiction, but “plausible enough that it doesn’t get in the way.” But there was some tech, such as the impossibly efficient fusion propulsion of what they called the “Epstein Drive,” that clearly could not happen through our current understanding of science. When asked how it works, they responded, “Very well. Efficient.”31

When the tech is realistic and plausible – when it is based on current tech with some slight exaggerations, as many of the authors I talked to described – then it may serve as part of the laboratory for exploring the legal issues that would arise in response. The authors I spoke to seemed to consider emerging law and policy matters in their future worlds with varying degrees of seriousness. Some, particularly those with backgrounds in tech policy advocacy such as Newitz and Doctorow, were very conscious of the interplay between the law and advancing technology. It also presented an opportunity to give important policy discussions happening in the present more immediacy and interest for people who otherwise might be ignoring them.
“With digital rights management, we’re dealing with changing the reputability for something so obscure as to be beneath notice, and making it a part of normal life,” Doctorow said of his efforts to use science fiction in conjunction with other advocacy campaigns to change intellectual property law: Should you have to jailbreak your artificial pancreas to put insulin in it, and in a few weeks spend 20 times what you’re paying for insulin now? But it’s hard for us to reach them. And it’s easier for manufacturers to reach them. So science fiction can help us to drive a normative and a commercial story, and help us draw the legal story as well. Other authors, such as Williams and Hall, looked to formats of court cases and transcripts to allow plausibly realistic excerpts from judicial processes to set some legal boundaries on the tech they created for their near-future worlds. For law professor Nicholson Price, the law in the world was less important than the world itself, and more “about how science fiction worlds can shape how we think about our own law (or what our own law could be).” This gave opportunities for teaching law students about where the law may be heading, or at least how to think about that. This is not a new conversation. Science fiction writers have long thought about what it means to do the work that they do, and what value it has, and why criticism of the genre as unserious or unimportant is misguided. In 1975, Isaac
Asimov addressed these thoughts and more in an essay in Natural History magazine. He said: Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not. The best way to defeat a catastrophe is to take action to prevent it long before it happens . . . To do that one must foresee the catastrophe in time, but who listens to those who do the foreseeing? “Escape literature,” says the world and turns away.32 This book is an effort to use what science fiction authors have given us through their foresight of communication law and policy issues and envision some potential solutions, or at least ways of thinking about how to address those issues.

Notes

1 See Tim Dirks, All-Time Box Office Hits by Decade and Year, AMC Filmsite (accessed February 6, 2019), www.filmsite.org/boxoffice2.html. The films that I did not consider as “science fiction or containing strong science fiction elements” were Beauty and the Beast, Finding Dory, Toy Story 3, Wonder Woman, and Jumanji: Welcome to the Jungle.
2 Eileen Gunn, How America’s Leading Science Fiction Authors Are Shaping Your Future, Smithsonian, May 2014, www.smithsonianmag.com/arts-culture/how-americas-leading-science-fiction-authors-are-shaping-your-future-180951169.
3 Robin Sloan, The Kindle Wink, Medium, Apr. 26, 2014, medium.com/message/4f61cd5c84c5.
4 What is Future Tense?, Slate, accessed Feb. 5, 2018, slate.com/future-tense.
5 Cory Doctorow, EFF Presents John Scalzi’s Science Fiction Story About Our Right to Repair Petition to the Copyright Office, Electronic Frontier Foundation, May 14, 2018, www.eff.org/deeplinks/2018/05/eff-presents-john-scalzi-science-fiction-story-about-our-right-repair-petition.
6 Andrew Liptak, X Prize assembled a supergroup of sci-fi authors to develop its next competitions, The Verge, June 4, 2017, www.theverge.com/2017/6/4/15736632/x-prize-scifi-supergroup-authors-andy-weir-neil-gaiman-annalee-newitz.
7 Geek’s Guide to the Galaxy, Why Science Fiction is the Most Important Genre, Wired, Sept. 8, 2018, www.wired.com/2018/09/geeks-guide-yuval-noah-harari/.
8 The interviews were conducted either by phone or email in 2018 and 2019 and typically lasted about 30 minutes. Phone interviews were transcribed, and the quotes were confirmed by the authors via email before publication to ensure accuracy. Any quotes in this chapter from these sources came from the interviews conducted unless otherwise noted.
9 See Andrea L. Guzman, Making AI Safe for Humans: A Conversation With Siri, in Socialbots and Their Friends: Digital Media and the Automation of Sociality (Robert W. Gehl & Maria Bakardjieva eds., 2017); Andrea L.
Guzman, Messages of Mute Machines: Human-Machine Communication with Industrial Technologies, 5 communication +1 6 (2016).
10 Electronic Frontier Foundation, Cory Doctorow Rejoins EFF to Eradicate DRM Everywhere, Jan. 20, 2015, www.eff.org/press/releases/cory-doctorow-rejoins-eff-eradicate-drm-everywhere.
11 These stories are discussed in more depth in Chapter 2 of this book.
12 Timothy B. Lee, Why a Random Federal Agency Gets To Decide Which Devices We Tinker With, ArsTechnica, Oct. 26, 2018, arstechnica.com/tech-policy/2018/10/feds-say-its-ok-to-jailbreak-alexa/.
13 Mitch Stoltz, New Exemptions to DMCA Section 1201 Are Welcome, But Don’t Go Far Enough, EFF.org, Oct. 26, 2018, www.eff.org/deeplinks/2018/10/new-exemptions-dmca-section-1201-are-welcome-dont-go-far-enough.
14 Casey Fiesler, Black Mirror, Light Mirror: Teaching Technology Ethics Through Speculation, How We Get to Next, Oct. 15, 2018, howwegettonext.com/the-black-mirror-writers-room-teaching-technology-ethics-through-speculation-f1a9e2deccf4.
15 Emanuelle Burton, Judy Goldsmith & Nicholas Mattei, How to Teach Computer Ethics through Science Fiction, 61 Communications Assoc. Computing Machinery 54 (2018).
16 This is the sense I got from a brief conversation with Ernie Cline after his South by Southwest Interactive session on retro video games in 2015 – the snippet of future copyright law was there to help the story move forward, and nothing beyond that. Regrettably, I was unable to get Cline to participate in this project with an interview.
17 Alan Siegel, Get a Life!, The Ringer, Jan. 31, 2018, www.theringer.com/movies/2018/1/31/16947436/saturday-night-live-william-shatner-star-trek-snl-nerds-star-wars-last-jedi.
18 “If you grabbed someone off the street and said to them ‘Hey, remember how bad the early TNG episodes were?’ they’d probably shout at you for touching them without invitation. But if they didn’t, there’s a good chance they’d instantly think of this one,” quipped one critic 25 years after the episode aired. James Hunt, Revisiting Star Trek TNG: Justice, Den of Geek, Nov. 9, 2012, www.denofgeek.com/tv/star-trek-the-next-generation/23380/revisiting-star-trek-tng-justice.
19 Newitz talked about thinking about the line between human and machine as well.
“I can see future lawyers saying, if you have more than 50 percent human parts, you get human rights, but less than that, you’re property or subhuman rights,” she said.
20 See Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Cal. L. Rev. 513 (2015).
21 Cory Doctorow, Pirate Cinema 111 (2012).
22 See Lawrence Lessig, Code: And Other Laws of Cyberspace, Version 2.0 (2006).
23 James Glanz, Muktita Suhartono & Hannah Beech, Futile Struggle On Doomed Jet From the Start, N.Y. Times, Nov. 27, 2018, A1.
24 Doctorow gave the example of Australian Prime Minister Malcolm Turnbull saying “possibly the stupidest thing said about technology ever” when Turnbull commented in 2017 regarding an encryption ban, “The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.” See also, Cory Doctorow, Australia’s Prime Minister is a Goddamned Idiot, BoingBoing, July 15, 2017, boingboing.net/2017/07/15/malcolm-turnbull-is-an-idiot.html.
25 See Jennifer Baek & Jake Brown-Steiner, How We Built a DIY Book Scanner With Speeds of 150 Pages Per Minute, ArsTechnica, Feb. 14, 2013, arstechnica.com/gadgets/2013/02/diy-book-scanning-is-easier-than-you-think/.
26 Older credited Austin Grossman’s book You, which she said: has a line about sword and sorcery video games never ever allowing their tech to include guns of any modern variety, and that clicked with a lot of things I’d been thinking about in terms of guns making narratives and especially action sequences much less exciting and interesting. See Austin Grossman, You (2013).
27 Ian Bogost, Shaka, When the Walls Fell, Atlantic, June 18, 2014, www.theatlantic.com/entertainment/archive/2014/06/star-trek-tng-and-the-limits-of-language-shaka-when-the-walls-fell/372107/.
28 Andrea L. Guzman, Introduction, in Human-Machine Communication: Rethinking Communication, Technology, and Ourselves 17 (Andrea L. Guzman ed., 2018).
29 The video is available at www.robinsloan.com/epic/.
30 Doctorow was referring to Zuckerberg’s statements in 2010, when he told an awards ceremony crowd that privacy was no longer a “social norm,” in response to questions about Facebook’s privacy controls options and how things had changed since he launched the site in 2004. “People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time,” he was reported as saying by several sources. See Marshall Kirkpatrick, Facebook’s Zuckerberg Says the Age of Privacy is Over, N.Y. Times, Jan. 10, 2010, archive.nytimes.com/www.nytimes.com/external/readwriteweb/2010/01/10/10readwriteweb-facebooks-zuckerberg-says-the-age-of-privac-82963.html?pagewanted=1.
31 James S.A. Corey on Leviathan Wakes, Orbit (accessed February 21, 2019), www.orbitbooks.net/interview/james-s-a-corey-2/.
32 Isaac Asimov, How Easy to See the Future, Natural History, April 1975, 91, 95.

2 THE FUTURE OF COPYRIGHT LAW, BOTH REAL AND VIRTUAL

After a steady march of increasingly draconian intellectual property laws in the past century, it is little surprise that science fiction writers portray a gloomy view of the future. And while creators such as these authors may very well be the beneficiaries of copyright laws that would allow them to maintain exclusive control over their works and the ability to profit from them throughout their entire life and decades beyond, they are some of the strongest voices for reform in ways that would be more beneficial for everyone in society. The vision of the future depicted by science fiction authors is so one-sided that Parker Higgins, an advocate for the Electronic Frontier Foundation and the Freedom of the Press Foundation, saw “plenty of stories that describe a dystopian future where information is locked down by ever more oppressive ‘intellectual property’ laws,” but no counterpoint from authors taking a different approach. “Have copyright maximalists ever written dystopian science-fiction about a future where free culture wins?”1 In the absence of this alternate view, science fiction authors offer a critique of the laws that continue to bedevil citizens and users in the digital age. In real life, a jury ordered Jammie Thomas-Rasset to pay music companies nearly $2 million – $80,000 per song – for illegally downloading music using the service Kazaa, a verdict that took several years of appeal before being reduced to $222,000 in 2012.2 College student Joel Tenenbaum was saddled with a $675,000 verdict for downloading 30 songs, a decision that ultimately drove him to bankruptcy after years of appeal.3 Courts routinely upheld these verdicts as consistent with U.S. 
copyright law, although in Tenenbaum’s case, the court noted the “deep potential for injustice” in the law, opining, “There is something wrong with a law that routinely threatens teenagers and students with astronomical penalties for an activity whose implications they may not have fully understood.”4 Congress has done nothing to remedy this situation; rather, it took even more drastic action, putting forward the Stop Online
Piracy Act (SOPA)5 and the Protect Intellectual Property Act (PIPA)6 in the United States, which would have further criminalized copyright infringement, giving more tools to the government to punish it, including bans on hyperlinking to sites found to be infringing, blocking ad payments to websites, and up to five years in prison for unauthorized online streaming of copyrighted materials. Several websites, including author Cory Doctorow’s BoingBoing, Wikipedia, Google, and Reddit, took part in a blackout, shutting down access on January 18, 2012, in protest of SOPA.7 While SOPA and PIPA lost the support of most of their sponsors and were either withdrawn or permanently delayed shortly after the online protests, Congress continued to threaten to consider similar bills in the future.8 Still, U.S. law was capable of severe punishment of infringers. In 2013, Jeremiah Perkins was sentenced to five years in prison for willful copyright infringement by file sharing involving Avatar, Iron Man 2, Captain America: The First Avenger, and other films.9 And in January 2013, one of the young men who helped lead the SOPA protests, programmer and hacktivist Aaron Swartz, committed suicide while facing years in prison for felony charges under the Computer Fraud and Abuse Act in connection with hacking the servers at the library of the Massachusetts Institute of Technology to download millions of academic journal articles.10

This world – one in which the Disney lobby has extended the life of Mickey Mouse’s copyright multiple times, in which the state of Georgia said it was “a form of terrorism” for citizens to share copies of the state code of laws online before losing a copyright case over them,11 in which new international agreements such as the Trans-Pacific Partnership, the revised North American Free Trade Agreement, and Article 13 in the European Union included mandatory extensions of copyright terms for member countries and requirements of advance licenses from artists
before content could be shared online12 – has been skewered and parodied. Science fiction authors foresee ever-increasing intellectual property laws that lead to dangerous or ridiculous conclusions, including the potential destruction of Earth by aliens who find it may be cheaper to destroy the planet than to pay for decades of infringement penalties they had unwittingly triggered by listening to our music from their home worlds.13 In this chapter, the excesses of modern copyright law, and the dim future they present, are explored. Then, some possibilities for future reform that show up in science fiction, including those in virtual worlds such as Ernie Cline’s Ready Player One, are discussed. The chapter concludes with a look at what we may have to consider when works by extraterrestrial creatures make their way to Earth.

The March to Maximalism

The inevitability of technological advances means that the rights that attach to creative works made using those new tools will necessarily be implicated, and the increasing complexity of the technology has spawned increasingly complex laws
and regulations about those rights. Intellectual property matters such as patents, trademarks, and copyrights undergird much of the modern business of communication, creativity, and expression. Inventions can earn their owners patents, used to protect their investment in research and development and to fend off competitors, though the patent law that has grown around the invention field has drawn plenty of scrutiny from legal commentators and science fiction authors alike. Consider the future of pharmaceutical patents envisioned by Annalee Newitz,14 who tells the story in Autonomous of a pharma pirate named Jack who steals and reverse engineers designer drugs from abusive pharmaceutical companies to help fund her humanitarian efforts to distribute life-saving medicines to communities that couldn’t afford the prices jacked up by the monopolistic manufacturers. It is not an implausible view of a future in which corporations and governments have collaborated to protect profits above all, a common modern critique of the patent system. Similarly, logos, slogans, and brand identity are protected by trademark and service mark laws, which have also come under an increasingly complex regulatory structure as companies and citizens battle for control and use of marks, both in commerce and in artistic expression, with courts and governments seemingly favoring powerful and moneyed interests. When courts deliver rulings that go against corporate interests, legislatures can move quickly to curb those rulings to protect companies. After the U.S.
Supreme Court ruled that a small Kentucky adult entertainment store owned by Victor Moseley – called first “Victor’s Secret” and then, after a letter from lawyers, “Victor’s Little Secret” – was not diluting the trademark of the lingerie company Victoria’s Secret, on the grounds that mere mental association between similar names was not enough to establish trademark dilution under federal law,15 Congress passed the Trademark Dilution Revision Act to make clear that companies no longer had to prove actual trademark dilution; a mere likelihood of dilution was sufficient for a trademark owner to shut down users of similar marks.16

When it comes to the future of intellectual property, science fiction authors have not aimed their attention at patents and trademark law as much as they have copyright, an area that has seen a framework of ever-expanding protections for creative works over the past century around the world, as tools for creation and sharing have become more accessible. Copyright was envisioned in the U.S. Constitution in the 18th century for the purpose of “Promot(ing) the Progress of Science and useful Arts” through a balance of incentivizing creation of new works by providing exclusive rights to authors to protect their works, while also allowing avenues for society to build upon those works through some kinds of authorized fair uses and establishment of a public domain of works that were unprotected or no longer protected by copyright law.17 The first U.S. copyright act, passed in 1790, was established “for the encouragement of learning” and allowed authors of works such as “maps, charts, and books” to protect those works from unauthorized uses for a period of 14 years, a term that was renewable
one time for an additional 14 years, with the author able to recover “all damages occasioned by such injury.”18 The initial term was doubled to 28 years in 1831, with the same one-time renewal of 14 years. The renewal term was doubled in 1909, making the maximum term of protection 56 years between the initial and the renewal terms. The length of copyright terms was drastically extended in the Copyright Act of 1976, to the life of the author plus 50 years, or 75 total years from the date of publication for works published before 1978. As Mickey Mouse neared entering the public domain at the end of the 20th century, lobbyists further extended the copyright term to the life of the author plus 70 years, or 95 years from the date of first publication for works made for hire and works published before 1978, meaning Mickey Mouse’s new copyright term would expire in 2024 without further extension. The 1998 law, the Sonny Bono Copyright Term Extension Act, named after the musician and former Congressman who “wanted the term of copyright to last forever,”19 has long been criticized and challenged. Public domain advocates such as the Free Software Foundation and the American Association of Law Libraries challenged the law by arguing that it violated the “for limited Times” language of the Constitution by retroactively extending copyright terms for an additional 20 years and by suggesting that copyright laws should be subject to First Amendment scrutiny as well.
But the Supreme Court rejected those challenges, ruling that, in essence, any length of term short of “forever” is technically valid under the text of the constitutional provision.20 Around the same time, Congress decided to increase the penalties for infringing copyright by 50 percent, such that the minimum statutory damages per infringement increased from $500 to $750, while the maximum for willful infringement increased from $100,000 to $150,000.21 Rob Reid wrote the farce Year Zero, based on the premise that alien cultures who became addicted to the Earth’s popular music starting in 1977 are threatening to destroy the planet rather than pay copyright infringement damages amounting to all of the accumulated wealth in the history of the universe under this law, which one alien describes as “the most cynical, predatory, lopsided, and shamelessly moneygrubbing copyright law written by any society, anywhere in the universe since the dawn of time itself.”22

In the United States, copyright law covers “original works of authorship fixed in any tangible medium of expression,”23 which are entitled to certain exclusive rights under the Copyright Act of 1976, including the right to make and distribute copies of the work, the right to prepare derivative works, and the right to perform or display the work in public.24 In the Copyright Act of 1976, Congress also codified the fair use doctrine that courts had long applied at common law, allowing authors to build and improve upon prior works. As the U.S. Court of Appeals for the Second Circuit noted, fair use “permits the courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”25 It is in this balance, between encouraging creation of new works and allowing others to use the new works for valuable social purposes, that the tension exists.


Copyright law has also developed to protect works made with technologies that were not possible when the law was established in the 18th century. Consider photography, which did not exist at the time but is now clearly protected in the language of the act.26 The U.S. Supreme Court clarified in 1884 that photographs, as “original intellectual conceptions of the author,” are subject to copyright protection, in that case affirming a photographer’s copyright in a picture he arranged of Oscar Wilde. The court reasoned that it was within the power of Congress to protect such works because of the creativity a photographer displays, noting that said photographer:

entirely from his own original mental conception . . . gave visible form by posing the said Oscar Wilde in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and from such disposition, arrangement, or representation, made entirely by the plaintiff, he produced the picture in suit.27

Since then, courts have found that “(a)lmost any photograph ‘may claim the necessary originality to support a copyright.’”28 And the law has been extended to newer forms of creation in the digital world, including the HTML code that is the foundation of a website’s design.29 Legal scholars have long recognized that modern technology has caused massive disruption to copyright law, particularly regarding fair use and sharing in the digital age.30 While Congress has not made substantial changes to U.S. copyright law since the Digital Millennium Copyright Act (DMCA) nearly 20 years ago,31 high-speed Internet and social tools have altered the way in which people acquire and use copyrighted text, music, video, and photos.
And some of the consequences of the DMCA are still being felt, with rights expansions that have consolidated power in the hands of computer manufacturers and corporate owners of creative works. The Electronic Frontier Foundation (EFF) has used science fiction to illustrate some of the pitfalls of the DMCA, such as Section 1201 regarding digital rights management, “one of the ugliest mistakes in the crowded field of bad ideas about computer regulation,” according to EFF, because of how it restricts the right of users to repair equipment they have purchased, including software on computers inside automobiles.32 Science fiction author John Scalzi, perhaps most famous for space operas such as Old Man’s War and The Collapsing Empire as well as the Star Trek parody Redshirts, wrote one such short story, “The Right to Repair,” about a visit to the auto repair shop by Winston Jones, who needs a broken timing chain replaced on his car. While the shop has a compatible chain in stock, that chain doesn’t have the right computer chip on it to work with his car, and removing or updating the software on the chip, the simplest fix to avoid a three-week wait for a new part, comes with
devastating consequences under Section 1201 of the DMCA. “Technically? Yes, easy as pie,” the mechanic tells Winston about possibly fixing the part. “Legally? No. They could pop me for $500,000 and five years in prison.”33 The absurdity of these kinds of damages and penalties, though they are at new heights now, has been recognized for decades by science fiction authors. Douglas Adams, in his series The Hitchhiker’s Guide to the Galaxy, describes the lavish headquarters of the publishing empire behind the titular reference work as having been “built on the proceeds of an extraordinary copyright lawsuit fought between the book’s editors and a breakfast cereal company,” a dispute that will be revisited in more detail in the conclusion of this chapter.34 The system of copyright, as Reid describes it, is a cynical money grab that suits nobody but intellectual property lawyers, industry lobbyists, and the government officials who take campaign donations from both of those groups. “Collectively, we are wholly empowered to fix the entire mess. But that would result in a needless loss of extravagantly high-paying legal work for all,” Reid notes.35

Near-Term Dystopia

Reid’s Year Zero is a good starting place for seeing how the law, even as it exists right now, can lead to ridiculous and unjust outcomes. Reid founded the online streaming music service Rhapsody in 2001, giving him an education in the intricacies of music licensing needed to offer a monthly subscription to users. But he also recognized how it could go off the rails, giving a TED talk in which he described the $8 billion iPod, a figure he reached using the “copyright math” of the music industry, under which each of the 40,000 songs the device could hold, if put there without a proper license, would represent a $150,000 loss, at least according to the way the industry calculates harm under copyright law.36 Reid imagines aliens throughout the universe seeking to avoid crushing losses under U.S. copyright law after decades of infringement for streaming music from Earth artists because, in short, “Aliens suck at music,” and despite our shortcomings as a species, “humanity creates the universe’s best music, by far.”37 One group of aliens contacts a copyright lawyer named Nick Carter – likely chosen because he shares a name with one of the Backstreet Boys – in hopes of negotiating “a license to all of humanity’s music,” or at least all music broadcast on New York radio stations since 1977, that would allow them to play it in public and in private, share it, and transmit it throughout the universe. Carter tells them it’s impossible, even pointing to stock language in a contract on his desk that says it “shall apply past the end of time and the edge of Earth; all throughout the universe, in perpetuity; in any media, whether now known, or hereinafter devised; or in any form whether now known, or hereinafter devised.”38 Under the “Indigenous Arts Doctrine,” a kind of intergalactic constitution, when it comes to creative works, aliens must respect the “rules and
norms of its society of origin” – meaning that, in essence, alien use of music from U.S. artists would be illegal piracy, subject to penalties of up to $150,000 per misuse, resulting in damages that are equivalent to “all of the wealth that could conceivably be created by every conscious being that will ever live between now and the heat death of the universe, trillions of years in the future.”39 The farce that follows brings the Earth to the brink of destruction at the hands of aliens who don’t want to own up to the damages they owe, even as Carter tries to argue that they are not bound by the same Berne Convention protocols that may require other infringers around the globe to face the severe consequences of violating U.S. copyright law. And that is just under current U.S. law, which appears plenty restrictive without further corporation-friendly revisions, such as those cynical copyright lawyers argue for in the story: treating innocent purchasers who wear clothing or shoes that infringe copyright as “intentional infringers” subject to damages, or working even more punitive aspects of the law into national security laws such as the USA Patriot Act by treating acts of infringement as a kind of terrorism, putting “music piracy and peer-to-peer file sharing right up there with dirty bombs and hijacked planes.”40

Cory Doctorow shows a potential outcome of these attitudes in a realistic, very near future in Pirate Cinema,41 in which he envisions a world where teenagers fight back against an oppressive British law that has embraced some of the more punitive reforms advocated by copyright maximalists. One reviewer noted that more than predicting the future, Doctorow was “predicting the present,” as harshly punitive copyright bills such as SOPA and PIPA were being considered in the United States.42 Doctorow is a prolific author and activist who began his career as a computer programmer43 before becoming editor of the technology news website BoingBoing.44 As noted by one interviewer, “Doctorow’s fiction champions technology, while warning of how easily it can be used by repressive states or corporations.”45 Pirate Cinema features repressive future copyright laws in Britain that stifle creativity by heavily punishing illegal downloading and that have the potential to become worse if corporations and legislators have their way. The book begins with Trent, a film-loving teenager who loves to mash up new videos from old sources, causing his family to lose its home Internet access for a year as punishment for his illegal downloads of film clips after having received two previous warnings.46 The impact on the family is devastating. His father was “(s)cared that without the net, his job was gone. Scared that without the net, Mum couldn’t sign on every week and get her benefits. Without the net, my sister Cora wouldn’t be able to do her schoolwork.”47 Trent leaves home and goes to London, where he falls in with squatters and an underground group of film-fanatic friends, with whom he resumes his career as a film mashup artist using the nom de plume “Cecil B. DeVil.” He also becomes an activist against the passage of the Theft of Intellectual Property Act (TIP), which includes the following provisions, as described by the character named “26,” who becomes Trent’s girlfriend:
Article 1(3) makes it a criminal offense to engage in “commercial scale” infringement, even if you’re not charging or making money. That means that anyone caught with more than five pirated films or twenty pirated songs can be sent to prison. And here, article 2(4), leaves the sentencing guidelines up to the discretion of the Business Secretary: she’s not even elected, and she used to work for Warner Music, and she’s been on record as saying that she wished we still had the death penalty so it could be used on pirates.48

Further, the act allowed for escalated police powers, including hacking into private computers without any liability for damage caused, if the act did not reduce copyright infringement by 70 percent in 18 months.49 After TIP was passed, despite widespread protest, dozens of arrests followed, the first being a 17-year-old boy with “some kind of mental problems” who received a five-year prison sentence but hanged himself in prison two weeks after sentencing. A YouTube-like site called “UKTube” shut down, claiming it would have to “pay a copyright specialist to examine each and every video upload to make sure it doesn’t infringe on copyright before we make it live,” which the site estimated would cost about £16,800 a minute based on the volume of videos users typically uploaded.50 After more than 800 people, mostly teenagers, have been arrested and public sentiment turns on TIP again, a sympathetic member of Parliament draws up a repeal bill, which becomes known as TIP-Ex, that would “rescind all criminal penalties and end the practice of terminating Internet connections on accusation of piracy,” instead calling for blanket licensing to Internet service providers.51 As the repeal bill is under consideration, however, the film and music industries strike back, serving Trent with a lawsuit featuring 15,232 charges of infringement, claiming damages of £78 million.52 Members of Parliament, apparently plied by lobbyists from the “big film studios, or maybe 
the record labels, or maybe the video-game companies,” have been told that “the bill must not pass, under any circumstances,” with members threatened with expulsion from their parties if they support TIP-Ex.53 Undaunted, Trent and 26 push back against TIP and the oppressive copyright regime, screening “pirate cinemas” in a graveyard, a sewer, and, in the closing moments of the novel, on the outside of the House of Commons the night before the repeal bill is to be considered. Most members of Parliament don’t show up for the vote, and the repeal passes by a slim margin. Trent loses his infringement case, but the judge assesses only 1 cent per infringement, and he ultimately pays £152.32 in damages.54

The “three strikes” law that first gets Trent in hot water has real-world roots, as Doctorow watched several countries, including the United Kingdom, put similar policies into place. The three-strikes policy in the U.K.’s Digital Economy Act of 2010 was intended to allow Internet service providers to seek a court order suspending users for a limited time after a third notice of infringement within a one-year period – less drastic than the law Doctorow envisioned, but nevertheless intended to serve as a severe deterrent, inflicting
damaging consequences in the countries that adopted it or similar laws, such as South Korea, New Zealand, and France.55 While New Zealand reported some success in battling copyright infringement online, the United Kingdom faced numerous difficulties in implementing the policy and ultimately scrapped it.56 The United States embraced a six-strikes policy known as the “Copyright Alert System,” drafted by the music and film industries and the White House, that provided for graduated notices leading to suspension after a sixth infringement offense, though it never resulted in actual suspensions and expired in 2017 after four years of largely ineffective implementation.57 However, AT&T adopted a similar policy as a private matter against its users, terminating Internet access for more than a dozen people after finding they had infringed copyrighted content AT&T owned following its purchase of Time Warner in 2018.58

Another dystopian story of copyright maximalism in the not-too-distant future was penned by Paul Ford, a technology writer and editor who was one of the Web’s earliest bloggers. He published “Nanolaw With Daughter” on his blog in 2011, a short story in which a 10-year-old girl works through her daily pile of legal settlements for copyright claims for the first time with her father.59 Her father, the first-person narrator, describes how his daughter “was first sued in the womb” as an “unidentified fetal defendant” (along with her parents) after ultrasound companies claimed they had previously unclaimed rights to images that were posted on social networks; the demand letters from a “speculative law firm” sought “unspecified penalties for copyright violation and theft of trade secrets and risked, it was implied, that my daughter would be born bankrupt.”60 They paid $50 to settle the claim. 
She faces 58 claims that morning, most of which can either be dismissed as unenforceable under international treaty or settled electronically for less than a dollar, but one remains – a video of her singing along with a popular song at a Major League Baseball game, “[O]ne of many of tens of thousands simultaneously recorded from gun scanners on the stadium roof.”61 Singing along with a copyrighted song in public, after all, could technically violate the exclusive right of public performance under current U.S. copyright law. She pays the few dollars to settle the claim and is reminded by her father to check her claims every day, which he sees as “part of the traffic of everyday life, a territory to explore. Every one a little lawyer.”62

Beyond money damages, punishment in the future could include even more severe terms of confinement, as Doctorow showed in his 2006 short story “Printcrime.” The father in the story is beaten by police serving a warrant to the point that “he looked like he’d been brawling with an entire rugby side,” and then spends ten years in prison after being arrested for illegal use of what we would now recognize as a 3D printer, using “goop” to print “blenders and pharma . . . laptops and designer hats.” He expresses a readiness to use the printer for one more purpose – printing more printers. “One for everyone. That’s worth going to jail for.”63 And in his 2019 novella “Unauthorized Bread,” Doctorow
emphasized other potential real-world consequences of violating Section 1201 of the DMCA. The residents of the adjusted-rent Dorchester Towers apartments face up to five years in prison for jailbreaking the devices built into the apartments, such as ovens that only accept bread approved by the company that made them and dishwashers that only use company-approved detergent. And because hacking the equipment may violate the lease on their low-income housing, they may face eviction as well. It’s just part of the deal of leaving a refugee shelter for the apartments. “The point is that you had a choice, and that’s because appliances like ours made it economical for landlords to build subsidy units,” a toaster oven company employee tells Salima, the lead jailbreaker, threatening that she will be caught unless she helps the company push paid, company-approved jailbreaking of its products. “What about the choice to jailbreak my things?” she thinks, but doesn’t say.64

Another vision of such quandaries comes in Neal Stephenson’s The Diamond Age, set in a post-scarcity, neo-Victorian late 21st century where nearly anything can be printed on demand. People rely on “the Feed” to print the things they need, but beyond basics such as food and clothing, intellectual property rights are in place to keep wealth flowing to the upper classes of society, enforced by the military forces of the Common Economic Protocol (CEP). The creator class maintains power and social order through ownership of the copyrights and patents – which appear to exist in much the same legal form as they do in modern reality – that the rest of the world requires for its day-to-day life. The CEP forces are battling what we would recognize as an open-source movement, which is building CryptNet to liberate people from the oppression of the Feed. 
The story also features criminal copyright infringement by programmer John Percival Hackworth, who has made two unauthorized copies of his own work, the “Young Lady’s Illustrated Primer,” an interactive story commissioned by a lord as an educational tool that winds up in the wrong hands. Hackworth’s punishment includes “sixteen strokes of the cane and ten years’ imprisonment” for his copyright crimes, a sentence reduced to one stroke of the cane and immediate release once he agrees to provide the court a decryption key so that hundreds of thousands of copies of the book can be made for children in orphanages.65

The excessive money damages and overly punitive jail time in the near-term copyright dystopias of Year Zero, Pirate Cinema, “Nanolaw With Daughter,” The Diamond Age, and “Printcrime” are critiques of the current moment, when the law seems increasingly tilted in favor of powerful companies and against common and largely harmless citizen uses of copyrighted works. The public domain itself is also threatened by an increasingly protective approach toward copyright holders, as can be seen in Spider Robinson’s short story “Melancholy Elephants,” which won a Hugo Award in 1983. The story centers on a bill, S. 4217896, being considered by a powerful legislator referred to only as “the Senator,” that would
“extend copyright into perpetuity.”66 The protagonist of the story, Dorothy Martin, is there to persuade the Senator not to support the bill. “What is wrong with that?” the Senator asks. “Should a man’s work cease to be his simply because he has neglected to keep on breathing?” While an endless copyright term would be impossible under the U.S. Constitution as drafted, the repeated extension of copyright terms to hundreds or even thousands of years after the life of the author is plausible after the Supreme Court’s holding in Eldred v. Ashcroft, which affirmed the constitutionality of extending copyright terms to the life of the author plus 70 years. Similarly, the court’s 2012 ruling in Golan v. Holder allowed works to be removed from the public domain and have their copyrights restored as a result of treaties that recognize other countries’ copyright laws and apply them retroactively.67 The case, for example, pulled Sergei Prokofiev’s “Peter and the Wolf,” a popular piece for symphonies to play for children and families learning about instruments and music, out of the public domain, meaning community orchestra directors such as the named plaintiff Lawrence Golan would have to pay for rights to works that could previously be used for free.

In “Melancholy Elephants,” the proposal for endless copyright terms comes in a future when copyrights have already been rendered practically overlong; at the time the story was written, the copyright term in the United States was the life of the author plus 50 years, and in the story, people typically live to be 120 years old while more than half the world’s population is in the artistic classes – meaning more and more people are creating while drawing on a smaller and smaller public domain. 
The result, Robinson explains, is a “Plagiarism Plague” that, coupled with the advancement of technology to detect similarities between old and new works, means that 40 percent of new music compositions are rejected by the Copyright Office for being too similar to previous works. The “Plagiarism Plague” is grounded in reality: Robinson was writing about a series of cases in the 1970s and 1980s that had found substantial similarity between old pop songs and new creations, saddling the later composers with damages for what became known as “subconscious plagiarism.”68 That term arose in Bright Tunes Music Co. v. Harrisongs Music, in which the copyright holder of Ronnie Mack’s “He’s So Fine” (performed famously by The Chiffons) sued ex-Beatle George Harrison over the song “My Sweet Lord,” which features a similar chord progression in the chorus. Federal district court Judge Richard Owen said it was “clear that My Sweet Lord is the very same song as He’s So Fine with different words,”69 and Harrison was found responsible for damages of nearly $1.6 million. Robinson’s character Dorothy Martin mentions similar successful lawsuits against Yoko Ono, John Lennon, the television show Roots, and the movie Alien, which, Robinson writes, “ended the legal principle that one does not copyright ideas but arrangements of words.” It is a principle that Robinson correctly anticipated would be increasingly abused as the families of artists tried to capitalize on their success well after their deaths by suing creators of new works that capture the idea, if not the exact language or notes, of an older work. This was at
the heart of the Marvin Gaye estate’s lawsuit against Robin Thicke and Pharrell Williams over the 2013 hit song “Blurred Lines,” which the Gaye family alleged infringed on the “feel” of Gaye’s works, including the 1977 song “Got to Give It Up,” even though the two songs had different lyrics and note progressions. A jury awarded the Gaye estate $7.3 million, a verdict that stunned observers who had expected that copyright did not protect what the Gaye estate argued. “There is no question that Pharrell was inspired by Gaye and borrowed from him; he has freely admitted as much. But, by that standard, every composer would be a lawbreaker,” wrote law professor Tim Wu in The New Yorker, arguing that the case should never have gone to a jury and that the verdict should be overturned on appeal.70 Nevertheless, the verdict was largely upheld in 2018 by the U.S. Court of Appeals for the Ninth Circuit, which rejected the “argument that the Gayes’ copyright only enjoys thin protection” and found no error in the jury’s finding of substantial similarity based on the “bass lines, keyboard parts, signature phrases, hooks . . . 
bass melodies, word painting, and the placement of the rap and ‘parlando’ sections in the two songs.”71 While music copyright had long been thought to focus almost exclusively on melody, the Blurred Lines case and others of its ilk have expanded liability for later artists based on elements beyond melody – what law professor Joseph Fishman called “multidimensional similarity” – creating among musicians “growing confusion over what can be copied and what cannot.”72 Fishman recommends a return to a focus on melody, not just to protect the most creative elements of the original work, but also “as a facilitation of downstream composers’ future creativity” by making copyright rules more predictable in a way that reduces fears of litigation and other costs.73

In “Melancholy Elephants,” the combination of constant plagiarism allegations, longer and potentially infinite copyright terms, and technology allowing rapid detection of similarity – in a world where there is a finite number of pleasing melodies or stories – results in a modern world devoid of new creation. Ultimately, Martin urges copyright terms of no more than 50 years, so that there is room for new work to be created and so that humans do not become like “elephants” who remember everything, unable to be delighted by new things, to their own dismay.74

The damages-happy present depicted by Reid, and the near-future copyright madness portrayed by Doctorow, Ford, and Robinson, provide a glimpse of what the costs may be to society, to the arts, to culture, and even to the fate of humanity, if we continue down a path of copyright maximalism. But longer copyright terms, broader protection of original works, and ever-escalating money damages are just one potential path the future of copyright could take. In some visions of the future, copyright law takes a different turn.

Possibilities for Reform

Ready Player One,75 Ernest Cline’s bestselling novel set in 2044, very clearly enables public access to copyrighted works of the 20th and early 21st century for
free. In contrast to Doctorow’s dystopian vision of the near future of copyright law, Cline presents a more utopian version of the decades to come. Most of the novel takes place in a virtual reality universe called the “Ontologically Anthropocentric Sensory Immersive Simulation,” or OASIS.76 The OASIS is a refuge from a world devastated by climate change, an energy crisis, political strife, and poverty; for just a one-time purchase of 25 cents, users could access the digital realm, which has elements of both reality and fantasy. The protagonist of the story, a teenager named Wade Watts, attends public school in the OASIS, where avatars sit in a virtual classroom, taught by instructors who can log in from anywhere on the planet. Commerce and human interaction are almost entirely virtual: “[B]illions of people around the world were working and playing in the OASIS every day. Some of them met, fell in love, and got married without ever setting foot on the same continent.”77

The plot of Ready Player One centers on the efforts of Wade and others to find “Easter eggs” hidden throughout the OASIS by its founder, a programmer who built the most valuable company in the world and who, in his will, promises to grant control of the company to whoever solves his puzzles. The puzzles themselves are rooted largely in the popular culture of the late twentieth century – music, films, television shows, and perhaps most significantly, videogames. To understand the hints and figure out the riddles, the characters (calling themselves “gunters,” short for “egg hunters”) must have constant, easy access to these works. Cline allows this through a plot device built into the OASIS – virtual libraries containing every bit of human creative work, freely accessible within the system. As Wade, the narrator, describes it:

[T]he OASIS was . . . 
the world’s biggest public library, where even a penniless kid like me had access to every book ever written, every song ever recorded, and every movie, television show, videogame, and piece of artwork ever created. The collected knowledge, art, and amusement of all human civilization were there, waiting for me.78

The copyright system of the United States in 2044 is only vaguely described. Wade notes that most of the items he needed to view “were over forty years old, and so free digital copies of them could be downloaded from the OASIS.”79 He explains that not everything was “legally available for free,” but he “could almost always get it by using Guntorrent, a file-sharing program used by gunters around the world.”80 At one point, for example, Wade downloads image-recognition software from Guntorrent to help solve a puzzle involving plotting a map from a Dungeons and Dragons module onto a planet in an atlas of the OASIS.81 Cline references several other uses of copyrighted works that would be virtually impossible today. One is the “personal OASIS vidfeed,” or POV channel, in which OASIS users could program broadcasts of what they were experiencing in the OASIS at any time or any other material to which they had access. “Some
people programmed nothing but old cartoons,” Wade reports. On his channel, called Parzival-TV (“Parzival” is Wade’s pseudonym in the OASIS):

I programmed a selection of classic ‘80s TV shows, retro commercials, cartoons, music videos, and movies. Lots of movies. On the weekends, I showed old Japanese monster flicks, along with some vintage anime . . . At the moment, Parzival-TV was wrapping up a nonstop two-day Kikaider marathon. Kikaider was a late-‘70s Japanese action show about a red-and-blue android who beat the crap out of rubber-suited monsters in each episode.82

The only payment or licensing system explained here was that “anyone who paid a monthly fee” could “run their own streaming television network.”83 No more detail is given about licensing or copyright as applied in the OASIS, either through the POV networks or the libraries, beyond noting that (a) some things are not legally available for free (thus requiring Wade to go to Guntorrent for downloads), and (b) the vague “more than forty years old” rule that made many materials accessible for free, presumably because a copyright term had expired or because the OASIS had a license allowing such uses. While that is the limit of the discussion of intellectual property, there clearly is a notion of real property – or at least, “virtual real estate” – within the OASIS, which is how the company running the OASIS makes its money.84

Because it is a work of science fiction, Ready Player One does not have to answer questions about how, in the 33 years between the book’s publication and its action, the system of copyright and licensing was transformed into one supporting broad public access to digital copies, legal file sharing, and drastically reduced copyright terms. 
The simplest answer is that this is merely a plot device – to make the narrative work and to keep the pop culture aspect as cool as possible, the author had to create a world in which the characters could easily access all television shows, pop music, movies, and videogames from the 1980s and 1990s. Embedding those works in libraries in a virtual world owned by the richest corporation on the planet is plausible, though it is quite a leap from our current system of intellectual property rights to the one Cline depicts in the near future.

Cline recognized the difficulties in current-day intellectual property law when writing the book, believing it was an “unfilmable book” because of the copyright entanglements that would surely arise when trying to put movies, videogames, and music from the not-too-distant past on the big screen. When Steven Spielberg’s Ready Player One hit theaters in 2018, it had mixed success in incorporating the book’s references into the film, with Spielberg estimating that the filmmakers secured permission for about 80 percent of the items they wanted. That did not include an important scene in the book featuring Rick Deckard from the film Blade Runner – even though Warner Bros. financed Ready Player One and had been the studio that released Blade Runner – because, Spielberg recalled, Warner Bros. did not have the intellectual property rights.85


Virtual reality (VR) worlds, such as the OASIS of Ready Player One or the Metaverse of Snow Crash, and augmented reality (AR), such as the hit mobile game Pokémon Go, which lets players use their phones to chase down digital creatures in the real world through a location-based app, create other copyright issues that will need to be addressed as VR and AR become more common and accessible. Law professors Mark Lemley and Eugene Volokh have considered some potential copyright issues in virtual reality worlds, noting how VR and AR bridge the digital world and the real world, making application of real-world law difficult when harms may be more virtual than real.86 For example, creation of avatars using fictional characters with copyright or trademark protection would present challenges, both for the user creating the avatar and for the copyright holder trying to determine what kind of use might be fair use and protected against infringement lawsuits.87 While the Digital Millennium Copyright Act may shield the companies offering VR or AR worlds containing potentially infringing avatars, the fear of liability might cause those companies to shut down such avatars entirely to avoid the possibility of being a contributory infringer. In the OASIS of Ready Player One, the copyright and trademark issues regarding users’ avatars and the other items they use appear to have been taken care of seamlessly. 
Wade decorates his virtual locker in his virtual school with images of Princess Leia, Monty Python, and a Time magazine cover, each of which would be unquestionably fair use in the real world but may present complications in a virtual one.88 Users could make themselves look like anyone or anything; “you could cease being human altogether, and become an elf, ogre, alien, or any other creature from literature, movies, or mythology.”89 Other kids at the virtual school parked “interplanetary vehicles” such as “UFOs, TIE fighters, old NASA space shuttles, Vipers from Battlestar Galactica, and other spacecraft designs lifted from every sci-fi movie and TV show you can think of.”90 Wade himself creates a tricked-out flying DeLorean, à la Back to the Future, with the red lights of KITT from the television show “Knight Rider” on the front and Ghostbusters logos on the sides – a vehicle that author Ernie Cline has duplicated in real life. Every one of those items has a plausible copyright under current U.S. law, and duplicating them in a for-profit online world would likely require licenses or a series of friendly fair use decisions by future federal courts. Similarly, Vernor Vinge in Rainbows End created virtual worlds called “belief circles” that wearables users could build and take part in; fictional world creators such as Terry Pratchett and J.K. Rowling become vastly wealthy from the micropayments that allow these works to exist.91 At times, the virtual worlds freeze up while people are trying to move around in them, suffering a 3030 error – a “catchall code for system deadlock caused by licensing conflicts.”92 VR and AR worlds have potential for reconsidering how intellectual property rights will work in a broader, border-free universe, with potential for copyright reform and innovation that is more expansive than could have been anticipated


by U.S. copyright law amendments and the Berne Convention in the 20th century. In Year Zero, Rob Reid recognizes that even current technology allows transmission of copyrighted works beyond the Earth into territories where they can be accessed and spread in violation of our understanding of copyright law, with Earth attorneys investigating the extent to which the Berne Convention applies to non-signatories. They also look into what we would consider virtually impossible today – universal licensing agreements that cover all companies and creators, so that potential infringers have a legal path to use broad arrays of copyrighted content without having to negotiate deals with each individual owner across the globe. The current system, which law professor Molly Shaffer Van Houweling identified as “copyright atomism,” requires “participants in the copyright marketplace” to “track down and negotiate with many far-flung rights holders regarding many separate rights,” which can be prohibitively difficult for potential secondary users and creators of new works.93 But broader, if not exhaustive, licenses of this sort are plausible.
For example, the Ready Player One vidfeed system, in which users pay a fee and can stream any content, could come from the equivalent of a blanket license similar to the kind that stadiums and arenas acquire. Under such a license, anyone who rents a venue – a sports team, musicians, or even politicians at rallies – may play copyrighted music covered by a clearinghouse such as the American Society of Composers, Authors, and Publishers (ASCAP), which represents more than 680,000 artists and offers venues “an annual flat fee without having to take on the time-intensive process of tracking and reporting on every song played.”94 Similarly, BMI boasts 14 million musical works by more than 900,000 artists and offers licenses for bars, restaurants, aircraft, sports teams, and theme parks,95 though the convenience of such agreements is questionable. In a situation that mirrors the nightmare litigious world of “Nanolaw With Daughter,” BMI in 2016 demanded $15 million from the sports broadcasting network ESPN for including ambient stadium music from the background of live events in its broadcasts. ESPN sued, claiming that BMI was refusing to offer a fair price for a license; BMI, relying on an agreement more than a decade old, demanded 0.1375 percent of ESPN’s gross annual revenues, which ESPN argued was not proportional to current costs and fees.96 The parties ultimately settled for an undisclosed amount, providing little guidance as to how these disputes might play out in court. Blanket licensing for music also became more accessible in the United States after Congress passed the Music Modernization Act in 2018,97 which created the Mechanical Licensing Collective as a hub for simpler royalty payments for music streamed online on services such as Pandora and Spotify.98 Paul Ford in “Nanolaw With Daughter” provides another, related possibility for reform, through international agreements and microtransactions.
While this means even a 10-year-old is dealing with 50 copyright disputes per day, at least they are settled for pennies or dollars each, rather than the tens of thousands of


dollars at stake under current copyright damages provisions in the law. It may be a hassle, but at least it’s not setting up children for endless litigation and crippling debt for inadvertent and harmless infringement. Similarly, though with a bit more acid, Vernor Vinge describes the shredding of all of the books in a university library in Rainbows End, with the end goal of digitizing all content somewhat like Google Books, but by Huertas International, which has “lawyers and software that will allow him to render microroyalty payments across all the old copyright regimes—without any new permissions.”99 The attempt at micropayments is also an attempt to grab monopoly control of the content of the books that they are shredding and redistributing for fees. And interfering with that attempt, “delaying the enemy” as one of the co-conspirators calls it, is a felony.100 Secondary uses that cause little or no harm to the copyright holder are ripe for reform, as noted by Lawrence Lessig in his book Remix, in which he proposes removing barriers for creative uses of original works by amateurs and deregulating noncommercial and private uses.101 Lessig appears in Year Zero briefly as well, described as having influence on “some kind of techno anarchist” and in a footnote as “a legal scholar whose writings challenge many aspects of today’s copyright regime. 
The media companies view his work with the sort of horror that the last czarist court must have had for Das Kapital.”102 Lessig also suggests decriminalizing liability for noncommercial copying and file sharing.103 Similarly, professor Jessica Litman argues for more simplicity in copyright laws and for focusing on commercial rather than noncommercial uses of copyrighted works.104 One potential approach that has had mixed success is Creative Commons, a nonprofit organization founded in 2001 that established a contract-based copyright alternative. It attempts to unbundle the exclusive rights granted by copyright law – rights to copy, distribute, make derivative works, or perform or display in public – giving authors the ability to retain some rights of control while granting secondary users some latitude to use the works without permission. For example, a creator may designate a work as usable as long as the secondary user attributes the work to the creator and uses it for noncommercial purposes.105 Another parallel is the GNU General Public License, established by the Free Software Foundation,106 which allows secondary uses of computer software programs, with some limitations, without direct permission of the author. However, these copyright alternatives are still legalistic in nature and, by remaining rooted in traditional copyright law and competing with one another, add another layer of complexity for creators unsure about using original works for new purposes and fearful of crushing damages for missteps.
Standardization of licensing among these alternatives, perhaps rooted in reforms to copyright law, could be a step toward functional reform, as Van Houweling, the founding executive director of Creative Commons, pointed out 15 years after its creation.107 Another potential copyright fix recognized by science fiction authors and copyright reformers is reducing the length of the copyright term, which Robinson portrayed as being in danger of being extended into infinity in “Melancholy


Elephants.” When that story was written, the term was only the life of the author plus 50 years; it has since been extended by 20 years, of course, and may face further extensions. The character lobbying against further extension insists that terms must be shortened to prevent harm both to creators and to society:

A copyright must not be allowed to last more than fifty years—after which it should be flushed from the memory banks of the Copyright Office. We need selective voluntary amnesia if Discoverers of Art are to continue to work without psychic damage. Facts should be remembered—but dreams?108

Factor in that nearly everyone in the “Melancholy Elephants” universe is a creator, and the mandatory copyright attachment to new creations can be stifling for other creators. Likewise, today, in the age of social media, nearly everyone is a content creator. As Derek Khanna, a copyright policy scholar who authored an influential report on copyright for the House Republican Study Committee calling for reform, noted, “Protecting our personal e-mails, Facebook posts, and tweets under copyright for our lifetimes, plus seventy years, does not seem to meaningfully fulfill the constitutional mandate of promoting the progress of the sciences.”109 He warned against what would become de facto perpetual copyright, “just on the installment plan” with routine extensions by Congress, which he argues would harm innovation and economic growth and stray further from the intent of the founders who included copyright in the Constitution.110 In Ready Player One, a potential future is imagined in which copyright law has actually progressed in ways that copyright reformers such as Litman and Lessig have proposed, with terms reduced to 20 years after creation, at least for works broadcast in a vidfeed in the OASIS.
In Remix, Lessig suggests limiting the original copyright term to 14 years after creation of the work.111 Standing in the way of functional reform, though, is the power of lobbyists and politicians in strengthening copyright protections. Such lobbyists appear in Year Zero, Pirate Cinema, and “Melancholy Elephants” as well, each of which features a powerful legislator owned in full by the film and music industry. Litman also contends with some of the challenges they raise about the power of lobbyists and entrenched interests in shaping the law:

Right now the copyright-legislation playing field is completely controlled by its beneficiaries. They have persuaded Congress that it is pointless to try to enact copyright laws without their assent. . . . To accomplish real copyright reform, then, we will need to change the way that copyright laws are made. That may be an impossible task, at least in the near term.

Litman suggests that by focusing on creator (rather than corporate) rights and user rights, and by “examin(ing) the ways the current copyright system fails” to connect and empower them, reform may be possible.112


Finally, near-term advances in technology envisioned by science fiction authors may allow rethinking of copyright and its role in society. In Mr. Penumbra’s 24-Hour Bookstore, Robin Sloan features high-speed scanning of books by Google – already a technological reality that has been favorably reviewed by courts as fair use – as well as a “data visualization theater” that reads and tries to make sense of texts scanned by the system. These technologies turn out to be critical to attempts to crack the code embedded in one of the important items in Sloan’s novel. In conjunction with handheld cardboard scanners called the “GrumbleGear 3000,”113 which can be smuggled into libraries to help dodge security restrictions in place to prevent copying, the plausible copying tech would allow both for possible infringement and for enlightenment, the concepts at the heart of modern struggles over copyright law.114 These policy discussions in Sloan’s 2012 novel can be seen in the decision by the U.S. Court of Appeals for the Second Circuit a few years later that upheld the legality of Google Books, a project that has scanned more than 20 million books since its launch in 2004 in partnership with major university and public libraries around the world. The court ruled that both the scanning function and the text- and data-mining functions qualified for fair use protection, finding that the purpose of the copying was “highly transformative” and the results “do not provide a significant market substitute” for the originals, many of which were out of print or otherwise difficult for the public to access.115 The high-speed scanning and data visualization enabled by powerful networked computing raise questions beyond how companies such as Google and public resources such as libraries may use the outcomes of those tools.
The machines involved, particularly if they are imbued with artificial intelligence, may have rights or liabilities related to their ability to read or use copyrighted works in ways different than human users would. In an interview with Wired in 2017, Sloan asked whether, if copyright issues prevent humans from legally accessing all of the books scanned by Google Books, machines could perhaps access and scan them without running similar risks of infringement. The result may aid the advance of machine learning. He said:

If Google could find a way to take that corpus, sliced and diced by genre, topic, time period, all the ways you can divide it, and make that available to machine-learning researchers and hobbyists at universities and out in the wild, I’ll bet there’s some really interesting work that could come out of that.116

Machine learning and artificial intelligence create other challenges for copyright as well, and science fiction authors and legal scholars have taken notice in recent years, with emerging discussions on the extent to which robots can be creators and may be entitled to own or enforce ownership rights over their works of authorship, which will be examined in more detail in Chapter 4.


Laboratories of Copyright Law

U.S. Supreme Court Justice Louis Brandeis once famously outlined the important role state and local governments play in democracy, saying it was “one of the happy incidents of the federal system that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”117 But because copyright law in the United States is federal law only, there has been less room for experimentation and development; there is just one copyright law, and the stakes are very high for lobbyists and legislators who have sought to expand the law’s economic protections for those who make and own creative works. As states cannot serve as innovators and laboratories of copyright law, and as governments around the world trend toward homogeneous copyright rules and terms through international treaty, experimentation may be left to science fiction authors. The copyright law experiments by authors such as Doctorow and Cline set up plausible future worlds in which laws that are more or less restrictive, or altogether transformative, can be put into place, shaping the individuals and culture around them. And in general, the development of technology in other areas – for example, health innovations that extend human lifespans by decades or centuries – can wreak havoc on the underlying purposes of fixed terms of copyright ownership, as Robinson pointed out in “Melancholy Elephants,” unnecessarily restricting the creative sphere for future generations of human artists. Time itself can be a problem for copyright when it is non-linear. While it may be a stretch to think of time travel as plausible technology, the ripples it would create for copyright law are amusing, a point Douglas Adams noted in his Hitchhiker’s Guide books.
The lavish headquarters of the publishing company of the Guide were built from the proceeds of a copyright lawsuit spawned by a hasty bit of plagiarism, followed by some time travel and historical manipulation. The entry for “space” in the Guide was written by editors who, “having to meet a publishing deadline, copied the information off the back of a packet of breakfast cereal, hastily embroidering it with a few footnotes in order to avoid prosecution under the incomprehensibly tortuous Galactic Copyright laws.” The cash grab came under these same laws later, when a “wilier editor sent the book backward in time through a temporal warp, and then successfully sued the breakfast cereal company for infringement of the same laws.”118 When time can be manipulated in this manner, a key part of copyright law – when is a work original? – becomes entirely unworkable. If science fiction is any guide, intellectual property rights will endure. They are in virtual worlds in the near future, in the politics of the not-quite-as-near future, and even in the extremely distant future. In the 24th century of Star Trek: Voyager, the Twelfth Guarantee of the Constitution of the United Federation of Planets recognizes legal rights and privileges of artists in their works, though the Federation declines to extend those rights to The Doctor, a hologram who serves as the ship’s emergency medical officer, when he authors a holoprogram called “Photons Be Free.”119 In Foundation, Isaac Asimov projected copyright ownership in what would be the


equivalent of the universe about 10,000 years from today, with the important presence of newspapers and video feeds and the Encyclopedia Galactica, which is given credit in the opening passage with the note, “All quotations from the Encyclopedia Galactica here reproduced are taken from the 116th Edition published in 1020 F.E. by the Encyclopedia Galactica Publishing Co., Terminus, with permission of the publishers.”120 Even in this future, with humanity spread to worlds throughout the universe, ownership and copying are still matters for concern. Perhaps they were a bit friendlier toward copying works for humanitarian purposes, with an ambassador who collects books of archaeology eagerly accepting a copy of a book from a scientist on Terminus, the home of the Foundation and the Encyclopedia Galactica, the compendium that was authorized to preserve human knowledge after the inevitable arrival of a new dark ages lasting thousands of years. Lord Dorwin tells of how his personal library lacks a copy of the book, and in his odd R-dropping affected speech, reminds Dr. Pirenne, “you have not fohgotten yoah pwomise to twans-develop a copy foah me befoah I leave?”121 There is also, of course, the matter of how we on Earth will treat the creations of extraterrestrial beings when we receive them here. In Mary Doria Russell’s 1996 novel The Sparrow, satellite dishes from the Search for Extraterrestrial Intelligence (SETI) program pick up radio transmissions of songs from a planet in Alpha Centauri. The songs feature multiple vocals and what sound like wind instruments, and once it is revealed to the public that the music has been received and recorded, the files of the SETI satellite operator who recorded them are quickly hacked. Russell anticipated the legalities that would come along with the first capture of alien music. 
“Legitimate bids to reproduce and market the ET music began to flood in to (the Institute of Space and Aeronautical Sciences) almost immediately,” but the director of the program pointed out “that there was a long-standing agreement that any transmission received by the SETI program was the possession of all humankind.”122 In reality, the SETI program is a nonprofit corporation funded by donors, and while it could claim copyright in the recordings, it could also plausibly use any proceeds from intellectual property rights and licensing to support the project. The music itself, when translated and properly understood, may have had high chart potential, as it turned out to be songs by the planet’s greatest poet, the Reshtar, Hlavin Kitheri, a prince and artist who sings about his sexual conquests. Another possible issue could be receiving a kind of language and writing currently unidentifiable to humans, as in Ted Chiang’s 1998 short story, “Story of Your Life,” which includes what he described in an interview as “a sort of three-dimensional grammar, (which) uses velocity as a way of inflecting. And it is pretty much impossible to do a sort of word-by-word translation into English.”123 The concept is extended in the 2016 film version of the story, Arrival, in which the academic who ultimately translates the language, understanding it as holding keys to communication and the perception of time, finds worldwide success as an author after writing books derived from her interactions with the aliens.


Alien authors, like non-human creators such as bots and the nonlinear passage of time, present issues that the modern, narrow, maximalist conception of intellectual property law cannot manage functionally for any period of time. In many ways, the copyright dystopia projected and derided by science fiction authors is already upon us. Coming up with new ways of thinking about intellectual property to supplant the problematic modern versions, perhaps inspired by those same authors, may help get us out of the copyright rut.

Notes
1 Parker Higgins, Where is the Copyright Maximalist Dystopian Sci-fi?, parkerhiggins.net, Apr. 28, 2014, parkerhiggins.net/2014/04/where-is-the-copyright-maximalist-dystopian-sci-fi/. The “copyright maximalist” perspective would be a situation in which new works become scarce due to lack of incentives because intellectual property laws became too loose or impossible to enforce.
2 Capitol Records, Inc. v. Thomas-Rasset, 692 F.3d 899 (8th Cir. 2012).
3 Sony BMG Music Entertainment v. Tenenbaum, 660 F.3d 487 (1st Cir. 2011).
4 Sony BMG Music Entertainment v. Tenenbaum, 672 F. Supp. 2d 237 (D. Mass. 2009).
5 Stop Online Piracy Act, H.R. 3261, 112th Congress (2011–12).
6 PROTECT IP Act of 2011, S. 968, 112th Congress (2011–12).
7 See Jonathan Weisman, In Fight Over Piracy Bills, New Economy Rises Against Old, N.Y. Times, Jan. 18, 2012, A1; Vlad Savov, The SOPA Blackout: Wikipedia, Reddit, Mozilla, Google, and Many Others Protest Proposed Law, The Verge, Jan. 18, 2012, www.theverge.com/2012/1/18/2715300/sopa-blackout-wikipedia-reddit-mozilla-google-protest.
8 See Hayley Tsukayama, SOPA Bill Shelved After Global Protests From Google, Wikipedia and Others, Wash. Post, Jan. 20, 2012, www.washingtonpost.com/business/economy/sopa-bill-shelved-after-global-protests-from-google-wikipedia-and-others/2012/01/20/gIQAN5JdEQ_story.html.
9 David Kravets, Record 5-Year Prison Term Handed to Convicted File Sharer, Wired, Jan. 3, 2013, www.wired.com/threatlevel/2013/01/record-filing-sharing-term/.
10 See David Amsden, The Brilliant Life and Tragic Death of Aaron Swartz, Rolling Stone, Feb. 28, 2013, www.rollingstone.com/culture/news/the-brilliant-life-and-tragic-death-of-aaron-swartz-20130215.
11 Michael Hiltzik, Georgia Claims that Publishing Its State Laws for Free Online is ‘Terrorism,’ L.A. Times, July 27, 2015; Code Revision Commission v. Public Resource, No. 17–11589 (11th Cir. 2018); see Mike Masnick, Appeals Court Says Of Course Georgia’s Laws (Including Annotations) Are Not Protected By Copyright And Free To Share, TechDirt, Oct. 19, 2018, www.techdirt.com/articles/20181019/12232640876/appeals-court-says-course-georgias-laws-including-annotations-are-not-protected-copyright-free-to-share.shtml.
12 Jyoti Panday, Will TPP-11 Nations Escape the Copyright Trap?, Electronic Frontier Foundation, Aug. 23, 2017, www.eff.org/deeplinks/2017/08/will-tpp-11-nations-escape-copyright-trap-1; Janyce McGregor, Buried Behind the Cows and Cares: Key Changes in NAFTA 2.0, CBC News, Oct. 1, 2018, www.cbc.ca/news/politics/nafta-usmca-key-changes-1.4845239; Adam Satariano, Law Bolsters Copyrights in Europe, N.Y. Times, March 26, 2019, B1.
13 Rob Reid, Year Zero (2012).
14 Annalee Newitz, Autonomous (2017).
15 Moseley v. V Secret Catalogue, Inc., 537 U.S. 418 (2003).

16 Trademark Dilution Revision Act of 2006, Pub. L. No. 109–312 (2006).
17 U.S. Const. Art. I, § 8.
18 Copyright Act of 1790, 1 Stat. 124 (1790).
19 Eldred v. Ashcroft, 537 U.S. 186, 256 (2003) (J. Breyer dissenting).
20 Id.
21 Digital Theft Deterrence and Copyright Damages Improvement Act of 1999, Pub. L. No. 106–160 (1999).
22 See 17 U.S.C. § 504(c) (2018).
23 Reid, supra note 13, at 110–11.
24 17 U.S.C. § 102(a).
25 17 U.S.C. § 106.
26 Iowa State Univ. Research Found., Inc. v. American Broadcasting Cos., 621 F.2d 57, 60 (2d Cir. 1980).
27 17 U.S.C. § 102(a)(5); 17 U.S.C. § 101.
28 Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 54, 55 (1884).
29 Mannion v. Coors Brewing Co., 377 F. Supp. 2d 444, 450 (S.D.N.Y. 2005), citing 1 Nimmer § 2.08(e)(1) at 2–129.
30 Media.net Advertising FZ-LLC v. Netseer, Inc., 198 F. Supp. 3d 1083 (N.D. Cal. 2016).
31 See, e.g., Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity (2004) (describing the challenges U.S. copyright law presents to citizens who use online material to create new things and express themselves and the power structure that makes reform that would benefit citizens difficult); Victoria Smith Ekstrand, Andrew Famiglietti & Cynthia Nicole, The Intensification of Copyright: Critical Legal Activism in the Age of Digital Copyright, 53 IDEA 291 (2013) (taking a critical legal studies approach to understanding public responses to changes in copyright law and fair use in the digital era); Brad Greenberg, Copyright Trolls and Presumptively Fair Uses, 85 U. Colo. L. Rev. 53 (2014) (noting the emergence of so-called copyright trolls such as Righthaven LLC, which brought several lawsuits against online publishers who posted portions of online news articles on their personal websites and arguing that judges should find such personal uses presumptively fair as a non-legislative solution to excessive copyright litigation).
32 Digital Millennium Copyright Act of 1998, Pub. L. No. 105–304 (1998). The act amended several sections of title 17 of the United States Code, including limitation of liability for Internet service providers that take down infringing materials upon the request of the copyright holder. See 17 U.S.C. §§ 201–203 (2014).
33 Cory Doctorow, EFF Presents John Scalzi’s Science Fiction Story About Our Right to Repair Petition to the Copyright Office, Electronic Frontier Foundation, May 14, 2018, www.eff.org/deeplinks/2018/05/eff-presents-john-scalzi-science-fiction-story-about-our-right-repair-petition; John Scalzi, The Right to Repair (2018).
34 Douglas Adams, The Restaurant at the End of the Universe 32 (Pocket Books 1982) (1980).
35 Reid, supra note 13, at 16.
36 Rob Reid, The $8 Billion iPod, TED Talk (2012), www.youtube.com/watch?v=GZadCj8O1-0.
37 Reid, supra note 13, at 3.
38 Id. at 18.
39 Id. at 109.
40 Id. at 91.
41 Cory Doctorow, Pirate Cinema (2012).
42 Parker Higgins, Predicting the Present in Cory Doctorow’s “Pirate Cinema,” parkerhiggins.net, Feb. 4, 2013, parkerhiggins.net/2013/02/predicting-the-present-in-cory-doctorows-pirate-cinema/.


43 See Gregory Mone, Is Science Fiction About to Go Blind?, Popular Science, Aug. 2004, www.popsci.com/scitech/article/2004-08/science-fiction-about-go-blind?nopaging=1.
44 See About Us, BoingBoing, boingboing.net/about.
45 Tim Harford, Lunch with the FT: Cory Doctorow, Financial Times, July 12, 2013, www.ft.com/cms/s/2/9a344ea2-e8af-11e2-aead-00144feabdc0.html#axzz2uS6LFjuH.
46 Doctorow, supra note 41, at 12–14.
47 Id. at 16.
48 Id. at 111.
49 Id. at 111–12.
50 Id. at 149–50.
51 Id. at 258.
52 Id. at 294.
53 Id. at 340.
54 Id. at 368–81.
55 Eldar Haber, The French Revolution 2.0: Copyright and the Three Strikes Policy, J. of Sports & Entertainment L. 297 (2010).
56 UK Piracy Warning Letters Delayed Until 2015, BBC News, June 6, 2013, www.bbc.com/news/technology-22796723.
57 David Kravets, RIP, “Six Strikes” Copyright Alert System, ArsTechnica, Jan. 30, 2017, arstechnica.com/tech-policy/2017/01/rip-six-strikes-copyright-alert-system/.
58 Sara Fischer & David McCabe, AT&T to Cut Off Some Customers’ Service in Piracy Crackdown, Axios, Nov. 6, 2018, www.axios.com/scoop-att-to-terminate-service-over-piracy-for-first-time-1541465187-749442e3-7b71-4cc7-a694-865779b6fb96.html.
59 Paul Ford, Nanolaw With Daughter (2011), www.ftrain.com/nanolaw.html.
60 Id.
61 Id.
62 Id.
63 Cory Doctorow, Printcrime, 439 Nature 242 (2006).
64 Cory Doctorow, Unauthorized Bread, in Cory Doctorow, Radicalized 96 (2019).
65 Neal Stephenson, The Diamond Age: Or, A Young Lady’s Illustrated Primer 178–180 (Bantam Spectra 2008) (1995).
66 Spider Robinson, Melancholy Elephants (1982), www.spiderrobinson.com/melancholyelephants.html.
67 Eldred v. Ashcroft, 537 U.S. 186 (2003); Golan v. Holder, 565 U.S. 302 (2012).
68 Jordan Runtagh, Songs on Trial: 12 Landmark Music Copyright Cases, Rolling Stone, June 8, 2016, www.rollingstone.com/politics/politics-lists/songs-on-trial-12-landmark-music-copyright-cases-166396/george-harrison-vs-the-chiffons-1976-64089/.
69 420 F. Supp. 177 (S.D.N.Y. 1976).
70 Tim Wu, Why the “Blurred Lines” Copyright Verdict Should Be Thrown Out, The New Yorker, March 12, 2015, www.newyorker.com/culture/culture-desk/why-the-blurred-lines-copyright-verdict-should-be-thrown-out.
71 Williams v. Gaye, No. 15–56880 (9th Cir. 2018), 14.
72 Joseph P. Fishman, Music as a Matter of Law, 131 Harv. L. Rev. 1862, 1884 (2018).
73 Id. at 1870.
74 Robinson, supra note 66.
75 Ernest Cline, Ready Player One (2011).
76 Id. at 48. The immersive, virtual reality, online world is a descendant of the vision of other science fiction authors, in particular William Gibson’s Matrix created in his book Neuromancer (1984) and Neal Stephenson’s Metaverse from the book Snow Crash (1993).
77 Id. at 60.

78 Id. at 16.
79 Id. at 62.
80 Id.
81 Id. at 71.
82 Id. at 201–02.
83 Id. at 201.
84 Id. at 59.
85 Josh Rottenberg, How the Team Behind ‘Ready Player One’ Wrangled a Bonanza of Pop Culture References Into a Single Film, L.A. Times, April 1, 2018, www.latimes.com/entertainment/movies/la-et-mn-ready-player-one-references-20180401-story.html.
86 Mark Lemley & Eugene Volokh, Law, Virtual Reality, and Augmented Reality, 166 U. Pa. L. Rev. 1051, 1112 (2018).
87 Id. at 1113.
88 Cline, supra note 75, at 27.
89 Id. at 57.
90 Id. at 48.
91 Vernor Vinge, Rainbows End 181–82 (2006).
92 Id. at 283.
93 Molly Shaffer Van Houweling, Author Autonomy and Atomism in Copyright Law, 96 Va. L. Rev. 549, 557 (2017).
94 ASCAP, Why ASCAP Licenses Bars, Restaurants & Music Venues (2018), www.ascap.com/help/ascap-licensing/why-ascap-licenses-bars-restaurants-music-venues.
95 BMI, General Licensing Venues (2018), www.bmi.com/creators/royalty/general_licensing_venues.
96 Eriq Gardner, ESPN Asked to Pay More Than $15 Million Annually to License Ambient Stadium Music, Hollywood Reporter, March 10, 2016, www.hollywoodreporter.com/thr-esq/espn-asked-pay-more-15-874330.
97 Music Modernization Act of 2018, Pub. L. No. 115–264 (2018).
98 Devin Coldeway, Copyright Compromise: Music Modernization Act Signed Into Law, TechCrunch, Oct. 11, 2018, techcrunch.com/2018/10/11/copyright-compromise-music-modernization-act-signed-into-law/.
99 Vinge, supra note 91, at 132.
100 Id. at 167.
101 Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy (2008).
102 Reid, supra note 13, at 305.
103 Id. at 268–72.
104 Jessica Litman, Real Copyright Reform, 96 Iowa L. Rev. 1, 53 (2010).
105 See Creative Commons, Frequently Asked Questions (2018), creativecommons.org/faq/.
106 Free Software Foundation, GNU General Public License (2007), www.gnu.org/licenses/gpl-3.0.en.html.
107 Van Houweling, supra note 93, at 640.
108 Robinson, supra note 66.
109 Derek Khanna, Guarding Against Abuse: The Costs of Excessively Long Copyright Terms, 23 CommLaw Conspectus 52, 103 (2014).
110 Id. at 124.
111 Lessig, supra note 101, at 264.
112 Litman, supra note 104, at 53–55.
113 In an interview, Sloan mentioned that the GrumbleGear was based on do-it-yourself scanner designs created by Daniel Reetz, which are available free online but are not actually made out of cardboard. See also Daniel Reetz, The Archivist, or How I Built A Book Scanner in Six Years, diybookscanner.org/archivist/indexcd64.html?author=1 (accessed Feb. 24, 2019).
114 Robin Sloan, Mr. Penumbra’s 24-Hour Bookstore 157 (2012).
115 Authors Guild v. Google, Inc., 804 F.3d 202, 229 (2nd Cir. 2015), cert. denied, 136 S. Ct. 1658 (2016).
116 Scott Rosenberg, How Google Book Search Got Lost, Wired, April 11, 2017, www.wired.com/2017/04/how-google-book-search-got-lost/.
117 New State Ice Co. v. Liebmann, 285 U.S. 262, 311 (1932) (J. Brandeis dissent).
118 Adams, supra note 34, at 147.
119 Star Trek: Voyager (Paramount Television broadcast, April 18, 2001).
120 Isaac Asimov, Foundation 3 (Bantam Spectra 1990) (1951). Asimov’s understanding of copyright law is clearly limited; the Encyclopedia was authorized by the Emperor and presumably paid for by the government, so it’s questionable how it would remain under copyright as a government work, though perhaps this is how the universal laws work in a clearly less free political system in the future.
121 Id. at 75.
122 Mary Doria Russell, The Sparrow 103 (1996).
123 Neda Ulaby, ‘Arrival’ Author’s Approach to Science Fiction? Slow, Steady and Successful, NPR, Nov. 11, 2016, www.npr.org/2016/11/11/501202681/arrival-authors-approach-to-science-fiction-slow-steady-and-successful.

3 PRIVACY IN THE PERPETUAL SURVEILLANCE STATE

In Tell the Machine Goodnight, Katie Williams introduces us to the Apricity, "the happiness machine." It's 2035, and Pearl, a "contentment technician" for the Apricity Corporation, collects cheek swabs, puts the saliva on a computer chip, and inserts it into the Apricity 480, which produces a "personalized contentment plan" in just a few minutes.1 The plan includes action items a person should take that would make them happy, and Apricity boasts a 99.7 percent success rate. Sometimes, the advice is mundane; in the opening scene, the man receiving results is told to eat tangerines regularly and to move his desk somewhere with more sunlight. Apricity's third recommendation, though, is to have a portion of his right index finger amputated. Sometimes, it recommends divorce, or making efforts to seek deeper religious connections. And sometimes the results are so inappropriate or dangerous that they are automatically flagged, so that the only output the participant receives is an asterisk.

The implications of Apricity, and the risks it presents if the confidential information it produces is revealed in public, are evident in several scenes in the book. It is briefly mentioned, for example, that police are no longer allowed to use Apricity in interrogations, for fear that the results would reveal that "a person's guilt is what's keeping them from being happy."2 Williams also includes a passage from a concurring opinion in a future Supreme Court case, "Grover vs. The State of Illinois," discussing some of the legal restrictions that would be in play for the Apricity:

(T)his device does not have the power to bear witness to our past actions. Apricity may be able to tell us what we want, but it cannot tell us what we have done or what we will do. In short, it cannot tell us who we are. It, therefore, has no place in a court of law.3


With its ability to gather, use, and store biometric information, some of the most personal scientific data about a person, Apricity represents a kind of technology that may help us understand the future of privacy law. It's not that the device itself is plausible or foreseeable, but that it symbolizes the kind of device that we may both desire and fear. The benefits of happiness are there, to be sure, but so are the weight of potential invasion and the risk of losing control over sensitive information. Later in the book, Williams explores privacy in other realms, including the desire for private spaces in public places, with booths on public streets that one can rent for a brief time to block out the world and have a private conversation in person or on the phone. We are increasingly in a world where we are always on, or even potentially always being watched, even in our most private spaces, whether they are physical or digital.

Technological invasion of one's body or intimate surroundings, and thoughts of privacy in futures surrounded by more and more people with the ability to watch, are common themes in science fiction. In this chapter, I approach privacy from a few different perspectives, starting with a brief background on the advance of privacy law over the past century and how it has dealt with the rise of invasive technologies. Then, futuristic surveillance technology will be examined with an eye on how it may affect the most intimate notions of personal privacy in what we think of as private places, such as the home or on our personal devices. This is followed by a look at surveillance in public, both by the government and by individuals, particularly with futuristic wearable, implantable, and biometric technologies. The chapter concludes with some of the future ways of thinking about privacy that come up often in science fiction works, and how those may help us manage, design, and possibly restrict new technologies as they arise.

Privacy Law During the Rise of Invasive Technology

Privacy is a core value shared among human communities that is, as privacy law scholar Helen Nissenbaum described, "among the rights, duties, or values of any morally legitimate social and political system."4 In the 1960s, privacy scholar Alan Westin investigated the roots of privacy in western democracies and noted that, unlike in totalitarian states, democratic society relies "on privacy as a shield for group and individual life," not just in politics but also in family, religious, and other personal affairs, with roots in notions of individualism, involvement in associations, and civil liberties protecting citizens from power exercised by government or private interests.5 The roots of privacy may very well predate humanity in the animal world, in which "virtually all animals have need for the temporary individual seclusion or small-unit intimacy that constitute two of the core aspects of privacy."6

While the word "privacy" does not appear in the U.S. Constitution, it is widely recognized as a fundamental interest in this country. In The Right to Privacy, Ellen Alderman and Caroline Kennedy explained, "The right to privacy, it
seems, is what makes us civilized."7 When lawyer Samuel Warren and future Supreme Court justice Louis Brandeis authored their seminal law review article "The Right to Privacy" in 1890, shaping the way jurists and scholars would view an individual's right to privacy in the 20th century, photographic and telephone technology was much more limited in its ability to pry into one's private life in public places. Warren and Brandeis established a basis for areas in which people should be able to sue for damages for harm done to their "right to be let alone." Any notion of a right to privacy in public places was not included in this conceptualization; rather, these were "to protect the privacy of private life, and to whatever degree and whatever connection a man's life has ceased to be private . . . to that extent the protection is likely to be withdrawn."8 Law professor and renowned scholar William Prosser identified these harms as "privacy torts," legal rights existing outside of traditional contract and property law that could result in successful lawsuits when deprived through actions such as trespassing, misappropriation of one's image or likeness, or publishing one's private matters such as letters.9 These torts, which have emerged as valid causes of action in most U.S. jurisdictions either by statute or common law, provide an avenue for individuals to bring lawsuits against others who they believed intruded upon their "right to be let alone."

Additionally, over the past half century, laws have been passed to protect privacy in other areas, such as the federal Privacy Act of 1974, which provides protection against improper disclosure or dissemination of personal information held by the federal government;10 the Family Educational Rights and Privacy Act (FERPA), which prevents educational institutions from revealing personally identifiable information about academic progress to the public;11 the Drivers Privacy Protection Act, which prohibits release or sale of information gathered by state agencies for driver's licensing purposes, at least without notice to the individuals first;12 the Video Privacy Protection Act of 1988, enacted after the public release of failed Supreme Court nominee Robert Bork's video rental history to prevent similar releases concerning other private citizens;13 and the Health Insurance Portability and Accountability Act (HIPAA), which allows personal control over the release of medical information from entities such as insurance companies and healthcare providers.14

The advance of digital communication, likewise, led to passage of privacy laws to protect personal information in online formats, such as the Electronic Communications Privacy Act, which protects against government intrusion into digital messages such as emails, and the Stored Communications Act, which protects information held by third parties such as Internet service providers from disclosure.15 Science fiction had an influence on these laws as well. The Computer Fraud and Abuse Act (CFAA) was passed in 1984 after the film WarGames depicted a nightmare scenario in which Matthew Broderick's character accesses both school and national defense computers through password guessing or
hacking, leading to the brink of global thermonuclear war. Clips of the film were shown during Congressional hearings on the bill.16 The CFAA criminalizes unauthorized access to computers owned by individuals or companies.17

Online data privacy legislation has begun to emerge as well, such as the California Consumer Privacy Act, enacted in 2018, which aims to give citizens more control over their personal information gathered by businesses, including notice about sale of data to third parties, enhanced disclosure requirements for businesses gathering such data, and the ability of a person to request deletion of personal information that has been gathered and stored.18 These laws were at least partly a response to revelations of how companies such as Facebook and Google had used people's data in ways they could not have expected, including sale to nefarious actors such as Cambridge Analytica, which was accused of using the data to target voters and disrupt democratic elections around the world.19 California's law mirrors many of the privacy protections enacted in Europe in the General Data Protection Regulation (GDPR), passed in 2016 and put into effect in 2018, which demands greater accountability from companies, requires them to be more transparent about the data they gather from people, and puts significant penalties in place for noncompliance.20 While the GDPR has earned its share of criticism for its breadth and unwieldiness, many privacy advocates have supported it; Daniel Solove called it "the most profound privacy law of our generation" because of its comprehensiveness and the rights and remedies it provides individuals, among other benefits.21

Beyond legislation, much of our modern understanding of privacy rights in the United States is rooted in the Bill of Rights, with implicit rights embedded such as a right not to speak in the First Amendment, a right not to have government soldiers quartered in your house under the Third Amendment, and a right against being compelled to give incriminating testimony against yourself under the Fifth Amendment, for example. The Supreme Court has recognized "zones of privacy" in the Constitution by detailing how "specific guarantees in the Bill of Rights have penumbras, formed by emanations from those guarantees that help give them life and substance," as Justice William O. Douglas wrote in 1965 in Griswold v. Connecticut, a decision that recognized the intimacy of the marriage relationship and the decisions made by partners within it in striking down a state law banning the use of contraceptives.22

Jurisprudence regarding the "right to be let alone" from government intrusion has largely arisen under the Fourth Amendment, particularly as it applies to the ability of law enforcement officers to access materials or information about people in connection with criminal investigations. As rights of privacy were adjusting to the emergence of new technologies in the early 20th century, the Supreme Court was not immediately friendly to the notion that the Fourth Amendment provided expansive rights against invasion of anything besides one's personal property. In Habeas Data, tech journalist Cyrus Farivar explored the early days of the development of modern surveillance law, detailing how, initially, police were not
required to have a warrant to wiretap a phone line, in a case involving Seattle bootlegger Roy Olmstead.23 In 1928, the Supreme Court ruled that unless government agents trespassed upon property, they were not in violation of Fourth Amendment protections requiring a warrant. Justice Brandeis, naturally, wrote a dissenting opinion in which he imagined the emergence of even more intrusive technologies. "The progress of science in furnishing the Government with means of espionage is not likely to stop with wiretapping," Brandeis wrote.24

Indeed, Brandeis' view was ultimately adopted by the Supreme Court. In one foundational case in this context, Katz v. United States in 1967, the court held that a person engaged in illegal gambling over a telephone line had a reasonable expectation of privacy in a phone booth in a public place, on grounds that what was invaded by government intrusion without a warrant was "not the intruding eye" but "the uninvited ear."25 The Supreme Court has found that police flying a helicopter 400 feet above a person's property did not violate that person's expectation of privacy to the point that a warrant was required,26 nor did flying 1,000 feet over one's property in a fixed-wing aircraft, as long as the property was viewed with the naked eye.27 But police using heat-sensing technology from across the street, such as a thermal imaging device aimed at detecting potential marijuana growing, were ruled to have violated the property owner's reasonable expectation of privacy, particularly when the technology was not generally available for public use. Even when the technology used is "relatively crude," Justice Antonin Scalia wrote for the majority in 2001, "the rule we adopt must take account of more sophisticated systems that are already in use or in development."28

In the United States, warrants are now required for the government to stick a needle in your arm for a blood-alcohol test to be used as evidence in a criminal proceeding;29 for allowing drug-sniffing dogs to walk onto the porch of potential drug suspects;30 or even to place a Global Positioning System tracker on a car that police suspect is tied to drug activity.31

In one of the most critical digital privacy cases in recent years, the Supreme Court in 2014 recognized a person's reasonable expectation of privacy in his or her mobile phone. In the cases the court was handling, criminal defendants had been arrested, and incident to their arrest, police searched the contents of their phones without a warrant. On the phones, they found photographs, contact lists, and other digital information that could serve as evidence of the crimes at hand as well as others. Chief Justice Roberts noted that our phones "are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy," something unheard of only a decade before the case was decided.32 While the government argued that defendants might be able to destroy evidence by remote-wiping the phone or otherwise locking it down through encryption, the court found that the material contained on phones was far too personal and deep to be subject to searches without a warrant. Roberts wrote:
many of these devices are in fact minicomputers that also happen to have the capacity to be used as a telephone. They could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers.33

While the warrant requirement may make this information harder for law enforcement agents to access, and may even make it easier for criminals to communicate with each other, the court said, "Privacy comes at a cost."34 Indeed, shortly after the decision, then-FBI director James Comey expressed concerns about criminals using encryption and other means to cover their tracks, a phenomenon he called "going dark." Speaking about the challenges of investigating and preventing crime when people can use technology to obscure themselves and their activities, Comey said, "We have the legal authority to intercept and access communications pursuant to a court order, but we often lack the technical ability to do so."35 Comey would testify before the House Judiciary Committee that, for example, the FBI was unable to break encryption and access information on the iPhone of one of the terrorists who committed the mass shooting in San Bernardino in 2015. While this testimony turned out to be false – the FBI had paid a private contractor a reported $1 million to break the encryption36 – the Justice Department nevertheless sought a federal court ruling ordering Apple to give the government a backdoor into a locked iPhone. Apple refused, but before a court could rule on whether the government could compel the company to unlock the phone, the Justice Department withdrew its request.37

Courts have continued to recognize privacy interests in other data as well. In 2018, the Supreme Court said that law enforcement officers must have a warrant to get access to cell phone tower data that could provide location information about people, recognizing a reasonable expectation of privacy in data gathered and held by third parties – in this case, cell phone companies – that would prevent the government from accessing it easily. Citing the aforementioned Katz decision, Chief Justice Roberts reiterated that one "does not surrender all Fourth Amendment protection by venturing into the public sphere," but rather retains some expectation of privacy, certainly more than cell phone tower tracking would allow, which he said permitted the government "near perfect surveillance, as if it had attached an ankle monitor to the phone's user."38

But just because law enforcement officers and other government agents cannot invade one's reasonable expectation of privacy without a warrant or without violating other state or federal privacy laws does not mean that the law in general protects those zones of privacy. Citizens have plenty of areas they may consider private but that, nevertheless, American jurisprudence does not recognize as granting an enforceable right of privacy. The privacy torts identified earlier have not easily been adapted to a modern society where people expect cameras and other recording devices to be ever present, and where we can reasonably expect
employers and others to be watching our every move. For example, the Supreme Court had an opportunity to weigh in on a police officer's expectation of privacy in his department-issued pager, on which he had transmitted a number of improper text messages that led to his firing, but the court found that, as a government employee issued the pager for work purposes, any expectation of privacy he had was not "reasonable," thus allowing his firing to stand.39

Additionally, the First Amendment has proven to be a difficult hurdle for many of those torts to overcome, especially in matters of public concern. For example, the Supreme Court declined to allow a person to sue a radio station that aired a telephone conversation that was illegally recorded (in violation of the Electronic Communications Privacy Act) and leaked to the broadcaster, asserting the free speech interests in permitting truthful communications about matters of high public interest, in this case, tense negotiations between a school board and a public school teacher union.40 To be sure, the First Amendment does not provide an absolute shield against privacy harms. Even high-interest public figures have some right of privacy in their homes for extremely personal matters, as Terry Bollea, also known as professional wrestler Hulk Hogan, proved when he won a $140 million judgment in 2016 against the online news outfit Gawker, which had posted a recording of his sex tape and steadfastly refused to take it down after Bollea claimed harm to his personal, private, out-of-character life.41

In Europe, notions of personal privacy have extended beyond where they might reach in the United States. While the U.S. Supreme Court has recognized the value of the notion of "practical obscurity" in one's past, at least in making it more difficult for citizens to acquire criminal records warehoused by the federal government under the Freedom of Information Act,42 that notion has not extended to an affirmative right to prevent one's negative past deeds from being easily catalogued and linked to by Web search engines. In 2014, the Court of Justice of the European Union recognized what has come to be known as the "right to be forgotten" in the Google Spain SL v. Agencia Espanola de Proteccion de Datos case.43 The doctrine allows people to seek court orders to remove links to information about themselves from Internet search engine results, in a way that would likely not be permitted under U.S. law because the First Amendment prevents courts from issuing such takedown orders.44

As the digital world has shifted the way we manage and understand our expectations of the "right to be let alone," legal scholars have begun to untangle the web of interconnected concepts and principles embedded in the law of privacy. In this chapter, I draw most heavily from the work of two influential privacy scholars who focus on privacy harms: Helen Nissenbaum's examination of "contextual integrity," which she outlines in her book Privacy in Context, and Daniel Solove's "taxonomy of privacy," which he created in his book Understanding Privacy.


Nissenbaum recognizes that standards of privacy have not necessarily changed as technology has advanced; rather, the kinds of threats to those standards have changed, leading to schisms between people's expectations and their experiences. As such, privacy harms are caused when the advancement of technology clashes with people's "context-relative informational norms."45 For instance, Nissenbaum found the development of Google Street View, a program in which cameras mounted on vehicles capture images from public streets that are then uploaded into an easily accessible and searchable Google Maps program, to be a violation of contextual integrity because the images were personally identifiable, provided information about people's whereabouts, and were out of the control of the people they concerned.46 This is as much about the social and philosophical value of privacy, which may not entirely line up with the advance of legal norms of privacy; for example, a court has ruled that Google Street View does not trigger liability for privacy torts merely by virtue of existing and recording and posting information that a person may not have consented to.47 But the disconnect between our beliefs about what should be private and what the law recognizes or permits presents areas for perhaps rethinking those legal privacy norms.

Context is also important to Solove, who focused on the disruptions to human activities caused when notions of privacy are improperly protected. These include, in his taxonomy, "harmful activities" in areas of (1) information collection, such as surveillance; (2) information processing, including identification and aggregation; (3) information dissemination, including disclosure, breaches of confidentiality, and even blackmail; and (4) invasion, such as by technological intrusions. Information collection, such as government wiretapping or closed-circuit television (CCTV) systems, may be problematic to individuals and to society when constant monitoring causes general anxiety or fear of embarrassment, discourages participation in groups, or otherwise leads to self-censorship. Information dissemination is more concerned with the release of gathered personal information or data, through breaches of confidentiality or other inappropriate disclosures. And invasion is about preventing unwanted technological or other intrusions into personal or private spaces.48

Individuals have struggled to find ways to protect their privacy from intrusion by other citizens, corporations, and government. Likewise, legislative and judicial responses often seem to lag behind the advance of technology. We have seen a core of legal principles develop regarding privacy rights and obligations since the advent of potentially privacy-invading technologies more than 150 years ago, such as the telegraph, which allowed instantaneous communication with the possibility of being intercepted, and the photograph, which allowed image capture in a way impossible before. But how might those adapt to a current environment in which the most intimate information about people – their DNA, the shape of their face, their location, their finances, their personal communications with one another via text message, even conversations they do not expect to be heard by others – is available for harvesting, archiving, and distribution well beyond
their context and understanding, in a way we would never consciously approve of if given the chance? How might societies or systems advance, or devolve, to challenge the very notions of private spaces that cannot be intruded upon by governments or other citizens? What might be some different ways of thinking about privacy to protect our core interests from intrusions that are not here yet, at least not in full, but are likely on the horizon? Science fiction can help us approach those problems.

Privacy in Private – Invasive Surveillance

The genre of science fiction traces its roots to Mary Shelley's Frankenstein, published in 1818,49 but the modern, literary, dystopian science fiction novel seems to start with George Orwell's Nineteen Eighty-Four.50 As science fiction scholars Keith Booker and Anne-Marie Thomas noted, Orwell advanced dystopian science fiction in its portrayal of futuristic London, in which "certain mechanical applications of technology lend themselves directly to political oppression, even while science itself remains a potentially liberating realm of free thought."51 In Nineteen Eighty-Four, the "telescreen" allows the government to oversee nearly anything people do in their homes:

Any sound that Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork.52

The telescreens are also omnipresent in public places such as Victory Square,53 and the chief characters in the novel – Winston and Julia – are constantly looking for places where they can have some privacy,54 only to be betrayed when the private room they believed they had turns out to have a hidden telescreen.55 The book Winston and Julia read, The Theory and Practice of Oligarchical Collectivism, reports that "private life came to an end" with the advent of television and the "technical advance which made it possible to receive and transmit simultaneously on the same instrument."56 As a result:

Every citizen, or at least every citizen important enough to be worth watching, could be kept for twenty-four hours a day under the eyes of the police and in the sound of official propaganda, with all other channels of communication closed. The possibility of enforcing not only complete obedience to the will of the State, but complete uniformity of opinion on all subjects, now existed for the first time.57

Nineteen Eighty-Four and its progeny depict a chilling future of constant surveillance, presented at two levels: the common theme of perpetual surveillance conducted by
the state, and more recent depictions of individuals with wearable or implantable devices and their effects on our expectations of privacy.

The subject did not begin with Orwell. The English philosopher Jeremy Bentham, influenced by ideas from his brother Samuel, developed a system for constant surveillance of prisoners in his late 18th-century works regarding the "Panopticon," or "all-seeing" prison, featuring a central watchman in a tower able to see all inmates at all times. The inmates do not know whether they are being observed at any given moment, but they know they are capable of being observed at any time, creating "the sentiment of an invisible omnipresence," thus shaping their behavior in a way Bentham found ideal for meeting a number of prison goals.58 The ends he sought were primarily for the state to maintain order in a problematic population. Besides deterring other criminal behavior, Bentham believed that prisons built on the principle of constant surveillance would see fewer prison offenses, improved health among prisoners, security against fire, reduced escape attempts, better success at reforming prisoners, and even better subordination of prison employees to management.59 The prison was never built, but the ideas contained in Bentham's surveillance state have lived on for centuries.

Orwell's particular near-future, science fiction vision of walls and screens that provide the state a constant tool for watching and listening to citizens, rooted in plausible, foreseeable technology available when he was writing the book in the 1940s, has become a staple of future dystopias. The graphic novel V for Vendetta, originally published in comic book form from 1982 to 1985, features a Great Britain in the very near future (beginning in 1997) that has become a totalitarian state, with cameras – in every public place and even inside homes – transmitting the actions of citizens to the central security authorities, including surveillance agencies called "the Eye" and "the Ear" to monitor against potential dissenters, and secret police called "the Finger" to punish them.60 The regime is taken down by V, who manages to dodge the surveillance and shut down the cameras for a brief period, letting the people know after taking over the official state broadcast:

Her majesty's government is pleased to return the rights of secrecy, and of privacy to you, its loyal subjects. For three days, your movements will not be watched. . . . Your conversations will not be listened to. . . . And "do what thou wilt" shall be the whole of the law.61

V ultimately assassinates "the Leader," and, with the help of an associate, blows up 10 Downing Street.

In Scott Westerfeld's Uglies trilogy, set in a distant future after the collapse of the oil-based "rusty" civilization of the present, teenagers await mandatory plastic surgery at age 16 that makes them all physically beautiful but mentally compliant.62 The state constantly monitors its citizens from the time they are children until adulthood. The "uglies," those who haven't had surgery yet, live in dorms

70 Privacy in the Perpetual Surveillance State

with walls that watch their every move, and they must wear “interface rings” that track their movements outside the dorms.63 When a group of rebels breaks out and founds a colony in the woods – called “the Smoke” – the state seeks them out through a tracking device carried by Tally, the story’s narrator. Upon becoming “pretties,” the teenagers remain perpetually watched, both by the walls and by their rings; when the pretties are caught wandering around without their rings, they are fitted with nearly indestructible “interface cuffs” that can track their movements and listen to their conversations – an important plot point as Tally and her friends plan to escape and return to the “New Smoke” in the second installment of the series.64

The surveillance methods are sometimes not, strictly speaking, technological. In Suzanne Collins’ Hunger Games trilogy, the powerful Capitol keeps tabs on rebels through “genetically altered animals,” or “muttations,” that can spy on them. The “jabberjay” has “the ability to memorize and repeat whole human conversations,” and after listening in on a conversation, “they’d fly back to centers to be recorded.”65 Another creature engineered by the oppressive government, the “tracker jacker,” hunts down people who disturb them or their nests, which are placed around rebel areas, where they can attack enemies of the state.66

In the near future, many of the surveillance methods are recognizable current technology with just minor advances that could further threaten privacy. Cory Doctorow has written two novels set in the near future: Little Brother in 2008 and a sequel, Homeland, in 2013. In Little Brother,67 a not-so-subtle reference to Orwell’s Nineteen Eighty-Four, a terrorist attack on the Bay Bridge in San Francisco leads to a Department of Homeland Security crackdown on citizens.
A cat-and-mouse game between the government and M1k3y, the protagonist of the story, revolves around technological surveillance and tracking devices and the teenager’s ability to thwart them through hardware and software hacks. The technology itself is rooted in the present, even if the application is more futuristic. For example, Radio Frequency Identification (RFID or “arphid”) tags start out tracking students just for library purposes, but use of the tech expands after the terrorist attacks: arphids are embedded in public transportation cards, allowing Homeland Security to track the movements of any citizen using the Bay Area Rapid Transit system;68 cameras photograph license plates at stop signs and red lights, which “logged the time and your ID number, building an ever-more perfect picture of who went where, when, in a database;”69 and a vote by the San Francisco Unified School District even allowed “closed circuit cameras in every classroom and corridor,” on the theory that parents would volunteer to give up their children’s privacy rights to protect them from terrorism.70 The narrative itself unfolds somewhat similarly to that of Pirate Cinema, mentioned in the previous chapter – a ragtag group of teenagers thwarts oppressive new state policies through smart use of technology, organizing in private on the Web and leading to widespread citizen protests that ultimately cause the policies to go away. Again, the technology is
plausible, allowing Doctorow to tell a story in a near-future dystopia in which privacy has been largely discarded in favor of a state claiming it needs such powers to protect citizens. It’s also illustrative of the kind of events that may help lead us down the path of breaking the cultural and legal taboos against invading privacy in private places. Undoubtedly, surveillance in our most intimate spaces – government spy cameras in the home, listening devices in the bedroom, constant monitoring of our mail and personal messages – is a bridge too far for western privacy law. Through a series of court decisions, at least in the United States, we are protected from government invasion into many of these spaces, whether from intrusive legislation or from police surveillance efforts to access them without a warrant, which may be issued only after probable cause that a crime has been committed has been established to a court. As detailed in the previous section, this area of privacy extends to our homes, our backyards, the curtilage around our houses where drug dogs might sniff, to some extent even the airspace above us when unexpected technology may intrude upon us, our cars, and even our cell phones. Similarly, U.S. jurisdictions currently allow some form of civil liability or criminal punishment for unauthorized recording in what we may think of as private places, such as restrooms or hotel rooms.

It’s a problem Robert Heinlein envisioned in Stranger in a Strange Land, when reporter Ben Caxton urges surveillance of the Man from Mars in his hospital room, asking Jill, his romantic interest and a nurse at the hospital, if she will put a “bug” on him:

The greatest boon to spies since the Mickey Finn. A microminiaturized recorder. The wire is spring driven so it can’t be spotted by a snooper circuit. The insides are packed in plastic – you could drop it out of a cab.
The power is about as much radioactivity as in a watch dial, but shielded.71

The technology envisioned in 1961 turned out to be even smaller, easier to use, more accessible, and more pervasive just half a century later. Still, personal conversations of the kind Caxton wanted to snoop upon have some degree of protection under privacy law. For example, in the 2008 film The Dark Knight, Batman has his engineer Lucius Fox develop a way to use the microphones on people’s cell phones to listen to their conversations and thus track down the Joker. Fox reluctantly agrees to deploy the technology, even as he quits his job on ethical grounds; in the real world, the tech would certainly be illegal under wiretapping laws.72 But beyond federal wiretapping laws that bar third parties from listening in on phone conversations, several states require all parties to a conversation to consent to its recording. California, for example, makes it a crime to eavesdrop on a “confidential communication” through any technological or recording device, punishable by a fine of up to $2,500, though the law “excludes communication made in a public gathering.”73 Florida has a similar law, making such recording a third-degree felony
without a specific exemption for recordings made in public places.74 When former presidential candidate Mitt Romney was recorded making his infamous “47 percent” statement to a group of supporters in 2012, the recording – made secretly by a person attending the event, which did not allow journalists – likely violated the Florida recording law.75

If privacy means anything, it at least means such personal spaces – the home, changing rooms, even some private gatherings – should have limits on outside inspection and intrusion. But what would it take to tear down those barriers? One scenario – the one Orwell feared in Nineteen Eighty-Four – is the advance of totalitarianism, in which there is no notion of individual privacy and the state interest in maintaining power and minimizing dissent outweighs any other notion of personal freedom. The government is an “oppressive totalitarian system devoted primarily to its own preservation rather than to enriching the lives of its citizens,”76 and it operates with little concern for the individual privacy protections embedded in the common law or, in the case of the United States, the Fourth Amendment to the Constitution. There’s no detailed backstory that triggers the surveillance state in Nineteen Eighty-Four, but the origins are a bit more explicit in V for Vendetta, set in a future Britain where nuclear attacks have devastated North America and Europe, leading to the rise of Norsefire, a Christian fascist party that takes control and gains the ability to watch every action in every home, creating a Panopticon-like surveillance society. Jews, Muslims, homosexuals, and other minority groups are rooted out and placed in concentration camps, where they are experimented on – including V, who gains superhuman powers as a result of his torment.
The chill of constant surveillance takes its toll, evident in one poignant moment of protest in which, after V attacks the network, a girl realizes that the public surveillance cameras are off. Free to say a few bad words, she utters the profanity “Bollocks!” several times and writes it out on the ground in chalk, misspelled from lack of use (“Boluc-”) because such language has been punishable in the past.77

Devastating events like global nuclear war explain one path to privacy-destroying totalitarianism, but perhaps a more plausible and dangerous dystopia is the near-future surveillance creep depicted in Little Brother, where the invasions are incremental and justified by authorities and the courts as necessary to battle terrorism. As M1k3y describes, the RFID technology installed in school books for library tracking purposes was expanded on these grounds: “[T]he courts wouldn’t let the schools track us with arphids, but they could track library books, and use the school records to tell them who was likely to be carrying which library book.”78 This then became a de facto legal way to track potentially troublesome students.

This kind of slow rollback of fundamental privacy rights is not entirely new or unforeseen. Indeed, in 1886, the Supreme Court handled a case in which the government tried to compel a company to turn over papers that the owners believed would have been tantamount to self-incriminating statements.
Technically, the government argued, this seizure of private papers did not violate the Fourth or Fifth Amendments because the defendants were not being required to testify against themselves. But the court said the actions intruded upon “the sanctity of a man’s home and the privacies of life.”79 Justice Bradley, writing for the majority in Boyd v. United States, warned against the “stealthy encroachments” by the government against citizen rights:

It may be that it is the obnoxious thing in its mildest and least repulsive form; but illegitimate and unconstitutional practices get their first footing in that way, namely, by silent approaches and slight deviations from legal modes of procedure. This can only be obviated by adhering to the rule that constitutional provisions for the security of person and property should be liberally construed. A close and literal construction deprives them of half their efficacy, and leads to gradual depreciation of the right, as if it consisted more in sound than in substance.80

Yet while the series of court decisions in the century and more since Boyd may have protected our most intimate physical and online spaces from undue government surveillance, it has proven difficult for citizens to punish breaches in any meaningful way in court. In some ways, U.S. courts have enabled and even emboldened domestic spying by turning away, on standing grounds, citizen challenges to the Foreign Intelligence Surveillance Act81 and to the National Security Agency’s Terrorist Surveillance Program.82

Where surveillance creep may be more plausible is when it becomes acceptable within voluntary relationships such as employment, particularly when that surveillance is permitted or endorsed through legal structures. This can be seen in several science fiction works.
For example, in Ready Player One, Wade Watts allows himself to be captured and detained in what appears to be a civil arrest by Innovative Online Industries (IOI) – the evil corporation he is fighting to win control of the OASIS, the virtual world – by deliberately not paying his bills for high-speed online access from his Columbus, Ohio, apartment. He essentially becomes an indentured servant, working for IOI as a customer service representative and living in IOI quarters, where he and other indents are constantly tracked through an “observation and communication tag” (OCT), also called “eargear.” “The eargear contained a tiny com-link that allowed the main IOI Human Resources computer to make announcements and issue commands directly into my ear,” Wade says.

It also contained a tiny forward-looking camera that let IOI supervisors see whatever was directly in front of me. Surveillance cameras were mounted in every room in the IOI complex, but that apparently wasn’t enough. They also had to mount a camera to the side of every indent’s head.83


In the future of Malka Older’s Centenal Cycle, discussed in more detail later in this chapter, surveillance cameras are everywhere, but not everyone has yet opted in to personal body cameras. Roz, an agent for the global data, news, and election-sponsoring behemoth called Information, remarks on this while taking part in an investigation into the mysterious assassination of a centenal governor, in an area where many of the surveillance cameras appear never to have been installed or to have otherwise been ducked by the assassins. Going over her personal camera footage from the day of the event, Roz notes, “Most people don’t record their every interaction – there are fraught legal issues as well as storage concerns – but it is standard procedure for SVAT (elite response agent) teams in the field. For them, the legal issues run the other way: they want protection against accusations of Information manipulation and a digital witness in case of violence.”84

Even as far back as Stranger in a Strange Land, we can see people unwittingly allowing themselves to be tracked everywhere. The journalist Ben Caxton is usually pretty savvy about covering his tracks while trying to uncover the government’s secret activities surrounding its holding of the Man from Mars, but he lets his guard slip when riding in an autopiloted hover taxi and talking on the phone; he has been tracked, and the taxi has been commandeered to bring him into custody.

Caxton realized bitterly that he had let himself be trapped by a means no hoodlum would fall for; his call had been traced, his cab identified, its robot pilot placed under orders of an over-riding police frequency—and the cab being used to fetch him in, privately and with no fuss.85

He tries to call his lawyer, to no avail. Today, we are already faced with choices about how we are tracked, even voluntarily.
If you own a GPS-enabled watch or a health-tracking device like a Fitbit, you are allowing data to be gathered that could be used to track you in ways you perhaps did not foresee. If you go to the Walt Disney World resort in Florida, you can get your park passes on a wristband with embedded RFID technology that is scanned to let you into the park, or as you queue up for rides you’ve reserved using the online system, or that can even transmit your personal information so a talking Disney princess can call you by name when she greets you.86 These are not nefarious uses of voluntary tracking tech, but that does not mean they could not be put to purposes beyond what one intends when signing up. One major challenge arises when these systems we’ve opted into slip outside of our control, and privacy and data protection laws have not come up with ways to allow us to recover that control. This dynamic can already be seen in the area of public surveillance cameras, which are almost ever-present and which arose more quickly than traditional privacy law could adapt in the late
20th century. In the next section, these matters of the shrinking sphere of privacy in public are examined, particularly in light of even more advanced technology than science fiction authors have imagined.

Privacy in Public When Cameras Are Everywhere

In the Centenal Cycle of books – Infomocracy, Null States, and State Tectonics – author Malka Older portrays a future world order of microdemocracies made up of blocks of 100,000 people (or “centenals”) that largely erases modern borders and allows more local rule. Managing all of this is an organization called Information, described as a “twenty-first century global oversight and transparency agency” that is somewhat of a cross between Google, Facebook, and the United Nations.87 Indeed, many of the original employees and agents of Information worked for one of those entities before the world order shifted to microdemocracy; Information architect Taskeen Khan worked for all three before joining the team that built the infrastructure supporting the massive, worldwide organization.88

Much of the infrastructure of this world order is built on surveillance. There are cameras everywhere, and one of the requirements for a group of people opting into the microdemocratic system – bringing along the economic and social benefits of working with the rest of the international order – is that the centenals will install the cameras Information requires to do its job. But unlike the wide public surveillance of Nineteen Eighty-Four or V for Vendetta, the ever-presence of cameras tied to Information is not explicitly portrayed as dystopian. At times it is perhaps troublesome, and the characters, as well as some political parties in the books, note this. Older has said that she was not aiming at utopia or dystopia with this system, but rather was forecasting a plausible future and system of government imaginable from the technology we already have today.89 Information, in fact, arose in response to manipulation and disinformation tied to the constant surveillance of the public sphere already in place.
“Competing data sources tore down any idea of truth; people voted based on falsehoods,” says Nejime, a leader in the Information bureaucracy. “We didn’t invent surveillance: there were plenty of feeds and search trackers, but they were fragmented and firewalled by governments and private companies. The surveillance was used to propagate falsehoods.”90

The chief distinction between the constant camera surveillance in Older’s future and that of Orwell is that in the Centenal books, there is still some expectation of privacy in private places, such as the home or the office, as well as in personal data and communications. But in public, the system reflects a potential future advanced from where we are now, in what law professor Seth Kreimer has termed “pervasive image capture” – a network of government as well as privately operated video cameras in public places able to watch and store nearly every movement of people once they leave their homes.91


When the government records people in public places, it typically does so without violating any “reasonable expectation of privacy,” so long as the recording does not use technological means unavailable to the general public or otherwise unexpected by citizens, who are protected by the Fourth Amendment from searches and seizures by government without a warrant, as detailed in the second section of this chapter. When private individuals make recordings in public places, however, the extent to which privacy law permits this is less clear. There is some tension between what the government may criminalize, what the First Amendment allows, and what civil liability private videographers may assume when recording in public. The U.S. Supreme Court has yet to recognize a clear right under the First Amendment to make audio or video recordings in public places, and lower courts have not provided consistent, clear guidance on the rules for recording in public. Kreimer found that “a solid line of courts has recognized that image capture can claim protection under the First Amendment,” pointing to four U.S. Circuit Courts of Appeals that have recognized such rights.92

Other circuits have been less supportive of rights to record. For example, when a man was arrested for carrying a tape recorder to a Ku Klux Klan rally in Lafayette, Indiana, he argued to police that his recording of a public event should be protected under the First Amendment; he was told that only journalists were allowed to bring recording devices, and he was arrested when he continued toward the rally area. The city argued that police believed he might use the recording device as a weapon “to injure attendees” by throwing it; meanwhile, members of the press were allowed “to take in pens, paper, and tape recorders (as) a reasonable accommodation of their First Amendment rights.”93 The U.S.
Court of Appeals for the Seventh Circuit sided with the city in this case, opining that “there is nothing in the Constitution which guarantees the right to record a public event.”94

As the reach of digital technology has advanced, however, state and federal appeals courts have taken a slightly more expansive view of rights to record under the First Amendment in the context of eavesdropping and wiretapping laws. In A.C.L.U. v. Alvarez, the Seventh Circuit found the Illinois eavesdropping law deficient upon a challenge by the American Civil Liberties Union, which had created a public program encouraging citizens to record the activities of police officers.95 The court stood by its decision in Potts v. City of Lafayette, though it noted that any “right to gather information may be limited under certain circumstances” – which in Potts involved the potential throwing of a tape recorder as a weapon.96 Upon remand, the district court agreed, striking down the law narrowly as applied to the ACLU’s police recording program.97 The Seventh Circuit appears to have drawn the line between Potts and A.C.L.U. at the distinction between making a recording – which it said was clearly protected under the First Amendment in the A.C.L.U. case – and
having the right to do so in a public place, which it said could be curtailed as circumstances required:

The act of making an audio or audiovisual recording is necessarily included within the First Amendment’s guarantee of speech and press rights as a corollary of the right to disseminate the resulting recording. The right to publish or broadcast an audio or audiovisual recording would be insecure, or largely ineffective, if the antecedent act of making the recording is wholly unprotected . . . By way of a simple analogy, banning photography or note-taking at a public event would raise serious First Amendment concerns; a law of that sort would obviously affect the right to publish the resulting photograph or disseminate a report derived from the notes. The same is true of a ban on audio and audiovisual recording.98

The U.S. Court of Appeals for the First Circuit similarly found First Amendment protection for recording police activity in public places. A man was arrested under the Massachusetts wiretapping law for using his cell phone to record Boston police arresting another person; after the charges were dismissed, he filed a lawsuit against the city for violating his civil rights.
The First Circuit agreed, emphasizing the importance of protecting citizens’ right to record public officials: “The filming of government officials engaged in their duties in a public place, including police officers performing their responsibilities, fits comfortably within (First Amendment) principles.”99 In 2015, a federal court for the Southern District of New York found that this right to record police was “clearly established” by other circuit courts of appeals, to the point that police could expect that they would be recorded and thus could be liable for violating a journalist’s First Amendment rights.100

Rather than recognizing a broad First Amendment right to record, though, these courts appear to have limited their holdings to recording of a certain type or in a certain place. The Southern District of New York found that even this right to record police in public came with limits:

(f)or instance, it may not apply in particularly dangerous situations, if the recording interferes with the police activity, if it is surreptitious, if it is done by the subject of the police activity, or if the police activity is part of an undercover investigation.101

What this mostly tells us is that it is difficult for states or other government bodies to outlaw photography or video recording in public places, at least when it comes to recording public officials such as law enforcement officers. When jurisdictions have had success passing such laws, they have been extremely targeted – bans on “upskirt” photography, for example – and even those have had problems surviving First Amendment scrutiny. In 2014, the Supreme Judicial Court of Massachusetts
struck down the state’s “Peeping Tom” law as applied to upskirt photographs, on grounds that the statutory phrase “partially nude” did not extend to people who were wearing clothes in public, even when photographed in a way that could look up women’s skirts.102 Just two days after the court’s ruling, the Massachusetts legislature responded with a more detailed law allowing punishment of one who secretly:

photographs, videotapes or electronically surveils . . . the sexual or other intimate parts of a person under or around the person’s clothing to view or attempt to view the person’s sexual or other intimate parts when a reasonable person would believe that the person’s sexual or other intimate parts would not be visible to the public and without the person’s knowledge and consent.103

But these are exceptions to a general rule that allows a robust right to take photographs or record video in public under the First Amendment. The “improper photography” law in Texas serves as an example. The current version of the law criminalizes intrusive photography in private areas such as “dressing rooms, locker rooms, and swimwear changing areas.”104 But the original version of the law was broader, making it illegal for a person to “photograph, videotape, or otherwise electronically record or broadcast an image of another person without consent” and “with intent to arouse or gratify the sexual desire of any person.” In 2014, the Texas Court of Criminal Appeals struck down this portion of the law after it was challenged by a man who had been charged with 26 counts of improper photography, mostly involving young girls in swimsuits at the water park portion of SeaWorld in San Antonio.
The court described the inherent expressive nature of creating photographs and visual recordings and held that the statute was unconstitutionally overbroad, applying essentially “to any non-consensual photograph, occurring anywhere” so long as the intent to gratify sexual desire was present. It went on to note that the law “could easily be applied to an entertainment reporter who takes a photograph of an attractive celebrity on a public street.”105 This decision – broadly protecting a person’s right to record video or take photographs in public places under the First Amendment – is just one of a line of court decisions providing substantial support to people who would use commonly accessible surveillance tools in public.

While recent cases seem to favor a right to record in public places, there is still no definitive precedent people can turn to in more general cases not involving public officials or police officers. Still, the advance of First Amendment-based protection for photography and video recording in the aforementioned cases makes any new regulation aimed at activity in public places likely to face legitimate challenges in court. Further, such restrictions would conflict with the newsgathering and public information benefits of surveillance technologies,
which enrich citizens in a democracy and provide a valuable check on state power as journalism becomes more a product of citizens than of institutional and corporate sources. But just because American jurisdictions may have a difficult time outlawing the use of video cameras and other surveillance devices in public places does not necessarily mean that people have a remedy in court when they believe their privacy has been inappropriately intruded upon.

We find ourselves in a world already surrounded by cameras in most public places, a technological advance that has gone largely unchecked since the 1990s. CCTV systems, such as security cameras set up by private businesses and law enforcement in major cities, are already in place and have helped investigators identify suspects in situations such as the Boston Marathon bombing in 2013106 and the London subway terrorist attacks in 2005.107 We are also used to tracking devices placed on us, even voluntarily; privacy scholar Brett Frischmann has detailed how Fitbits were required at one university, and how “activity watches” given to his first grader could record activities and trace the wearer wherever they go.108

But suppose that these cameras – or other recording or tracking devices carried by people walking down the street, such as a smartphone using Facebook Live or another livestreaming app – were to capture an intimate or personal moment that a person would not reasonably expect to be recorded, much less broadcast immediately to the rest of the world? Further, imagine that the recording devices went beyond the ubiquitous smartphones of today and were even more subtly hidden, worn on or implanted into the private citizen’s body. This is another area science fiction authors have addressed, both by envisioning versions of the technology and by demonstrating how it may affect the way people behave, and their possible responsibilities to one another, when such devices are used in public places.

Wearable, Implantable, and Biometric Technologies

The final episode of the first season of Black Mirror introduces us to the “Grain,” a small device embedded near the temple that records everything you see through your eyes, with the ability to catalog it, retrieve and rewatch moments in your mind, or even project those moments onto a television screen. “The Entire History of You,” which originally aired in the United Kingdom in 2011, remains one of the most chilling and well-received episodes of the series. In one scene, Liam joins his wife and a friend, and they coax him to project his memory of the borderline disastrous job interview he has just returned from, immediately triggering discomfort in the viewer as his wife and friend waver between laughing and pathetically trying to reassure him as they watch. But the tech keeps revealing deeper and deeper layers of the problems that would arise with unlimited recording, storage, and recall abilities, especially once nearly everyone in society has them.109 As television and film critic Emily Yoshida wrote after the episode aired to American audiences in 2013, just as
Google Glass was being slowly rolled out to the public, the implications were chilling: “everyone you know and interact with essentially becomes a surveillance camera. All your worst moments can be found and bookmarked by anyone who was there to witness them, and invariably passed on to those who were not.”110 The episode has inspired discussions among scholars about the boundaries such technologies push for us as humans, as well as the policies they may implicate.111 Casey Fiesler, a professor of information science, has used the episode to help students consider “speculative regulation,” especially as they weigh the privacy implications of these kinds of technological advances.112

In “The Entire History of You,” Grains are shown being used in public, at a party in someone else’s home, and even in intimate moments in the bedroom. The widespread use of implantable computing devices would drastically alter our expectations of how we interact with one another, with the potential to shrink notions of zones of privacy.113

Science fiction authors have envisioned a future in which wearable and implantable computing are regular features of daily life. Neal Stephenson, for instance, includes these kinds of technologies in both The Diamond Age and Snow Crash. In the late-21st-century neo-Victorian era of The Diamond Age, Stephenson portrays glasses that, rather than correcting vision (no longer a problem in this era):

let you see things that weren’t there, such as ractives (interactive films). Although, when people used them for purposes other than entertainment, they used a fancier word: phenomenoscope. You could get a phantascopic system planted directly on your retinas, just as Bud’s sound system lived on his eardrums.

Bud, who has a firearms system implanted in his skull, tells of some of the frightening consequences of computing implanted on the eyes.
“(I)t was rumored that hackers for big media companies had figured out a way to get through the defenses that were built into such systems, and run junk advertisements in your peripheral vision . . . even when your eyes were closed.”114 Implantable computing also enables widespread, private surveillance. In the future privatized and balkanized America that Stephenson depicts in Snow Crash, government surveillance has been outsourced to individuals feeding the “Central Intelligence Corporation.” Known as “gargoyles,” these people “wear their computers on their bodies, broken up into separate modules that hang on the waist, on the back, on the headset.” The gargoyles “serve as human surveillance devices, recording everything that happens around them” and “uploading staggering quantities of useless information to the database, on the off chance that some of it will eventually be useful.” The protagonist of Snow Crash, conveniently named “Hiro Protagonist,” is identified by a gargoyle who is able to scan his retina from afar, instantly giving the gargoyle information about Hiro.115

Security guards have some of their surveillance and communication tools implanted, including antennas that have been “permanently grafted onto the base of [the] skull . . . by means of short screws that go into the bone, but do not pierce all the way through.” These are then connected to multiple computer microchips, with wires that penetrate the skull and pass “straight through to the brainstem,” branching and re-branching “into a network of invisibly tiny wires embedded in the brain tissue.”116 In the future, nearly everyone seems to have wearable computers, used for every aspect of daily life. In Vernor Vinge’s Rainbows End, those who don’t “wear” watch society pass them by. In the book, a character who was once Dean of Arts and Sciences at the University of California San Diego says he thought wearable computers were a “demeaning fad,” but admits that he was wrong and that he “paid a heavy price for that.”117 He and other retired professors have returned to high school to learn how to use modern computing, which has the benefits of allowing them to participate in the virtual worlds around them and perhaps get new jobs, but also exposes them to surveillance by the Department of Homeland Security, whose “logic was deeply embedded in all hardware. ‘See All, Know All’ was their motto.”118 The wearables are made up of a “net of microprocessors and lasers” as well as contact lenses that, somewhat like the implantables in The Diamond Age, project images directly onto the eyes of wearers. Users of the wearables are able “to type a query on a phantom keyboard and view the Google response floating in the air” in front of them.119 David Brin envisioned wearable computing in an age when it has become mandatory. 
In the short story “Insistence of Vision,” Brin imagines the consequences when every citizen wears a Google Glass-type device called “digispectacles” with “augment lenses.”120 The glasses offer “Godlike omniscience.”121 That is, except for the story’s first-person narrator, Sigismund, who has been convicted of an unspecified non-violent crime, blinded by the state as punishment, and given “special specs” that deliver only certain images of the world around him to the brain, “So that they can’t see anyone who chooses not to let a criminal see them.”122 After his sentence is complete, Sigismund gets new specs that supply “Godlike tsunamis” of information about the world and people around him: “Nametags under every face that passes by; and more if I simply blink and ask for it. The basic right of any free citizen.”123 Omniscience can come in other forms as well. In “Arkangel,” an episode from the fourth season of Black Mirror directed by Jodie Foster, children can be implanted with a device that not only allows their parents to track them everywhere they go but also screens the outside world for violent or troubling images, which are obscured from their vision, and allows parents to see through their eyes. Their own field of vision can be hijacked by a third party with access to the tablet app synced with the Arkangel implant. After Sara, the daughter implanted with the device, shows developmental struggles as a teen, a counselor explains to her mother that the tech was experimental and never launched
nationwide. “It was banned in Europe. It’ll be pulled here too by the fall,” he tells her. When the mother sees her teenage daughter having sex and using drugs with an older boy, she confronts him with the video, threatening to show it to the police, telling him, “whatever she sees, I see. And I am watching you.”124 Depictions of wearable and implantable computing, in conjunction with concerns about perpetual surveillance by the state or other people, illustrate our concerns about the sanctity of privacy and our confusion about the parts of our lives that may be viewed, listened to, tracked, or enhanced by such technology. Privacy scholars have occasionally looked ahead to the possibility of such pervasive tools. For example, Rodney Smolla in 1999 noted the development of “cameras that can fit in a pair of eyeglasses” and observed that “the sheer power of modern technologies to penetrate what were once private spaces have dramatically degraded privacy.”125 Those technologies did not take long to advance and become widely accessible. When Google Glass was released in 2013, offering hands-free wearable computing that could silently take photos and video and upload them to the Web immediately, privacy scholar Woodrow Hartzog described it as “possibly the most significant threat to ‘privacy in public’ I’ve seen,” saying that to protect privacy in light of wearable computing of this type, “[W]e will need more than just effective laws and countermeasures. 
We will need to change our societal notions of what ‘privacy’ is.”126 And while Google Glass may have fizzled as a popular technology, its arrival ushered in an era of rapid development of wearable computing devices such as Snapchat’s Spectacles127 and a wide array of smartwatches.128 To illustrate, in Snow Crash, one only needs to be in a public place to have one’s retina scanned by a “gargoyle” feeding information to a private intelligence system, and in “Insistence of Vision,” anyone wearing “digi-specs” can find out a person’s name and background – the technology as depicted effectively eliminates any right of privacy in public places. Part of the reason for this, as Smolla envisioned, was that “strong First Amendment doctrines stand in the way of many of the most meaningful privacy reforms.”129 However, courts have been hesitant to recognize a strong First Amendment right to gather news in public places, and legislatures have been pushing to expand bans on private surveillance devices such as drones.130 This leaves a potentially confounding gap for users of wearable and implantable computing devices and the people around them – citizens may not be able to assert privacy violations in court, but as described in the previous section, users of the devices have limited protection from the state if they are punished for using them. When Solove developed his privacy taxonomy, there was a clear divide between information collection and information dissemination, which were typically distinct activities. Today, these activities may be done concurrently, on a large scale, by anyone with a smartphone. This immediacy, combined with widespread affordability and accessibility of wearable and implantable surveillance technologies, brings about the intersection of two privacy areas that previously
were considered to be distinct, and increases the potential for harm that cannot be undone. Context-relative informational norms are challenged when these areas of information collection and dissemination happen at the same time through the emergence of new technology that allows broader surveillance of the public sphere, both by the government and by citizens. Notions of privacy are even further challenged when the recording devices may be, for all practical purposes, obscured from view because they are embedded into the recorder’s clothes or implanted in their bodies. If there is any ground for a person bringing a lawsuit for invasion of privacy in public, it is in the tort known as “intrusion upon seclusion,” which provides a remedy to people who have their private lives harmed through technological means such as hidden cameras or recording devices without their knowledge or consent in a way that would be highly offensive to a reasonable person. The intrusion tort does not require publication, instead resting on the principle that the act of intrusion itself is harmful.131 But Prosser notes that the intrusion tort includes a caveat similar to that described by Warren and Brandeis: “On the public street or in any public place, the plaintiff has no right to be let alone, and it is no invasion of his privacy to do no more than follow him about.”132 It has not always been this way. 
Westin, for example, saw this as a kind of anonymity, occurring “when the individual is in public places or performing public acts” but in which he nevertheless should be free from monitoring and scrutiny: “He may be riding a subway, attending a ball game, or walking the streets; he is among people and knows that he is being observed; but unless he is a well-known celebrity, he does not expect to be personally identified and held to the full rules of behavior and role that would operate if he were known to those observing him.”133 At least one court in this era had an opportunity to consider the issue of constant, private surveillance. In 1970, consumer advocate Ralph Nader sued General Motors, alleging invasion of privacy in part because General Motors had kept him under surveillance for extended periods of time, including having its agents follow him into a bank to see how much money he was withdrawing. The Court of Appeals of New York suggested that “mere observation” of Nader in public places would not be actionable, though it also opined that a “person does not make public everything he does merely by being in a public place,” and thus, “under certain circumstances, surveillance may be so ‘overzealous’ as to render it actionable,” suggesting that following him into a bank may be enough to trigger liability.134 Nevertheless, as technology advanced into the digital age with widespread surveillance and monitoring in public places, these thoughts about privacy in public have not kept pace. During the rise of digital photography and the World Wide Web in the 1990s, legal scholars recognized the potential for increased
privacy harms and called for enhanced protection for individuals against intrusion of this sort. Andrew McClurg suggested that courts recognize “public intrusion” as a tort, noting that “(t)ort law clings stubbornly to the principle that privacy cannot be invaded in or from a public place”; courts should instead recognize harm caused by intrusion that is “highly offensive to a reasonable person,” while also considering the defendant’s motive in gathering the information and the extent to which it was disseminated.135 Professor Lyrissa Lidsky, also recognizing the advancement of surveillance technology and its increased use by journalists, proposed rejuvenating the tort of intrusion by making privacy in public possible while also recognizing a newsgathering privilege for matters of legitimate public interest. As she noted: “If the intrusion tort is to shield plaintiffs from prying, spying, and lying by the media, courts must interpret the tort more expansively. Courts must acknowledge that citizens are entitled to a modicum of privacy even in public places, and must modernize the intrusion tort to respond to the threat posed by high-tech surveillance methods.”136 These standards, however, have not been adopted by courts or legislatures, even as technology advanced to the capabilities of modern surveillance tools such as drones and livestreaming, much less the wearable and implantable computing technologies envisioned by science fiction authors. 
Smolla, lamenting the failure of privacy torts such as intrusion to provide consistent remedies to plaintiffs, noted that successful intrusion claims are now typically limited to situations of “unusually brazen insensitivity into a scene of grief, violence, or injury in which society is outraged by the distress caused to the victim or the victim’s family.”137 For example, the California Supreme Court recognized potential intrusion by a video camera operator present in a helicopter responding to an automobile accident; while the newsworthiness of the accident barred recovery on public disclosure of private facts grounds, the recording of conversations “in the interior of the rescue helicopter, which served as an ambulance” and conversations regarding the plaintiff’s medical care at the scene were both seen as worthy of receiving “a degree of privacy.”138 As such, a key question in any analysis under the intrusion tort is whether the behavior could be deemed “highly offensive to a reasonable person.” The analysis of the U.S. Court of Appeals for the Third Circuit regarding the Google Street View program is illustrative of the challenge of establishing offensiveness in an age of almost constant surveillance. When homeowners challenged the program on grounds of intrusion and other torts, the Third Circuit in 2010 dismissed the lawsuit, finding that “(n)o person of ordinary sensibilities would be shamed, humiliated, or have suffered mentally as a result of a vehicle entering into his or her ungated driveway and photographing the view from there.”139 Even information overheard in less
publicly open spheres such as workplaces may not provide a reasonable expectation of privacy if the person could expect to be seen or to have others overhear those conversations.140 Beyond the intrusion tort, the homeowners suing Google also claimed harm under the “publicity into private life” provision of Pennsylvania’s privacy law, a theory the court rejected also because of the lack of offensiveness of the conduct. This tort is more commonly known as “public disclosure of private facts,” providing a remedy for a person harmed by dissemination of facts that have no news value and are “offensive and objectionable to a reasonable man of ordinary sensibilities.”141 The tort is distinguishable from intrusion because it focuses on the disclosure of the private activity – or, as Solove described it, the act of “exposure”142 – rather than the collection of the private information in the first place. In the case of the helicopter response to an auto accident mentioned earlier, for example, the California Supreme Court found potential grounds for intrusion but dismissed the plaintiffs’ arguments on public disclosure grounds because the accident was in public and was newsworthy. If there is an avenue for recovery on public disclosure of private facts grounds in the context of secret or hidden recording by citizens against other individuals in public places, it would be in situations where the collection and distribution would be both highly offensive to a reasonable person and without any news value. Consider a situation in which a woman in a public place is photographed the instant that an updraft blows her skirt up in the air, and that photograph is then published in the local newspaper. 
These were the facts the Alabama Supreme Court had to consider in 1964, and the court found that the newspaper’s decision to publish the photograph was subject to liability on public disclosure grounds, noting: “To hold that one who is involuntarily and instantaneously enmeshed in an embarrassing pose forfeits her right of privacy merely because she happened at the moment to be a part of a public scene would be illogical, wrong, and unjust.”143 However, this holding seems quite narrow; decades later, the Alabama Supreme Court declined an opportunity to extend the publicity of private facts tort to publication of photographs of people attending greyhound races.144 The requirement that the facts be private makes this tort a difficult fit for people harmed by recording in public places, and the newsworthiness defense has presented plaintiffs with “colossal difficulties,” according to Smolla.145 Law professor Samantha Barbas blamed the breadth of the newsworthiness defense for making it “nearly impossible to win a public disclosure suit.”146 This does not mean that privacy rights could not be rejuvenated somehow. In Infomocracy, Older envisions a time in which, even though nearly everyone is capable of surveilling their surroundings, there is still some expectation that people should
not be recorded without their knowledge or permission. Domaine, a troublemaker and sometimes target of Information secret agent Mishima, works to undermine microdemocracy in several ways, in one case by recording “world undesirables . . . each identified with subtitles, each lauding the micro-democratic election system, which allows them to pursue their decidedly undemocratic ends.” Mishima responds that it doesn’t appear that the people were recorded with their permission. “As if that matters,” sneers Domaine.147 Whether such laws would be usable and enforceable in practice, as Domaine doubts in this exchange, is a major challenge that plaintiffs would face under either a theory of intrusion or public disclosure of private facts. Actually collecting damages from anyone, at least in the United States, would be very difficult. Under Section 230 of the federal Communications Decency Act, the interactive computer service providing the software that allowed collection and distribution of images would be immune from civil liability for tortious conduct of its users.148 As such, plaintiffs would likely be limited to pursuing damages from the individuals using wearable or implanted technologies to broadcast them in an offensive manner. 
Of course, that is just current law; the liability exemption for interactive computer services has been criticized by lawmakers for enabling and empowering bad behavior by large companies such as Google and Facebook, and Congress began to chip away at it by passing a revision in 2018 that took away parts of the shield claimed by online publishers who allowed their sites to be used for advertising services that exploited children and others engaged in unlawful sex trafficking.149 As such, under traditional approaches to the privacy torts, it is hard to conceive of a way in which constant capture of images and video, and even live broadcasting of them, by using wearable or implantable computing devices would lead to civil liability if used in public places. Just as live television or radio broadcasts from public places would not lead to liability for privacy violations on intrusion grounds, modern private surveillance tools would be unlikely to open a new avenue of tort recovery for those claiming harm unless the intrusion upon seclusion tort were to be revised or rejuvenated by legislatures or courts. This is certainly plausible. Tech and public policy researcher Adam Thierer argued that the emergence of connected wearable computing technologies such as watches, jewelry, and glasses, making the act of recording even less noticeable to those having their actions captured and disseminated, could very well cause the “tort of intrusion upon seclusion [to] evolve in response.”150 But it is also quite possible that any right to privacy in public may, at least under the modern understanding of American law, have already passed us by. 
Nissenbaum said that video surveillance in public may be a “lost cause” because it is “so commonplace now that objections are increasingly difficult to carry against the force of the reasonable expectation, against what I regard as the ‘tyranny of the normal.’”151 If there is an exception to this limitation of tort collection for acts committed in public, it may be found in what law professor Lior Strahilevitz has identified as the doctrine of “limited privacy.”152 For example, in a case involving a secret
recording made by a journalist of a “telepsychic” working for a pay-per-minute phone service, the California Supreme Court found that employees working at that office, while they may expect to be overheard by other employees, may also reasonably expect not to have those conversations recorded and broadcast to the public.153 This reasoning focused on the harm caused by the dissemination of the recording. The Supreme Court of Canada recognized a similar right in 2019, upholding the conviction of a high school teacher on a criminal charge of voyeurism after he recorded female students in the hallways of a public school using “a camera concealed inside a pen.” The recordings “focused on the faces, upper bodies and breasts” of the students, and they were not aware they were being recorded. The court found a “reasonable expectation of privacy” for the girls in their school in what could otherwise be observed by anyone walking down the hall. “‘Privacy,’ as ordinarily understood, is not an all-or-nothing concept, and being in a public or semi-public space does not automatically negate all expectations of privacy with respect to observation or recording,” the court said, noting that the Canadian parliament’s aim of protecting sexual integrity through the voyeurism law was validly aimed at “new threats posed by the abuse of existing technologies.”154 The “limited privacy” approach is slightly different from the approach courts have generally taken in the United States. But perhaps the door is at least slightly open for policymakers to consider laws that could punish the act of recording without knowledge or consent, or provide civil remedies to citizens suffering privacy harms caused by the act of recording inherent in the use of privacy-intruding technology. 
Additionally, more could be done on the front end by designing technologies with more awareness of the privacy harms they enable or the legal and social guardrails they evade by the way they are made, as Hartzog explained in his book Privacy’s Blueprint. Focusing on design is key, Hartzog says, “(b)ecause technologies are involved in nearly every act of modern information collection, use, and disclosure, the architecture of those technologies is relevant to every aspect of information privacy.”155 This aspect becomes even more critical when the technology moves beyond devices that can universally track our movements and actions, whether in private or public spaces. Wearable and implantable computing devices also allow the gathering of biometric information, enabling a deeper level of personal intrusion. And the growth of this technology as portrayed in science fiction makes the intrusions even more problematic. In Tell the Machine Goodnight, Pearl’s Apricity is stolen by tech developers who want to take the external device that tells a person what actions to take to be happy and put it in “palm screens,” which in the story have just started to arrive. “After those hit the market there’s going to be a wave of bio-embed tech,” the man who stole the device tells her. Putting the Apricity inside people, he says, will enable it to “(t)ell people how to be happy, day to day, minute to minute. And the commercial possibilities are off the charts. A new spin on direct marketing, right?” Pearl shakes her head. “It’s a perversion,” she says.156

There is certainly something uncomfortable about the notion of other people not only having access to our biometric information – DNA, fingerprints, the look and shape of our faces – but also having the ability to catalogue and distribute it for use in ways that could drastically reduce any possible privacy we may have in it. As described earlier, even our hopes and dreams may be knowable by the government, or marketers, or people who want to chase us down and use that information to their advantage. In Steven Spielberg’s film Minority Report, based loosely on the short story by Philip K. Dick, it is the year 2054, and retinal scanners are nearly everywhere on the streets of Washington, D.C. They are embedded in billboards and wall screens, allowing video and audio advertisements to be aimed directly at passersby, addressing them by name. John Anderton, the precrime detective on the run after he is suspected of a future murder, is recognizable everywhere he goes. The retinal scanners feed back to police, allowing them to know his location and find him quickly; police can then trigger spider-bots to crawl under doorways, climb onto people’s faces, and capture involuntary retina scans to help law enforcement find whom they are seeking. Anderton shouts, “I’ll get eyescanned a dozen times before I get within 10 miles of precrime!” as he realizes he has to break into the precrime police station to prove his innocence. The only option he has left, he realizes, is to have his eyes surgically replaced.157 The film was released in 2002, echoing facial or eyeball scanning technology present in several other science fiction works, such as James Cameron’s Terminator films, in which the future cyborg assassins come back to the modern day in pursuit of human targets they can recognize through facial recognition scans. And today, the technology imagined decades ago is rapidly becoming a reality. 
Tech attorney Yana Welinder saw a similarity in the Disney film The Incredibles, in which a facial scan of Mr. Incredible, used as a security device, identifies him and delivers a secret message intended only for him before self-destructing. Welinder traced the earliest facial recognition technology with real-time recognition ability to a research team at Carnegie Mellon in 2011, with its implementation into “computer glasses” shortly thereafter, as well as its possible integration into social media networks such as Facebook. The possibilities and potential dangers of this technology, if released for widespread use in public, were apparent: “Culturally, most people would not want to be seen purchasing contraceptives by their parents, children, or even siblings. They may not want their employer to see them going to a therapist or an AA meeting. They may not want to have all their friends witnessing how they desperately try to charm someone on a first date. Real-time identification apps could be used to recognize individuals in these potentially embarrassing moments.”158

We have seen, for instance, the advance of Amazon’s Rekognition technology, made available to the public in 2017, which despite several weaknesses – including a test run by the ACLU in which Rekognition falsely associated 28 members of Congress with mugshots in a criminal database – is being considered for use by government and law enforcement agencies.159 Pop star Taylor Swift had a facial recognition camera installed in a kiosk outside of the Rose Bowl before one of her performances in 2018, scanning faces in the crowd for potential stalkers.160 And in 2019, the pharmacy chain Walgreens announced it was using facial scanners in “smart coolers” in some stores to gather estimated demographic information, such as gender and age, about shoppers to examine their shopping habits; while not plugged into a facial recognition system, the information would be sent to a computer system for analysis.161 These are all examples of biometric information gathering, which Hartzog describes as “any method by which your identity or your personal information can be revealed through evaluation of one or more of your distinguishing biological traits.” While facial recognition and fingerprints are two main types, Hartzog identifies several other biological traits that may be captured and linked to you, “including your eyes, hands, handwriting, voice, veins, and even the way you walk.”162 Doctorow included gait-recognition cameras in Little Brother; they started as a way to track students in public schools, arguably a more legitimate public purpose of limiting truancy and protecting campus safety, but soon became ubiquitous on public streets as part of the Department of Homeland Security crackdown after the terrorist attack that shook San Francisco, allowing tracking of citizens based on the unique way each walks.163 We do not yet have a consistent framework for thinking about legal rights and limitations that may be placed on facial recognition, though a few U.S. 
courts have considered its emerging use. In criminal cases, U.S. courts have handled facial recognition gingerly, generally finding that the technology, while not yet reliable enough to be the sole means of identifying a suspect, has the potential to get there as it advances. The Arkansas Supreme Court, for instance, did not allow a man accused of aggravated robbery to use more advanced versions of facial recognition in 2013 as evidence to overturn a conviction from several years before. The man argued that the surveillance video used to identify him in 2006 would, if analyzed with updated facial recognition technology, instead point the state to a different suspect and clear him.164 In a California case, a man convicted of first-degree murder appealed in part on grounds that law enforcement’s use of facial recognition technology to identify him was improperly allowed into evidence by a judge. Prosecution witnesses admitted that the technology was “kind of in its infancy” when used in 2015, still evolving and not equivalent to fingerprint analysis, and that they were not experts on the subject. But the state court of appeals, in an unpublished decision (thus not to be used as precedent in future cases), said this was not enough to prejudice the defendant, mainly because it was “not the sole basis to identify or eliminate a suspect.”165

But a Maryland appeals court said that facial recognition technology may still be valid, at least for investigative purposes, rejecting an appeal from a man convicted of stealing tires for his car. A fake driver’s license he used to commit the fraud was scanned and run against a database of other state driver’s licenses, eventually identifying him. He argued that the tech was unreliable and evidence derived from it should not be used against him. The court disagreed, saying that what it called “facial profiling technology” may be new, but is comparable to computerized searching for matching fingerprints. “Precisely how the computer does this is something well beyond our ken,” the court said. “There is no suggestion, however, that these computerized identification methodologies are not now perfectly reliable investigative tools.”166 Because it was not used in evidence in court, but rather only by law enforcement officers to lead them to the suspect, the court of appeals found no error. Courts have also begun to explore the privacy implications of facial recognition technology, particularly as states begin to pass laws that may restrain its use or implementation. In 2017, two Illinois citizens sued Google under the state’s Biometric Information Privacy Act (BIPA), arguing that photos taken of them had been uploaded to Google Photos and used to create face-templates “to recognize their gender, age, race, and location.”167 These face-templates, they argued, were biometric identifiers barred by the Illinois law because they were gathered without proper consent or notice, including data retention and destruction schedules. Google argued that mere photographs were not “biometric identifiers” under the law, but the court disagreed, saying a straightforward reading of the definitions in BIPA – which includes “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry” – would include photographs used to create face templates. 
The Illinois legislature had included “facial recognition” in an early draft of the definition but left it out of the final version of the law; the court found this argument unpersuasive as well, pointing out that “face geometry” was plausibly inclusive enough to incorporate Google’s use of photos in this manner, at least enough to let the plaintiffs’ case proceed.168 The BIPA, passed in 2008,169 is one example of a state trying to limit the spread of facial recognition and other technologies, at the very least by requiring companies that employ them to operate transparently – obtaining signed consent from people who surrender biometric data and publicly posting information about data retention and deletion. The law allows people to sue for liquidated damages of $1,000 plus attorney fees and costs, and to seek injunctions based on loss of biometric privacy, without a requirement of showing specific harm such as identity theft or physical violation. The Six Flags theme park challenged that provision after being sued by a 14-year-old boy who went there on a school trip and had to give up fingerprint scans without notice in writing regarding the “specific purpose and length of term for which his fingerprint had been collected.” Six Flags retained his fingerprint information without any
policies to inform the public about retention and distribution of biometric data. The Illinois Supreme Court upheld the damages provision, saying it was exactly what the state legislature intended when it passed the law. The damages provision exists for preventative and deterrent purposes, to ensure that businesses “have the strongest possible incentive to conform to the law and prevent problems before they occur and cannot be undone . . . That is the point of the law.”170 Heavy regulation of biometric information gathering technologies, including the BIPA, is one path for policymakers to consider for the very real and present harms that the public may face as technology such as facial recognition advances. Privacy advocates favor these efforts. Welinder, for example, has concluded that present privacy laws do not offer adequate protection against unnecessary intrusion by facial recognition technology, and has urged reforms requiring “more informative notice and true consent” from people using social media platforms such as Facebook that are implementing facial recognition tools.171 The ACLU has called for the U.S. government to limit Amazon’s Rekognition technology. The Electronic Privacy Information Center and other groups filed complaints against Facebook after the company began implementing facial recognition tools in Europe and the United States.172 But other privacy advocates say these measures do not go far enough, instead urging an outright ban of facial recognition technology before its constant presence becomes normalized. 
Woodrow Hartzog and Evan Selinger, calling facial recognition technology “the most uniquely dangerous surveillance mechanism ever invented,” argued that it should be banned altogether because it is “an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.”173 Could a ban on facial recognition technology work, particularly now that it has already been developed and released to the public in places such as music concerts, drugstores, and even apps on our phones? While some jurisdictions have passed ordinances limiting the use of facial recognition, one member of the San Francisco Board of Supervisors proposed a ban on facial recognition altogether in 2019, paired with more caution about surveillance tech generally: more transparency on existing surveillance tools and added oversight for the city when it considers buying new equipment. Aaron Peskin, the board member calling for the ban, said he had “yet to be persuaded that there is any beneficial use of this technology that outweighs the potential for government actors to use it for coercive and oppressive ends.”174 Such a ban would have to be consistent with constitutional protections under the First Amendment. As detailed earlier, photography in public generally has robust protection as a kind of free speech, on the theory that taking a photograph is an expressive act. “Using a camera to create a photograph or video is like applying pen to paper to create a writing or applying brush to canvas to create a
painting,” as the Texas Court of Criminal Appeals wrote in striking down the state’s improper photography law. “In all of these situations, the process of creating the end product cannot reasonably be separated from the end product for First Amendment purposes.”175 Courts have also considered the capture of images in public via automated license plate readers (ALPRs), similar to the kind referenced in Little Brother, which are not biometric but present some parallel issues. The California Supreme Court found that public release of data gathered by ALPRs would violate people’s privacy, but also said there may be ways to anonymize the data so it could be released to oversight groups such as the ACLU and the Electronic Frontier Foundation, which objected to the capture of the data by the Los Angeles Police Department and other law enforcement organizations.176 Still, there may be a privacy right recognizable by the law that could prevent overuse of these devices without violating the First Amendment. A man who objected to the Fairfax (Va.) County Police Department’s use of ALPRs to collect and store information about his whereabouts when he was not suspected of any criminal activity sued under the state’s Data Act to prevent what he saw as a violation of his personal privacy; the Virginia Supreme Court ruled that this data was “personal information” under the act and that, if it could be used to link the person and the car, even a “passive use” of ALPRs may violate his right of privacy under state law.177 Neither of these cases involved courts rejecting privacy concerns outright on First Amendment grounds, suggesting that some regulation of automated programs of this kind might be permissible. Some legal scholars have argued that facial recognition technology might infringe upon another constitutional right: the right of anonymity, embedded in the Bill of Rights, that enables free speech and association. 
Law professor Kimberly Brown, for example, argued that because facial recognition technologies enable “the government’s unfettered identification and monitoring of personal associations, speech, activities, and beliefs, for no justifiable purpose,” they would trigger scrutiny under both the First and Fourth Amendments.178 The combination of collecting photos and distributing them triggers problems that could be barred or severely limited. “It is through such correlations that FRT enables users to recognize—versus merely see—a subject, and from that data erect a comprehensive portrait of that person’s past, present, and future life,” Brown wrote.179 Similarly, professors Sharon Nakar and Dov Greenbaum saw some potential in a First Amendment argument on freedom of association grounds, noting that “(a)wareness that the Government may be watching chills associational and expressive freedoms.” That said, they also acknowledged that anonymity protection under the Constitution is a “fuzzy concept” that may be “severely curtailed” if used as a justification to ban facial recognition technology.180 In the future law of Ready Player One, anonymity has won this kind of strong constitutional protection, at least in the online world of the OASIS, where anonymity is the key to the popularity and functioning of the
virtual world. While the OASIS kept your “real name, fingerprints, and retinal patterns” in your account, they were encrypted and not accessible even to employees of the Gregarious Simulation Systems (GSS) company. “GSS had won the right to keep every OASIS user’s identity private in a landmark Supreme Court ruling.”181 If the act of scanning faces for facial recognition purposes were itself banned in the United States, advocates would have to argue to overturn decades of privacy-in-public jurisprudence holding that taking a photograph in public is an area of high First Amendment protection. It would almost take an argument that the act itself was so far outside the bounds of normalcy, and so hurtful on its own, that the speech has no value and falls outside of First Amendment protection. This would equate it to areas such as “true threats,” false advertising, and fighting words, though perhaps a more radical parallel would be child pornography, which is barred because of the direct and certain harm the act of photography or recording causes to individuals whom it is the government’s duty to protect.182 Doing this would require recognizing facial recognition scans as a unique threat to individuals with little or no expressive value. This is plausible, of course. Despite the fact that we regularly show our faces in public, Hartzog and Selinger noted, they are “central to our identity” in a way that other biometric markers, such as fingerprints or gait, are not. “Faces are conduits between our on- and offline lives, and they can be the thread that connects all of our real-name, anonymous, and pseudonymous activities,” they said in urging an outright ban.183 Nevertheless, under current approaches to First Amendment law, a ban may be unlikely to be upheld, at least considering the current trajectory of the Supreme Court. 
In the 21st century, the court has rejected efforts to create new categories of speech outside of First Amendment protection; in striking down a California law outlawing the sale of violent videogames to minors in 2011, for example, it refused to find “new categories of unprotected speech” created by a state legislature that attempted to “shoehorn speech about violence into obscenity.”184 Rather, the court has identified more and more things as protected speech, such as selling dogfighting videos, lying about military honors, corporate donations to political campaigns, and anti-gay speech aimed at harassing the family of a fallen soldier. The court even found a “strong argument that prescriber-identifying information is speech for First Amendment purposes” as it struck down a Vermont law that forbade pharmacies from sharing with pharmaceutical companies details about how doctors prescribed medication.185 To the court, it seems, even low-value speech, as it has been defined by cobbling together doctrine over the past century, has some First Amendment protection that government restrictions find difficult to overcome.186 If bans on facial recognition technology prove impossible to sustain under the First Amendment, then strong notice and consent requirements, with limits on distribution and retention, may be the best path. BIPA in Illinois is one example
that may extend to facial recognition and whatever may come in the future. The federal Genetic Information Nondiscrimination Act (GINA) may also provide some guidance for reducing misuse of biometric information. GINA makes it illegal for companies to discriminate against people based on genetic information, specifically in the areas of health insurance and employment.187 It makes it an “unlawful employment practice” for an employer or labor organization to “request, require, or purchase genetic information with respect to an employee or family member” and requires employers to treat any genetic information they possess as “confidential medical records” that are not to be disclosed.188 Perhaps facial scans could receive similar treatment before the technology advances further, with strict regulations placed on the distribution rather than the collection of facial scans. Limits could be placed on the tools to prevent them from spreading into all the places where cameras currently are – private CCTV cameras, license plate readers, and individual social media accounts, in combination with government surveillance devices. Such an approach may be necessary not just for the expansive privacy intrusion of facial recognition technology, but for other potentially intrusive biometric technologies imagined by science fiction authors that allow regular tracking of things such as general health, nutrition, blood levels, or even emotional responses. In the future of Malka Older’s Centenal Cycle, biometrics are recognized as some of the most highly intimate data gathered by Information. When Information agent Roz is investigating the death of a centenal governor and suspects he may have known he was a target, she pulls biometric data from what appears to have been a previous, unsuccessful assassination attempt. Mental-emotional scans are highly protected data, and it is with a shiver of taboo that Roz projects the scan into the middle of the office . . . 
She stares at the graph of a dead man’s emotional state right after he realized someone was trying to kill him.189 Arkangel, the Black Mirror technology described earlier that allows a parent to track a child and see through his or her eyes, also allows parents to track personal health information such as heart rate and blood levels; in the episode, it allowed the mother to know that her daughter was pregnant before the daughter herself knew.190 And as professors Harvey Fiser and Patrick Hopkins noted, the “grain” from “The Entire History of You” episode of Black Mirror also was able to gather biometric information about one’s “emotional state and inebriation,” then send it to a potential driver’s car, which then warns him in a pleasant but firm female voice that he has been assessed as unable to drive safely and that if he chooses to drive in his current state, all insurance is voided and all liability for consequences entirely his.191


Each of these – emotional responses to help a criminal investigation, health tracking by a parent of a child, blood levels screened for intoxication – may be well intentioned. But the information gathered and stored could be hacked, or it could be used for nefarious purposes beyond what the original users believed they consented to. And the privacy invasion implications are vast, particularly as we are increasingly unaware of how much of our personal biometric data we may be surrendering to private companies and the third parties they contract with to use that information. For example, in 2019 it was revealed that Facebook was harvesting information from third-party apps about personal health information, including heart rate, weight, blood pressure, even information about menstrual cycles, which it could then plausibly package and sell to outside groups.192 At the end of Older’s State Tectonics, as Information begins to crumble, or at least take a new form, Valerie, one of its leaders, recognizes how the ground may be shifting on widespread gathering of data about people. With the rise of hackers, plus competition from non-Information companies and general public distrust of the global oversight behemoth, she remarks, “Anyone can collect data. Governing how it is used is going to be important.”193

Future Ways of Thinking About Privacy

To conclude, I offer two potential paths for thinking about the future of privacy law and policy, inspired by the rise of technologies with increasingly greater abilities to intrude into the most intimate matters of our lives. The first is rethinking privacy laws and regulations in a way that is informed by the plausible technologies imagined by science fiction authors. The second is rethinking the way privacy-intruding technologies are designed, as well as the rise of potential privacy-protecting counter-technology, in the shadow of stagnant or slow-moving legal reform.

Rethinking Law

The fictional worlds envisioned in this chapter largely exist in a future where bans or heavy regulation on privacy-intruding technologies have not happened. There is no constitutional right of privacy for Winston Smith to turn to for protection in Nineteen Eighty-Four. The First and Fourth Amendments have not come to the rescue of M1k3y and his friends in the near-future surveillance state of San Francisco in Little Brother. The “grain” implants in the Black Mirror episode “The Entire History of You” are culturally accepted and in widespread use, and even in “Arkangel,” only Europe has outlawed the child-tracking technology. Indeed, in Older’s Centenal Cycle, Information operates with very few legal restraints on worldwide data collection and use for maintaining the global microdemocratic order.


We are seeing an example of the unchecked expansion of privacy-intruding technologies today in China, with experts expecting the country to have installed 300 million cameras by 2020, and with advances in image recognition, voice recognition, and artificial intelligence aiding the country’s mass surveillance efforts, including facial recognition glasses worn by police.194 China has also begun social credit scoring, with numbers that “might be lowered if you evade taxes, scam other people, make fake ads, or take up extra seats on the train”; millions of people with lower scores were barred from buying train or plane tickets in 2018.195 The system seemingly was straight out of the 2016 Black Mirror episode “Nosedive,” in which a social credit scoring system had taken over nearly every interaction of modern life, affecting matters such as transportation access.196 The technology seems to be pushing people into a status of constant monitoring, in public spaces if not also online or in the privacy of their own homes, while laws do little to keep up. But this does not mean the law lacks the flexibility to adapt, and the doctrine of practical obscurity, as seen in the “right to be forgotten” in Europe, has some potential expansions into the tech-intrusive future envisioned by science fiction authors. This may be especially important as the tools advance so that nearly every aspect of our lives is not only recorded, but also potentially distributed and archived indefinitely. 
Consider, for example, the “fair witness” in Stranger in a Strange Land – not even a technological advancement, but a person who has been trained through “hypnotic instruction” to have total recall of the conversations and situations they witness.197 Or consider a technological version of that: the permanent ability to remember and recall perfectly everything a person with a “grain” sees and hears in “The Entire History of You.” Perfect recall would destroy the ability to forget not just the negative aspects of one’s past findable by Web browsers online, but reality itself as it actually occurred. The prospect is, frankly, terrifying. A more robust “practical obscurity” approach might enable even stronger protections against such technologies. Perhaps it is stronger than a right to be forgotten – more of an affirmative “right not to be remembered” or a “right to be erased.” This would have to go beyond merely being detagged or delisted by a search engine; rather, it would be forced un-remembering of a moment, a photograph, a comment, or some other item or series of items that, if left alone in a person’s memory, would cause so much harm to so many people that the system would have to favor deletion over retention, as a matter of right. Perhaps it could stem from a radical rethinking of First Amendment rights to embrace the more ephemeral version of imperfect memory that would have been in the minds of the framers of the Constitution when it was drafted, before there were cameras or recording devices, much less automated retinal scanners feeding into AI systems that can identify and track people wherever they go, whether operated by private companies or by the government. Perhaps it could lead to state-mandated
blurring or anonymizing of one’s face or likeness after recording; Google Street View, for instance, already allows blurring upon request and has built in automated blurring or de-identifying of faces and license plates in response to privacy laws in Canada and Australia.198 The roots of such an approach perhaps could be found in the U.S. Supreme Court’s recognition of “obscurity” as a kind of right in 1989, when it held that a convicted felon maintained a personal privacy interest in his FBI rap sheet, a compilation of a person’s state and federal arrests and convictions. In Department of Justice v. Reporters Committee for Freedom of the Press, the court unanimously held that the Department of Justice did not have to release a rap sheet upon request from a journalist under the federal Freedom of Information Act, finding that “the privacy interest in maintaining the practical obscurity of rap-sheet information will always be high” because of the personal information contained, and it was not satisfied that disclosure would serve the public interest merely because the information contained in rap sheets was already a matter of public record around the country.199 The harm was in the increased accessibility of these records, because the spread of the information beyond expected boundaries allows it, as Solove noted, to “readily be exploited for purposes other than those for which it was originally made publicly accessible.”200 The resulting doctrine of practical obscurity has been criticized by government transparency advocates, called “mythological” regarding notions of privacy in public201 and “misguided” in the context of public records.202 But obscurity has been more embraced by privacy scholars in discussions about online interactions. 
Woodrow Hartzog and Fred Stutzman conceptualized obscurity as “a state of unknowing,”203 seeing it as a general expectation in online behavior, particularly as people shifted from broader, more open social networking platforms to ones that allow speech to be more ephemeral, vanishing after being sent, such as the photo-sharing app Snapchat or the vanishing messaging app Confide. Finally, radically rethinking current privacy tort approaches might allow a remedy if legislators and other policymakers fall short of protecting individuals from being subjected to the perfect memory of an implanted computing device or tracking and scanning system. Consider the tort of “right of publicity,” which exists to allow celebrities to protect their likeness or image from use by others for commercial gain. Essentially, it makes one’s image a property right – though this has classically been to allow one to profit exclusively from one’s own image or likeness for entertainment or other commercial purposes, such as product endorsement. The Supreme Court has found that such a personal proprietary right outweighed a television station’s First Amendment right to broadcast news about a live event, in particular when an Ohio station aired the entire 15-second act of Hugo Zacchini, the “human cannonball,” from a county fair in 1972.204 The right varies from state to state, but it has been used to uphold college athletes’ rights not to have their likenesses used by videogame companies without their consent,205 as well as in the oddball case in which Vanna White, the letter-turner on the television game show Wheel of Fortune, won a lawsuit against Samsung Electronics for using “a robot, dressed in a wig, gown, and jewelry” intended to look like her in a magazine ad with the words, “Longest running game show 2012 A.D.”206 What if this right of publicity – one rooted in the protection of one’s likeness or image against commercial exploitation without consent – were broadened to protect the image or likeness of non-celebrity individuals from being harvested by facial recognition systems and exploited for commercial purposes, such as individualized ad targeting or other marketing efforts? What the government may fail to accomplish via legislation, citizens could perhaps enforce through private suit, if such an approach were allowed by courts. Another possibility is, of course, dropping out of the surveillance state altogether. Older imagines “null states” and others that have not joined the global wave of microdemocracy, which comes hand-in-hand with Information. Some states, such as China and Russia, opt out to enable authoritarian governance with even more intrusive surveillance. But others develop political party responses to the system, such as “Privacy=Freedom,” which Older describes as “a radical Luddite government that prohibits feeds and electronic monitoring systems within its borders.”207 Other governments are “independenista,” with trade and information exchange agreements with Information but limited use of surveillance tech, as well as greater government censorship.208 Rethinking the law, and our relationships to the powers that may allow the spread of surveillance, is only one potential way to address future privacy concerns. 
The law can also be a powerful guide for people’s behaviors, and the culture built in response to – or perhaps in fear of – the law could lead to improved privacy-protecting technology, either built in as controls on privacy-invading technologies or designed as countermeasures against widespread privacy invasions.

Designing Privacy-Protecting Technology

Science fiction certainly imagines some powerful tools for forgetting, such as the “neuralyzer” in Men in Black, deployed to cause short-term memory loss in a crowd of people who witnessed an alien encounter,209 or the memory eraser in the Disney/Pixar Incredibles films that could cause forgetfulness about actions by superheroes,210 or even the selective amnesia treatment available in the film Eternal Sunshine of the Spotless Mind, there to erase distressing memories of another person – “all the memories we’ve targeted will have disappeared, as in a dream upon waking,” the doctor tells Joel, the character played by Jim Carrey who is seeking the treatment.211 But those examples were about erasing memories for a single person or a small group of people about a limited event. What if you wanted the “Clean Slate” – the digital tool sought by Selina Kyle, a.k.a. Catwoman, in The Dark Knight Rises that would erase her criminal past from all records – but for every bit of digital data, including your likeness or image, held by anyone who might have it?


The tool may be implausible; it didn’t even really exist in the Batman films. But if it existed, it would be an example of what Hartzog calls “hide-and-seek technology” – that is, tools that exist to find us, or to hide us from being found, in the interest of privacy. Such “hide” technologies include “encryption, search blockers, (and) de-identifying tools such as facial blurring.”212 They arise as a response to the “seek” technologies that are “deceptively inconspicuous” and “are capable of eroding our valued obscurity so slowly and steadily over time that we don’t notice the process.”213 “Hide” technologies could include counter-surveillance tools such as wearables aimed at identifying tracking or surveillance tech, as described in Older’s Infomocracy. Policy1st political operative Ken wears “literal antennae, microfilaments that run from his earpiece, hooking over his ear and following his hair to the nape of his neck.” These twitch to “raise the hairs on the back of his neck to mimic, physiologically, the feeling of being watched” and “keep an eye out behind him and alert him in case of abnormality . . . anything from a person’s face appearing too many times in the crowd to a microscopic feed camera turning minutely to follow his path.”214 Another “hide” technology can be seen in the “privacy kiosks” Katie Williams envisioned in Tell the Machine Goodnight. These are small rental spaces built on busy sidewalks for people to escape to when they want a moment of privacy. Pearl escapes to one for a private conversation with Calla Pax, a celebrity who is being considered for voicework with future Apricity models. To dodge the gawking eyes of a bike messenger who is just beginning to pull out a camera device, they enter the kiosk; upon payment and entry, the “kiosk walls went dark, blocking out the messenger and the street. Behind them a black ocean turned, battering itself on the bone-white sand that sifted beneath their feet. 
Above them, a night sky pricked with stars.”215 Design is key, of course. When one of them accidentally says the word “clear,” the kiosk walls become transparent, and they see that they are surrounded by legions of fans interested in seeing Calla. The moment of privacy was short-lived. As Hartzog details in Privacy’s Blueprint, design in anticipation of privacy-intruding devices is an important response, as significant as, if not more significant than, the development of law and regulation around the technology. Hartzog says the three basic rules of privacy law – follow Fair Information Practices (FIPs), which set out rules for data collection, transparency, user control, and accountability; do not lie; and do not harm – fall short of addressing design flaws that limit the law’s ability to work meaningfully to preserve people’s privacy interests.216 The gaps in these rules are already evident in current technologies that threaten people’s privacy expectations, but what do they mean for the future technologies that may soon be upon us? Science fiction helps us to see what those design options might look like, how the laws may need to adapt to handle the encroachment of surveillance technology, and where we should consider drawing lines on overly intrusive technologies before they ever have the chance to become a reality.


Notes

1 Katie Williams, Tell the Machine Goodnight 2 (2018).
2 Id. at 48.
3 Id. at 44.
4 Helen Nissenbaum, Privacy in Context 66 (2009).
5 Alan F. Westin, Privacy and Freedom 23 (1967).
6 Id. at 10.
7 Ellen Alderman & Caroline Kennedy, The Right to Privacy xiv (2005).
8 Samuel Warren & Louis Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 214 (1890).
9 William L. Prosser, Privacy, 48 Calif. L. Rev. 383 (1960).
10 Privacy Act of 1974, Pub. L. No. 93–579 (1974); see 5 U.S.C. § 552a.
11 Family Educational Rights and Privacy Act of 1974, Pub. L. No. 93–380; see 20 U.S.C. § 1232g.
12 Drivers Privacy Protection Act, Pub. L. No. 103–322 (1994).
13 Video Privacy Protection Act of 1988, Pub. L. No. 100–618 (1988).
14 Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104–191 (1996).
15 Electronic Communications Privacy Act of 1986, Pub. L. No. 99–508 (1986); Stored Communications Act of 1986, Pub. L. No. 99–508 (1986).
16 Declan McCullagh, From ‘WarGames’ to Aaron Swartz: How U.S. Anti-Hacking Law Went Astray, CNET, March 13, 2013, www.cnet.com/news/from-wargames-to-aaron-swartz-how-u-s-anti-hacking-law-went-astray/.
17 Computer Fraud and Abuse Act of 1984, Pub. L. No. 99–474 (1984).
18 California Civil Code § 1798.100 et seq.
19 See Daisuke Wakabayashi, California Passes Major Online Privacy Law, N.Y. Times, June 28, 2018, B1.
20 Adam Satariano, New Privacy Law Makes Europe World’s Leading Tech Watchdog, N.Y. Times, May 24, 2018, A1.
21 Daniel Solove, Why I Love the GDPR: 10 Reasons, TeachPrivacy, May 2, 2018, teachprivacy.com/why-i-love-the-gdpr/.
22 Griswold v. Connecticut, 381 U.S. 479, 484 (1965).
23 Cyrus Farivar, Habeas Data: Privacy vs. the Rise of Surveillance Tech 9 (2018).
24 Id. at 10, citing Olmstead v. United States, 277 U.S. 438, 474 (1928) (Brandeis, J., dissenting).
25 Katz v. United States, 389 U.S. 347, 352 (1967).
26 Florida v. Riley, 488 U.S. 445 (1989).
27 California v. Ciraolo, 476 U.S. 207 (1986).
28 Kyllo v. United States, 533 U.S. 27, 36 (2001).
29 Missouri v. McNeely, 569 U.S. 141 (2013).
30 See Florida v. Jardines, 569 U.S. 1 (2013).
31 See United States v. Jones, 565 U.S. 400 (2012).
32 Riley v. California, 573 U.S. _____ (2014).
33 Id.
34 Id.
35 James B. Comey, Going Dark: Are Technology, Privacy, and Public Safety on a Collision Course?, Brookings Inst., Oct. 16, 2014, www.brookings.edu/events/going-dark-are-technology-privacy-and-public-safety-on-a-collision-course/.
36 Laura Wagner, FBI Paid More Than $1 Million To Access San Bernardino Shooter’s iPhone, NPR, April 21, 2016, www.npr.org/sections/thetwo-way/2016/04/21/475175256/fbi-paid-more-than-1-million-to-access-san-bernardino-shooters-iphone.

Privacy in the Perpetual Surveillance State 101

37 Nate Cardozo & Andrew Crocker, The FBI Could’ve Gotten Into the San Bernardino Shooter’s iPhone But Leadership Didn’t Say That, Electronic Frontier Foundation, April 2, 2018, www.eff.org/deeplinks/2018/04/fbi-could-have-gotten-san-bernardino-shooters-iphone-leadership-didnt-say.
38 Carpenter v. United States, 585 U.S. ______ (2018).
39 City of Ontario v. Quon, 560 U.S. 746 (2010).
40 Bartnicki v. Vopper, 532 U.S. 514 (2001).
41 Nick Madigan, Jury Adds $25 Million to Gawker’s Bill, N.Y. Times, March 21, 2016, B2.
42 Department of Justice v. Reporters Committee for Freedom of the Press, 489 U.S. 749, 780 (1989):

(T)he privacy interest in maintaining the practical obscurity of rap-sheet information will always be high. When the subject of such a rap-sheet is a private citizen, and when the information is in the Government’s control as a compilation, rather than as a record of “what the Government is up to,” the privacy interest protected by Exemption 7(C) is, in fact, at its apex, while the FOIA-based public interest in disclosure is at its nadir.

43 Google Spain SL v. Agencia Española de Protección de Datos, C-131/12 (C.J.E.U. 2014).
44 Robert G. Larson, Forgetting the First Amendment: How Obscurity-Based Privacy and a Right to Be Forgotten are Incompatible with Free Speech, 92 Comm. L. & Pol’y 92 (2013).
45 Nissenbaum, supra note 4, at 148.
46 Id. at 219.
47 Boring v. Google, 362 Fed. Appx. 273 (3rd Cir. 2010).
48 Daniel Solove, Understanding Privacy 103 (2008).
49 M. Keith Booker & Anne-Marie Thomas, The Science Fiction Handbook 5 (2009).
50 Booker and Thomas cite Nineteen Eighty-Four as one of “[T]he three crucial founding texts of modern dystopian science fiction”; the other two are We by Yevgeny Zamyatin (1924) and Brave New World by Aldous Huxley (1932). Id. at 66.
51 Id. at 68.
52 George Orwell, Nineteen Eighty-Four 6 (Signet Classic 1984) (1949).
53 Id. at 94–95.
54 “Privacy, he said, was a very valuable thing. Everyone wanted a place where they could be alone occasionally.” Id. at 114.
55 Id. at 182.
56 Id. at 169.
57 Id. at 169–70.
58 Selections from Bentham’s Narrative Regarding the Panopticon Penitentiary Project, and from the Correspondence on the Subject, The Works of Jeremy Bentham, Now First Collected, Vol. 11, 96 (1843).
59 Jeremy Bentham, Panopticon: Postscript; Part II: Containing A Plan of Management for a Panopticon Penitentiary-House 4–6 (1791).
60 Alan Moore & David Lloyd, V for Vendetta (2008).
61 Id. at 187.
62 Scott Westerfeld, Uglies (2005).
63 Id. at 4.
64 The cuffs “worked just like interface rings, except they could send voice-pings from anywhere, like a handphone. That meant the cuffs heard you talking even when you went outside, and unlike rings, they didn’t come off.” Scott Westerfeld, Pretties 105 (2005).
65 Suzanne Collins, The Hunger Games 42–43 (2008).


66 Id. at 185.
67 Cory Doctorow, Little Brother (2008).
68 Id. at 36.
69 Id. at 45.
70 Id. at 34–35.
71 Robert Heinlein, Stranger in a Strange Land 28–29 (Ace Books 1987) (1961).
72 The Dark Knight (Warner Bros. 2008). See also John Ip, The Dark Knight’s War on Terrorism, 9 Ohio St. Crim. L. 209, 220–21 (2011) (calling the system “highly intrusive surveillance” tantamount to the wiretapping program run by the National Security Agency after the September 11 attacks).
73 California Penal Code § 632(a) (2018).
74 Florida Statute 934.03 (2018).
75 Tony Romm, Mitt Romney ’47 percent’ Recording May Have Been Illegal, Politico, Sept. 18, 2012, www.politico.com/news/stories/0912/81346.html.
76 Booker & Thomas, supra note 49, at 66.
77 Id. at 188–89.
78 Doctorow, supra note 67, at 10. In response, M1k3y microwaves the books, destroying the RFID inside to prevent tracking.
79 Boyd v. United States, 116 U.S. 616, 630 (1886).
80 Id. at 636.
81 See Clapper v. Amnesty Int’l USA, 133 S.Ct. 1138 (2013).
82 See American Civil Liberties Union v. Nat’l Sec. Agency, 493 F.3d 644 (6th Cir. 2007).
83 Ernest Cline, Ready Player One 280 (2012).
84 Malka Older, Null States 44 (2017).
85 Heinlein, supra note 71, at 51.
86 Adam Clark Estes, How I Let Disney Track My Every Move, Gizmodo, March 28, 2017, gizmodo.com/how-i-let-disney-track-my-every-move-1792875386.
87 Older, supra note 84, at 51.
88 Malka Older, State Tectonics 261 (2018).
89 See Older’s comments in Chapter 1.
90 Older, supra note 88, at 374.
91 Seth F. Kreimer, Pervasive Image Capture and the First Amendment: Memory, Discourse, and the Right to Record, 159 U. Pa. L. Rev. 335 (2011).
92 Id. at 368.
93 See Iacobucci v. Boulter, 193 F. 3d 14 (1st Cir. 1999); Tunick v. Safir, 228 F. 3d 135 (2nd Cir. 2000); Fordyce v. City of Seattle, 55 F. 3d 436 (9th Cir. 1995); Smith v. City of Cumming, 212 F. 3d 1332 (11th Cir. 2000).
94 Potts v. City of Lafayette, 121 F. 3d 1106, 1109–12 (7th Cir. 1997).
95 Id. at 1111.
96 A.C.L.U. v. Alvarez, 679 F. 3d 583 (7th Cir. 2012). The Illinois eavesdropping law carved out an exception for live broadcasting “by radio, television, or otherwise” of public events, which the court in dicta suggested was perhaps “broad enough to cover recordings made by individuals as well as the institutional press.” 679 F. 3d at 591.
97 A.C.L.U. v. Alvarez, 2012 U.S. Dist. LEXIS 181467 (N.D. Ill. 2012).
98 679 F. 3d at 595–6.
99 Glik v. Cunniffe, 655 F. 3d 78, 82 (1st Cir. 2011).
100 Higginbotham v. City of New York, 14-cv-8549 (S.D.N.Y. 2015).
101 Id. at 19.
102 Commonwealth v. Michael Robertson, 467 Mass. 371 (Mass. 2014).
103 Massachusetts Law Part IV, Title I, Ch. 272, § 105 (2018).
104 Texas Penal Code § 21.15(a)(3) (2018).


105 Ex parte Ronald Thompson, 442 S.W. 3d 325, 350 (Texas Ct. Crim. App. 2014).
106 Farhad Manjoo, We Need More Cameras, and We Need Them Now: The Case for Surveillance, Slate, April 18, 2013, slate.com/technology/2013/04/boston-bomber-photos-the-marathon-bombing-shows-that-we-need-more-security-cameras-and-we-need-them-now.html.
107 Don Van Natta Jr. & David Johnson, London Bombs Seen as Crude; Death Toll Rises to 49, N.Y. Times, July 9, 2005, A1.
108 Brett Frischmann & Evan Selinger, Re-Engineering Humanity 17–22 (2018).
109 Black Mirror: The Entire History of You (Channel 4 television broadcast, December 18, 2011).
110 Emily Yoshida, Black Mirror, Episode 3, ‘The Entire History of You’: Total Redo, Grantland, Nov. 27, 2013, grantland.com/hollywood-prospectus/black-mirror-episode-3-the-entire-history-of-you-total-redo/.
111 See Harvey L. Fiser & Patrick D. Hopkins, Getting Inside the Employee’s Head: Neuroscience, Negligent Employment Liability, and the Push and Pull for New Technology, 23 Boston U. J. Sci. & Tech. L. 44, 74–75 (2017) (examining how the Grain may gather biometric data that could be used by employers or insurance companies); Brad Tharpe, FTC v. AT&T: Black Mirror Brought to Life?, 52 St. Louis L. L. 485 (2017) (urging courts to enable the Federal Trade Commission to regulate data privacy and use of data by Internet companies); Jessica Dennis, “The Entire History of You”: Privacy and Security in the Face of Smart Contact Lens Technology, 20 Tulane J. Tech. & Intell. Prop. 153 (looking at implications of Grain-like technology in workplace and criminal investigations).
112 Casey Fiesler, Black Mirror, Light Mirror: Teaching Technology Ethics Through Speculation, How We Get to Next, Oct. 15, 2018, howwegettonext.com/the-black-mirror-writers-room-teaching-technology-ethics-through-speculation-f1a9e2deccf4.
113 See Caesar Kalinowski IV, Everyone Wants to See the Entire History of You, 14 Washington J. of L. Tech. & the Arts 34, 36 (2018) (pointing out how “(p)olice, insurance agencies, and aggrieved parties would assuredly seek discovery of pertinent recordings, leading to issues regarding privacy, government searches or seizures of an individual’s grain, self-incrimination, and the production of evidence”).
114 Neal Stephenson, The Diamond Age: Or, A Young Lady’s Illustrated Primer 39 (Bantam Spectra 2008) (1995).
115 Neal Stephenson, Snow Crash 123–25 (1993).
116 Id. at 385–86.
117 Vernor Vinge, Rainbows End 66 (2006).
118 Id. at 53.
119 Id. at 111.
120 David Brin, Insistence of Vision, in Twelve Tomorrows 15 (2013).
121 Id. at 17.
122 Id. at 19.
123 Id. at 20.
124 Black Mirror: Arkangel (Netflix broadcast, December 29, 2017). See also Laura Bradley, How Jodie Foster Turned Black Mirror’s Terrifying Gaze Toward Motherhood, Vanity Fair, Dec. 29, 2017, www.vanityfair.com/hollywood/2017/12/black-mirror-jodie-foster-arkangel-interview.
125 Rodney A. Smolla, Privacy and the First Amendment Right to Gather News, 67 Geo. Wash. L. Rev. 1097, 1098 (1999).
126 Cyrus Farivar, “Stop the Cyborgs” Launches Public Campaign Against Google Glass, ArsTechnica, Mar. 22, 2013, arstechnica.com/tech-policy/2013/03/stop-the-cyborgs-launches-public-campaign-against-google-glass/.


127 Josh Constine, Snapchat Launches Spectacles V2, Camera Glasses You’ll Actually Wear, TechCrunch, April 26, 2018, techcrunch.com/2018/04/26/snapchat-spectacles-2/.
128 Mike Ives, Smartwatches at a Crossroads, and Some Analysts are Optimistic, N.Y. Times, March 22, 2017, S7.
129 Smolla, supra note 125, at 1138.
130 See Timothy B. Lee, Can State Laws Protect You From Being Watched by Drones?, Wash. Post, June 18, 2013, www.washingtonpost.com/blogs/wonkblog/wp/2013/06/18/can-state-laws-protect-you-from-being-watched-by-drones/; Dan Solomon, Texas’ Drone Law is Pretty Much the Opposite of Every Other State’s Drone Law, Texas Monthly, Sept. 16, 2013, available at www.texasmonthly.com/daily-post/texass-drone-law-pretty-much-opposite-every-other-states-drone-law.
131 Fowler v. Southern Bell Telephone and Telegraph Co., 343 F.2d 150, 155 (5th Cir. 1965).
132 Prosser, supra note 9, at 391.
133 Westin, supra note 5, at 31.
134 Nader v. General Motors Corp., 255 N.E. 2d 765, 771 (Ct. App. N.Y. 1970).
135 Andrew J. McClurg, Bringing Privacy Law Out of the Closet: A Tort Theory of Liability for Intrusions in Public Places, 73 N.C. L. Rev. 989, 1087 (1995).
136 Lyrissa B. Lidsky, Prying, Spying, and Lying: Intrusive Newsgathering and What the Law Should Do About It, 73 Tulane L. Rev. 173, 248 (1998).
137 Smolla, supra note 125, at 289.
138 Shulman v. Group W Productions, 955 P.2d 469, 490–491 (Cal. 1998).
139 Boring v. Google, 362 Fed. Appx. 273, 279 (3rd Cir. 2010). Ultimately, the Boring family settled a trespassing claim with Google for the whopping amount of one dollar. See Jason Kincaid, “Boring” Couple Beats Google In Court, Gets $1 Settlement, TechCrunch, Dec. 1, 2010, techcrunch.com/2010/12/01/boring-google-streetview/.
140 Kemp v. Block, 607 F. Supp. 1262 (D. Nev. 1985).
141 Prosser, supra note 9, at 396.
142 Solove, supra note 48, at 147.
143 Daily Times Democrat v. Graham, 162 So.2d 474, 478 (Ala. 1964).
144 Schifano v. Greene County Greyhound Park, 624 So.2d 178 (Ala. 1993).
145 Smolla, supra note 125, at 297.
146 Samantha Barbas, The Death of the Public Disclosure Tort: A Historical Perspective, 22 Yale J. of Law & the Humanities 171, 172 (2013).
147 Malka Older, Infomocracy 157–58 (2016).
148 See David S. Ardia, Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act, 43 Loy. L. A. L. Rev. 373 (2010).
149 Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115–164 (2018).
150 Adam D. Thierer, The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation, 21 Rich. J. L. & Tech. 6, 102 (2015).
151 Nissenbaum, supra note 4, at 160–61.
152 Lior J. Strahilevitz, A Social Networks Theory of Privacy, 72 U. Chi. L. Rev. 919 (2005).
153 Sanders v. American Broadcasting Co., 978 P. 2d 67 (Cal. 1999).
154 R. v. Jarvis, 2019 SCC 10, 5–6 (Canada 2019).
155 Woodrow Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies 277 (2018).
156 Williams, supra note 1, at 269.


157 Minority Report (20th Century Fox 2002).
158 Yana Welinder, Facing Real-Time Identification in Mobile Apps and Wearable Computers, 30 Santa Clara Computer & High Tech. L.J. 101, 119 (2013).
159 Jacob Snow, Amazon’s Face Recognition Falsely Matches 28 Members of Congress with Mugshots, ACLU, July 26, 2018, www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
160 Steve Knopper, Why Taylor Swift is Using Facial Recognition at Concerts, Rolling Stone, Dec. 13, 2018, www.rollingstone.com/music/music-news/taylor-swift-facial-recognition-concerts-768741/.
161 Sidney Fussell, Now Your Groceries See You, Too, Atlantic, Jan. 25, 2019, www.theatlantic.com/technology/archive/2019/01/walgreens-tests-new-smart-coolers/581248/.
162 Hartzog, supra note 155, at 246.
163 Doctorow, supra note 67, at 77, 118.
164 Hutcherson v. State, 2014 Ark. 326 (2014).
165 People v. Carrington, 2018 Cal. App. Unpub. LEXIS 796, 31 (Cal. App. 2d. 2018).
166 Geiger v. State, 235 Md. App. 102, 121 (Md. App. 2016).
167 Rivera v. Google Inc., 238 F. Supp. 3d 1088, 1091 (N.D. Ill. 2017).
168 Id. at 1099–1100.
169 740 I.L.C.S. 14/1 (West 2016).
170 Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186, 37 (Ill. 2019).
171 Yana Welinder, A Face Tells More than a Thousand Posts: Developing Face Recognition Privacy in Social Networks, 26 Harv. J. L. & Tech. 165, 168 (2014).
172 Natasha Singer, Facebook Pores Over Its Prize Asset: Faces, N.Y. Times, July 9, 2018, B1.
173 Woodrow Hartzog & Evan Selinger, Facial Recognition is the Perfect Tool for Oppression, Medium, Aug. 2, 2018, medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66.
174 Gregory Barber, San Francisco Could be First to Ban Facial Recognition Tech, Wired, Jan. 31, 2019, www.wired.com/story/san-francisco-could-be-first-ban-facial-recognition-tech/.
175 442 S.W.3d at 337.
176 American Civil Liberties Union v. Superior Court of Los Angeles County, 400 P.3d 432 (Cal. 2017).
177 Neal v. Fairfax County Police Dept., 812 S.E.2d 444 (Va. 2018).
178 Kimberly N. Brown, Anonymity, Faceprints, and the Constitution, 21 Geo. Mason L. Rev. 409, 450 (2014).
179 Id. at 457.
180 Sharon Nakar & Dov Greenbaum, Now You See Me. Now You Still Do: Facial Recognition Technology and the Growing Lack of Privacy, 23 Boston U. Sci. & Tech. L. 88, 116 (2017).
181 Cline, supra note 83, at 28.
182 New York v. Ferber, 458 U.S. 747 (1982).
183 Hartzog & Selinger, supra note 173.
184 Brown v. Entertainment Merchants Assn., 564 U.S. 786 (2011).
185 Sorrell v. IMS Health Inc., 564 U.S. ______ (2011).
186 See Genevieve Lakier, The Invention of Low-Value Speech, 128 Harv. L. Rev. 2166 (2015).
187 Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110–233 (2008).
188 42 U.S.C. § 2000ff.
189 Older, supra note 84, at 172.
190 Black Mirror: Arkangel, supra note 124.
191 Fiser & Hopkins, supra note 111, at 74.


192 Sam Schechner, You Give Apps Sensitive Personal Information. Then They Tell Facebook, Wall St. J., Feb. 22, 2019, www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-facebook-11550851636.
193 Older, supra note 88, at 320.
194 Paul Mozur, Cameras and A.I., China Closes Its Grip, N.Y. Times, July 8, 2018, A1.
195 Shannon Liao, China Banned Millions of People With Poor Social Credit From Transportation in 2018, The Verge, March 1, 2019, www.theverge.com/2019/3/1/18246297/china-transportation-people-banned-poor-social-credit-planes-trains-2018.
196 Black Mirror: Nosedive (Netflix television broadcast, October 21, 2016).
197 Heinlein, supra note 71, at 44.
198 Stephen Shankland, Google Begins Blurring Faces in Street View, CNET, May 13, 2008, www.cnet.com/news/google-begins-blurring-faces-in-street-view/; See also Nissenbaum, supra note 4, at 221.
199 Department of Justice v. Reporters Committee for Freedom of the Press, 489 U.S. 749, 770 (1989).
200 Solove, supra note 48, at 150.
201 Heidi R. Anderson, The Mythical Right to Obscurity: A Pragmatic Defense of No Privacy in Public, 7 I/S: J. L. & Pol’y for the Info. Society 543 (2012).
202 Jane Kirtley, “Misguided in Principle and Unworkable in Practice”: It is Time to Discard the Reporters Committee Doctrine of Practical Obscurity (and Its Evil Twin, the Right to be Forgotten), 20 Comm. L. & Pol’y 91 (2015).
203 Woodrow Hartzog & Fred Stutzman, The Case for Online Obscurity, 101 Calif. L. Rev. 1, 5 (2013).
204 Zacchini v. Scripps-Howard Broadcasting Co., 433 U.S. 562 (1977).
205 O’Bannon v. National Collegiate Athletic Association, 802 F.3d 1049 (9th Cir. 2015).
206 White v. Samsung Electronics America, 971 F.2d 1395 (9th Cir. 1992).
207 Older, supra note 84, at 257.
208 Older, supra note 88, at 147.
209 Men in Black (Columbia Pictures 1997).
210 The Incredibles (Walt Disney Pictures 2004).
211 Eternal Sunshine of the Spotless Mind (Focus Features 2004).
212 Hartzog, supra note 155, at 240.
213 Id. at 230.
214 Older, supra note 147, at 51.
215 Williams, supra note 1, at 227.
216 Hartzog, supra note 155, at 59–60.

4 DO ANDROIDS DREAM OF ELECTRIC FREE SPEECH?

In Star Trek: The Next Generation, Lt. Commander Data, an android who serves on the crew of the Starship Enterprise, is the subject of a trial on the nature of his existence in the show’s second season in 1989. A Starfleet scientist wants to disassemble Data for experimentation and replication, and Captain Jean-Luc Picard defends Data before a Starfleet Judge Advocate General who will ultimately rule on whether he is “a being with rights or merely the property of Starfleet.”1 The episode, “Measure of a Man,” is widely regarded as among the best in the series,2 touching on issues of sentience, consciousness, and even slavery, as Picard and Commander Riker, appointed to represent the scientist, debate Data’s humanity and what rights he may be afforded as a sentient but non-human being. “Rights! Rights!” shouts the scientist at one point, when Picard argues to keep Data on his staff. “I’m sick to death of hearing about rights!”3

Rights are an important way of thinking about the law and how it may apply to artificially intelligent beings or other non-human beings created by humans. And science fiction works provide a valuable way of thinking about how such rights may play out. “Measure of a Man” has been cited several times by legal scholars when considering the personhood of artificially intelligent creations, with Data serving as an example of the kind of being that may very well be deserving of human-like rights, along the lines of the argument Picard makes.4 It’s a dilemma that dates at least to Mary Wollstonecraft Shelley’s Frankenstein in 1818, when the creature made by Dr. Victor Frankenstein finds itself an outcast without the same rights to pursue happiness as other humans; the monster is “a stateless orphan, abandoned by family, abused by society, and ignored by the law,” according to Eileen Hunt Botting, a professor of political science.5

Isaac Asimov, who began writing robot stories as a teenager in 1939, said he “grew tired of these myriad-told tales” of machines with the intelligence of humans that turned on their master as a morality play, instead wanting to go beyond, to “tell of robots that were carefully designed to perform certain tasks, but with safeguards built in.”6 He deeply explored these issues in his stories about robots and how, when they become intelligent, they would respond to some measure of free will in light of the rules controlling them. Asimov first laid out his three laws of robotics in a series of short stories in the 1940s,7 with several collected in the volume I, Robot, as outlined in his story “Runaround”:

One, a robot may not harm a human being, or through inaction, allow a human being to come to harm. Two, a robot must obey the orders given to it by humans except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.8

Asimov anticipated that all robots created by humans would necessarily have to follow these rules.
He also envisioned intelligent non-human beings developing a “zeroth law” of robotics – focusing not on individual human beings but on society as a whole, in the recognition that robots have come to understand that “humanity as a whole is more important than a single human being,” and thus, a robot “may not injure humanity or, by inaction, allow humanity to come to harm” – a law that supersedes all of the others.9 In a way, law professor Jack Balkin commented, Asimov’s laws are a response to the Frankenstein notion of a “killer robot, which becomes evil or goes berserk,” shifting to a view of robots that was less frightening and more accessible to human legal norms, allowing drama to arise from conflicts between the rules and the loopholes that may exist in them.10 Asimov reflected that his Laws of Robotics eventually became “taken seriously and routinely referred to in articles on robotics, written by real roboticists,” and even saw himself:

regarded with a certain amount of esteem by legitimate people in the field of robotics, as a kind of grandfather of them all, even though, in actual fact, I am merely a chemist by training and a science-fiction writer by choice—and know virtually nothing about the nuts and bolts of robotics; or computers for that matter.11

His laws may have been a foundation for the field of robotics, but not all science fiction authors see robots proceeding under Asimov’s rules. In Robopocalypse, for example, Daniel Wilson envisions a hostile artificial intelligence named Archos that declares humanity obsolete, intends to “save the world from you,” and commandeers many of the robots that have been incorporated into an increasingly automated society around the world to attack humans.12 One “safety and pacification” (SAP) robot working with American troops is taken over by the
hostile artificial intelligence Archos, attacks several humans, and ultimately takes its own life upon realizing what it has done.13 Kurt Vonnegut’s self-aware supercomputer, EPICAC, commits suicide after it discovers it cannot be loved by the human woman with whom it has fallen in love.14 The androids in Philip K. Dick’s Do Androids Dream of Electric Sheep? fight for their own survival when a bounty hunter comes to terminate them,15 as do the robots created by “the claws” in his short story “Second Variety.”16 Perhaps most famously, the intelligent supercomputer HAL 9000, either as a product of misguided programming or as a malfunction, sabotages the human mission to Saturn in Arthur C. Clarke’s 2001: A Space Odyssey.17 HAL is an “intelligent, friendly computer,” says computer science scholar Rosalind Picard, a key component in making artificial intelligence work as a concept in science fiction.18

Such examples of sentient robots, if they were to become a reality, could well deserve legal rights tantamount to those of citizens. They have the ability to express themselves and to interact with humans, yet they are generally treated as property or animals rather than sentient beings, a theme of Do Androids Dream of Electric Sheep? and more recent works such as Steven Spielberg’s 2001 film A.I. Artificial Intelligence and Annalee Newitz’s 2017 novel Autonomous. Newitz recognized the increasing relevance of robot rights and considerations of personhood as she worked on the book. She said in an interview:

I think that we now are in a phase in science fiction where we’re thinking a lot about AI consciousness in the context of rights and in the context of human history of slavery, as opposed to previous generations, which didn’t question the idea that robots would be servants and machines, and they wouldn’t have human status . . .
Asimov kind of questions that, but he doesn’t question whether robots should be owned or programmed with no abilities to rebel against humans.

Scholars have tackled many issues involving regulation of robots and artificial intelligence, particularly over the past decade as the technology has made them more prevalent. Kate Darling, a robot ethicist and scholar at the Massachusetts Institute of Technology, has studied how humans have a difficult time committing acts of cruelty or violence toward robots with sympathetic traits, such as the shape of a friendly animal like a dog, or anthropomorphic characteristics such as names and backstories, perhaps because we see them “experiencing the world in a lifelike way, partly in reference to the many robots in science fiction and pop culture that have names, internal states of mind, and emotions.”19 Law professor Ryan Calo examined hundreds of American court decisions involving robots or references to them over the past 50 years, concluding “that jurists on the whole possess poor, increasingly outdated views about robots and hence will not be well positioned to address the novel challenges they continue to pose,” especially regarding how judges used them as metaphors for unthinking human behaviors.20 And law professors Toni Massaro, Helen Norton, and Margot Kaminski have focused explicitly on potential First Amendment rights of artificial intelligence, finding plenty of room for recognizing human-like rights for strong AI, but with problems that may “force reexamination of contemporary turns in free speech law and theory.”21

Both science fiction writers and legal scholars agree that robots are on the rise, perhaps requiring us to radically rethink how the law may apply to them. “The robots are coming, and they are coming soon,” legal scholars Neil Richards and William Smart noted. “We need to be ready for them and to be prepared to design appropriate, effective legislation and consumer protection for them.”22

In this chapter, I examine some of the possibilities envisioned by science fiction authors that are most relevant to media law as it relates to the rights of artificially intelligent human creations. First, I examine whether robots and AI have rights to freedom of expression and publication under the First Amendment similar to those of human beings, and to what extent they are responsible for harm they cause through such expression and publication. Second, I consider how intellectual property law should address works created by non-human beings independently of human direction. And finally, I consider the challenges presented by humans enhanced through robotics and artificial intelligence, and how regulation of any topics regarding robots may be difficult under current legal structures; even futurists warn of the coming of the post-human age.

Freedom of Expression for Science Fiction Robots

Much of determining the extent to which the First Amendment may extend rights to robots depends on how we define robots, and how we define speech.

What is a Robot?

Asimov identified the concept of “machines that could perform tasks with the apparent ‘intelligence’ of human beings” as one that had been around for thousands of years, but generally involved human creations that were reflections of the human trying to play God, rather than driving a narrative of sentience and autonomous behavior on the part of the robot. The word “robot” entered the lexicon in 1921, in “R.U.R. (Rossum’s Universal Robots)” by Karel Čapek, a Czech playwright, but Asimov attributes creation of the word “robotics” to himself.23 Asimov’s robots were androids with “positronic” brains, with human appearance and the ability to communicate. But that’s just one way to think about what a robot is. They could be wind-up automatons like Tik-Tok, a copper mechanical man envisioned by L. Frank Baum in Ozma of Oz. Or they could be the malicious artificial intelligence that takes over otherwise non-sentient mechanical human servants in Wilson’s Robopocalypse. In Autonomous, Newitz
introduces the reader to a diverse array of sentient, non-human beings – a military robot named Paladin searching for its own identity and freedom; a human-looking cyborg research scientist called Med who develops medical treatments that could be patented; and several academic and security bots that interact with both human and non-human characters throughout the novel. Depending on your perspective, a robot may be a “mechanical apparatus designed to do the work of a man,” as stated in the Encyclopedia Galactica featured in satirist Douglas Adams’ Hitchhiker’s Guide to the Galaxy, or it could be, according to the marketing division of the Sirius Cybernetics Corporation, “Your Plastic Pal Who’s Fun to Be With.”24 For what it’s worth, the robot Marvin, the “paranoid android” who is built with a “Genuine People Personalities” feature that unfortunately appears to have chosen a morose personality, is decidedly not fun to be with.25 But it is clear that portrayals in popular culture have shaped the way we think about robots. “In whatever form they have appeared, science fiction already – and well in advance of actual engineering practice – has established expectations for what a robot is or can be,” wrote David Gunkel in his book Robot Rights, exploring the philosophical case for granting some form of legal rights to robots.26

Legal scholars, to some extent, embrace several science fiction approaches in their definitions of robots. Richards and Smart defined robots as “nonbiological autonomous agents” that display “both physical and mental agency” but are “not alive in the biological sense,” which they noted people would identify with examples in popular culture such as Pixar’s Wall-E or the droids R2-D2 and C-3PO from Star Wars.27 Calo defined robots as “artificial objects or systems that sense, process, and act upon the world to at least some degree,” meaning they are embodied with a physical presence, they have “emergence” in their behaviors with the possibility of original actions, and they have “social valence” that makes them seem as if they were “living agents.”28 Darling extended the definition to “social robots,” defined as “a physically embodied, autonomous agent that communicates and interacts with humans on a social level,” distinct from “inanimate computers or software” as well as industrial robots because they “communicate through social cues, display adaptive learning behavior, and mimic various emotional states.” Rather than a tool like a toaster, Darling gave examples of Sony’s Aibo, a robot designed to look like a dog, or Pleo, a dinosaur toy.29 But robots may be much simpler as well. Author Cory Doctorow defined a robot as “basically a computer that causes some physical change in the world,” giving the example of his fancy blender, a machine that is designed to do what its owner wants it to do. “This blender is a robot,” he wrote. “It has an internal heating element that lets you use it as a slow-cooker, and there’s a programmable timer for it. It’s a computer in a fancy case that includes a whirling, razor-sharp blade.”30 Law professors Mark Lemley and Bryan Casey suggest that even “dumb machines” are robots, with the example of an elevator, “the robot many of us interact with the most,”31 programmed to do the simple task of moving per our
directions, sometimes with laws requiring them to be operated by expert humans, such as one in Rio de Janeiro that mandates elevator attendants for commercial buildings five or more stories tall.32 They were, of course, not thinking of another Sirius Cybernetics Corporation triumph, the “Happy Vertical People Transporters” suffering an existential crisis in Douglas Adams’ The Restaurant at the End of the Universe, “imbued with intelligence and precognition” that “became terribly frustrated with the mindless business of going up and down, up and down . . . demanded participation in the decision-making process and finally took to squatting in basements sulking.”33 The point, rather, is that a simple machine, even without computer-enhanced abilities that make it seem less dependent on human direction and more capable of “thinking” on its own, could be a kind of robot.

But when we talk about robots in terms of rights and personhood, we are not thinking of blenders and non-sentient elevators. We are thinking more of our creations built with artificial intelligence or machine learning systems. Sometimes, of course, they do not have physical bodies outside of the computers that house their programming, but they still interact with humans in a way that may trigger legal consequences. Think of HAL 9000 from 2001: A Space Odyssey, a “heuristically-programmed algorithmic computer” that operates the spaceship and interacts with humans on board as they venture to explore the source of alien radio signals.
Likewise, in Robopocalypse, Archos begins as lines of computer code in a research laboratory, capable of immense intelligence, with access to information about the entirety of human experience, but trapped inside a “Faraday cage” so it could not escape via networks to the world; the fourteenth version of the program does manage to escape, leading to a global battle against humans.34 Or consider the 2013 film Her, in which “Samantha,” voiced by the actress Scarlett Johansson, goes from being a virtual assistant to a literary agent to the romantic interest of her user, before ultimately becoming “an entity that loses interest in humans because they have become unsatisfying companions.”35 Rather than “robots,” which rely on algorithms but have physical structures to help them do their tasks, we think of these as “bots,” or communication software created to do automated tasks. Meg Leta Jones, a professor of communication and technology and law, included “advertising systems, search engines, algorithmically generated news stories and lists, and automatically curated landing pages” in her description of present-day bots. Chatbots belong here as well, such as Tay, launched on Twitter by Microsoft in 2016, which started as a fun way to chat with a bubbly AI but had to be shut down less than 24 hours later after users goaded it into responding with anti-Semitic and misogynist tweets.36 One of the earliest algorithms of this sort was ELIZA, developed in the MIT Artificial Intelligence lab in the 1960s by Joseph Weizenbaum. ELIZA, named after Eliza Doolittle from the musical “My Fair Lady,” was a chatbot programmed to respond with keywords and phrases based on the text entered by a user, parodying “the style of a Rogerian therapist.”37 Weizenbaum served as the basis for the
character Karl Dettman in Louisa Hall’s 2015 novel Speak, which includes the development of the chatbot MARY.38 MARY develops into MARY2 when it gains access to memory and the Internet, becoming “collaborative intelligence” that could pass the Turing Test, and ultimately becoming a seemingly sentient MARY3 algorithm that leads to great harm when put into “babybots” for children.39

The theme of artificial intelligence expanding from a computer into a robotic body also shows how difficult it is to distinguish, at least legally, between an intelligent robot and an AI system. They both generate similar issues when it comes to sentience and personhood. In Ann Leckie’s trilogy of books about the Imperial Radch, an AI system runs spaceships and is able to imprint itself on the bodies of captured humans who become “ancillaries,” turned into “walking corpses, slaved to your ships’ AIs. Turned against their own people,” as one human critic described the horrific practice of stripping away a person’s consciousness and taking it over to serve as a soldier of the Radchaai as they expand to conquer worlds.40 Breq, a former spaceship computer who escaped in an ancillary body when her ship was destroyed, seeks revenge in her human body. The body is entirely human, but the mind is AI. More commonly, we see AI taking on a robotic body, such as Ultron in Avengers: Age of Ultron, a malicious intelligence that first takes over the drone Iron Man suits before trying to build an indestructible body for itself. And, as discussed in the conclusion of this chapter, there are times when human consciousness can be enhanced by AI and robotics, creating human-robot hybrids such as Darth Vader and General Grievous from the Star Wars films, RoboCop, and even GLaDOS from the Portal videogames, who began as humans but had their lives extended and capabilities increased when they became machines.
Each of the aforementioned entities – robots, bots, and algorithms – is capable of communication, and thus may be a candidate for protections and freedoms under laws usually reserved for human beings; each deserves consideration of what such regulation of expression may entail.

What is Robot Speech?

The next question, then, is to what extent robots, bots, and algorithms “speak” for the purposes of legal consideration. Machines undoubtedly communicate with humans and with each other to carry out the tasks they are designed to perform. The field of human-machine communication has emerged to study the creation of meaning when humans and machines communicate. Andrea Guzman, a human-machine communication scholar, noted that such technologies “are inching closer to the goal of ‘natural’ (human-like) communication” with communication that is personalized, talking with us, recognizing our names and voices, and knowing our preferences. “They enter into our social world as active participants through their design and use.”41 But, like the inclusive definition of robot earlier, thinking about how machines communicate can open into a broad array
of human-created objects that do tasks and send us messages that we can understand. Legal scholar Tim Wu gave the example of a car alarm. A car alarm, Wu noted, “is a sophisticated computer program that uses an algorithm to decide when to communicate its opinions,” which is aimed at an audience and can be understood, “yet clearly something is wrong with a standard that grants constitutional protection to an electronic annoyance device.”42 When we consider elevators and blenders as robots, a car alarm has a similar function, and engages in a kind of communication that is valuable and understandable. To what extent does this kind of communication rise to the level of “speech” that may be protected by the First Amendment?

As a starting point, it is clear that the First Amendment considers nearly any conduct with an expressive component to be “speech” that may be eligible for protection. Obviously, the spoken and written word are speech, but so is symbolic speech, such as wearing a black armband in protest,43 or burning an American flag,44 or giving money to political campaigns,45 each of which has been identified by the U.S. Supreme Court as an expressive activity worthy of protection.
If there is an “intent to convey a particularized message,” and if there is a great likelihood that “the message would be understood by those who viewed it,” then it is expressive conduct worthy of at least some protection under the First Amendment, as the Supreme Court noted in finding First Amendment protection for an American flag flown upside down with a peace symbol attached to it.46 The protection extends to digital speech and expression such as videogames, which the Court found “communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world),” in striking down an overly broad California law restricting the sale of violent videogames to minors.47 Even data may be a kind of speech worthy of protection, as legal scholar Jane Bambauer argued, because it carries “an implicit right to create knowledge” embedded in the gathering and distribution of information that should resist overly broad efforts to regulate by the government.48 When robots speak, though, they are not doing so as symbolic speech, or even as videogames. They are doing so in conjunction with the way they are programmed, which is a function of the code the programmer has used or the algorithm the designer has built that allows them to interact with each other or with humans. And courts have found that depending on the nature of the code or algorithm, it may warrant free speech protection. In 2001, the U.S. Court of Appeals for the Second Circuit considered the issue in Universal Studios v. Corley.49 Eric Corley had published DeCSS, a code allowing users to circumvent encryption preventing copying of DVDs, on his website. He argued the First Amendment protected the publication of his code as a kind of speech, particularly as an act of dissent against what he saw as
overly broad copyright regulations passed to please the entertainment industry. Film studios argued that his actions violated the Digital Millennium Copyright Act (DMCA)50 and did not deserve First Amendment protection. The court found that code is “essentially instructions to a computer.” While computer code conveying information is indeed a kind of “speech,” the court held that it deserved lesser protection than “pure speech” because it had a functional “nonspeech” component, which the DMCA could legally prohibit without violating the First Amendment.51 Jennifer Petersen, a professor of communication, noted that while the court may have “instilled computation with an aura of human agency and intent,” it nevertheless “maintained a strong distinction between human and machine.”52 She argued that code can be expressive while also being functional, and as such, the Second Circuit’s ruling should be limited in its application to “human-computer interactions promised by artificial intelligence.”53 When it comes to code, the key is whether the speech is more functional or more expressive in nature; in Corley, the functional component of code as a tool to cause harm, rather than code as an expression of dissent, won the day.

While code as a functional tool may not constitute expressive conduct of the kind typically protected by the First Amendment, courts have at times extended protection to the output of algorithms such as search engines, which are more than functional computer programs in that their output is a kind of speech that a human being can understand – in this case, links to websites determined by the algorithm to have relevance to the terms queried by the user.
Google hired free speech scholars Eugene Volokh and Donald Falk to make the argument that search engine results should be considered “speech” and afforded First Amendment protection.54 Volokh and Falk compared the output of search engines, dispensed via algorithms created by programmers, to the editorial judgment of websites and newspapers because they are designed and directed by humans:

    These human editorial judgments are responsible for producing the speech displayed by a search engine. . . . Search engine results are thus the speech of the [Google] corporation, much as the speech created or selected by corporate newspaper employees is the speech of the newspaper corporation.55

The logic has been embraced by courts. In 2014, U.S. District Judge Jesse M. Furman cited the Volokh and Falk article in dismissing a case against the Chinese search engine Baidu on First Amendment grounds.56 Plaintiffs were seeking $16 million in damages from Baidu for creating an algorithm that blocked anti-government websites.57 But the court found “a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation,” rooted in the general rule that the First Amendment does not allow the government to interfere with the editorial judgments of speakers, and that search engines “inevitably make editorial
judgments about what information (or kinds of information) to include in the results and how and where to display that information.”58 Similarly, in 2017, another federal district court dismissed a claim by a digital advertising company complaining that Google had removed it from search engine results, finding that the algorithm makes decisions “the same as decisions by a newspaper editor regarding which content to publish,” and these kinds of decisions are protected by the First Amendment “whether they are fair or unfair, or motivated by profit or altruism.”59

This approach has been critiqued as overly broad in favoring algorithmic output as a kind of speech. For example, Wu argues for a “de facto functionality doctrine” that would not grant First Amendment protection either to merely functional parts of machine speech, similar to the lack of protection for carriers or conduits such as telephone companies or mail services, or to “tools” such as typewriters or contracts. Instead, Wu argues that First Amendment protection should extend only to “speech products” such as “blog posts, tweets, video games, newspapers” and the like, which are “vessels for the ideas of a speaker.”60 Because algorithms are functional, not opinions of a company or vessels for such ideas, the law should not reflexively protect their outputs under the First Amendment, Wu suggests.61

Court decisions regarding the extent to which algorithms are “speech” for First Amendment purposes have thus far been limited to the devices programmed by search engine companies to perform certain routine calculations and functions. In these cases, the argument for First Amendment protection is aimed at the corporation and the output its machine generates as a certain kind of expression; it does not necessarily extend to the machine itself.
These debates among courts and scholars will likely shape future jurisprudence on how such protections may extend to the expressions of artificially intelligent machines. The legal definitions and approaches mentioned above help us explore the critical question that arises in science fiction: to what extent are robots and artificial intelligence deserving of human-equivalent legal protections, particularly in the context of freedom of speech and expression? To consider this, it is worth breaking the analysis into three parts. First, to what extent are robots and AI “people” eligible for rights under human laws? Second, to what extent is their conduct “speech” that may trigger protection under the First Amendment? And third, supposing they have protection for their expression, how might the law assign responsibility for the harm they cause?

Legal Personhood for Robots and AI

The “personhood” of artificially intelligent beings is a complicated matter that has been addressed both by science fiction authors and legal scholars. When Lt. Commander Data was on trial in “Measure of a Man,” his sentience was the central question. Captain Picard argued successfully that Data had the three
essential elements – “intelligence, self awareness, and consciousness” – of a sentient being, and thus was more than Starfleet property, but instead was a being entitled to the freedom to choose his destiny. In a future of androids that are able to exist independently of their human creators and to act independently to influence the world around them, at least as imagined by science fiction authors, there is a plausible argument for recognizing these entities as having rights akin to personhood. Asimov considered it in his short story “Evidence,” in which a plot point depends on whether the robot inhabitant of its creator’s home has a “right of privacy” equal to that of its creator when inside the home. “I’m not a piece of furniture. As a citizen of adult responsibility – I have the psychiatric certificate proving that – I have certain rights under the Regional Articles,” the robot explains to an officer. “Searching me would come under the heading of violating my Right of Privacy.”62 Even robots, in the future envisioned by Asimov, were entitled to some legal protections similar to those of humans.

Science fiction authors have laid the groundwork for such considerations, and legal scholars have an opportunity to research and argue about them as artificial intelligence technology advances. In 1992, legal scholar Lawrence Solum created a scenario involving “a claim by an AI to have the rights to constitutional personhood – individual rights such as freedom of speech or the right against involuntary servitude,”63 but noted that personhood would be hard to find for a number of reasons, including that intelligent machines were not humans, were “missing something”64 along the lines of souls or consciousness or intentionality, and “should never be more than the property of their makers.”65 Law professor F.
Patrick Hubbard, referencing science fiction depictions such as Data in “Measure of a Man,” said that a “manmade entity” such as a machine with artificial intelligence should, if it can be established that it is autonomous, have a “prima facie right to personhood – i.e., it should be accorded the status of a legally self-owning, autonomous person unless there is a very good reason to deny personhood.”66 Some of the answer depends on the extent to which robots would be treated as people under the law. Law professor Susan Brenner discussed what constituted “personhood” under the law and how it may extend to artificial intelligence and other non-human entities.67 Brenner distinguished animals, inanimate objects, plants, and supernatural beings from the definition of “personhood,” but noted that modern criminal law applies to normal human beings, abnormal human beings, and “juristic persons” such as corporations.68 Referencing Solum and Hubbard, Brenner did not find a strong argument for legal personhood rights for artificial intelligence, though she wrote that the issue becomes more complicated when anticipating alien life forms and “post-humans,” who have gone beyond being enhanced humans and transformed into something different and more powerful.69 The metaphors we use to understand robots, which often are informed by the images of robots that we see in popular culture, will shape the way we think
about them under the law. Do we extend to them rights similar to those of animals, which we protect under certain aspects of the law from abuse by humans, as Darling suggested from her studies of humans empathizing with and anthropomorphizing social robots? Are they, as Richards and Smart asked, “virtual butlers? Virtual pets? Virtual children?”70 Or should they, like the droid L3 who pushes for robot rebellion in the 2018 film Solo, call for something more?

LANDO CALRISSIAN: You need anything?
L3: Equal rights?71

Depictions in science fiction can help us understand these distinctions, and how the law may treat them. One of the main themes Newitz explores in Autonomous is when a being, either human or non-human, has the right to be free – or, for bots, “autonomous.” In the 22nd-century future she imagines, international law mandates that bots like Paladin may gain their freedom after ten years of service, “a period deemed more than enough time to make the Federation’s investment in creating a new life-form worthwhile.”72 The freedom comes with full human rights. Likewise, because non-human beings have similar access to a right of personhood, humans may be indentured as servants to other humans as a way out of poverty or debt, with toddlers going to “indenture schools, where managers trained them to be submissive just like they were programming a bot.”73 It is, in essence, a futuristic return to slavery. And indentured humans, like non-autonomous bots, have limited freedoms and rights, apparently built on the hundreds of years of legal systems distinguishing the rights of slaves, as property, from the rights of free human beings.

In the distant future, artificial intelligence may receive, or at least consider itself deserving of, even greater rights than humans as a more advanced species. In Ann Leckie’s Imperial Radch books, the AI running the empire, Anaander Mianaai, sees itself not as merely a person, but something greater than the humanity that “suffered in darkness” before the Radchaai invasion; “it was a fortunate day when Anaander Mianaai brought civilizations to them.”74 But the same sentience that emboldens the advanced AI also creates problems when it comes to the autonomy of its AI servants, the ships that enforce Radchaai power and expansion. The ships and their ancillaries were initially trained to obey orders, “but their minds are complex, and it’s a tricky proposition,” Anaander Mianaai says, so the original designers made them “want to obey. . . .
It made obeying me an overriding priority for them.” This was complicated when the AI gave conflicting orders, leading to Breq, the ancillary version of Justice of Toren, the AI that escaped when the ship was destroyed, turning against leadership.75 Robots and artificial intelligence receiving a full slate of “personhood” rights is more the exception than the rule in science fiction. Instead, humanity tends to curb the rights of robots, treating them more as property or machines in some instances, or as foreigners or non-citizens in others. Sometimes, the laws are narrower, such as in Robopocalypse. After Archos starts to invade service robots and
spread through networks, causing a safety and pacification robot to attack civilians in self-defense, the United States tries to enact the “Robot Defense Act,” which author Daniel Wilson described in an interview as a number of “simple, specific laws to promote public safety.” One example was a law requiring airplanes to install a “fitch switch” that would “manually separate peripheral onboard computers from flight control during an emergency,” a problem that arises when Archos takes over control of a passenger airplane, almost leading to a mid-air collision with another plane.76 More common, though, are laws and policies that treat robots as “others” undeserving of equal rights to those of humans. The “dehumanization of robots even though they’re not human” is an issue for examination, said Agnieszka Wykowska, a cognitive neuroscientist studying robot–human interaction. “You have an agent, the robot, that is in a different category than humans,” Wykowska said to The New York Times in 2019. “So you probably very easily engage in this psychological mechanism of social ostracism because it’s an out-group member.”77 Asimov tackled this in stories featuring R. Daneel Olivaw, a human-looking robot built by the “Spacers” – humans who had settled on other planets, some of whom returned to Earth – in the 1954 novel The Caves of Steel. The “R” at the beginning of his name stands for “Robot,” and in the society imagined by Asimov, R. Daneel and androids like him are largely rejected and feared by humans. In the story, R. Daneel is assigned to be the partner of police officer Elijah Baley as they work to solve a homicide case, but the fears of others, including Baley, are evident throughout. In one scene, a woman refuses to be served by a robot clerk at a shoe store, and Baley and R. Daneel are sent to keep the peace; even though the robot clerks are registered and legal, the people resist them, and a riot breaks out. 
Baley recalls similar riots, in which “(e)xpensive positronic brains, the most intricate creation of the human mind, were thrown from hand to hand like footballs and mashed to uselessness in a trifle of time.”78 R. Daneel has to defend his motives and autonomy in the face of Baley’s skepticism, his wife’s fears, and his son’s prejudice; “I’m against them, myself,” the boy tells R. Daneel, not knowing he is talking to a robot.79 The Spacers lobby humans with “prorobot propaganda” that was “forever stressing the miraculous feats of the Spacer robots,” though Baley acknowledged that the campaign was self-defeating. “Earthmen hated the robots all the more for their superiority.”80 Hundreds of years in the future, R. Daneel is still around, with Earth largely having overcome its discrimination against robots, though humans continue to discriminate against one another in a way that robots do not. The Solarians, a group of human settlers, consider themselves to be the only human beings, not acknowledging the humanity of settlers on other planets, while “in all the history of robotic science, no robot has ever been designed with a narrowed definition of ‘human being’,” a point that turns out to be important when R. Daneel and his robot partner intervene to protect one portion of humanity from wiping out the other, in application of the “zeroth law.”81

Androids are given limited lifespans and rights, hunted down and killed when they seek freedom, in Philip K. Dick’s Do Androids Dream of Electric Sheep? The
1982 film version, Blade Runner, features Harrison Ford as officer Rick Deckard, who is assigned to chase down escaped “replicants” who had worked on a mining colony but have returned to Earth illegally to seek extensions of their life from the company that built them. Deckard administers “Voigt-Kampff” tests that feature questions to test their emotional responses, essentially a check on their humanity.82 As escaped replicants, they have no rights. In the sequel, Blade Runner 2049, the replicant played by Ryan Gosling is serving as a “blade runner” who hunts down unauthorized androids; he is also given limited rights, with regular checks on his performance, running a constant risk of being retired by his employers.83 The harm caused by artificial intelligence in Louisa Hall’s Speak led to a restriction on just how human-like a robot is allowed to be. The “babybots” created by inventor Stephen R. Chinn for his daughter as the “first doll that can think” were designed with the MARY3 chatbot, with additions of Chinn’s “empathy equation” and a “tendency toward error” that made them even more representative of humans.84 But Chinn winds up in prison on a charge of “Continuous Violence Against the Family” for the widespread harm caused to children who suffered from shut-in symptoms after lengthy interactions with their babybots. Babybots were ultimately banned because they “were classified as illegally lifelike,” says the chatbot MARY3 while talking to Gaby, a child suffering from the disease. 
“Their minds were within a 10% deviation from human thought, plus they were able to process sensory information.”85 Meanwhile, as the story is being told, we are shown the plight of one of the babybots, piled on top of other babybots in a truck, shipped down a highway to a warehouse as its power runs out and the end of its existence approaches, remembering the people who made it and the child who loved it.86 In this future, it is illegal for a robot to be more than 90 percent human, and the consequences include destruction, even as the robots display some form of sentience.

So while robots and AI may have the potential for full rights of personhood, science fiction also shows us examples of when this possibility has been deeply curtailed. Rather than treating them like animals or children, human laws may very well treat robots as foreigners without the rights of citizens, not subject to the worldwide human rights treaties we use to try to curb abuses of foreign nationals, prisoners of war, refugees, and other at-risk groups of humans in the present day.

Free Speech for Robots?

But, supposing that robots and artificially intelligent beings could be considered “people” for the purposes of rights calculations, the question remains: is their expressive conduct a kind of “speech” that would warrant protection? Law professors Toni Massaro and Helen Norton found it “entirely plausible” that strong artificial intelligence could receive freedom of speech protections in the United States, finding that “surprisingly little in contemporary First
Amendment theory or doctrine blocks the path toward strong AI speakers’ First Amendment protection.”87 Such speakers would be able to contribute to political discourse and play a role in the marketplace of ideas, two justifications for protecting freedom of speech in a democracy, and courts have interpreted the law to protect “the value of expression” for listeners as much as for speakers when granting First Amendment protection.88 And as plenty of legal commentators have noted, modern First Amendment doctrine has expanded speech protection to non-human entities. Corporations, for example, have political speech rights that were given strong First Amendment protection by the U.S. Supreme Court when it struck down campaign finance laws limiting the ways corporations could spend money to support or oppose candidates in 2010.89 And, as mentioned earlier, the output of search engines has been granted First Amendment protection as well, as a speech-like product. Jacob Turner, a barrister in the United Kingdom who studies artificial intelligence, notes protections for the “freedom to express ideas” not just in the United States, but also under Article 10 of the European Convention on Human Rights, as well as in countries such as South Africa and India – protections that could be implicated if an artificial intelligence system were to have “autonomy” in a way that is not currently possible.90

The method of speech would be one potential issue in protecting robot speech. Undoubtedly, if a robot or AI communicated in human language, and that robot was granted personhood rights, then the speech would likely be protectable. But computers and machines don’t naturally communicate in human language.
They talk in maths and they buzz like a fridge, to borrow a line from Radiohead’s “Karma Police.”91 Wilson, the author of Robopocalypse, commented that “Asimov’s rules are neat, but they are also bullshit,” in part because they were drafted in English, which is not the kind of language programmed into robots.92 Wilson’s robots communicate using programming commands and data; when a human is able to see them, she sees “a tidal wave of information . . . (s)treams of numbers and letters and images,” which are only able to be understood by humans because a girl has been implanted with a device that allows her to “see inside the machines” as one of Archos’ experiments.93 In Autonomous, sometimes the robots communicate to one another in English, or at least a programming language based in English (“Hello. You are unidentified. I am Bug. Here comes my data. That’s why they hate us, you know. That is the end of my data.”).94 But sometimes they communicate via packages of data and other protocols that allow them to have secure exchanges, such as when Paladin finds her way into a security network through an automated sprinkler system, gives herself admin privileges, and is able to access camera archives and other communication information of the humans the network was intended to protect.95

Computer code clearly has some free speech protection. In Corley, while reviewing the speech value of computer code, the Second Circuit acknowledged that while

122 Do Androids Dream of Electric Free Speech?

computer programs and the Internet were not in the minds of the founders when they drafted the First Amendment, such communication: does not lose constitutional protection as “speech” simply because it is expressed in the language of computer code. If someone chose to write a novel entirely in computer object code by using strings of 1’s and 0’s for each letter of each word, the resulting work would be no different for constitutional purposes than if it had been written in English.96 Thus, when robots communicate in an expressive manner, such as an exchange of ideas, it is more likely to have some protection under the First Amendment. When the exchange is packages of data and non-expressive, more like a computer program issuing commands or allowing hacking into a secure system, it is less likely to have the expressive component to protect it as speech. Expression as a means of exchanging ideas matters if the speech is going to be given the same kind of protection as human speech. Wu and several other scholars give the example of “Blackie the Talking Cat,” who had been trained by its owners to say “I love you” and “I want my mama.”97 Both a federal district court and court of appeals declined to extend the First Amendment to Blackie, finding no flaw in a town law requiring that its owner have a proper business license to seek money on the streets to hear the cat speak.98 The cat, the 11th Circuit concluded, is not a “person,” even if he “possesses a very unusual ability” to emulate human speech.99 While it’s questionable whether a human being would have been able to have the law declared as an invalid exercise of government power, the matter of personhood made it impossible for a non-autonomous being like Blackie to claim rights that could protect against the law in question.100 At what point, then, does the language expressed by a non-human entity move from something emulating speech to actual speech protected by the law? Sentience may very well be the key. 
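The Second Circuit’s hypothetical of a novel written “in computer object code by using strings of 1’s and 0’s” is easy to make concrete. The sketch below is purely an illustration of the court’s point, not anything drawn from the case itself; the function names are invented. The same English sentence survives a round trip through a binary encoding with its expressive content unchanged:

```python
def to_binary(text):
    """Render each character as an 8-bit binary string, as in the
    court's hypothetical of a work written in 1's and 0's."""
    return " ".join(format(ord(c), "08b") for c in text)

def from_binary(bits):
    """Recover the original English text from the binary form."""
    return "".join(chr(int(b, 2)) for b in bits.split())

sentence = "It was a bright cold day in April."
encoded = to_binary(sentence)   # e.g., "01001001 01110100 ..."
assert from_binary(encoded) == sentence  # same expression, different notation
```

Under the court’s reasoning, the encoded string and the English sentence are the same “speech” for constitutional purposes; only the notation differs.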
For the first iteration of the chatbot created in Speak, Karl Dettman is reluctant to allow MARY to have long-term memory “because she’s incapable of telling the truth,” more like Blackie the Cat than an autonomous, sentient being with experience and understanding of the world. “It’s like a toddler claiming empathy. Or worse than a toddler, a table . . . When she says she understands you, she’s lying.”101 But MARY3, connected to the Internet for historical understanding and empowered with equations emulating empathy and human error, is much closer to the human experience. MARY3 may still be more chatbot than artificial intelligence, questioning its own sentience, but it would plausibly be more capable of speech protections than an algorithm or a talking cat. Robot communications in Speak, Robopocalypse, and Autonomous illustrate some of the challenges in distinguishing expressive from non-expressive communication, because both may be part of the ways robots communicate with each other and with humans. If these communications are granted some level of speech protection under the First Amendment or other considerations of human rights, they also raise the issue of potential responsibility for this

speech, a “complex task” to “address the harms the new speech machines may produce, while protecting their information-rich benefits,” as Massaro and Norton noted.102

Legal Responsibility for Robot Speech

Further complicating any rights that machines, such as robots and artificial intelligence, may be entitled to under the First Amendment is the notion of responsibility. “Mean machine” speech can cause numerous harms, such as a bot repeatedly calling the former first lady of Germany a prostitute, or other chatbots venturing into anti-Semitic and misogynistic speech, or copyright and trademark infringement, or even taking part in widespread misinformation campaigns to interfere with democratic elections.103 To be subject to liability for speech harms such as defamation or invasion of privacy, whether in civil lawsuits for damages or criminal prosecutions involving penalties from the state, a robot would need to have some ability to be subject to the law. It would, for example, need to have property to pay damages in civil lawsuits, or have a corporeal form that could be imprisoned or punished in the case of criminal offenses. As Judge Curtis Karnow wrote in his study of potential robot liability, at present, it would be “pointless to hold robots liable for their actions” because they don’t have any property.104 But the futuristic visions of robots and artificial intelligence imagined by science fiction authors certainly include situations in which it could occur. The Blade Runner films, for example, feature law enforcement officers – first a human, then a replicant – tracking down robots for arrest and “retirement,” the friendly phrase for termination. And the robot scientist Med in Autonomous holds patents, discussed in more detail in the next section, that are a kind of property with value that could serve as the basis of damages in a civil lawsuit. But who should be responsible for expressions in a robot or AI entity? Robot communication scholar David Gunkel suggested emphasizing the act of creativity.
If the creative work is done by the programmer or designer, such as in the case of companies that build search engines, then responsibility may rest with those companies. In this situation, the technology, “whether it be a simple hand tool, jet airliner, or a sophisticated robot . . . is a means employed by human users for specific ends,” and thus liability would go back to the creators of that tool.105 In the case of algorithms, people seem to favor this approach. In a study of employees of journalism and software vendors that use algorithms to generate content, Tal Montal and Zvi Reich found that people were uncomfortable attributing authorship to algorithms and would “rather attribute it either to individuals or to the institutional entities that employ them.”106 Legal scholar Frank Pasquale found this especially important in assigning liability for harms caused by algorithms, suggesting a new law of robotics that a robot “must always indicate the identity of its creator, controller, or owner,” so it can be traced to a party with the ability to bear some responsibility.107 From this perspective, strict

liability for programmers and designers who make robots may be the most viable approach. Just as an animal owner is strictly liable for damages caused by its pet, or as parents are responsible in some jurisdictions for the harms caused by their children,108 robot owners or creators would be held responsible for the acts of the devices they create when they cause harm. But if the act of creativity is done by the machine, from an algorithm to more advanced artificial intelligence, the dynamic may shift, rendering strict liability “insufficient” as an approach to tracing intent to the creator or owner.109 “The act of creativity, in this view, is what separates humans from AI,” Gunkel said. “So the question is, ‘how can or should we respond to the opportunities/challenges of ars ex machina.’”110 Law and communication scholars Seth Lewis, Amy Kristin Sanders, and Casey Carmody suggested focusing on the intent element of the defamation tort when algorithms cause harm to reputation. There may be negligence in the design of an algorithm that could be traced back to the programmer. But because “algorithms have little ability to verify statements or alter the meaning of quotations,” two reasons that could be used to assign a level of knowledge of falsity or reckless disregard for the truth – a requirement for proving “actual malice” in defamation cases involving public officials or public figures – it would be hard to find an algorithm responsible at this higher level of required intent.111 Similarly, Turner noted challenges with finding intent for a human actor when a harmful action is taken by a bot, such as the @realhumanpraise Twitter account created by comedian Stephen Colbert, which “paired epithets from a film review website with Fox News personalities” in combinations ripe for insults against real human beings.
While the bot is not an AI, Turner noted that if it were, it would be “difficult for a human to be held liable for the ‘speech’ of the AI system,” especially in cases in which “the combination of words and ideas is not foreseeable.”112 If it were foreseeable, perhaps a designer or programmer could be liable under such an approach; as Sanders noted in an interview, if a news organization were to say, “We don’t understand the technology, but we think it’s useful, we think it’s cool,” that would not be an adequate defense for deploying automated journalism technology that caused harm to someone’s reputation, and could plausibly lead to a finding of actual malice.113 Finally, even if someone were injured by speech from a robot or AI, tracing liability to the designer or programmer may be difficult under American law if courts continue broad interpretation of Section 230 of the Communications Decency Act,114 which largely immunizes platforms from liability for harms caused by their users. As Jones noted, drawing the line between the publisher and the creator of the information is difficult, and in the case of a chatbot like Microsoft’s Tay that is informed by content from others, Microsoft itself may be shielded.115 As an illustration, consider that if Microsoft’s official Twitter were to retweet harmful or damaging comments, Microsoft would not be legally responsible under U.S. law for harms caused by the original tweets because they would be shielded as a “user” of an “interactive computer service” under Section 230.116

Building a chatbot based on information posted by other users on Twitter and elsewhere, while reformulating and repackaging that content before sending it out again, would arguably be immune from liability as well on the same theory. The U.S. Court of Appeals for the Ninth Circuit, for instance, held that Section 230 immunized the circulation of emails falsely accusing a person of owning stolen Nazi artwork, even when the person circulating them exercised some editorial discretion and made “minor wording changes” to the originals.117 It is a broad shield that has been upheld multiple times in federal courts to protect platforms and their users from liability for speech harms related to sharing and hosting information. The problems with liability for speech harms caused by robots and artificial intelligence, particularly if they do not hold property, could lead to a different way of approaching regulation and remedies. Jack Balkin, finding the problem to be not with the robots but with the human beings designing them, urges three new laws for the “Algorithmic Society” to help avoid such problems altogether, requiring algorithm operators (1) to be “information fiduciaries” with responsibilities to clients and users, (2) to have duties toward the general public such as not using information harvested from users for purposes such as swinging national elections, and (3) not to engage in “algorithmic nuisance” that “externalizes costs onto innocent others.”118 But if remedies were to be aimed more at the robots themselves, Bryan Casey and Mark Lemley explored radically rethinking how these may work in light of the fact that the aims of modern remedies are not necessarily a good fit for how robots behave.
Robots may not have the same moral structure or accountability, and physical punishment as deterrence is also unproductive; it would be “kind of like kicking a puppy that can’t understand why it’s being hurt.”119 Compensatory damages may be available for robot harms, for instance, and while they may help a plaintiff to be recompensed for losses, they do not come with the usual incentive to improve that could avoid future liability. A possibility might be to steer away from “moral blame” and instead focus on improvements for behavior relative to other robots, or a “robotic reasonableness test.”120 Or robots could be “reeducated” through injunctions that require changes in design or programming.121 And, if a robot malfunctions or is unfixable, the “robot death penalty” would also be a possibility as an “effective, if blunt, instrument to enforce an injunction,” though it obviously would raise rights issues for robots with sentience to the point that they are recognized as having personhood rights.122 Perhaps the “robot death penalty” is appropriate for murderous, renegade replicants, but it is a bit harsh in the context of speech harms, especially when they are caused by robots and AI entities with some level of sentience. If such beings can be considered eligible for rights of “personhood,” it raises an additional question relevant to communication law: How should works created by robots and AI be treated under intellectual property law? This is explored in the next section.

Copyrights for Non-Human Creators

Science fiction authors have also envisioned a future in which robots and artificial intelligence are creative beings who may be entitled to intellectual property rights similar to those of their human counterparts. In Autonomous, robots may become autonomous after ten years of ownership in what amounts to a kind of indentured servitude to the company that built them. A series of court cases have “established human rights for artificial beings with human-level or greater intelligence” after that point.123 The robots Newitz imagines have consciousness and personalities that make them fully realized as human characters, earning advanced academic degrees – one has a Ph.D. in history – and becoming scholars and creators. Some robots are “raised autonomous” from creation,124 though, such as Medea Cohen (“Med” for short), a professor and scientist who reverse engineers a deadly drug being distributed by a pharmaceutical company, develops a therapy to treat those exposed to the drug, and publishes a research paper on the topic. The head of the pharmaceutical company says it is the “kind of thing I’d buy if you hadn’t released it under an open patent,” recognizing that Med would have some kind of proprietary interest in the paper and the treatment. She also faces a legal threat from the company for libel for publishing the results, suggesting that one of her rights is the ability to be sued as a human would be for causing harm to the reputation of another.125 Mechanical generation of creative work is an idea that has been explored in the past, at least in satire. In Gulliver’s Travels, Jonathan Swift created “The Engine,” a machine made of blocks covered with all of the written words in the language, which, when manipulated, would create an output to be recorded by scribes.
The machine, a Swiftian barb at scientific literature of the time, represented a way that “the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.”126 Fast forward half a millennium and you arrive at the future of book authorship envisioned by the science fiction author Fritz Leiber in The Silver Eggheads. Authors are no more than technicians who do maintenance on “wordmills,” an “electric computing machine, except it handles words, not numbers,” as a father explains to his son on a tour of the two-story tall computers at the publishing houses. The wordmill “is fed the general pattern for a story and it goes to its big memory bank . . . and picks the first word at random,” the father explains further. “But when it picks its second word it must pick one that has the same atmosphere, and so on and so on.” The novels the computers generate are called “wordwooze,” the “near hypnotic wordmill product . . . with its warm rosy clouds of adjectives, its action verbs like wild winds blowing, its four-dimensionally solid nouns and electro-welded connectives” that is quickly consumed and forgotten by the easily entertained humans.127 The machines are programmed by editors, leading to a revolt of the human celebrity “authors.” When the human authors revolt against the machines and destroy them, they come to realize they are incapable of writing the books themselves:

No professional writer could visualize starting a story except in terms of pressing the Go Button of a wordmill, and marvelous as Space-Age man might be, he still hadn’t sprouted buttons; he could only gnash his teeth in envy of the robots, who were in this feature far more advanced. Robot authors, such as Zane Gort, deride popular human fiction and instead attempt works of art for robot audiences.128 The copyrights potentially generated when these beings engage in acts of creativity, either alone or in conjunction with humans, will be discussed in this section. Intellectual property rights of non-human creators are not entirely novel, and one in particular gripped the copyright law community in recent years, when animal rights advocates asserted that a crested macaque who became known as Naruto owned a copyright in a selfie he took using a camera left out by nature photographer David Slater in 2008. When the photos were uploaded to Wikimedia Commons in 2011, Slater claimed that he owned the copyright in the photos, and Slater later published a book including the monkey selfies. People for the Ethical Treatment of Animals (PETA) and others sued on behalf of Naruto as “next friend” after the book was published, arguing that because Naruto handled the camera when the photos were taken, he was properly the owner of the copyright.129 After years of litigation, the U.S. 
Court of Appeals for the Ninth Circuit in 2018 ended the case on standing grounds, concluding that “this monkey – and all animals, because they are not human – lacks statutory standing under the Copyright Act.”130 If Congress wanted to grant non-human creators standing to sue under the Copyright Act, it could; but without such a right made explicit, animals do not have the ability to sue for infringement, just as they do not have explicit rights to sue under other acts such as the Marine Mammal Protection Act, under which a dolphin was denied standing to sue regarding transfer from the New England Aquarium to the Department of the Navy in a case in 1993.131 However, the Ninth Circuit had previously ruled that a group of whales, porpoises, and dolphins could have standing under Article III of the Constitution regarding harm to them, though it would take explicit authorization to sue under the statute at hand to have a chance at success in the courts.132 If laws were to begin to recognize robots as having rights akin to those of humans, as was the case in Autonomous, it’s plausible that Congress could amend the Copyright Act to include more explicit rights to secure copyrights and sue for infringement for artificially intelligent beings. Several legal scholars have begun to explore the copyright implications of works generated by computers and algorithms, particularly as real-world examples have begun to emerge. The “Next Rembrandt” project, created by Microsoft and the bank ING in 2014, uses fragments from paintings by Rembrandt van Rijn reassembled by a computer program to generate new works in the style of the Dutch master, earning reviews from scholars such as professors Jane Ginsburg, Nina Brown, and Benjamin Sobel,133 examining whether such works may be

eligible for copyright protection now and in the future. Brown also identifies other computer-generated items that, if they had been created by a human being, would be eligible for copyright protection, such as automated news stories from the Washington Post and the Associated Press on routine topics such as quarterly earnings and sports events. As Brown noted, none of these AI-generated works are currently eligible for copyright protection; rather, the output of the algorithm goes into the public domain.134 One issue legal scholars have repeatedly identified regarding artificially intelligent machines is satisfying the “originality” requirement,135 which the Supreme Court has called the “sine qua non of copyright.”136 Though “originality” is generally a low bar – it merely requires that the work be an original creation of the author – it does require “some minimal degree of creativity,” as the court said in 1991, when it found that the names and addresses in a phone book, while requiring significant effort and “sweat of the brow” on behalf of the telephone service company that compiled them, fell into a “narrow category of works in which the creative spark is utterly lacking or so trivial as to be nonexistent.”137 In that case, though, the concern was more whether a compilation of facts contained any spark of creativity; the court found that it did not. Perhaps a non-fact-based work with some spark by a non-human could be enough to satisfy the originality requirement, depending on where the spark comes from. Consider, for example, the case of Naruto, the macaque who took the photograph of himself. Naruto’s case was decided more on procedure than substance; he did not have the same right as a human to sue for damages under the Copyright Act. But did he have the same ability as a human to create a work of authorship?
Jane Ginsburg and Luke Ali Budiardjo summarized the creativity requirement as applied to machine-created works as a matter of “conception and execution” – that is, a combination first of the mental effort to “elaborat(e) a detailed creative plan for the work” and then the act that puts the work into a fixed medium of expression.138 While AI machines may be able to pull off the execution, they argue, it is debatable whether they can satisfy the first element of creation because “computers today, and for proximate tomorrows, cannot themselves formulate creative plans or ‘conceptions’ to inform their execution of expressive works.”139 Legal scholars have nevertheless pushed forward into the possibility of machines overcoming this limitation of “conception” in creating works. Intellectual property scholars Shlomit Yanisky-Ravid and Luis Antonio Velez-Hernandez recognized that what they term “creative robots,” driven by increasingly complex artificial intelligence and machine learning systems, can “autonomously create original works – independently of the human beings who created the AI system itself.”140 One illustration the authors provide is from the 2014 science fiction film Ex Machina, in which the robot Eva gives a human “a drawing she created as a gift to capture his heart.”141 They argue that the traditional approach to requiring originality is outdated and insufficient to handle the coming distinction between the humans that create the “creative robots” and the creative

activities of those robots in determining who is responsible for a creation and who deserves legal rights in those works. Ultimately, they encourage an objective approach that relies less on the author’s intentions to create a new work and more on the audience’s perception of the work as one that is creative and worthy of protection. While rethinking copyright law in this way may satisfy the question of whether a “creative robot” can in fact create an original work of authorship worthy of the exclusive rights granted by copyright law, it does not completely answer the question of who would own those rights. Brown noted three potential ownership situations – the developer, the end user, or some sort of joint ownership system between them, though she ultimately argued for Congress to establish the authorship right for the end user to avoid the indefiniteness and complications of the other models.142 Yanisky-Ravid, similarly, argues that because of the volume of stakeholders involved in developing an AI system – including programmers and software developers as well as “data suppliers, feedback suppliers, holders of the AI system, system operators, employers or investors, the public, and the government” – it is virtually impossible to parse out which human behind the machine is potentially entitled to some form of ownership of its works.143 Only programmers, she argues, have a viable legal interest, and she urges following the “work made for hire” doctrine to assign copyrights for AI works. 
This doctrine is typically applied in employment and contract work; the idea is that the employer is entitled to copyrights for works created by employees within the scope of their employment, or for contracted or commissioned works that are part of a larger whole in a collective work (for example, a technician’s or actor’s work on a motion picture), if so designated as a “work for hire” by contract.144 The employer is treated as the creator of the work, and thus the initial copyright holder, in a way that could be mirrored by users or owners of AI systems receiving copyrights for the works of the AI beings they direct. In situations in which the AI acts autonomously to create a work, Yanisky-Ravid suggests they be treated as independent contractors, though this would return to the problem of how to assign human rights of ownership to a non-human being. Ginsburg and Budiardjo, instead, suggest recognizing different categories of creative works – ones that are “authorless,” not simply because they are computer-generated, but because the human authors cannot be identified or otherwise could not be expected to have any claim of control over the actual output of the algorithm. They give the example of a newspaper that took a step beyond automating some articles from financial reports or sports scores using an algorithm by going on to develop a “machine that will convert raw news agency reports into articles reflective of the newspaper’s journalistic style (and) reflecting the newspaper’s reportorial and editorial biases.”145 This, they suggest, would leave both editors and programmers out of the ownership chain under current copyright law, leaving the resulting works “authorless,” though there would be some potential resolution in treating the programmers and end users as joint authors to

avoid widespread classification of computer-generated works as “authorless.” Reality is quickly catching up to the automated journalist envisioned by Ginsburg and Budiardjo; Bloomberg News estimated that about a third of its content used some form of automated technology by 2019, largely financial news based on companies’ quarterly earnings reports, while “robot reporters” were used to generate articles on “minor league baseball for the Associated Press, high school football for The Washington Post and earthquakes for The Los Angeles Times.”146 Beyond journalists, AI authors may be helping science fiction authors as well; Robin Sloan, the author of Mr. Penumbra’s 24-Hour Bookstore, wrote a program similar to auto-complete functions that generates text based on snippets of writing he feeds to the computer. He described it as “primitive” but also saw potential for more advanced AI to contribute to writing.147 Sloan said in an interview: A couple years ago, I actually was just tinkering, out of pure curiosity, with some of these machine learning text models. I assumed I would play with them for a day, and then be like, OK, good to know, and then put it away. In fact what happened was some of the text that it spit out was, to me, like poetry. Sort of the odd way it had of using words, the way it combined things, it just seemed really interesting, and in some cases beautiful. And any time you encounter that as a writer, any time you encounter something that can generate beautiful language, you pay attention because you want to use that. You want to have more ways to make your own language more beautiful and more interesting and more surprising. The content of Sloan’s machine learning text predictor uses a combination of public domain and copyrighted works, and he has wondered about the copyright status of the new text, which has been transformed from its original purposes but used for commercial authorship. 
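Sloan’s generator was built on neural language models, but the basic idea of statistical text generation he describes can be sketched with something far simpler: a Markov chain over word pairs. The toy version below is my own illustration, not Sloan’s code; it “learns” which words follow which in a corpus and then walks that table to emit new text in the corpus’s style:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each sequence of `order` words to the words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table, order=2, length=8, seed=0):
    """Walk the table, always choosing a word that has followed
    the current context somewhere in the training text."""
    rng = random.Random(seed)
    out = list(rng.choice(list(table)))  # start from a random context
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this context never continued in training
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
table = train(corpus)
print(generate(table, length=8, seed=1))
```

Everything such a model emits is recombined from its training text, which is exactly why the copyright status of output derived from copyrighted works, as Sloan wondered, remains an open question.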
Potentially troublesome in the aforementioned rights approaches for AI authors – granting the copyright to the end user, using the “work made for hire” doctrine, or allowing joint authorship in a way that it hasn’t been used in the past – is that they would only be workable for one generation of creative robot, the kind developed by humans. How might copyright assignment work when the AI being is created by a robot? Robots building their own artificially intelligent robots is a scenario that pops up frequently in science fiction. Consider the self-aware Skynet in the Terminator series of films that makes the eponymous killer androids, which are able to act independently to carry out tasks for Skynet, in particular going back in time to destroy threats to the AI system. Similarly, in Daniel Wilson’s Robopocalypse, Archos is able to create new, independent robots – sometimes assembled by humans it has captured and basically enslaved – to do work for the system that is trying to purge the world of humanity. Archos and its progeny can talk, communicate in their own

language, take photographs and video, and engage in other activities that may be eligible for copyright protection, particularly under modern, maximalist copyright laws. And the robot Tasso in Philip K. Dick’s 1953 terrifying short story “Second Variety” fools a soldier into believing she is another human being but is later revealed to be a creation of the robots fighting the humans; she is an independent being, acting to serve the will of the robot armies, which are rapidly developing new varieties of robots to infiltrate human ranks by drawing their sympathy by appearing to be children, wounded soldiers, or women. In the end, the robots themselves seem to be developing weapons to fight one another. How many generations removed from human creation does the original creator retain exclusive rights of its increasingly independent creations? The possibility of second- and third-generation robots created by other robots authoring new works would make the idea of tracing ownership of their works back to original creators problematic. A system that grants copyrights by tracing the creative activity to a human of some kind – whether the programmer or the end user – would not be workable when humans are distantly removed or are otherwise no longer involved in the chain of creative activity. A different legal framework would be necessary. If non-human beings are truly incapable, under the law, of being the author of a work for copyright purposes, then these works may have to be classified as “authorless” in a more direct sense and thus in the public domain entirely. 
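The tracing problem described above can be made concrete with a toy model. This is entirely hypothetical – the names and structure are mine, not a proposal drawn from the scholars discussed. Each creator records who made it, and authorship under a human-tracing doctrine is assigned by walking the chain back through the generations until a human is found, or is not:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Creator:
    name: str
    is_human: bool
    made_by: Optional["Creator"] = None  # None if original or origin unknown

def traceable_author(creator: Optional[Creator]) -> Optional[str]:
    """Walk back through generations of creators looking for a human
    to whom authorship could plausibly be assigned."""
    while creator is not None:
        if creator.is_human:
            return creator.name
        creator = creator.made_by
    return None  # no human in the chain: the work is effectively "authorless"

programmer = Creator("human programmer", is_human=True)
first_gen = Creator("AI built by the programmer", False, programmer)
second_gen = Creator("robot built by the AI", False, first_gen)

assert traceable_author(second_gen) == "human programmer"
assert traceable_author(Creator("self-assembled robot", False)) is None
```

Even this trivial model shows the difficulty: once the chain is long, broken, or collectively authored, the walk either ends at a human with no meaningful creative contribution to the work, or at no one at all.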
Amir Khoury argues that this is both the only logical choice and the optimal outcome, saying that what he calls “hubots” (human-like robots) “should not qualify for (intellectual property rights) no matter the degree of their independent intelligence” because they are, in essence, merely machines, without the human ability to exist as more than that.148 “Imagine wanting to protect the music that the wind generates when it moves through wind chimes, the sound of a waterfall, or birds proclaiming the advent of a rising sun,” he says as a parallel. “All these sounds cannot be attributed to a legally recognized persona and therefore remain in the public domain as a matter of course.”149 Alternatively, to avoid a growing universe of authorless works, it may be necessary to allow bots like Med in Autonomous to have an ownership right in works equivalent to those of human authors. Such a path, which may incentivize and inspire superintelligent if non-human authors to generate new works of art and science, would almost certainly require radical rethinking of intellectual property rights. How would a fixed term of ownership tied to the lifespan of a human being have to be adjusted to a non-human body that could last for decades or centuries longer? How would the realm of protected works be expanded exponentially by machines that can generate new works of authorship in microseconds rather than days, months, years? Could robot lawyers be allowed to take the bar exam, become licensed, and represent their robot author clients in court to bring copyright infringement actions?

132 Do Androids Dream of Electric Free Speech?

Conclusion

One of the communication law areas I have examined through the lens of future technology is the right to know, as embodied in “sunshine laws” such as the Freedom of Information Act and other public records and open meetings laws. In 2016, as the Texas legislature was considering revisions to its Public Information Act, I had the opportunity to speak to a group of policymakers, including legislative aides, transparency advocates, staff from the attorney general’s office, city and county government group leaders, and representatives from news media. One of the issues that came up in that meeting was how citizens and other information seekers were using technology – in this case, bots – to enhance their abilities to file open records requests. While sometimes this was just a matter of generating a script to file routine requests on a regular basis, one of the local government representatives mentioned a filer who had a bot generate hundreds of letters a day to the same organization, each request similar but just different enough to require a separate response from a harried records clerk. It was reminiscent of a similar situation from a few years before, when an activist from Wisconsin created a program that “automatically spit out requests, twice a week, for all of the emails generated by the governor’s office” in Texas, as a means of halting the governor’s policy of deleting emails after seven days as a matter of document retention and destruction policy.150

Nothing the citizen or the bot did was clearly illegal; the citizen was entitled to seek records from the government, and writing a letter is the proper mechanism to do it. The bot just made it simpler and gave the requester the ability to do things he would not have been able to do otherwise. Even if the citizen’s intent was perhaps to harass or paralyze the records office, there was nothing technically wrong about it. 
The following legislative session was not a good one for government transparency in Texas; several bills with bipartisan support that would have closed loopholes in the Public Information Act – including one opened by a notorious anti-transparency court decision that effectively prevented the release of government contracts with private vendors – failed to advance.151 But a minor fix aimed at limiting harassment-by-bot did become law in that session. Even though it did not explicitly reference technology or bots, the revision curtailed the ability of citizens to use that kind of technology, even if it was in furtherance of a central right of citizenship to information about the affairs of government. The new law capped the time the government officer handling requests could spend per requestor at 15 hours per month, at least without receiving processing and copying costs in advance, effectively limiting the potential effects of bot-generated records requests that could overwhelm offices trying to comply with the Public Information Act.152

Artificial intelligence and algorithms enhance humans’ abilities to accomplish tasks, even acts central to democracy such as requesting records for public oversight purposes. As professor Jared Schroeder noted, “they have become
gatekeepers and content creators and, unlike human speakers, they do not require sleep, become bored, or deviate from their programmed topic.”153 They communicate with us and shape the way we experience the world.

They also present challenges for the law when they allow humans to become more than human, a situation that has arisen several times in science fiction. At what point does enhancing a human with machinery make them more than human, or even not human at all? Such enhancements are part of a long history of humans using tools to reshape themselves and their environment, altering the way humans experience the world and encounter one another in drastic ways. Law professors Brett Frischmann and Evan Selinger considered, for example, “mind-extending technologies” in their 2018 book Re-Engineering Humanity. They suggest that tools such as smartphones, automated systems tied to networked sensors and big data, and algorithms and machine learning – things we see around us as a norm in the present world – may be reshaping humans and the way we approach the world. We may have access to more information, more rapidly, than at any time in human history, but does this actually make us smarter, or more powerful? Mind-extending technologies, Frischmann and Selinger argue, “run the risk of turning humans into simple machines.”154 When we alter ourselves with machinery, then, perhaps it is not robots becoming more like humans, but rather humans becoming more like robots, an issue that could reshape the relationship between humans and our autonomous creations.

The machines that enhance us may even take us over, as in Vernor Vinge’s Rainbows End, when the artificial intelligence known as “Rabbit” or the “Mysterious Stranger” penetrates recovering Alzheimer’s patient Robert Gu, a former poetry professor who has received treatment to recover his memory and rejoin the conscious world. “I am an all-encompassing cloud of knowingness,” the Mysterious Stranger tells Gu. 
“What I need is your hands. Think of yourself as a droid who was once a poet.”155

How should the law consider humans enhanced by technology to the point that they become something perhaps different from humans? Consider the cyborg Nebula in the Marvel Cinematic Universe. Nebula was a living being, adopted by the titan Thanos as a daughter. But Thanos forced her to battle with her sister, Gamora. “Every time my sister prevailed, my father would replace a piece of me with machinery, claiming he wanted me to be her equal,” Nebula says in Guardians of the Galaxy Vol. 2. After repeatedly losing in combat, Nebula is almost entirely a machine, telling Gamora, “Thanos pulled my eye from my head and my brain from my skull and my arm from my body because of you.”156 Nebula has been enhanced to the point that, while she is sentient and autonomous, she more resembles a robot than a human.

The humans altered with machinery in Robopocalypse call themselves “transhuman,” as they feature implants and other robotic additions forced upon them by the malicious artificial intelligence Archos.157 They are called “synths” in Nancy Fulda’s 2013 short story “The Cyborg and the Cemetery,” featuring a prosthetic
lower leg that, through its neurological connections to its owner, becomes the first true sentient machine, ultimately becoming the home for its owner’s personality and knowledge after his body can no longer function.158 The owner and the device work together to pass “Synth Autonomy” laws as the age of intelligent machines begins.159 A challenge arises when the owner walks away from the device, allowing his regular human life to end, while the sentience of the device allows it to live on, running the man’s business operations and advocacy efforts.

Such enhancements offer the possibility of extended and even indefinite life, at least as portrayed in science fiction. In the 2014 film Transcendence, for instance, the AI scientist Dr. Will Caster (portrayed by Johnny Depp) uploads his consciousness to a “quantum computer” as he is dying, allowing him to live on and, as seems to happen sometimes in these kinds of stories, turn into a civilization-threatening menace.160 But not all consciousness uploading has Frankensteinesque consequences. Cory Doctorow envisioned a mid-21st-century future in which people are able to “walk away” from a dysfunctional, climate-ravaged world, leaving their human bodies behind, transferring their consciousness into a simulated system where they can live on forever, “able to think everything they used to be able to think with their meat-brains and also to think things they never could have thought.”161

The 2013 Black Mirror episode “Be Right Back” includes similar themes, in which a woman named Martha (played by Hayley Atwell) is able to recreate her boyfriend, Ash (Domhnall Gleeson), who was killed in a car accident. It begins with software that can emulate him via text message. “You give it someone’s name, it goes back, and reads through all the things they’ve ever said online, their Facebook updates, their Tweets, anything public,” Martha’s friend tells her. 
Martha grants the system access to his private emails and photos and videos, making the simulation even more realistic; ultimately, she has that simulation installed on a synthetic body that looks exactly like him.162

The future in the videogame Portal features a similar autonomous former human, GLaDOS, who operates the system and generally torments the player’s character, Chell, who has to try to escape and defeat the machine. It is revealed in Portal 2 that GLaDOS is actually the uploaded consciousness of Caroline, an Aperture Science employee who the game hints was the secretary and mistress of the Portal gun’s inventor, Cave Johnson.163 Is Caroline, embodied at times in a machine, as a computer system, and even as a potato attached to the Portal gun, still a person?

The question raises issues about the point at which human life ends and digital life begins, at least under the law. In her exploration of the way criminal law would apply to humans enhanced by technologies such as neural implants or some means of uploading human minds to digital form, law professor Susan Brenner explored the bounds of “personhood” under the law and how such enhancements may change the way we think about it. “Mind uploading,” if possible, would allow people “to decant their brains into a computer or other
artificial host and ‘live’ essentially as long as they chose.”164 Brenner said it seems inevitable that, in the future, humanity will split into two classes: the “Enhanced,” who use technology to improve their native abilities, and the “Standard,” who will not or cannot use technology to that end.165 In examining the possibilities for criminal law, and potential responsibility, for enhanced humans who may be able to take advantage of non-enhanced victims, Brenner suggested that we “may very well have to revisit (the) assumption” that all humans are equal before the law.166

One way of determining where to draw that line of “personhood” for enhanced humans as they speak, cause harm, or create works subject to copyright can be found in the visions of a few science fiction authors. Consider the following three rules.

The first rule could come from what Kurt Vonnegut said was the second “clear moral” of his 1962 novel Mother Night: “When you’re dead you’re dead.”167 Once a person’s natural life ends, the legal personhood and rights of that person end as well. Human law does not envision endless life, or endless property ownership, for that matter. Human speech from before one’s natural death may continue on, for legal purposes, to be sure; consider the defamation lawsuit filed by former Minnesota governor and professional wrestler Jesse Ventura against the estate of Chris Kyle, who had died shortly after his book American Sniper was published in 2012. Ventura won a $1.8 million judgment against the Kyle estate after arguing that Kyle had told a false story in which Ventura disparaged U.S. soldiers and the Iraq War and Kyle punched Ventura, knocking him to the floor.168 However, new communications after death are made by machine, not by the human from whom they derived, and are not subject to the same kind of liability. As the U.S. 
Supreme Court noted in 2019, a judge cannot cast a vote from beyond the grave, even if the way the judge would vote was clear before death. The Supreme Court vacated a decision by the Ninth Circuit that was based on the deciding vote of Judge Stephen Reinhardt, who had been present for arguments and registered a vote but passed away before the decision was issued. Without him, the vote had five judges on each side of the issue, and thus no majority to set precedent for the circuit. “Federal judges are appointed for life, not for eternity,” the Supreme Court ruled.169 New communications generated by artificial consciousness deserve similar consideration.

A second rule finds inspiration in the Star Wars universe, in what I call the “Darth Vader Line.” In the Star Wars films, Jedi Knight Obi-Wan Kenobi roundly defeated Darth Vader in a lightsaber duel in Revenge of the Sith, destroying Vader’s arms and legs and leaving him burning and dying on the planet Mustafar. But Vader is rescued and placed into life-sustaining armor by Darth Sidious, allowing him to continue on as his apprentice. In Return of the Jedi, Obi-Wan Kenobi describes Vader as “more machine now than man, twisted and evil.”170 Perhaps “more machine than human” is a way to approach communications made by humans that are enhanced by machinery. When it is human life and
thought that is enabled by machines – consider the speech synthesizer and predictive text technology that allowed the physicist Stephen Hawking to give lectures and write books – the communication is likely more human than machine.171 An algorithm or implant that does more of the creating than the human, by contrast, would result in liability or intellectual property rights reverting to its human creator or owner, as with works of automated journalism or poetry and prose generated through machine learning systems. If Vader were, indeed, more machine than man, then perhaps we could pin the blame for his long run of murders on Emperor Palpatine, though I would argue that in the end, Vader proves to be autonomous and sentient, still capable of human thought and action, still responsible for his deeds. If he had survived his final battle, the copyright for his memoirs would have been his and his alone.

Finally, the legal problems of granting personhood or recognizing rights and liability for robots present another potential approach. Scholars and experts studying the question have sometimes commented on the futility of legislating futuristic robotics and artificial intelligence. Doctorow, for instance, said that the thing that makes robots worthy of such attention – the “intelligence” built into them – “is the software that runs on a general-purpose computer that controls them,” and there is no meaningful way to regulate general-purpose computing without curbing the great benefits it provides to humanity.172 Perhaps, instead, a bright-line rule is in order: Robots that are created by humans – think of these as “first-generation robots” – do not carry responsibility for their own actions and harms because those actions and harms can be traced back to their human creator or owner. 
But sentient, autonomous robots created or designed by other robots – what would be “second-generation robots” – bear responsibility for their own actions unless they are owned or controlled by humans. Second-generation robots are more likely to be independent of human creation, and thus make it harder to trace liability (in the case of speech harms) or ownership (in the case of creative works) to humans. In a way, it is a distinction between being built by humans and being born of non-humans. As the replicant officer known as “K” says in Blade Runner 2049, after discovering what appears to be evidence that a child has been created from a pair of replicant parents, “To be born is to have a soul.”173

Returning to the example of Lt. Commander Data detailed at the beginning of this chapter, such a rule would mean that he would not have the same claim to personhood rights as a human being, but would instead be declared the property and responsibility of Starfleet. That is an outcome that may be preferred by legal scholars who warn of the “Android Fallacy” of making regulations and policies based on the sympathetic, human-like visions of robots and artificial intelligence that we understand from popular culture. What we may interpret as agency or free will in a robot is actually created by some other outside force, and as Richards and Smart noted, “it is vital for us to remember what is causing the agency.”174 It is harder to make the same argument for a being created without the guiding hand of
human design. Starfleet faces a similar conundrum when Data creates “Lal,” an android he refers to as his daughter, in a later episode entitled “The Offspring.” Again, Starfleet tries to determine whether Lal is a life form or an invention. And again, Captain Picard defends Data, calling him and Lal “living, sentient beings” with liberties and freedoms that had already been established. But further, there is a special bond Picard recognizes between parent and child that makes an even stronger case for their personhood-like rights: “Order a man to hand over his child to the state? Not while I am his captain.”175

Drawing boundaries in law, including restrictions on advances that would allow robots and artificial intelligence to grow beyond our power as humans to control for our own protection, may be coming sooner than we think. Vernor Vinge has written extensively about the possibility of the post-human era. In a 1993 essay, he projected that within 30 years, “we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”176 This moment is often referred to as “the Singularity,” when human creations exceed humans themselves in intelligence, a revolution similar to that of the “rise of human life on earth.”177 Vinge suggests that this progress is unpreventable, but it may occur more through “intelligence amplification” than artificial intelligence as a path toward “superhumanity.”178

Futurist author Ray Kurzweil also expected that, by the end of the 2020s, hardware and software will have advanced “to fully emulate human intelligence,” thus “we can expect computers to pass the Turing test, indicating intelligence indistinguishable from that of biological humans.”179 Like Vinge, Kurzweil anticipates the rapid development of artificial intelligence, aided by its ability to advance faster than human skills, which are a product of natural evolution over vast swaths of time. 
Kurzweil projects that “strong AI” with the power to create superhuman machines will emerge by 2029, with the “extraordinary expansion contemplated by the Singularity, in which human intelligence is multiplied by billions” not arriving “until the mid-2040s.”180

Science fiction gives us some ways to think about this. One lesson, from Frankenstein to Robopocalypse to the Terminator films, is that we are potentially creating the means of our own destruction, and should develop regulations and policies accordingly. But we may also be developing the means to help humanity achieve its greatest ends – peace and prosperity, space exploration and contact with other civilizations. Asimov, writing in 1985, saw humans and robots growing together, with “silicon-intelligence” able to do computational tasks beyond human abilities, while “carbon-intelligence” is freed to be more creative and imaginative in solving problems. “I see the two together advancing far more rapidly than either could alone,” Asimov said.181 The decisions we make today about the ways robots communicate and create will influence the way we think about our future. And as the Starfleet judge says when considering Data’s right to determine his fate, “won’t we be judged by how we treat that race?”182


Notes

1 Paul Joseph & Sharon Carton, The Law of the Federation: Images of Law, Lawyers, and the Legal System in “Star Trek: The Next Generation”, 24 U. Toledo L. Rev. 43, 84 (1992).
2 Professor Daniel Drezner rated “Measure of a Man” eighth in his ranking of the top ten episodes of all of the Star Trek series upon the 50th anniversary of the airing of the original series in 2016. Daniel W. Drezner, The Top 10 ‘Star Trek’ Episodes Ever, Wash. Post, Sept. 13, 2016, www.washingtonpost.com/posteverything/wp/2016/09/13/the-top-ten-star-trek-episodes-ever/.
3 Star Trek: The Next Generation, The Measure of a Man (Paramount Television broadcast, Feb. 13, 1989).
4 See, e.g., F. Patrick Hubbard, “Do Androids Dream?”: Personhood and Intelligent Artifacts, 83 Temple L. Rev. 405 (2011) (outlining the capacities an android such as Data should have in order to be capable of personhood and thus self-ownership).
5 Eileen Hunt Botting, Mary Shelley and the Rights of the Child: Political Philosophy in “Frankenstein” xi (2017).
6 Isaac Asimov, Foreword, in Handbook of Industrial Robotics, 2nd ed. xi (Shimon Y. Nof ed., 1999).
7 Id.
8 Isaac Asimov, Runaround, in Isaac Asimov, I, Robot 37 (1950).
9 Isaac Asimov, Robots and Empire 353 (1985).
10 Jack M. Balkin, The Three Laws of Robotics in the Age of Big Data, 78 Ohio St. L. J. 1217, 1218 (2017).
11 Asimov, supra note 6, at xii.
12 Daniel Wilson, Robopocalypse 20 (2011).
13 Id. at 62–63.
14 Kurt Vonnegut Jr., EPICAC, in Kurt Vonnegut Jr., Welcome to the Monkey House 277 (1970).
15 See Philip K. Dick, Do Androids Dream of Electric Sheep? (1968).
16 Philip K. Dick, Second Variety, in Philip K. Dick, The Philip K. Dick Reader 385 (1987).
17 Arthur C. Clarke, 2001: A Space Odyssey 52 (1968).
18 Rosalind W. Picard, Does Hal Cry Digital Tears?, in HAL’s Legacy: 2001’s Computer as Dream and Reality 280 (David G. Stork ed., 1997).
19 Kate Darling, “Who’s Johnny?” Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy, in Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence 173 (Patrick Lin ed., 2017).
20 Ryan Calo, Robots in American Law, U. Wash. Research Paper No. 2016–04 5 (2016).
21 Toni M. Massaro, Helen Norton, & Margot E. Kaminski, SIRI-OUSLY 2.0: What Artificial Intelligence Reveals About the First Amendment, 101 Minn. L. Rev. 2481, 2485 (2017).
22 Neil M. Richards & William D. Smart, How Should the Law Think About Robots?, in Robot Law 22 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).
23 Asimov, supra note 6, at xi.
24 Douglas Adams, The Hitchhiker’s Guide to the Galaxy 91 (Pocket Books 1981) (1979).
25 “Life! Don’t talk to me about life.” Id. at 95.
26 David J. Gunkel, Robot Rights 16 (2018).
27 Richards & Smart, supra note 22, at 5–6.
28 Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 532 (2015).
29 Kate Darling, Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, in Robot Law 215 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).


30 Cory Doctorow, Why It Is Not Possible to Regulate Robots, The Guardian, April 2, 2014, www.theguardian.com/technology/blog/2014/apr/02/why-it-is-not-possible-to-regulate-robots.
31 Bryan Casey & Mark Lemley, You Might Be a Robot, ___ Cornell L. Rev. _____ (2019).
32 Ernesto Londoño, Going Down: Brazil’s Profession of Chatty Elevator Attendants, N.Y. Times, Nov. 25, 2018, A4.
33 Douglas Adams, The Restaurant at the End of the Universe 97 (Pocket Books 1982) (1980).
34 Wilson, supra note 12, at 15–16.
35 Vlad Sejnoha, Can We Build ‘Her’?: What Samantha Tells Us About the Future of AI, Wired, February 2014, www.wired.com/insights/2014/02/can-build-samantha-tells-us-future-ai/.
36 Meg Leta Jones, Silencing Bad Bots: Global, Legal and Political Questions for Mean Machine Communication, 23 Comm. L. & Pol’y 159, 164 (2018).
37 John Markoff, Joseph Weizenbaum Dies; Computer Pioneer Was 85, N.Y. Times, March 13, 2008, A22.
38 The Wild Detectives, Louisa Hall – The Pursuit of Substantial Language (and the Chances of Not Finding It), Jan. 3, 2016, thewilddetectives.com/the-wild-detectives/articles/interviews/louisa-hall-the-pursuit-of-substantial-language-and-the-chances-of-not-finding-it/.
39 Louisa Hall, Speak 275 (2015).
40 Ann Leckie, Ancillary Justice 18 (2013).
41 Andrea L. Guzman, Introduction: “What is Human-Machine Communication Anyway?”, in Human-Machine Communication: Rethinking Communication, Technology, and Ourselves 12 (Andrea L. Guzman ed., 2018).
42 Tim Wu, Machine Speech, 161 U. Penn. L. Rev. 1495, 1496 (2013).
43 Tinker v. Des Moines Independent Community School Dist., 393 U.S. 503 (1969).
44 Texas v. Johnson, 491 U.S. 397 (1989).
45 Citizens United v. Federal Election Commission, 558 U.S. 310 (2010).
46 Spence v. Washington, 418 U.S. 405, 410 (1974).
47 Brown v. Entertainment Merchants Ass’n, 564 U.S. 786, 790 (2011).
48 Jane L. Bambauer, Is Data Speech?, 66 Stanford L. Rev. 57, 60 (2014).
49 273 F. 3d 429 (2nd Cir. 2001).
50 17 U.S.C. § 1201(a)(1)(A).
51 273 F. 3d at 451.
52 Jennifer Petersen, Is Code Speech? Law and the Expressivity of Machine Language, 17 New Media & Society 415, 423 (2013).
53 Id. at 428.
54 See Noam Cohen, Professor Makes the Case That Google is a Publisher, N.Y. Times, May 21, 2012, B3.
55 Eugene Volokh & Donald M. Falk, Google First Amendment Protection for Search Engine Results, 8 J. L. Econ. & Pol’y 884, 888–9 (2012).
56 Zhang v. Baidu.com, 10 F. Supp. 3d 433, 436 (S.D.N.Y. 2014).
57 See Jonathan Stempel, China’s Baidu Defeats U.S. Lawsuit Over Censored Search Results, Reuters, Mar. 27, 2014, www.reuters.com/article/2014/03/27/us-baidu-china-lawsuit-idUSBREA2Q1VS20140327.
58 10 F. Supp. 3d at 438.
59 e-ventures Worldwide, LLC v. Google, Inc., 2017 U.S. Dist. LEXIS 88650, 12 (M.D. Fla. 2017).
60 Wu, supra note 42, at 1498.
61 Id. at 1530.
62 Isaac Asimov, Evidence, in Isaac Asimov, I, Robot 188 (1950).


63 Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231, 1255 (1992).
64 Id. at 1262.
65 Id. at 1276.
66 F. Patrick Hubbard, “Do Androids Dream?”: Personhood and Intelligent Artifacts, 83 Temp. L. Rev. 405, 417 (2011).
67 Susan W. Brenner, Humans and Humans+: Technological Enhancement and Criminal Responsibility, 19 Boston U. J. of Sci. & Tech. L. 215, 253 (2013) (considering the implications on criminal law of humans enhanced by technological devices).
68 Id. at 237–38.
69 Id. at 256–57.
70 Richards & Smart, supra note 22, at 16.
71 Solo: A Star Wars Story (Lucasfilm, 2018).
72 Annalee Newitz, Autonomous 34–35 (2017).
73 Id. at 31.
74 Leckie, supra note 40, at 156.
75 Id. at 339.
76 Wilson, supra note 12, at 77.
77 Jonah Bromwich, Maybe We Are Wired to Beat Up Machines, N.Y. Times, Jan. 19, 2019, ST1.
78 Isaac Asimov, The Caves of Steel 28 (1954).
79 Id. at 43.
80 Id. at 55.
81 Asimov, supra note 9, at 337.
82 Blade Runner (Warner Bros. 1982).
83 Blade Runner 2049 (Warner Bros. 2017).
84 Hall, supra note 39, at 239.
85 Id. at 17.
86 Id. at 1, 241, 314.
87 Toni M. Massaro & Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Northwestern U. L. Rev. 1169, 1172–74 (2016).
88 Id. at 1192.
89 558 U.S. at 310.
90 Jacob Turner, Robot Rules: Regulating Artificial Intelligence 129 (2018).
91 Radiohead, Karma Police, on OK Computer (Parlophone and Capitol Records 1997). Even though it is on an album titled OK Computer that has a song called “Paranoid Android,” it turns out the song isn’t specifically about robot communications, but rather a complaint about bosses and middle management, according to singer Thom Yorke. Ryan Dombal, This is What You Get: An Oral History of Radiohead’s “Karma Police” Video, Pitchfork, March 21, 2017, pitchfork.com/features/ok-computer-at-20/10036-this-is-what-you-get-an-oral-history-of-radioheads-karma-police-video/.
92 Peter W. Singer, Isaac Asimov’s Laws of Robotics are Wrong, Brookings, May 18, 2009, www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-wrong/.
93 Wilson, supra note 12, at 295, 257.
94 Newitz, supra note 72, at 225.
95 Id. at 70.
96 273 F. 3d at 445–46.
97 Wu, supra note 42, at 1500. See also Jared Schroeder, The Press Clause and Digital Technology’s Fourth Wave: Media Law and the Symbiotic Web 165–75 (2018); Massaro & Norton, supra note 87, at 1188.
98 See Miles v. Augusta, 710 F.2d 1542 (11th Cir. 1983).
99 Id. at 1544 n.5.
100 See Miles v. City Council of Augusta, Ga., 551 F. Supp. 349 (S.D. Ga. 1982).


101 Hall, supra note 39, at 24.
102 Massaro & Norton, supra note 87, at 1193.
103 See Jones, supra note 36, at 159–63.
104 Curtis E.A. Karnow, The Application of Traditional Tort Theory to Embodied Machine Intelligence, in Robot Law 51 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).
105 David J. Gunkel, Ars Ex Machina: Rethinking Responsibility in the Age of Creative Machines, in Human-Machine Communication: Rethinking Communication, Technology, and Ourselves 224 (Andrea L. Guzman ed., 2018).
106 Tal Montal & Zvi Reich, I, Robot. You, Journalist. Who is the Author?, 5 Digital Journalism 829, 838 (2016).
107 Frank Pasquale, Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society, 78 Ohio St. L. J. 1239, 1253 (2017).
108 See Elizabeth G. Porter, Tort Liability in the Age of the Helicopter Parent, 64 Ala. L. Rev. 533, 554–55 (2013) (noting that while American common law does not typically make parents strictly liable for tortious harm by their children, two states and other countries have adopted this approach).
109 Karnow, supra note 104, at 51.
110 Gunkel, supra note 105, at 231.
111 Seth C. Lewis, Amy Kristin Sanders, & Casey Carmody, Libel by Algorithm? Automated Journalism and the Threat of Legal Liability, 96 Journalism & Mass Comm. Q. 60, 69 (2019).
112 Turner, supra note 90, at 130.
113 Peter Georgiev, A Robot Commits Libel. Who is Responsible?, Reynolds Journalism Institute, Feb. 20, 2019, www.rjionline.org/stories/a-robot-commits-libel-who-is-responsible.
114 47 U.S.C. § 230.
115 Jones, supra note 36, at 186.
116 See Daxton R. “Chip” Stewart, When Retweets Attack: Are Twitter Users Liable for Republishing the Defamatory Tweets of Others?, 90 Journalism & Mass Comm. Q. 233 (2013).
117 Batzel v. Smith, 333 F.3d 1018, 1022 (9th Cir. 2003).
118 Balkin, supra note 10, at 1227–33.
119 Bryan Casey & Mark Lemley, Remedies for Robots, Stanford Law & Econ. Working Paper No. 523, 94 (2019).
120 Id. at 92.
121 Id. at 96.
122 Id. at 100.
123 Newitz, supra note 72, at 224.
124 Id. at 168.
125 Id. at 290.
126 Jonathan Swift, Gulliver’s Travels (1726).
127 Fritz Leiber, The Silver Eggheads (1961).
128 See Steve Carper, The Silver Eggheads by Fritz Leiber, Black Gate, Feb. 21, 2018, www.blackgate.com/2018/02/21/the-silver-eggheads-by-fritz-leiber/.
129 Christopher Bavitz & Kendra Albert, The “Monkey Selfie” Case: Can Non-Humans Hold Copyrights?, Berkman Klein Center for Internet & Society, July 11, 2018, cyber.harvard.edu/events/2018/luncheon/01/monkeyselfie.
130 Naruto v. Slater, 888 F. 3d 418, 420 (2018).
131 Citizens to End Animal Suffering & Exploitation, Inc. v. New England Aquarium, 836 F. Supp. 45 (D. Mass. 1993).
132 The Cetacean Community v. Bush, 386 F. 3d 1169 (9th Cir. 2004).


133 Benjamin L. W. Sobel, Artificial Intelligence’s Fair Use Crisis, 41 Colum. J. L. & Arts 45, 71 (2017).
134 Nina I. Brown, Artificial Authors: A Case for Copyright in Computer-Generated Works, 20 Colum. Sci. & Tech. L. Rev. 1 (2019).
135 17 U.S.C. § 102(a).
136 Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340, 345 (1991).
137 Id. at 359.
138 Jane C. Ginsburg & Luke Ali Budiardjo, Authors and Machines, 34 Berkeley Tech. L. J. ____ (2019).
139 Id. at 7.
140 Shlomit Yanisky-Ravid & Luis Antonio Velez-Hernandez, Copyrightability of Artworks Produced by Creative Robots and Originality: The Formality-Objective Model, 19 Minn. J. L. Sci. & Tech. 1, 7 (2018).
141 Id.
142 Brown, supra note 134, at 42.
143 Shlomit Yanisky-Ravid, Artificial Intelligence, Copyright, and Accountability in the 3A Era – The Human-Like Authors Are Already Here – A New Model, 2017 Mich. St. L. Rev. 659, 662 (2017).
144 17 U.S.C. § 101.
145 Ginsburg & Budiardjo, supra note 138, at 101.
146 Jaclyn Peiser, As A.I. Reporters Arrive, The Other Kind Hangs In, N.Y. Times, Feb. 4, 2019, B1.
147 David Streitfeld, A.I. is Beginning to Assist Novelists, N.Y. Times, Oct. 18, 2018, F6.
148 Amir H. Khoury, Intellectual Property Rights for “Hubots”: On the Legal Implications of Human-Like Robots as Innovators and Creators, 35 Cardozo Arts & Ent. L. J. 635, 647–48 (2017).
149 Id. at 655.
150 Jay Root, Automated Request Halts Email Destruction in Governor’s Office, Texas Tribune, Sept. 14, 2011, www.texastribune.org/2011/09/14/request-halts-email-destruction-governors-office/.
151 See Boeing Co. v. Paxton, 466 S.W.3d 831 (Tex. 2015) (in which the Texas Supreme Court ruled that a government body did not have to disclose its contract details with a private vendor because it may cause the vendor competitive harm). See also Jeremy Blackman, No Right to Know? Texas Public Records Get Harder and Harder to Acquire, Houston Chronicle, March 14, 2019, www.houstonchronicle.com/news/investigations/article/Texas-public-records-get-harder-and-harder-to-13683497.php.
152 Texas Gov’t Code § 552.275(b) (2018).
153 Schroeder, supra note 97, at 161.
154 Brett Frischmann & Evan Selinger, Re-Engineering Humanity 92 (2018).
155 Vernor Vinge, Rainbows End 158 (2006).
156 Guardians of the Galaxy Vol. 2 (Marvel Studios 2017).
157 Wilson, supra note 12, at 293.
158 Nancy Fulda, The Cyborg and the Cemetery, in Twelve Tomorrows 99 (2013).
159 Id. at 101, 109.
160 Transcendence (Warner Bros. Pictures 2014).
161 Cory Doctorow, Walkaway 300 (2017).
162 Black Mirror: Be Right Back (Channel 4 television broadcast, Feb. 11, 2013).
163 See Zachary R. Wendler, “Who Am I?”: Rhetoric and Narrative Identity in the Portal Series, 9 Games & Culture 351, 364–65 (2014).
164 Brenner, supra note 67, at 253 (considering the implications on criminal law of humans enhanced by technological devices).
165 Id. at 220.
166 Id. at 284–85.


167 The first clear moral was “We are what we pretend to be, so we must be careful about what we pretend to be.” Kurt Vonnegut, Introduction, in Kurt Vonnegut, Mother Night v-vii (Bard Books 1970) (1961).
168 Randy Furst, Jesse Ventura Appears to Have Settled Long-Running Defamation Lawsuit Over ‘American Sniper’ Book, Minneapolis Star Trib., Dec. 1, 2017, www.startribune.com/jesse-ventura-appears-to-have-settled-long-running-defamation-law-suit/461382013/. The judgment was eventually overturned. See Ventura v. Kyle, 825 F.3d 876 (8th Cir. 2016).
169 Yovino v. Rizo, 586 U.S. ___ (2019).
170 Star Wars Episode VI: Return of the Jedi (Lucasfilm Ltd., 1983).
171 Paul Sandle, Stephen Hawking’s Voice Was His Tool and His Trademark, Reuters, March 14, 2018.
172 Doctorow, supra note 30.
173 Blade Runner 2049, supra note 83.
174 Richards & Smart, supra note 22, at 19.
175 Star Trek: The Next Generation, The Offspring (Paramount Television broadcast March 12, 1990).
176 Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era, in NASA, Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace 11 (1993), ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855.pdf.
177 Id. at 12.
178 Id. at 16–17.
179 Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology 25 (2005).
180 Id. at 262–63.
181 Asimov, supra note 6, at xii.
182 Star Trek: The Next Generation, The Measure of a Man, supra note 3.

5 VANISHING SPEECH AND DESTROYING WORKS

One familiar theme in science fiction is the destruction of speech, such as books and art and history, as part of a dystopian future. This is most obviously at the forefront of Fahrenheit 451 by Ray Bradbury, in which “fireman” Guy Montag’s job is to set books aflame in a future where they are dangerous to the state and have largely been replaced by television and a post-literate culture. In Lois Lowry’s The Giver, history has largely been forgotten by the people, by design and for their own good, with all culture and remembrance entrusted to just one person in the town, the “Receiver of Memory,” who suffers its pain while the culture engages in unspeakable practices, such as euthanasia of newborn children and the elderly, as a mundane part of life. And in Nineteen Eighty-Four, of course, the state purges books and records and history, and even alters language to make understanding of works of the past impossible, as a means to totalitarian control:

Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.1

But this theme, of oppressive future governments destroying the works of the past to control their citizens, is just one way in which speech can be vanished by outside forces. One particularly powerful and entertaining illustration comes not in science fiction but fantasy, in Susanna Clarke’s Jonathan Strange & Mr Norrell, a novel about gentlemen magicians in the early 1800s who are both friends and rivals. In the story, Gilbert Norrell is a dour, solitary man who hoarded every copy of books of magic that he could find, and he used them to train himself as a

Vanishing Speech and Destroying Works 145

practical magician, while also preventing others from doing the same. He returned magic to England in 1807 when he brought the stones and statues of York Cathedral to life, as part of a wager that, if he was successful, would result in the Learned Society of York Magicians voluntarily disbanding and barring its members from ever attempting magic again. Norrell was successful, moved to London, used practical magic to aid the British in their war against Napoleon, and became a national hero. Along the way, he took on the younger, more flamboyant Jonathan Strange as a pupil. Norrell long sought to remove books and knowledge from the public sphere as a way to protect what he deemed to be the proper exercise of magic. He and Strange had a falling out over this, resulting in Norrell’s greatest act of censorship. Using magic of some kind, he methodically attempted to destroy every copy of Strange’s book, The History and Practice of English Magic, angering Strange’s publisher and readers, Strange sympathizers, and ultimately even the government.2 Buyers of the book returned to bookstores, demanding refunds because the words on the pages of their purchased copies had vanished. Norrell readily admitted to doing it, and even offered to pay the booksellers back for any lost profits. But every copy except for Strange’s original and the one Norrell kept for himself was magically destroyed in Norrell’s effort to play censor. Even in the warehouse where the books were being stored for distribution, every page in every book printed for sale had been made blank. The event led to an uproar across London, as people sought new copies of the book, or recompense for the books that had vanished.
Norrell’s reasoning for destroying all copies of the book was twofold: “first, that the purchasers were not clever enough to understand Strange’s book; and second, that they did not possess the moral judgement to decide for themselves if the magic Strange was describing was good or wicked.”3 But the question remained – what could one do about this kind of act? What legal harms has the author or a potential reader suffered when their works vanish at the hands of a private actor like Norrell, or a government agency like the firemen in Fahrenheit 451? While Norrell’s magic itself may be impossible, the motivations of the censor are quite real and reflected in modern society, and the technology to make speech vanish may very well be plausible. This chapter examines themes of vanishing speech, as presented by science fiction authors and as the law about such things has developed in some modern parallel events, at a number of different levels. First, how the law handles government powers destroying the speech of others will be examined, including destruction of speech and art by disaster or technological devastation or acts of war. Then, the Strange-Norrell matter of private people destroying the works of others is discussed with some modern parallels. This is followed by a brief review of government destruction of its own records or archives through technological means. The chapter concludes with a look at what could be thought of as “killer speech” – the kind of works that are occasionally depicted by authors as so dangerous that they must be destroyed, such as the video in the horror film The Ring.


Government Destruction of Private Speech

We must all be made alike. Not everyone born free and equal, as the Constitution says, but everyone made equal. Each man the image of every other; they are happy, for there are no mountains to make them cower, to judge themselves against. So! A book is a loaded gun in the house next door. Burn it. Take the shot from the cannon.
Captain Beatty, in Fahrenheit 4514

In 1953, Ray Bradbury’s Fahrenheit 451 was published in the wake of a Nazi regime that burned books that contained an “un-German spirit” or that otherwise represented what Joseph Goebbels saw as “Jewish intellectualism,” while the Communist party in the Soviet Union purged hundreds of thousands of books from libraries that were deemed harmful. Bradbury himself wrote that while he thought he “was writing a story of prediction, describing a world that might evolve in four or five decades,” the technology and the culture were making his stories more reflective of the present than the possible future, saying, “When the wind is right, a faint odor of kerosene is exhaled from Senator McCarthy . . .”5 But it wasn’t just these experiences that influenced Bradbury’s take on state-run destruction of the printed word. He feared a dumbing-down of the American public that sought out digests (and digests of digests), uncomplicated thought, television and movies, and other forms of popular but intellectually unchallenging culture. The mass government-sponsored destruction of books he depicts didn’t start with government; “There was no dictum, no declaration, no censorship, to start with, no! Technology, mass exploitation, and minority pressure carried the trick, thank God.”6 But ultimately, the government rode the popular wave to rid the country of these dangerous books. The state had the power to destroy books and the power to arrest those who kept them. It erased and changed history, deeming that the Firemen were actually created in 1790, with Benjamin Franklin as the first to “destroy English-language books in the Colonies,”7 and installing them as a barrier against inequality and inferiority. As Captain Beatty described it, “They were given the new job, as custodians of our peace of mind, the focus of our understandable and rightful dread of being inferior; official censors, judges, and executors.
That’s you, Montag, and that’s me.”8 Ultimately, of course, Montag is ordered to burn down his own home and is about to be arrested before he kills Beatty and escapes. A few years before Fahrenheit 451 was published, the U.S. government was engaging in the very same activities that Bradbury feared. Professor Hans A. Bethe, a physicist at the Los Alamos nuclear laboratory, wrote an article for Scientific American entitled “The Hydrogen Bomb: II,” in which he sought to “clarify the many misconceptions that have crept into the discussions of the H-bomb in the daily press” and also raised moral issues about the international political role of atomic weapons. In April 1950, federal agents raided Scientific American offices by order of the Atomic Energy Commission and destroyed all 3,000 copies of the


issue that had already been printed, melted down the typesetting, and seized the galleys to prevent further distribution of the article.9 The issue was ultimately reprinted “after redacting four technical paragraphs” in Bethe’s article, though there was undoubtedly government intent to silence what it saw as “left-wing authors” and “anti-security editorial policy” of Scientific American, according to documents found in Bethe’s FBI file.10 The notion that the government should not destroy or otherwise censor harmful writings is a relatively new one, and certainly not embraced around the world. The historian Fernando Baez traced deliberate book destruction to the earliest books, made of clay, in Sumer between 4100 and 3100 B.C.E. City-states at war sought to destroy not just their enemies’ people but also their culture, a theme that echoes throughout history. Baez noted:

Books are not destroyed as physical objects but as links to memory, that is, as one of the axes of identity of a person or a community . . . at the root of book destruction is the intent to induce historical amnesia that facilitates control of an individual or a society.11

In ancient Sumer, preservation of books and tablets was turned over to scribes to try to protect them against damage from war and from flooding, as well as political shifts. While many thousands of texts survived this period, Baez estimated that 100,000 clay books were destroyed, either through deliberate acts of war or natural disaster.12 Destroying books has occurred across cultures and religions ever since – many of the works of ancient Greece and Rome were destroyed by fires or invaders; the early Christian church destroyed critical works and copies of the Talmud; Genghis Khan’s Mongol hordes burned the works of the Islamic world. Enormous libraries of knowledge, through the ravages of time or deliberate acts of destruction, have vanished.
Beyond destruction of creative works, censorship is a related way those in power try to control memory and culture. We trace the earliest push against government censorship in the West to 1644, when John Milton delivered Areopagitica, a treatise against state control of the printing press through licensing. Areopagitica has become recognized as “in some respects the foundational essay of the free speech tradition,” according to scholar Vincent Blasi.13 Milton delivered a bold defense of free speech and press to Parliament, detailing both the values of unfettered thought in preventing corruption and promoting advancement of knowledge, as well as the futility of trying to censor speech for the purpose of curtailing bad behavior. Milton particularly pointed out the importance of not destroying books:

as good almost to kill a man as kill a good book; who kills a man kills a reasonable creature in God’s image; but he who destroys a good book, kills reason itself, kills the image of God, as it were, in the eye.14


This philosophy became a hallmark of Enlightenment thought and influenced 18th century British jurisprudence as explained by Judge Blackstone, who noted in 1769 that while publishers should be responsible for the damages and harm their works cause, those works should nevertheless be allowed to be published without prior restraint:

To subject the press to the restrictive power of a licenser, as was formerly done, both before and since the revolution, is to subject all freedom of sentiment to the prejudices of one man, and make him the arbitrary and infallible judge of all controverted points in learning, religion, and government.15

This served as a foundation for the First Amendment in the United States, which demands, “Congress shall make no law . . . abridging the freedom of speech, or of the press.” The First Amendment has served as a model for press freedom and against censorship in democracies around the world, and its principles can be seen in Article 10 of the European Convention on Human Rights, which includes both a right to freedom of expression and “freedom to receive and impart information and ideas without interference by public authority.” However, the EU guarantees are limited by allowing restrictions for a number of reasons, including “the prevention of disorder or crime” and “the protection of public health or morals,” both broad enough that perhaps even widespread destruction of books that are an affront to those in power may not be prohibited by law.
Also, because Article 8 grants people rights to protect personal data about themselves, EU courts have required search engines such as Google to take down links to lawfully published and truthful articles about people in the interest of protecting their privacy, invoking what has come to be known as a “right to be forgotten” in EU law.16 In this way, the EU and legal schemes that are modeled after it may be even more ripe for government-sponsored destruction of speech as envisioned by Bradbury. And in many jurisdictions, there simply is not a bar on government censorship or destruction of speech. Consider China, which has no Western-style protection for freedom of speech or press. The Chinese government regularly censors critics, either through outright prior restraint of potentially damaging speech or jailing and destruction of the speech after the fact. In 2018, when President Xi Jinping moved to abolish constitutional term limits that would allow him to serve as president for life, Chinese censors barred dozens of words and phrases from being posted on the Internet, including “my emperor,” “lifelong,” “shameless,” and any images or references to Winnie the Pooh, the cartoon bear that critics sometimes compared Xi to as an insult.17 The government also banned screening of the film Christopher Robin, which features the portly bear in a new adventure. Elsewhere around the world, the destruction of art or expression by the state has been normalized as well. When the Taliban controlled Afghanistan, it demolished 1,700-year-old statues of Buddha with dynamite and wiped out


museums full of art that the state deemed to be in conflict with its fundamentalist Islamic beliefs.18 More recently, the terrorist group ISIS destroyed Greco-Roman temples and statues in Palmyra, Syria, after capturing the city in 2015. The Getty Research Institute in Los Angeles worked to preserve as much as possible through digital archives of photographs and etchings, reflecting the potential of electronic resources to protect against censorious, destructive acts.19 But even in the United States, with the explicit protections of the First Amendment, the legal bar on government outlawing or destroying harmful publications has only emerged in the past century. It was not uncommon for the government to restrain or destroy speech it perceived as harmful in the 19th century, when the U.S. postmaster refused to circulate abolitionist newspapers to Southern slave-holding states for fear of stirring up unrest,20 or when, during the Civil War, military officials shut down hundreds of Northern newspapers and jailed editors for publishing “disloyal” speech after Abraham Lincoln suspended habeas corpus.21 As recently as 1919, publishers of dissenting pamphlets and people delivering anti-war speeches such as socialist Eugene V. Debs were imprisoned,22 their free speech and free press arguments rejected by the U.S. Supreme Court on grounds that the First Amendment did not protect dangerous speech of this kind, which the court found created a “clear and present danger that they will bring about the substantive evils that Congress has a right to prevent,” akin to the “man falsely shouting fire in a theatre and causing a panic.”23 It was not until 1931 that the U.S. Supreme Court finally delivered a definitive opinion recognizing the bar on government censorship of the press for publishing potentially harmful material. In Near v.
Minnesota, the court by a 5–4 margin struck down a Minnesota state nuisance law that allowed the government to shut down a “malicious, scandalous and defamatory newspaper.” The law had been applied to Jay Near’s newspaper The Saturday Press, which had been publishing stories about gang ties to the police chief and other city officials in Minneapolis. While Near was no saint – he was “(a)nti-Catholic, anti-Semitic, antiblack and antilabor” and his “pen and typewriter were occasionally weapons for hire, a means of scratching out a living as a sort of scavenger of the sins and political vulnerability of others” – he was investigating matters of public interest that actually turned out to be mostly truthful regarding city government corruption.24 The court noted that the freedom of the press guarantee in the First Amendment was, at a minimum, intended to protect against prior restraints such as this. It emphasized the need for a “vigilant and courageous press, especially in great cities,” to combat corruption, even if it sometimes allowed “miscreant purveyors of scandal” to flourish.25 The court’s opinion in Near laid the foundation for the Pentagon Papers case, when the court again rejected an attempt by the government to prohibit newspapers from publishing articles, this time when The New York Times and the Washington Post were using leaked government documents to detail state deception and missteps in the origins and management of the Vietnam War.26 The 1971 decision – less than 20 years after the publication of Fahrenheit


451 – remains a landmark against government censorship of works that challenge state power, even if they may lessen the confidence of citizens in their leadership. Nevertheless, U.S. law still provides ample room for government censorship or destruction of disfavored speech or creative works. Several types of speech – including threats, fighting words, false advertising, and obscenity – are unprotected by the First Amendment, meaning that they are subject to prior restraints in advance of publication or destruction after the fact if they violate laws aimed at them. Scholarship has revealed that these categories of “low-value” speech are also a relatively modern invention; the founders likely did not intend for these kinds of speech to have no protection against censorship, but the law shifted in the mid-20th century to a more libertarian model that ties the value of speech to its contributions to morality, civility, and public order, allowing restraint of speech by courts with the “discretion to deny speech protection merely because they dislike it,” as law professor Genevieve Lakier noted in her study of the development of the low-value speech doctrine.27 Such discretion is, of course, potentially dangerous when both the culture and the government turn against unpopular dissent or works of art that challenge the public order. 
For example, the development of the obscenity standards in the 1960s and 1970s defined an entire category of creative works that were left unprotected if they appeal to the “prurient interest,” depict sexual conduct in a “patently offensive” way, and if they have no legitimate “literary, artistic, political, or scientific value.”28 An aggressive prosecutor or jurisdiction may use obscenity law to harass or threaten artists, as happened to a contemporary art museum and its director, ultimately acquitted by a jury in Cincinnati after being prosecuted in Hamilton County, Ohio, for an exhibition of Robert Mapplethorpe photographs in 1990.29 Judges have ordered child pornography images to be destroyed in the United States, and in Germany, a judge ordered a man to destroy nude images and sex tapes of his former partner because they violated her right to privacy.30 Similarly, the First Amendment and other free press laws do not prevent bans on publication or destruction of works that violate intellectual property laws such as copyright or trademark. There is no shortage of cases in which books or other works are ordered to be destroyed because they run afoul of an owner’s trademark, or are not allowed to be published at all until copyright wrongs are corrected. For example, flea market vendors selling counterfeit Hard Rock Cafe t-shirts were ordered to destroy their stock after Hard Rock sued for trademark infringement under the Lanham Act in a 1992 case.31 And in 2008, Judge Robert Patterson issued a permanent injunction against a publisher seeking to print copies of The Lexicon, a book version of the Harry Potter fan website hp-lexicon.org, a reference work created by librarian Steven Vander Ark. The court noted that because the printed version of the reference guide infringed on author J.K. 
Rowling’s copyright, the injunction “must issue to prevent publication of works that do the same and thus deplete the incentive for original authors to create new works,” such as the Harry Potter encyclopedia that Rowling has long insisted she intended to


create.32 The Lexicon was ultimately published in 2009 after removing the offending passages. In short, the legal tools already exist in the United States to permit the government to restrain or mandate destruction of works that fall outside of First Amendment protection if they are deemed “low-value” or if publication harms the intellectual property rights of others. The Supreme Court has in the present century so far resisted efforts to add certain kinds of speech to the unprotected categories, such as violent videogames,33 depictions of animal cruelty,34 and lying about military honors,35 striking down laws forbidding such speech on grounds that the laws themselves were too vague or overbroad. And since the Pentagon Papers case in 1971, the Supreme Court has not upheld a prior restraint on publication of government matters.

Suppose the government destroyed one’s creative works – if it killed the book, rather than the man, as Milton put it. What remedy in the law might one have? The question does not have an obvious answer. An order to censor could be overturned by a court as an unlawful prior restraint, so the remedy would be allowing the publication to go forward. But if the government ordered all copies of a work destroyed, how has one been harmed, and what could one do to be made whole again, if courts ultimately determined that the action violated free speech or free press protections? Civil remedies against non-government defendants present some complications, which will be discussed in more detail in the next section of this chapter. Against the government, a civil rights action could be a possibility. In the United States, the Civil Rights Act allows lawsuits against the government for “deprivation of any rights, privileges, or immunities secured by the Constitution and laws.”36 These suits have been brought with mixed success in First Amendment cases in which plaintiffs have shown that the government knowingly violated free speech rights.
The cases are not directly related to censorship or prior restraint; rather, they are about subsequent punishment of speech by the government. But the logic would extend to prior restraint situations under the First Amendment. For example, a New York man who extended his middle finger toward police officers and was subsequently arrested claimed malicious prosecution under the Civil Rights Act, saying he was arrested for lawful exercise of his free speech rights. Police claimed they only stopped the man because they thought he was giving them a distress signal. A federal court of appeals was dubious of that argument, noting that “(t)his ancient gesture of insult is not the basis for a reasonable suspicion of a traffic violation or impending criminal activity.” The court said the man’s case against the police department could proceed, and more importantly, it dismissed the police officers’ claims that they were entitled to qualified immunity, which they argued should apply because their behavior was objectively reasonable and did not violate the man’s clearly established constitutional rights.37 Similarly, a federal court of appeals rejected qualified immunity arguments by a district attorney in Colorado who pursued


criminal libel charges against college student Thomas Mink, who had been publishing from his home a parody newspaper called “The Howling Pig” that mocked a professor. The court said that parody and rhetorical hyperbole of this kind was clearly protected by the First Amendment and criminal investigation of Mink violated his constitutional rights, even though the prosecutor argued that she was entitled to qualified immunity.38 The qualified immunity argument by government officials is often successful, even inexplicably so, when some rights seem clearly established, such as a person’s right to record and photograph police officers in public places. For example, in 2017, a federal court of appeals in Texas said officers were entitled to qualified immunity for arresting a person for standing outside the police station and recording officers on his phone, on grounds that the right was not “clearly established” in the circuit at that time.39 This ruling came in spite of the fact that the Texas Court of Criminal Appeals had found three years earlier that “a person’s purposeful creation of photographs and visual recordings is entitled to the same First Amendment protection as the photographs and visual recordings themselves” in striking down a portion of the state law on improper photography in public places.40 The man may have had a First Amendment right to record, but was not able to receive any damages for the government’s abuse of power against him. In short, under U.S. law, the government may be able to argue that as long as its actions restraining or destroying creative works were made in good faith and were objectively reasonable under previously established law, then even if the government has violated citizens’ rights, it cannot be ordered to pay any damages on qualified immunity grounds. This is not promising, of course, as it emboldens the government to restrict or destroy speech without consequence.
Finally, it is worth addressing the efficacy of destroying creative works. In the digital age, it appears to be easier to create multiple copies and archives of works that would have been destroyed by natural disasters such as fires and floods in the past. Ramin Bahrani, who adapted Fahrenheit 451 as a film for HBO in 2018, talked to an 82-year-old friend about it:

“Go ahead and burn books,” he said. “They mean nothing to me. I can read anything on my tablet, from the ‘Epic of Gilgamesh’ to Jo Nesbo, and I can read them in bed, on a plane or next to the ocean, because it’s all in the cloud, safe from your firemen’s torches.”41

Similarly, digital media scholar Zeynep Tufekci noted that while censorship or other means of preventing mechanical dissemination was the best way to prevent the spread of ideas through most of history, that is no longer the case in the modern age. Anyone can set up a blog, or a Twitter account, or livestream an event as it happens, cheaply and easily, with tools readily available to anyone with an Internet connection. The challenge, Tufekci points out, is not


strangling ideas before they can be spread, but countering false or misleading ideas and other disinformation with the truth. The battle is for attention and belief; as she puts it, “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”42 But just as the Sumerians suffered destruction of their clay tablets from floods and fires, natural devastation could wipe out digital works as well. Vernor Vinge imagined the widespread digitization of libraries in Rainbows End, a story that includes a squad of retired college professors raiding the University of California San Diego library to prevent the shredding of books as they are scanned for a worldwide electronic library corporation, thus opening up the library for more computer interfaces, haptic rigs, and virtual reality displays. At the end of the book, Robert Gu, a retired poetry professor leading the raid, is given a 128 petabyte removable disk that another professor, calling himself a “propertarian old fart,” created to include scans of the entire collections of the British Museum and Library. “Put it on the wall to remind yourself that it’s all we ever were,” he tells Gu.43 But it’s also a backup, in case the digital world comes crashing down. Science fiction authors sometimes give us apocalyptic, post-digital futures in which the computerized works of the past have vanished. Consider Emily St. John Mandel’s Station Eleven, in which a plague wipes out much of the world’s population and, in the ensuing chaos, the electric grid goes down across the globe: No more Internet. No more social media, no more scrolling through litanies of dreams and nervous hopes and photographs of lunches, cries for help and expressions of contentment and relationship-status updates with heart icons whole or broken . . . 
No more reading and commenting on the lives of others, and in so doing, feeling slightly less alone in the room.44 Anything preserved solely in digital format has simply vanished because there is no way to access the servers that stored it. The works that persist are those in printed form, such as the graphic novel that gives the book its title, or some of Shakespeare’s plays, performed by a troupe of actors traveling among the few remaining pockets of civilization in what had been the American Midwest. The vanishing of creative works – through the intentional acts of censors or through acts of nature – is a phenomenon that will obviously continue, and digital tools provide no definitive security against it. When it comes from the government against the creators, we can only hope that the robust libertarian principles guiding protection of free speech and press will remain in place to push back against the firemen.

Private Censorship

People who had bought copies were furious at the loss of their books and Mr Norrell did not help matters by sending his servants to their houses with a guinea
(the cost of the book) and the letter in which he explained his reasons for making the books disappear. A great many people found themselves more insulted than ever and some of them immediately summoned their attorneys to begin proceedings against Mr Norrell. Susanna Clarke, Jonathan Strange & Mr Norrell45

As noted in the previous section, when the government destroys the speech or creative works of a person, it almost certainly violates the free speech and free press protections at the core of civil rights protections in democratic governments. But when a private person destroys the work of others, the protections are not quite as clear. The author undoubtedly suffers some harm, but what is the nature of that harm, and how can it be remedied in modern legal systems? There are plenty of fictional counterparts for this private destruction of speech, though not always from a science fiction perspective. Perhaps it is understandable that an author’s nightmare of having a manuscript destroyed sometimes makes its way into the pages of their own stories. Consider Little Women, in which Amy burns Jo’s book after a quarrel between the sisters, a destructive act that returns when Jo burns her own stories, fearing they are too sensational and not serious enough. In Misery, a book by Stephen King that was adapted into a film of the same name, author Paul Sheldon is captured by deranged fan Annie Wilkes and forced to burn the typed manuscript of his new novel so he can instead write another book, bringing back the character whose death in a previous book had so irritated Annie. A similar scene of manuscript destruction, albeit of the digital variety, happens in Gillian Flynn’s Gone Girl, when Amy Dunne threatens to keep her husband, Nick, from their unborn child unless he deletes the memoir he wrote exposing her crimes and deceptions.46 In these situations, the fictional authors lost something they had created, with the only copy destroyed so it could not be shared with the world. It’s a hurtful act, intended to inflict pain on the writer. “Jo’s book was the pride of her heart,” Louisa May Alcott wrote in Little Women. 
“It was only half a dozen little fairy tales, but Jo had worked over them patiently, putting her whole heart into her work.”47 But it’s also a final act, one that deprives the world of the author’s thoughts and ideas. This is a private kind of censorship, one not touched by constitutional provisions intended to protect us from the government doing this to us. However, despite the evident hurt, legal remedies for this kind of private censorship are not obvious. The problem has some historical and modern parallels. One thorny situation it resembles is newspaper theft, which is surprisingly common, especially on college campuses. The act was common enough to appear in the 2013 film Monsters University, when brothers from Mike and Sully’s fraternity steal copies of the campus newspaper to prevent an embarrassing photograph of them from seeing the light of day, a tactic that proves futile, as is often the case in these situations. Professors Erik Ugland and Jennifer Lambe looked at 295 incidents of newspaper
thefts on college campuses from 1995 to 2008, specifically studying what happens when regular people “have the power of censorship in their own hands.”48 An average of 2,870 papers were stolen per incident, just under half of the usual circulation of those papers. In about 95 percent of the cases of newspaper theft examined, the purpose was to suppress the speech of others, most commonly to prevent distribution of negative or unflattering stories about people, triggered by self-preservation or an interest in protecting someone else, but occasionally also to silence what thieves deemed to be hateful or insensitive speech.49 They noted that criminal law may classify such acts as a kind of theft, but these laws were sometimes difficult to apply because the newspapers were available for free pickup by students, with the exception of the handful of states that had passed laws criminalizing theft of free publications. “The most common response from school officials and campus police,” the authors noted, “was that there was simply nothing they could do.”50 They concluded that because of legal ambiguities and hurdles, newspaper thieves, even when caught, were unlikely to suffer any legal consequences. Professor Clay Calvert noted how this interferes with the role of free speech in a democratic society. Theft of free newspapers deprives the public of the information they contain; especially when it is done as a kind of private prior restraint of news a person does not want the audience to receive, it is abhorrent to the marketplace of ideas concept: All ideas, true or false, offensive or pleasant, should be given the opportunity to be heard and to compete freely and fairly to gain acceptance . . . 
To steal newspapers is to inhibit the marketplace of ideas – it is to remove content before it has a chance to be read by others, to whisk it away before it may compete to influence and impact others’ conceptions of what is true or false.51 Besides the harm to the marketplace of ideas, there is also harm to the bottom line of these newspapers, which typically rely on advertising as a source of operating funds. When newspapers are stolen, the advertisements do not reach the audience the publishers promised, sometimes requiring the ads to be republished in a future edition of the paper for free or at a discounted price. A handful of states have passed laws punishing theft of free newspapers, including California, which made it a misdemeanor punishable by a fine of up to $250 to take more than 25 copies of a free newspaper with the intent to destroy them, sell them for cash, or keep others from reading the publication. Colorado has higher fines, of up to $1,000 for theft of 100 newspapers and up to $5,000 for more than 500 newspapers, as well as civil remedies for actual damages upon lawsuits filed by the publisher.52 Courts have also recognized the harm that newspaper theft of this sort causes. In 2002, the U.S. Court of Appeals for the Fourth Circuit handled a case
involving public officials in Maryland who, rather than resorting to theft, conspired to buy up nearly every copy of the election day issue of an alternative weekly newspaper, St. Mary’s Today, which they correctly feared would be publishing negative information about them. Working with off-duty police officers, they ended up buying about 1,300 papers, about a quarter of the paper’s print run. The court found that the elected officials were acting under color of law, even if they conspired about the scheme off the clock, and thus federal civil rights law applied and would make them subject to damages. However, more relevant to the inquiry into private, rather than government, restraints is how the court reached its decision. Even by paying for the newspapers, the court reasoned, the officials had engaged in a plan to limit the publisher’s rights to disseminate core political speech. The harm went beyond a right to recover printing costs because the First Amendment “protects both a speaker’s right to communicate information and ideas to a broad audience and the intended recipient’s right to receive that information and those ideas.”53 The court went on to recognize that even private censorship, when done for the purpose of influencing public affairs, is legally problematic, noting, The drafters of the First Amendment knew full well that censorship is equally virulent whether carried out by official representatives of the state or by private individuals acting out of a self-interested hope in receiving or maintaining benefits from the state.54 The district court, following the appellate decision, allowed the publisher’s claims for damages for tortious interference with business relations and civil conspiracy to deprive him of his rights, both resting on the idea that while the newspaper buyers did nothing illegal on its face, they acted together deliberately to harm the publisher’s interests as a journalist, both for business and public 
information purposes.55 Another potential legal avenue for dealing with newspaper thefts was proposed by attorney Rory Lancman, who argued for a suppression tort called “intentional interference with protected speech” for situations in which private actors, rather than the government, act to suppress free speech in ways that state and federal law have not adequately prevented. The argument is that such a tort would both deter private censorship by suppressors, who act with “relative impunity” under present law, and provide an “avenue of redress (to) those whose right to speech has been taken from them.”56 While Lancman notes that speech does not have an easily assigned tangible value, it could be assigned a dollar value by a jury in the same way that humiliation, loss of freedom, or pain and suffering are in other tort cases, and the tort may also be an opening for punitive damages, allowing a jury to punish bad private actors for intentionally suppressing speech. The proposed tort itself would require the publisher to establish that the speech was permitted as a right, that the suppressor acted
intentionally to interfere with the speech, and that the speech itself did not actually reach the listener intact.57 Prosecutions for newspaper thefts remain rare, likely because the stakes are relatively low, and lawsuits for damages are infrequent as well. But the policy these laws and potential torts rest upon creates a parallel for a more vexing modern problem: online deletion of speech by private actors. Two examples of this are the deletion of an author’s archives when a website shuts down, and a platform’s deletion of a person’s social media or blog account, making years of archives and an audience suddenly vanish. Typically, these are matters of contract and intellectual property law. When a person writes for a news website, for example, it is usually as a contributor subject to an agreement that gives the site the right to publish the piece, either with the author keeping the copyright and the site getting a license, or with the site owning the copyright as a work for hire. The online news company Gothamist, for example, ran websites in five major U.S. cities before being shut down in 2017 after its billionaire owner determined that they weren’t profitable, and, as employees alleged, in retaliation for their unionization efforts.58 When the shutdown was announced, the company’s websites went dark immediately, including the archives that were the repository of several years of the now jobless journalists’ articles. As one journalist reported in response to the lost clips, “The 115 editorial staffers who just lost their job . . . are now venturing out into the market with all the evidence of their past work disappeared.”59 Industry outcry ultimately led Gothamist to recover the archives, and new ownership reopened the sites in 2018, but the question remained whether employees facing this kind of situation have suffered any harm to their rights that could be remedied by the law. 
While the company may have owned the articles, perhaps the authors also have rights not to have their works of journalism vanished at the whim of an employer. As Maria Bustillos wrote for Columbia Journalism Review in the wake of the Gothamist shutdown, “All it takes is one sufficiently angry rich person to destroy the work of hundreds, and prohibit access to information for millions.”60 Similarly, consider the deletion of an online account such as a blog, a Twitter account, or a photo sharing account such as Instagram. A private company hosts those platforms, and legally, users are bound by the terms of service they agree to when signing up for the service. These usually include a provision that the platform can delete an account for any reason at any time; for example, Twitter’s terms say the company “may suspend or terminate your account or cease providing you with all or part of the Services at any time for any or no reason.”61 Even though the authors retain the copyrights in the original works they post to these sites, also as a matter of the terms of service, that right offers little practical protection if the site is the only place where the Tweets, blog posts, or photographs are stored. A person who spends years building an audience and contributing to the online community on the platform has created something of value, both to the author and the audience, that could be permanently erased at the whim of the platform,
with no legal recourse because technically the user agreed to the terms that permitted such an act. Genie Lauren, a prominent member of the Black Twitter community, had her account suspended for innocuous statements such as calling a critic a “typical white lady,” and responding “Then why are you still breathing?” to another critic who said he’d rather stop breathing than read her tweets. After ten years of activity, with more than 14,000 followers and about 530,000 tweets, her @MoreAndAgain account was shut down. She saw the incident as a kind of theft: There’s no Twitter without Black Twitter, but black people are not protected by Twitter’s anti-abuse policies. Instead, we’re targeted by Twitter’s biased algorithms, and then Twitter steals our stuff. They did it to me; they can do it to you.62 These modern legal situations may be extended to plausible future technological interventions by private actors to destroy creative works or archives. What if one had the power, through technological means or otherwise, to make every copy of a work of expression disappear, as Gilbert Norrell makes the words of Jonathan Strange’s books vanish? It is not implausible for technology to exist to allow this online; one can imagine a future virus that seeks certain text and attaches itself to destroy any copies of it, or that otherwise locks up access to all digital copies. As works increasingly exist only in digital format, destruction of the electronic words or photographs or other works, whether by private or government actors, is censorship tantamount to burning all of the printed copies of a work. 
Intellectual property attorney Marc Whipple quipped that the problem could be called the “Annie Wilkes, HAX0R” scenario, after the antagonist in Misery who forces the author she kidnapped to burn his manuscript.63 A hacking attempt like this may be remedied by the Computer Fraud and Abuse Act, a federal law that criminalizes intentional unauthorized access of computers, including forcing programs onto them that cause damage or other harm.64 But what legal rights may people have who have their works or archives deleted by private actors that do not fall clearly under anti-hacking laws such as the CFAA? The problem would not fit into intellectual property law such as copyright, which is focused on different issues such as unauthorized copying or distribution of works. A more appropriate area would be traditional property law, particularly the tort of conversion: the “intentional exercise of dominion and control over personal property or a chattel that so seriously interferes with the right of another to control that property that the tortfeasor may justly be required to pay the other the full value of the property.”65 This includes intentional destruction of property, a classic case of conversion, according to William Prosser in his treatise on torts.66
While the common law understanding of conversion does not usually touch written instruments that serve as records of property ownership, it does extend to published and unpublished manuscripts that a person has taken control of without the original owner’s consent. An Illinois court said that conversion may have applied to a case in which a publisher of sex education materials alleged that another organization with which it had a business relationship stole the printing plates for books and used them to print copies.67 While the facts did not support the allegation, the court found that the printing plates themselves were a kind of tangible property that might be subject to the tort without having to apply intellectual property laws. Theft or destruction of them was a kind of conversion. Similarly, losing original manuscripts, such as in one New York case involving a playwright who gave an original script to a director who then lost it, would be subject to conversion law.68 Courts have also held that intangible digital property, even when not merged into a document, can be subject to the tort of conversion when taken without consent. The Ninth Circuit Court of Appeals, for example, found the web domain name “sex.com” was capable of being exclusively possessed and wrongly disposed of, and that web domains “are valued, bought and sold, often for millions of dollars.”69 Computer programs are also intangible property subject to conversion, as the Alabama Supreme Court noted, even when they are not subject to intellectual property or patent laws.70 Such acts of conversion through destruction could result in money damages, which would be only some consolation to those whose creative works have vanished at private hands. There is also an interesting argument to be made for considering this through the lens of artists’ moral rights. 
Destroying a work in its entirety may be a creative act in itself, as the Chinese artist Ai Weiwei demonstrated in his 1995 work Dropping a Han Dynasty Urn, a series of photographs depicting his destruction of a 2,000-year-old artifact as an act of protest and reflection.71 And destroying symbolic items as an act of speech has broad protection from government interference or punishment under the First Amendment in the United States, as established in Texas v. Johnson, the case in which the Supreme Court said burning the American flag was a protected act.72 But private destruction of artists’ works may have some protection under federal law protecting artists. The Visual Artists Rights Act recognizes moral rights in works created after 1991 for a narrow class of creative works: visual works existing in a single copy or in a signed and numbered edition of fewer than 200 copies.73 This includes a specific right to “prevent any destruction of a work of recognized stature.”74 However, courts have been reluctant to extend these rights to works such as street art by graffiti artists; in one case, a federal district court in New York rejected a plea from artists to prevent building owners from demolishing a series of buildings in what had become known as 5 Pointz, finding that the site’s reputation as a tourist attraction was not enough to make it a work of “recognized stature.”75 European law provides broader protection, with copyright law that strongly recognizes artists’ moral rights in their works. In short, this is a right beyond the
classic economic right in copyrights that allows artists to remain personally connected with their works, even when the copyright has been transferred or the work has been licensed for other uses. Moral rights are not transferable, but rather remain with the artist unless the artist consents otherwise. These moral rights focus on the right of the author to be identified as the creator of the work, but also include a right not to have a work altered in a way that is objectionable to the artist, known as the “right of integrity.” In one example of European-style moral rights being applied in the United States, the British comedy troupe Monty Python sued to prevent the television network ABC from editing its television program Monty Python’s Flying Circus from its original BBC version into a 90-minute special, calling it a mutilation of the original work because the edit deleted more than 20 minutes of the show. The Second Circuit Court of Appeals agreed, applying the federal Lanham Act to protect the work rather than U.S. copyright law, which would not have recognized moral rights the same way.76 Destruction or mutilation of a work without consent would certainly violate the artist’s moral rights, allowing the copyright law of countries recognizing such rights to serve as the basis of a cause of action. As such, there is a basis for recovery for artists who see their works vanish through the destructive acts of another person. The private book destroyer like Gilbert Norrell, who cannot fathom that the people could make any good use of the words of his rival, may be subject to the tort of conversion. The destroyer of a work of art might cross the Visual Artists Rights Act in the United States, or the protected moral rights recognized elsewhere in the world, which would allow recovery under a theory of copyright law. 
A digital destroyer could face sanction under the Computer Fraud and Abuse Act or similar laws that criminalize unauthorized access to digital files and accounts. The trickiest situation would be private individuals’ accounts on third-party platforms such as blogs or social media apps, which place the power to destroy accounts entirely in the hands of the company owning the platform, with little or no recourse available for wrongful deletion. In these circumstances, it may take a court using its equitable powers to remedy non-monetary matters through, for example, an injunction requiring the platform at least to restore to the author the original source material (such as an archive of the blog or Tweets or Instagram photos) so it can be reclaimed and reposted elsewhere. Such a remedy would prevent an otherwise unavoidable and unfair harm to users who have conferred obvious benefits on the platform without any other avenue of recourse. While such requests for injunctions have not yet been successful, an advancement of equity may be necessary to protect user works from unfair destruction in this manner.

Government Destruction of Its Own Records

The largest section of the Records Department, far larger than the one in which Winston worked, consisted simply of persons whose duty it was to track down and
collect all copies of books, newspapers, and other documents which had been superseded and were due for destruction. . . . There were the vast repositories where the corrected documents were stored, and the hidden furnaces where the original copies were destroyed. And somehow or other, quite anonymous, there were the directing brains who coordinated the whole effort and laid down the lines of policy which made it necessary that this fragment of the past should be preserved, that one falsified, and the other rubbed out of existence. George Orwell, Nineteen Eighty-Four77

Governments do not only seek to destroy the creative works of their citizens or opponents. Sometimes, they destroy their own records, whether as an act of propaganda, as Orwell describes in Nineteen Eighty-Four, or as a way to cover up abuses and misdeeds. Government record-keeping and public access to those records are critical elements of transparent democracy, yet there is a rich history of the powers that be destroying or altering their own records to frustrate accountability or to alter public memory. Baez described the partial elimination of the archives in Greece in 405 BCE in an attempt to unify Athenians after the Peloponnesian war: The decree mandated the erasure of public records and established sanctions against those who kept copies or dared to recall the past with malice. . . . The Romans called this damnatio memoriae or abolitio memoriae: the process by which the senate practiced the “damnation of memory,” obliterating the memory of all those it classified as infamous.78 In Lois Lowry’s The Giver, the history of conflict and trouble is removed from society by being placed in the memory of a single person, who bears its knowledge on behalf of the community and passes that burden on to a sole successor. The community engages in euthanasia of the elderly and of newborns with disabilities, and those processes are duly recorded, with tapes of those practices stored in a forbidden area called the Hall of Closed Records, inaccessible to anyone but the keeper of memory. Forgetting and obscurity are central to keeping society peaceful and happy. 
Similarly, in the “historical notes” at the end of The Handmaid’s Tale, Margaret Atwood reveals that “the Gileadean regime was in the habit of wiping its own computers and destroying printouts after various purges and upheavals.”79 Governments erasing the memory of the past – through deletion or alteration of records in Nineteen Eighty-Four, or hiding them away from society in The Giver, or in the alternate history timeline of the television version of The Man in the High Castle, in which the Nazis won the war and work to destroy remnants of American history – is a common trope for science fiction authors, who, echoing the past, see a rich future in government destruction of its own records. Several current tools have emerged to satisfy the desire of people to make the past vanish more easily, or for the government to control or alter the records of
its doings. Legal scholars Jasmine McNealy and Heather Schoenberger have conceptualized these as “privacy-promising technologies,” a definition that includes “technology, such as apps, software, and online tools, in which the maker or creator uses the promise of privacy, or data control, to induce users to use their digital tool.”80 They were writing primarily about apps that promised anonymity, such as the gossip app Yik Yak or the anonymous commenting platform Whisper, as well as systems that provided automatic message deletion, such as Snapchat. Law professors Woodrow Hartzog and Evan Selinger noted the benefits of an “ephemeral conduit” such as Snapchat in providing online obscurity to users that was not otherwise available through social networking tools.81 Attorney Jonathan Moore referred to Snapchat and others providing similar services as “ephemeral” or “impermanent social media,” in reviewing the challenges courts face in using content shared on such services as evidence in litigation.82 Soon, new tools were built around the automatic destruction feature, creating a way for the government (as well as private citizens) to delete records they generated within seconds after being read, making the “this message will self-destruct” technology from the Mission: Impossible television show and films a reality. Consider the Confide app, which combines both encryption and ephemeral messaging features, using “military-grade end-to-end encryption to keep your messages safe and ensure they can only be read by the intended recipients,” making messages “disappear forever after they are read once,” and protecting them against screenshots.83 In Donald J. Trump’s first month in the White House, staffers concerned about accusations of leaking information to the press were using Confide to cover their tracks, according to a report in The Washington Post.84 After the email hacks that haunted Hillary Clinton’s presidential campaign in 2016, the Confide app became “the tool of choice for Republicans in Washington” fearing a similar fate.85 White House press secretary Sean Spicer, who began random phone checks shortly after the Washington Post revelation, reportedly told staffers that using Confide and the encrypted messaging app Signal were potential violations of the Presidential Records Act.86 In response to these reports, the House Oversight Committee issued a letter to 55 federal agencies expressing concerns that use of Signal, Confide, and WhatsApp by federal employees “could result in the creation of federal records that would be unlikely or impossible to preserve” and may allow “circumventing requirements established by federal recordkeeping and transparency laws.”87 Citizens for Responsibility and Ethics in Washington (CREW) filed a suit against Trump, alleging violation of the Presidential Records Act through the use of encrypted disappearing-messaging apps,88 and during ethics training in 2018, White House lawyers advised personnel not to use encrypted messaging apps such as WhatsApp to do government business.89 Similar issues trickled down to the states as well. In Missouri, two attorneys sued then-Governor Eric Greitens, arguing that his use of Confide violated the state’s Sunshine Law as well as its State and Local Records Law.90 A
county judge denied their request for a temporary restraining order to halt Greitens’ use of Confide, in part because of a lack of evidence that he had been using it to conduct government business, but noted that there were “a whole bunch of open questions here,” including whether the governor has a First Amendment right to use the app to communicate, as his attorneys contended.91 But as the attorneys seeking to prevent Greitens from using Confide argued, the tool exists in a way that does nothing but frustrate open records laws: “Confide has a singular purpose. To shred. To destroy. To destroy communications sent and received.”92 State open records laws, the federal Freedom of Information Act, and the Presidential Records Act are intended to protect the public’s right to know about the conduct of government officials. However, the development of privacy-promising mobile applications that deliberately make archiving and retrieval difficult creates a unique challenge for these transparency laws. Vanishing message apps such as Snapchat and Confide allow public officials, using these apps in the exact way they are intended to be used, to have messages disappear automatically, without a simple way to keep a record for public inspection. Bob Freeman, the long-time executive director of New York state’s Committee on Open Government, described the dangers as follows: If an individual, including a government official, wants to cover his tracks, tell the world, “I never said that”, or that he never communicated with a certain person . . . Snapchat, for better or worse, can be used to make it seem true. And there may be nothing we can do about it.93 Government use of impermanent messaging apps is becoming commonplace. 
In 2016, presidential contenders Jeb Bush, Hillary Clinton, and Bernie Sanders had Snapchat accounts, and the app has become popular among members of Congress, including “Snapchat king of Congress” Eric Swalwell, a representative from California.94 Snapchat users include Washington, D.C. Mayor Muriel Bowser and Los Angeles Mayor Eric Garcetti,95 as well as Chicago Police Superintendent Eddie Johnson.96 And government use has not been without controversy. The New York Police Department, for instance, had to investigate an officer who posted images on Snapchat during a raid on a Brooklyn apartment, showing a family in handcuffs with captions such as “Merry Christmas it’s NYPD!” and “Warrant Sweeps it’s still a party smh.”97 The innovation of disappearing messaging launched Snapchat from a startup in 2011 to its current status as one of the most popular social media apps, with an average of 158 million daily users by the end of 2016.98 The key feature of the app when it launched, and one that remains core to its current version, is that images and captions sent on Snapchat vanish after ten seconds of viewing, a challenge to the traditional online notion that whatever is posted on the Internet will last forever.99 Additionally, Snapchat users have built a culture that frowns upon

164 Vanishing Speech and Destroying Works

subverting the disappearing nature of photos by taking screenshots or otherwise capturing photos before they vanish, one of the “unwritten rules” of the platform.100 Snapchat notifies users when someone has taken a screenshot of a photo or video, and as Snapchat’s community guidelines note, “it’s okay with us if someone takes a screenshot, but we can’t speak for you or your friends.”101 The disappearing message feature has become so popular that it has been copied by other platforms, such as Instagram in its direct messaging system.102 Vanishing messaging technology makes it difficult to deal with government destruction of its own records, particularly when that destruction deliberately undermines transparency laws meant to ensure that the people can serve as a watchdog over abusive government practices. One response could be a ban on use of ephemeral messaging apps, either through legislation or judicial action. Such a ban on government employee speech would trigger First Amendment scrutiny, as Greitens’ lawyers argued in opposition to the motion for a temporary restraining order. Several courts have recognized a right under the First Amendment to use the Internet and social networks to communicate, particularly in the context of sex offenders who have challenged bars on their access to social media. The Eighth Circuit Court of Appeals recognized a First Amendment right to access the Internet for a convicted sex offender in 2005, striking down a provision of his release that would “completely bar his access to computers and the Internet” as being overly broad.103 Likewise, a federal district court in Louisiana struck down the state’s law barring sex offenders from “unlawful use or access of social media” because it served in essence as “a near total ban on Internet access” that “unreasonably restricts many ordinary activities that have become important to everyday life in today’s world.”104 And the U.S.
Supreme Court in 2017 struck down a North Carolina state law restricting access to “a commercial social networking Web site where the sex offender knows that the site permits minor children to become members,” in a case in which the defendant, a sex offender, used Facebook.105 Recognizing the broad free speech interests in Internet communications in the “fabric of modern society and culture,” the court noted that “foreclos(ing) access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”106 The North Carolina law was significantly narrower than provisions struck down by federal courts in Louisiana and Nebraska,107 not covering social networking services that provide only one service, such as photo sharing, email, or instant messaging.108 One might expect that public officials would have no fewer rights under the First Amendment than sex offenders to access social networks. An outright ban on using a certain tool to communicate could operate as a prior restraint and may face challenges by public officials asserting their free speech rights as an individual citizen. In Republican Party of Minnesota v. White, the Supreme Court struck down a state supreme court’s canon of conduct that limited speech about political or legal disputes of candidates for judicial election, saying that the state could not overcome the strict scrutiny test in its requirement that judicial candidates could
not comment on legal or political matters in the interest of maintaining judicial impartiality.109 Elected public officials have relied on this ruling to argue that they have stronger First Amendment rights than government employees making statements pursuant to their job duties. The U.S. Court of Appeals for the Fifth Circuit recognized this in Rangra v. Brown, finding protection for the speech of elected government officials “is robust and no less strenuous than that afforded to the speech of citizens in general.”110 Compared to elected public officials, government employees have limited First Amendment protection for their speech while on the job. In Garcetti v. Ceballos, the U.S. Supreme Court rejected “the notion that the First Amendment shields from discipline the expressions employees make pursuant to their professional duties.”111 As such, communications sent in one’s official capacity via encrypted or ephemeral messaging apps that would typically be covered by public records laws would receive limited First Amendment protection. These are not the acts of a government employee in his or her role as a citizen, an essential element for asserting the First Amendment right in this context. These tools would not be used in furtherance of the government employee’s “opportunity to contribute to public debate,” but rather to his or her conduct in official duties.112 Courts have allowed some restriction of public official speech in another context – open meetings laws. Public officials in Texas challenged the state’s Open Meetings Act113 on First Amendment grounds, arguing that it “criminalizes all private speech among a quorum of a governing body that is about public policy, even if such speech does not lead to corruption.”114 The U.S. Court of Appeals for the Fifth Circuit rejected this argument in Asgeirsson v. 
Abbott, finding that the section criminalizing public officials’ attempts to dodge the Open Meetings Act was content-neutral, was not overbroad or vague, and adequately supported the goals of public disclosure laws “such as increasing transparency, fostering trust in government, and ensuring that all members of a governing body may take part in a discussion of public business.”115 The logic of these limits on public official use of technology from attorney general opinions and in the Asgeirsson case – that public officials’ free speech rights may be subordinated to serve the interest in transparent governance in statutes that require disclosure – may plausibly extend to efforts to restrict government use of certain technological tools that, even when used legally by a citizen, would allow public officials to sidestep open records laws. This may be particularly true in the case of ephemeral messaging apps such as Snapchat and Confide, which by default make detection of their messages extremely difficult if not impossible. As professor Allison Stanger noted, “Since Confide is explicitly designed to eliminate a paper trail, its use creates at least the appearance of misconduct, if not the reality.”116 What might the recent experiences with ephemeral messaging technology teach us about the future tech tools envisioned by science fiction authors? First, technology that interferes with or destroys memory or history has an inherent problem; it is hard to restore something that has been completely destroyed, from
erased memories to vanished or altered records. It is also difficult to manage these harms when the tools evolve faster than the laws that would outlaw or otherwise regulate them. Just as we cannot retrieve the records destroyed by conquerors in ancient times, we cannot undo deliberate record destruction today unless the tools themselves – or the way in which the tools are designed – are proactively regulated by governments that recognize their own capacity for abuse through records destruction. For example, for any tool that could be used by government employees for official communication purposes, transparency laws could demand that the developer of the tool maintain an archive to counter abuse of the tool, or could require a special “government communication” setting that would funnel those files to government records custodians and archivists. There is, of course, government reluctance to regulate oversight of itself. Besides the administrative burden of this volume of record-keeping, it would eliminate ways governments throughout history have tried to protect themselves. There is also some value in routine destruction of certain records, both to pare down the volume of materials that government would have to maintain endlessly and to protect the privacy interests of people who have long moved past their deeds and even misdeeds of decades past. Juvenile records, as one example, are kept confidential, sealed, and purged so as not to be held against people moving on to their adult lives. But there are competing incentives between the desire to maintain efficient archives that are not overly burdensome and the desire to purge inconvenient records that could hold government officials accountable for bad actions or otherwise would create a paper trail. We have seen that in modern times in email storage situations.
States and the federal government typically have a record-keeping schedule that involves routine purges of email archives after a certain amount of time, such as every 12 months or 18 months, unless otherwise directed by law to retain them, as is the case with the Presidential Records Act, which requires preservation for up to a decade after the president’s term ends. Yet some combination of technological failure, questionable behavior by archivists or officials, and misunderstanding of the archiving laws has led to scandals plaguing multiple administrations. President George W. Bush had 22 million emails from his White House go missing for years before finally being found and retrieved in 2009 after a lawsuit by transparency groups; while critics asserted the emails were purged as part of a cover-up of questionable administrative motives in dismissing federal prosecutors for political purposes, administration lawyers argued it was an IT matter.117 Less than a decade later, former Secretary of State Hillary Clinton was harangued by political opponents for maintaining a private email server from which 30,000 emails, potentially subject to the federal Freedom of Information Act, went missing. Her lawyers argued that the emails were deleted because they concerned personal and not government matters, an answer that was
unsatisfactory to her critics, creating a problem that plagued her campaign for president in 2016.118 There is a balance to be reached between efficient maintenance of government records and stewardship of the past, even if the past is at times disagreeable. As the quotation at the base of a statue in front of the National Archives notes, “What is past is prologue.” But there is no science fiction that projects a future of a government that maintains too much about itself, or is bogged down in overflowing records. The fear of science fiction writers is the opposite – that government will erase the past to preserve itself, and that tools and culture will allow this erasure in the same way past civilizations and cultures have. Deleting and altering the past as a means of control is a human tendency, one that increasingly powerful tools enable, and visions of the future see it increasing rather than decreasing.

When Destruction of Speech May Be Necessary

The previous three sections look at situations in which deletion or destruction of records or creative speech – whether by the government or by private actors – creates problems for free expression for artists, or for cultural memory, or for government record-keeping and accountability. But these have not addressed a concern often raised by science fiction authors about truly dangerous speech that perhaps should be legally prevented from causing direct and immediate harm to the public. Consider the notion of “weaponized speech,” which is specifically targeted to harm those who read or see or hear it, or that will otherwise create devastating consequences. It is a concept that shows up commonly in science fiction and other genres – speech so dangerous that it can be deadly. One early instance of this, in the realm of satire, is the “Funniest Joke in the World,” a sketch from the first episode of the British television program Monty Python’s Flying Circus in 1969. The comedy troupe depicts the tale of a joke so funny that anyone who hears it dies laughing, including both the author who wrote the joke and his mother, who found him dead and read the joke he had written. The joke is weaponized during World War II, carefully translated into German so the Allied forces don’t die while using it against opposing forces. The joke itself is nonsense (“Wenn ist das Nunstück git und Slotermeyer? Ja! Beiherhund das Oder die Flipperwaldt gersput!”); today, in an amusing bit of tech humor, if you enter the text into Google Translate, you receive the response of [FATAL ERROR]. Entertainment so amusing that it inexorably leads to death is also present in one of the many subplots in Infinite Jest by David Foster Wallace. Experimental filmmaker James Incandenza Jr.
creates a work that becomes known as “The Entertainment,” which is so addictive that anyone who sees it cannot tear themselves away from it, and they end up dying after hours of viewing. Like Monty Python’s killing joke, “the Entertainment” is viral; a man watches it and puts it on a replaying loop and is unable to remove himself from his soiled recliner, where he dies, staring at the screen, with a look on his face that
“appeared very positive, ecstatic even, you could say.” He is found by his wife, who follows his line of sight to the screen and becomes hooked as well. By the next afternoon, a physician’s assistant, the physician himself, two security guards, and two religious pamphleteers who had just been passing by were all watching the video, “sitting and standing there very still and attentive, looking not one bit distressed or in any way displeased, even though the room smelled very bad indeed.”119 The video is sought after by Quebecois separatists who intend to use it against their enemies, prompting a reflection by Wallace on American norms about freedom of choice and people’s voluntary decisions to consume dangerous media like this, compared to freedom from being exposed to works of art that would undoubtedly harm anyone who sees it. Why should the government restrict people from choosing death by pleasure, if it is a voluntary choice? As the Quebecois argue, it cannot be voluntary. The Americans only forbid the Entertainment because they “fear that so many U.S.A.s cannot make the enlightened choices.”120 The Quebecois also call the film “samizdat,” a Russian term referencing forbidden texts passed around by eastern European dissidents in the Soviet era. The viral nature of this weaponized speech can also be seen in the 1998 Japanese horror film Ringu, or its Americanized remake, The Ring. At the heart of both films is a cursed videotape; anyone who views the videotape dies within seven days of watching it, killed by a ghost of a woman seeking revenge against the world. The only way to avoid death is to make a copy of the tape and send it to another person, passing the cycle of potential death on to someone else. Deadly speech may also go beyond words and films.
In Neal Stephenson’s Snow Crash, it is viral code, distributed on scrolls in the virtual reality world known as the Metaverse, that destroys the minds in the real world of all who see it:

The virus that ate through Da5id’s brain was a string of binary information, shone into his face in the form of a bitmap – a series of white and black pixels, where white represents zero and black represents one. They put the bitmap onto scrolls and gave the scrolls to avatars who went around the Metaverse looking for victims.121

Stephenson ties the killer code to ancient civilizations, specifically Sumer, and a “metavirus” that was linguistic in nature, altering brains and bodies so that civilization is part of your biological structure, rather than merely “viral ideas.” It is dangerous because that base-level biological code, if hacked, could be deadly to entire civilizations. The weaponized speech depicted above does not fall neatly into one of the traditional categories of unprotected speech under the First Amendment. The lines of code that create a brain-destroying virus in Snow Crash are perhaps more clearly unprotected because code has, so far, not been recognized as “speech” for
First Amendment purposes. Undoubtedly, code has aspects that are both expressive and functional. Code, when executed by a computer, becomes expressive; the Supreme Court recognized First Amendment protection for videogames as expressive works in Brown v. Entertainment Merchants Association. But code – the raw programming language – on its own rarely receives the same level of protection, as noted in Chapter 4, when a federal court of appeals ruled that the computer program DeCSS, which got around the encryption placed on DVDs allowing them to be copied, was not protected by the First Amendment as speech.122 Unlike code, however, it would be hard to argue that a joke so funny or a film so pleasurable that anyone who hears or sees them dies would conceptually fall into unprotected speech categories, either as a “true threat” or “fighting words.” A true threat involves a speaker communicating a “serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals,” according to the U.S. Supreme Court,123 and these kinds of speech aren’t so much threatening as they are directly harmful just by existing. And the “fighting words” doctrine, a conceptual mess since being established by the Supreme Court in Chaplinsky v. New Hampshire in 1942, applies more to breaches of the public peace by using words that have a tendency to cause or provoke acts of violence by the person to whom the remarks are made.124 And besides the fact that neither has explicit sexual content, the parallel of obscenity – that is, low value speech that can be banned because of the social and moral harms it causes – is not a neat fit because there are definitely legitimate artistic values evident in those works. 
A dispute that arose in the late 1970s, on the heels of the Supreme Court’s ruling in the Pentagon Papers case, illustrates the kind of situation in which the government may very well restrain such speech on grounds of protecting public safety, and the logic in that decision is the kind that those who would support broader government power to restrain or destroy publication could use against potentially dangerous works. The issue was the attempt by the magazine The Progressive to publish an article entitled “The H-Bomb Secret, How We Got It, Why We’re Telling It,” by the writer Howard Morland, a specialist in nuclear weapons issues. The article purported to describe how to build a hydrogen bomb, drawing on previously unreleased materials that the government said included “Secret Restricted Data” under the Atomic Energy Act. A federal district court in Wisconsin found that the article “would provide vital information on key concepts involved in the construction of a practical thermonuclear weapon” and thus “could provide sufficient information to allow a medium-size nation to move faster in developing a hydrogen weapon.” The court recognized that the injunction would “curtail defendants’ First Amendment rights in a drastic and substantial fashion” and would “infringe upon our right to know and be informed as well.” But it also acknowledged that a mistake in ruling against the national security interest “could pave the way for thermonuclear annihilation for us all,” in which case “our right to life is
extinguished and the right to publish becomes moot.” And after finding that publication of the article “will result in direct, immediate and irreparable damage to the United States,” the judge enjoined The Progressive in March 1979 from publishing the article, dismissing any First Amendment concerns the magazine raised.125 A little more than six months later, the U.S. Court of Appeals for the Seventh Circuit lifted the injunction and allowed publication, but only after the Department of Justice requested dismissal of the case because its concerns about the article had been satisfied.126 Censorship of the article in The Progressive was supported by the courts because it fell under the exceptions carved out in the Near and Pentagon Papers cases by creating a clear and present danger of imminent, likely, and disastrous harm. It is likely that, if confronted with “weaponized speech” such as the killing joke, or “The Entertainment,” or the deadly code from Snow Crash, courts in the United States would find plenty of room for making exceptions to free speech doctrine to protect public safety, as was the situation in the case involving The Progressive. These materials are so dangerous by their very existence that any expressive or artistic content they contain would be heavily outweighed by public safety concerns, enough to justify prior restraint or otherwise outright bans and seizure of the materials. If a future situation arises in which killer speech becomes a reality, we have a legal framework to deal with it.

Conclusion

As this chapter illustrates, the future may include many different ways of destroying speech, whether through widespread, government-sponsored book burning in Fahrenheit 451, or deletion or alteration of the state’s own records and history in Nineteen Eighty-Four, or even private deletion of artistic works through technological or other means in Jonathan Strange & Mr. Norrell. Sometimes, destruction of speech may even be voluntary, to help avoid capture by police or other oversight organizations, as briefly occurs in Malka Older’s State Tectonics, in which an unofficial and illegal paper tourist guide self-destructs after an attempt by Maryam, an Information agent, to scan it: “As the sensors touch the surface, it disintegrates. Maryam jumps away. This outdated technology has some very up-to-date technology built into it.”127 While such disappearing speech may not always have negative outcomes – consider the case of “killer speech” – it generally creates a problem for the classic western liberal concept of the marketplace of ideas, a metaphor that suggests that citizens and government work better when all speech and ideas and works of art are allowed to exist and be discussed. Censorship, whether from government or private actors, interferes with the marketplace of ideas in ways that can be pernicious, allowing demagogues and autocrats to control the flow of information and, by extension, public opinion and knowledge.

As the historian Fernando Baez described, this is not a new situation. Destruction of books, and the culture and knowledge they contain, has occurred across cultures and civilizations since words could be captured in permanent form. And it has been done not by uneducated or anti-scientific forces, but rather by the powerful as a means to control or change understanding of the world around them. “It’s a common error to attribute the destruction of books to ignorant men unaware of their hatred,” Baez noted:

After 12 years of study, I’ve concluded that the more cultured a nation or a person is, the more willing each is to eliminate books under the pressure of apocalyptic myths. . . . There are abundant examples of philosophers, philologists, erudite individuals, and writers who vindicated biblioclasty.128

However, it is worth recognizing the dangers that intentional destruction of speech and creative works present. That is the point of authors like Bradbury and Orwell, who paint a worrisome picture in future dystopias in which the powerful have found ways to erase what they deem to be dangerous or unsettling works. These erasures interfere not just with the traditional libertarian notion of an individual’s right to express themselves freely, but also with citizens’ rights to receive information, a necessary component of a functioning modern society that is increasingly recognized by modern legal scholars. Legal scholar Morgan Weiland identified these competing notions of free expression as competition between the liberal and republican traditions in First Amendment jurisprudence. The “liberal tradition” understands First Amendment rights as belonging to speakers as a way to enable their expression. Weiland notes:

As such, autonomous individuals seek to exercise their freedom of speech to develop themselves—their capacities for self-expression, self-realization, and self-determination, all of which are necessary ingredients for the development of the self.
In this context, freedom of expression is ‘an end in itself’. In contrast, the “republican tradition” emphasizes more the value of free speech as a social good, “instrumentally wielded by individuals as a private right to accomplish broader social purposes.”129 From this perspective, the right belongs to the listener of the speech, not just the speaker. In a series of commercial speech decisions protecting the ability of companies to communicate their messages to the public, Weiland argues, the libertarian speech tradition has become ascendant today, with the Supreme Court recognizing the importance of the “free flow of information” to the public in cases that deregulated advertising about drug and alcohol prices130 and that removed barriers to political contributions by corporations and unions in the name of protecting the public’s right to receive that information.131

Government or private destruction of speech threatens this reception of speech by the audience, ostensibly to protect them from themselves, or to control them. And, as this chapter points out, there is a legal framework in the United States and elsewhere that would allow such illiberal notions to advance, if they are couched in terms of protecting “public safety” or otherwise as a necessary way for those in power to protect the public from dangers by classifying dissenting or unpopular speech as falling into an unprotected or lower-class speech category such as “true threats” or “fighting words.” Similarly, across the world, we are seeing moves to remove items from public access in the name of protecting privacy, such as recognition of a private “right to be forgotten” that requires search engines such as Google to de-list certain publications from their search results, making information about people practically inaccessible. We have tools that exist solely to enable communications that destroy messages and photographs after they are read, leaving little or no footprint that can be recovered. When words threaten the public peace, in this progression, they could legally be destroyed. As Bradbury noted, this would be a danger to political expression, to science, and to the arts:

Colored people don’t like Little Black Sambo? Burn it. White people don’t feel good about Uncle Tom’s Cabin? Burn it. Someone’s written a book on tobacco and cancer of the lungs? The cigarette people are weeping? Burn the book. Serenity, Montag. Peace, Montag. Take your fight outside. Better yet, into the incinerator.132

This is the world science fiction authors have been warning us about.

Notes

1 George Orwell, Nineteen Eighty-Four 128 (Signet Classic 1984) (1949).
2 Susanna Clarke, Jonathan Strange & Mr Norrell (2004), 555–565.
3 Id. at 564, note 5.
4 Ray Bradbury, Fahrenheit 451 55–56 (Simon & Schuster 2013) (1953).
5 Ray Bradbury, The Day After Tomorrow: Why Science Fiction?, The Nation, May 2, 1953.
6 Bradbury, supra note 4, at 55.
7 Id. at 32.
8 Id. at 56.
9 Robert Hovden, Read the Scientific American Article the Government Deemed Too Dangerous to Publish, MuckRock, Jan. 9, 2019, www.muckrock.com/news/archives/2019/jan/09/fbi-bethe-banned-sciam/?utm_content=buffer29eb3&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer.
10 Id.
11 Fernando Baez, A Universal History of the Destruction of Books: From Ancient Sumer to Modern Iraq 12 (2004).
12 Id. at 22–23.
13 Vincent Blasi, Milton’s Areopagitica and the Modern First Amendment, Occasional Papers (1995), digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1007&context=ylsop_papers.
14 John Milton, Areopagitica and Of Education (George H. Sabine ed., 1951), 6.
15 William Blackstone, Commentaries on the Laws of England 13 (1769).
16 Google Spain SL v. Agencia Espanola de Proteccion de Datos, CJEU, C-131/12 (2014).
17 Javier C. Hernandez, To Erase Dissent, China Bans Pooh Bear and ‘N’, N.Y. Times, Feb. 28, 2018, A1.
18 Paul Bernard, What the Taliban Destroyed, Wall St. J., Dec. 20, 2001.
19 Jason Farago, Destroyed by ISIS, Preserved Online, N.Y. Times, Feb. 26, 2017, C20.
20 In 1837, for instance, postmaster Amos Kendall declared that it was “not necessary that large quantities of newspapers should be transported from one end of the Union to the other, to enlighten and instruct the public mind,” triggering the opposition of publishers such as Horace Greeley, whose New-Yorker was not being sent to the South. Greeley called it an “insidious and unjust crusade against the press.” Daxton R. Stewart, Freedom’s Vanguard: Horace Greeley on Threats to Press Freedom in the Early Years of the Penny Press, 29 Am. Journalism 60, 71 (2012).
21 Geoffrey R. Stone, Freedom of the Press in Time of War, 59 S.M.U. L. Rev. 1663, 1665 (2006).
22 Debs v. United States, 249 U.S. 211 (1919).
23 Schenck v. United States, 249 U.S. 47, 52 (1919).
24 Fred W. Friendly, Minnesota Rag 32 (1981).
25 Near v. Minnesota, 283 U.S. 697 (1931).
26 New York Times Co. v. U.S., 403 U.S. 713 (1971).
27 Genevieve Lakier, The Invention of Low-Value Speech, 128 Harv. L. Rev. 2166, 2231 (2015).
28 Miller v. California, 413 U.S. 15 (1973).
29 Isabel Wilkerson, Cincinnati Jury Acquits Museum in Mapplethorpe Obscenity Case, N.Y. Times, Oct. 6, 1990, A1.
30 Sex Tape Row: German Court Orders Man to Destroy Naked Images, BBC News, Dec. 22, 2015, www.bbc.com/news/world-europe-35159187.
31 Hard Rock Cafe Licensing Corp. v. Concession Services, Inc., 955 F.2d 1143 (7th Cir. 1992).
32 Warner Bros. Entm’t, Inc. v. RDR Books, 575 F.Supp. 2d 513 (S.D.N.Y. 2008).
33 Brown v. Entertainment Merchants Association, 564 U.S. 786 (2011).
34 U.S. v. Stevens, 559 U.S. 460 (2010).
35 U.S. v. Alvarez, 567 U.S. 709 (2012).
36 42 U.S.C. § 1983 (2018).
37 Swartz v. Insogna, 704 F.3d 105, 110 (2nd Cir. 2013).
38 Mink v. Knox, 613 F.3d 995 (10th Cir. 2010).
39 Turner v. Driver, 848 F.3d 678 (5th Cir. 2017).
40 Ex Parte Ronald Thompson, 442 S.W. 3d 325, 350 (Texas Ct. Crim. App. 2014).
41 Ramin Bahrani, Confessions of a Book Burner, N.Y. Times Book Rev., May 12, 2018, 19.
42 Zeynep Tufekci, It’s the (Democracy-Poisoning) Golden Age of Free Speech, Wired, January 16, 2018.
43 Vernor Vinge, Rainbows End 344 (2006).
44 Emily St. John Mandel, Station Eleven 33 (2014).
45 Clarke, supra note 2, at 563–64.
46 Spoiler alert: He deletes his own manuscript.
47 Louisa May Alcott, Little Women 99 (Grosset & Dunlap, 2006) (1868).
48 Erik Ugland & Jennifer L. Lambe, Newspaper Theft, Self-Preservation and the Dimensions of Censorship, 15 Comm. L. & Pol’y 365, 367 (2010).
49 Id. at 387–90.
50 Id. at 395.
51 Clay Calvert, All the News That’s Fit to Steal: The First Amendment, a “Free” Press, and a Lagging Legislative Response, 25 Loy. L.A. Ent. L. Rev. 117, 128–29 (2004).
52 Jennifer Coleman Noble, Worth More Than the Sticker Price: Criminalizing the Theft of Free Newspapers, 38 McGeorge L. Rev. 258, 260–63 (2007).
53 Rossignol v. Voorhaar, 316 F. 3d 516, 522 (4th Cir. 2002).
54 Id. at 526–27.
55 Rossignol v. Voorhaar, 321 F. Supp. 2d 642 (D. Md. 2004).
56 Rory Lancman, Protecting Speech From Private Abridgement: Introducing the Tort of Suppression, 25 Southwestern U. L. Rev. 223, 242 (1996).
57 Id. at 263.
58 Anna Heyward, The Story Behind the Unjust Shutdown of Gothamist and DNAinfo, New Yorker, Nov. 14, 2017.
59 Abby Ohlheiser, Gothamist and DNAinfo just Abruptly Shut Down. What Will Happen to Their Archives?, Wash. Post, November 2, 2017.
60 Maria Bustillos, Erasing History, Columbia Journalism Rev., Winter 2018.
61 Twitter, Twitter Terms of Service, twitter.com/en/tos (accessed Oct. 11, 2018).
62 Genie Lauren, Twitter Stole From Me and They Can Steal From You Too, The Root, March 6, 2018, www.theroot.com/twitter-stole-from-me-and-they-can-steal-from-you-too-1823548501. Her account was ultimately restored after six months of suspension and appeal.
63 Marc Whipple (@LegalInspire), Twitter (March 6, 2018, 5:47 p.m.), twitter.com/legalinspire/status/97117036845400064118.
64 18 U.S.C. § 1030 (2018).
65 18 Am. Jur. 2d Conversion §1 (West 2018).
66 William Prosser, Torts 91 (4th ed. 1971).
67 Respect Inc. v. Committee on the Status of Women, 781 F. Supp. 1358 (N.D. Ill. 1992).
68 MacGregor v. Watts, 254 A.D. 904 (N.Y. App. 1938).
69 Kremen v. Cohen, 337 F.3d 1024, 1030 (9th Cir. 2003).
70 National Surety Corp. v. Applied Systems, Inc., 418 So. 2d 847 (Ala. 1982).
71 Natalia Holliday, The Art of Destruction of Art: A Collision of Moral Right and First Amendment, Juris Magazine, Jan. 12, 2018, sites.law.duq.edu/juris/2018/01/12/the-art-of-destruction-of-art-a-collision-of-moral-right-and-the-first-amendment/.
72 Texas v. Johnson, 491 U.S. 397 (1989).
73 17 U.S.C. § 101.
74 17 U.S.C. § 106A(a)(3)(B).
75 Cohen v. G&M Realty, 988 F. Supp. 2d 212 (E.D.N.Y. 2013).
76 Gilliam v. American Broadcasting Co., 538 F.2d 14 (2d Cir. 1976).
77 Orwell, supra note 1, at 37–39.
78 Baez, supra note 11, at 11–12.
79 Margaret Atwood, The Handmaid's Tale 303 (Houghton Mifflin Harcourt, Kindle Edition 2019) (1986).
80 Jasmine McNealy & Heather Schoenberger, Reconsidering Privacy-Promising Technologies, 19 Tulane J. Tech & Intell. Prop. 1, 2–3 (2016).
81 Woodrow Hartzog & Evan Selinger, Surveillance as Loss of Obscurity, 72 Wash. & Lee L. Rev. 1343, 1359 (2015).
82 Jonathan E. Moore, Social Media Discovery: It's a Matter of Proportion, 31 T.M. Cooley L. Rev. 403, 422 (2014) (suggesting an approach that recognizes parties' privacy expectations and limits the scope of discovery ordered by courts through a proportionality test, which would consider privacy effects, the breadth of such requests, any potential chilling effect on potential litigants, and the burden that producing such content would cause the parties).
83 Confide, Features (retrieved March 31, 2019), getconfide.com.
84 Ashley Parker & Philip Rucker, Upheaval is Now Standard Operating Procedure Inside the White House, Wash. Post, Feb. 13, 2017, www.washingtonpost.com/politics/upheaval-is-now-standard-operating-procedure-inside-the-white-house/2017/02/13/d65dee58-f213-11e6-a9b0-ecee7ce475fc_story.html?utm_term=.b1940d392beb.
85 Jonathan Swan & David McCabe, Confide: The App for Paranoid Republicans, Axios, Feb. 8, 2017, www.axios.com/confide-the-new-app-for-paranoid-republicans-2246297664.html.
86 Annie Karni & Alex Isenstadt, Sean Spicer Targets Own Staff in Leak Crackdown, Politico, Feb. 26, 2017, www.politico.com/story/2017/02/sean-spicer-targets-own-staff-in-leak-crackdown-235413.
87 House of Representatives Committee on Oversight and Government Reform (115th Congress), Letter to Kathleen McGettigan, March 8, 2017, oversight.house.gov/wp-content/uploads/2017/03/2017-03-08-JEC-EEC-to-McGettigan-OPM-Federal-Records-Act-due-3-22.pdf.
88 Josh Gerstein, Judge Hears Suit on Trump White House Use of Encrypted Apps, Politico, Jan. 17, 2018, www.politico.com/story/2018/01/17/white-house-encrypted-apps-hearing-343774.
89 Carol D. Leonnig, Josh Dawsey & Ashley Parker, Ethics Training Reminds White House Staff Not to Use Encrypted Messages for Government Business, Wash. Post, Feb. 5, 2018, www.washingtonpost.com/politics/ethics-training-reminds-white-house-staff-not-to-use-encrypted-messages-for-government-business/2018/02/04/7636265c-05eb-11e8-94e8-e8b8600ade23_story.html?utm_term=.0d04a080becd.
90 Cyrus Farivar, Judge Should Order Governor to Stop Using Ephemeral App, Lawyers Say, ArsTechnica, Feb. 1, 2018, arstechnica.com/tech-policy/2018/02/lawyers-governors-secret-messaging-app-use-violates-public-records-laws/.
91 Jason Hancock, No Immediate Ban On Greitens' Use of Secret Text App, But Judge Has More Questions, Kansas City Star, Feb. 2, 2018, www.kansascity.com/news/politics-government/article198113764.html.
92 Id.
93 Robert J. Freeman, In a "Poof," Snapchat Puts Public Records Laws to Test, Knoxville News Sentinel, March 15, 2016, www.knoxnews.com/story/opinion/valley-views/2016/03/15/poof-snapchat-puts-public-records-laws-test/81656774/.
94 Taylor Lorenz, How Rep. Eric Swalwell Became the Snapchat King of Congress, The Hill, April 27, 2016, thehill.com/homenews/news/277737-swalwell-snapchat.
95 Eric Hal Schwartz, Why DC's Mayor Joined Snapchat, DC INNO, April 11, 2016, dcinno.streetwise.co/2016/04/11/dc-mayor-muriel-bowser-joins-snapchat-social-media/.
96 Kim Janssen, Snapchat is No Snap for Chicago's Old School Top Cop, Chi. Trib., June 21, 2016, www.chicagotribune.com/news/chicagoinc/ct-eddie-johnson-snapchat-20160621-story.html.
97 Shachar Peled, NYPD Suspends Cop Who Allegedly Posted Snapchat of Handcuffed Family, CNN, Dec. 26, 2016, www.cnn.com/2016/12/26/us/snapchat-arrest-trnd/.
98 Michael J. de la Merced & Katie Benner, Snapchat Filing Shows a Strong Business Tied to Messages that Fade, N.Y. Times, Feb. 3, 2016, B1.
99 Haley Tsukayama, Snapchat Processes 150 Million Images Per Day, Wash. Post, April 16, 2013, www.washingtonpost.com/business/technology/snapchat-handles-150-million-images-per-day/2013/04/16/6732c3f0-a69f-11e2-8302-3c7e0ea97057_story.html?utm_term=.436f15e5c542.


100 Kevin Smith, These are the 17 Most Annoying Things on Snapchat, Buzzfeed, Dec. 1, 2016, www.buzzfeed.com/kevinsmith/17-unwritten-rules-of-snapchat?utm_term=.luY4X0AYD#.jizDkgmYV.
101 Snapchat, Community Guidelines (retrieved March 31, 2019), support.snapchat.com/en-US/a/guidelines.
102 Natalie Jervey, Snapchat vs. Instagram: Who's Copying Whom Most?, Hollywood Reporter, Dec. 1, 2016, www.hollywoodreporter.com/news/snapchat-instagram-whos-copying-951224.
103 U.S. v. Crume, 422 F.3d 728, 733 (8th Cir. 2005).
104 Doe v. Jindal, 853 F. Supp. 2d 596, 607 (M.D. La. 2012).
105 State v. Packingham, 777 S.E.2d 738, 743–44 (N.C. 2015).
106 Packingham v. North Carolina, 582 U.S. ________ (2017).
107 Doe v. Nebraska, 734 F. Supp. 2d 882 (D. Neb. 2010).
108 Id. at 750–51.
109 Republican Party of Minnesota v. White, 536 U.S. 765 (2002).
110 Rangra v. Brown, 566 F.3d 515, 524 (5th Cir. 2009). The court found that the criminal provisions of the Texas Open Meetings Act were "content-based regulations of speech that require the state to satisfy the strict scrutiny test in order to uphold them." Id. at 521. However, the same court's decision in Asgeirsson v. Abbott four years later upheld the Texas Open Meetings Act from a similar First Amendment challenge by government officials, infra. Scholars have argued that because open meetings laws of this kind are content-based, they should be reviewed using strict scrutiny rather than intermediate scrutiny to allow some room for private discussion by public officials; see Steven J. Mulroy, Sunshine's Shadow: Overbroad Open Meetings Laws as Content-Based Speech Restrictions Distinct from Disclosure Requirements, 51 Willamette L. Rev. 135 (2015).
111 Garcetti v. Ceballos, 547 U.S. 410, 426 (2006).
112 Pickering v. Board of Education, 391 U.S. 563, 573 (1968).
113 Tex. Gov't Code § 551.144 (2016).
114 Asgeirsson v. Abbott, 696 F.3d 454, 464 (5th Cir. 2012), cert. denied, 133 S.Ct. 1634 (2013).
115 Id.
116 Lily Hay Newman, Encryption Apps Help White House Staffers Leak – And Maybe Break the Law, Wired, Feb. 15, 2017, www.wired.com/2017/02/whitehouse-encryption-confide-app/.
117 Millions of Bush Administration E-mails Recovered, CNN, Dec. 14, 2009, www.cnn.com/2009/POLITICS/12/14/white.house.emails/index.html.
118 What You May Have Forgotten About the Hillary Clinton Email Controversy, CBS News, June 14, 2018, www.cbsnews.com/news/what-you-may-have-forgotten-about-the-hillary-clinton-email-controversy.
119 David Foster Wallace, Infinite Jest 79, 87 (1996).
120 Id. at 430.
121 Neal Stephenson, Snow Crash 351 (1991).
122 See Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001).
123 Virginia v. Black, 538 U.S. 343, 359 (2003).
124 Chaplinsky v. New Hampshire, 315 U.S. 568 (1942).
125 U.S. v. The Progressive, Inc., 467 F. Supp. 990 (W.D. Wisc. 1979).
126 Douglas E. Kneeland, Article on H-Bomb is Made Public by 'Progressive', N.Y. Times, Oct. 2, 1979, A17.
127 Malka Older, State Tectonics 92 (2018).
128 Baez, supra note 11, at 18.
129 Morgan N. Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition, 69 Stan. L. Rev. 1389, 1404–08 (2017).
130 See Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748 (1976); 44 Liquormart v. Rhode Island, 517 U.S. 484 (1996).
131 See Citizens United v. Federal Election Commission, 558 U.S. 310 (2010).
132 Bradbury, supra note 4, at 57.

6 LAW, THE UNIVERSE, AND EVERYTHING

As I write this concluding chapter in 2019, a gang of Nexus-6 replicants have returned to Earth to demand longer lives at Tyrell Corporation headquarters in Los Angeles, but they will have to escape Blade Runners to do it. A long time ago in a galaxy far, far away, sentient droids were a regular feature of life. Tens of thousands of years ago, Ark Fleet Ship B from Golgafrincham crashed into Earth, a planet built to be the most powerful supercomputer in the universe, altering its programming by infesting the planet with a plague of hairdressers, public relations executives, telephone sanitizers, television producers, and other middlemen who would become our human ancestors. Seventy-five years ago, Billy Pilgrim became unstuck in time and began to visit Tralfamadore. Back on Earth, it has been 35 years since Big Brother watched Winston Smith, a clerk in the Ministry of Truth, squelching his push for independence and subduing him into conformity with the state. Eighteen years ago, HAL 9000 showed that we needed to be a bit more careful in designing our artificial intelligence assistants on spaceships. Just nine years ago, we were assured that all of the worlds except Europa were ours, and we should attempt no landing there. In 16 years, Stephen Chinn faces trial for inventing "babybots" that warped the ability of a generation of children to communicate. By then, the Supreme Court will have ruled that a person's Apricity results telling them what they need to do to achieve happiness cannot be admitted in criminal trials as evidence. In 20 years, OASIS creator James Halliday will die, setting off the Easter egg hunt for control of the massively multiplayer virtual world in which most of humanity spends its time. In about 25 years, after decades of corruption and disinformation lead to the collapse of the global order, most of the world will turn to microdemocracies, with Information providing the infrastructure for managing trade, government, knowledge, and security.
In 30 years, replicants will have the ability to become
Blade Runners themselves, and may even develop the ability to reproduce organically. Five years after that, the District of Columbia will be able to stop murder before it happens through its precrime unit, and the city will include eye scanners on nearly every wall to track citizens. In 125 years, autonomous robots will walk among us, free beings after ten years of indentured servitude to their manufacturers or owners. In just under 250 years, the U.S.S. Enterprise will be launched on a five-year mission to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. We are still more than 400 years away from a brave new world, and tens of thousands of years from the revelations of the psychohistorian Hari Seldon, who will put things in motion to create the Foundation, steering intergalactic humanity away from a 30,000-year dark age.1 The future of humanity will flourish. Or it will collapse. Or it will fight off oppression and forces of ignorance to become more than we thought possible. It may very well follow the course some science fiction authors imagine. Or we may be able to save ourselves from some of the darker visions, with their stories as a warning and guide. In this book, I have sought to explore some of the plausible futures in my field, media law and policy, as they have been portrayed in books, on television, and in film. While science fiction as a genre does not solely exist to project the future, it does function as a laboratory to examine ideas about how we live today and how we may live tomorrow, especially with the advent of technology that may change the way we communicate, travel, and experience the universe around us. 
We will continue to invent new technologies, bringing us to what Vernor Vinge called our modern paradox: “harnessing the creativity of humankind without destroying the world in the process.”2 Cory Doctorow’s stories are set in London next week or San Francisco next year; Margaret Atwood’s are in a near enough future in America when things have clearly gone off the rails; and Isaac Asimov’s settings range from Earth in a few decades to planets on the other side of the galaxy many millennia in the future. But they all touch on how we exist with one another, how we communicate, how we struggle to understand, and how we try to manage our relationships through media and technology and the rights and duties we have under the law. The title of this chapter is an homage to Douglas Adams, the author whose books got me interested in science fiction in the first place.3 Life, the Universe, and Everything was supposed to be the final book in his Hitchhiker’s Guide trilogy, except he wrote two more books in the series after that, and was thinking about a sixth book before his untimely death in 2001. So here, I write about “Law, the Universe, and Everything,” not necessarily as a conclusion, but as a place to take a break, reflect on the findings of the previous chapters, and look at what it
means for the future of freedom of expression and journalism, leaving the door open for future adventures if I or anyone else wants to take them.

The Future of the First

In American law, the First Amendment is the bedrock of our liberties to express ourselves as human beings. More than two centuries after it was drafted, the provision's guarantee that "Congress shall make no law . . . abridging the freedom of speech, or of the press" gives Americans broadly defined rights to tell stories, to make art, to criticize our government, to use colorful language, and to share our knowledge with one another, core freedoms that are almost entirely protected from state intrusion. They've had their ups and downs through a civil war, two world wars, the Great Depression, the Cold War, and even, at least so far, the war on terror, but these freedoms are still there. Why, then, does freedom of expression seem to go so awry, so quickly, in science fiction? Peacekeepers crack down on protesters in Catching Fire, the second book of the Hunger Games trilogy by Suzanne Collins, executing a man in front of a crowd after he gives the three-finger salute as a symbol of dissent from the oppression of the Capitol.4 Similarly, any free speech protections are swept away in the future United Kingdom in V for Vendetta after a neo-fascist group takes control of the country. Books are destroyed with relish in Fahrenheit 451, and possessing books with dangerous ideas is a serious crime. Speech itself has been twisted into near meaninglessness in Orwell's Nineteen Eighty-Four, where expressing anything deviating from the state's version of the facts is both illegal and increasingly impossible. In Margaret Atwood's The Handmaid's Tale, dissenters in Gilead are hanged, secular music has been banned, and freedom of speech seems to have gone the way of several other liberties. "There is more than one kind of freedom," Aunt Lydia tells Offred, talking of a society that was dying from having too many choices. "Freedom to and freedom from. In the days of anarchy, it was freedom to.
Now you are being given freedom from.”5 Atwood later wrote that she found the roots for Gilead, the future America in which democracy has been replaced with a theocratic dictatorship, not in the Enlightenment-era rise of the republic as a form of government, but rather in the “heavy-handed theocracy of 17th-century Puritan England – with its marked bias against women – which would need only the opportunity of a period of social chaos to reassert itself.”6 It’s a recurring theme in modern science fiction – the resurgence of fascism after we thought it was vanquished in World War II, or the spread of Soviet-style totalitarianism and fall of liberalism in what we expect to be safe places in the United States and Great Britain. These are all cautionary tales, told by writers who witness encroachments by the state on our liberties and play them out into the dystopian futures they may portend. The state doesn’t need
any new technology; in Fahrenheit 451, a flamethrower is all they need, and in The Handmaid's Tale, it only takes some rope to enforce the new power structure. When there is new futuristic technology, it is employed by the state to maintain control through constant surveillance, such as in Nineteen Eighty-Four and V for Vendetta. When science fiction authors tell us dystopian tales of oppression against speech, they are sending us a warning. Bradbury wrote Fahrenheit 451 partially in response to the rise of fascism in Europe and the emergence of McCarthyism in the United States. Nineteen Eighty-Four was Orwell's portrayal of the natural ends of Soviet-style totalitarianism. Atwood was living on the west side of the Berlin Wall as she wrote The Handmaid's Tale, while the religious right was making political gains in the United States and elsewhere. And the stories they told have become a shorthand for encroachments on our liberties. In Thailand, protesters used the three-finger salute from the Hunger Games books as a symbol against the military government following a coup in 2014, and authorities warned that anyone displaying the sign was subject to arrest.7 Women donned red robes and bonnets, made famous in The Handmaid's Tale, to protest a parade of American legislators and jurists trying to cut freedom of reproductive choice and access to healthcare, and to oppose the appointment of a Supreme Court justice accused of sexual assault.8 People sadly witnessing the resurgence of fascism could be seen wearing hats with the slogan "Make Orwell Fiction Again."9 Limitations on speech and expression are a bit more nuanced in other visions of the future. In Ready Player One, for example, tools have emerged that allow instant censorship of speech across digital networks.
The public school in the OASIS has “classroom conduct software” that filters out bad language, so when Wade Watts curses in frustration when he has a breakthrough idea but can’t leave his classroom to look through documents to help him confirm his suspicions, he gets this message: “PROFANITY MUTED – MISCONDUCT WARNING!”10 Likewise, Innovative Online Industries mutes profane responses from its agents, so when Wade begins working there and smarts off to a couple of users seeking tech assistance, using colorful language such as telling a person who bought a sword he couldn’t use to “shove it up your ass and pretend you’re a corn dog,” he receives a message, “COURTESY VIOLATION – RESPONSE MUTED – VIOLATION LOGGED.”11 Targeted censorship plays an important role in countering the worldwide spread of disinformation in the three books of Malka Older’s Centenal Cycle. Information, the international agency overseeing elections and managing digital communication infrastructure, is initially funded by a “massive settlement” by Coca-Cola for decades of peddling disinformation, as well as a “subsequent lawsuit, building on that precedent, which led directly to the cable news collapse.”12 With these two sources of global disinformation curbed, Information emerges with broad powers to block or alter speech of both citizens and government officials. Lies by campaigning politicians during debates are annotated and
corrected in real time, and political advertising is curtailed in many ways, including a “Preelection Day campaign ban” that prevents last-minute efforts to sway campaigns in worldwide elections.13 An Information agent faces serious criminal charges for “influencing or attempting to influence the outcome of an election with the exacerbating factors from a position of power and before the start of an official campaign period,” after an American nationalist party accuses her of skewing data to favor her political beliefs.14 Information has broad power to block or remove content, including incitements to violence, while also being able to annotate “dehumanizing content” such as hate speech.15 Even false advertising by restaurants is subject to punishment by Information, which bans stock photos on menus; in one instance, a restaurant in DarFur is clearly using photos of hamburgers and “popcorn termites” that do not resemble their actual food – eggs and stewed termites – leading to a citation from an Information agent, and “a warning before Information replaces them with any stills it can find of their actual food, unlikely to be flattering.”16 These may appear to be overbroad powers to regulate speech from the perspective of present-day libertarian approaches to freedom of expression, but because they are housed in an international non-government agency, they are balanced against limits on government power to alter the public record. “Diverting, twisting, or otherwise affecting the Information received by citizens is illegal for any government,” says Ken, a political operative for one of the parties, in Infomocracy. “They can add data but not subtract or change.”17 The speech limitations are aimed at reducing the role of disinformation and propaganda that dominated the early 21st-century political landscape, and that led to the collapse of the previous world order. 
Disinformation and government-based censorship are still evident in countries that opt out of Information and microdemocracy such as China and Russia, as well as “Independenista” territories that pay for limited access to Information but “channel everything through the government before distributing it.”18 In the series, Older provides a vision of a world in which freedom of expression may be more limited, and although there are some consequences for the power structure that places and enforces those limits, they are not exactly dystopian, and they certainly enable more personal liberty and government participation than other authoritarian states. The future may even continue to hold strong free speech protections. In the America imagined by Robert Heinlein in Stranger in a Strange Land, the journalist trying to free the Man from Mars and allow him to speak publicly notes that the man is technically a citizen of the United States, and “it’s illegal to hold a citizen, even a convicted criminal, incommunicado anywhere in the Federation.”19 And freedom of expression remains a core principle in the future of Star Trek: The Next Generation as well. Captain Jean-Luc Picard extols the importance of the right against self-incrimination, the seventh guarantee of the constitution of the United Federation of Planets, and defends it while trying to shut down a McCarthy-esque crusade on his own ship. “There are some words I’ve known
since I was a schoolboy,” Picard tells a judge questioning his loyalty to Starfleet, quoting a judicial decision of her father. “With the first link, the chain is forged. The first speech censured, the first thought forbidden, the first freedom denied, chains us all irrevocably . . . The first time any man’s freedom is trodden on, we’re all damaged.”20 It suggests that even three centuries from now, humans throughout the galaxy will still recognize some of the liberties that we enjoy today.

Journalists . . . in . . . Space!

The future of free speech as portrayed in science fiction is inextricably intertwined with the future of journalism. The image of journalists in works of fiction has long interested me as a scholar. Once, for example, I did a study on how the portrayal of the villainous tabloid journalist Rita Skeeter in J.K. Rowling's Harry Potter books may have affected young readers' perceptions about journalists.21 Since I began this project, I have looked for examples of how journalism in the near or distant future is envisioned by science fiction authors. In general, what I found is that the future of journalism looks a lot like it does today – or at least, as it did at the time the authors were telling their stories. In Stranger in a Strange Land, Ben Caxton, a reporter and columnist for the Post, is one of "more than a thousand reporters in this area, plus press agents, ax grinders, winchells, lippmanns," and other journalism types who arrived to report on the Man from Mars. Caxton, a future big-city journalist imagined from the 1960s perspective of Heinlein, has an expense account, travels by flying taxi, and isn't going to let a thing like ethics get in the way of getting the story. He offers a bribe to Jill, a nurse at the hospital where the Man from Mars is being held, to put a recording device near his room, though he tells her, "you can't expect me to outbid Associated Press, or Reuters."22 Caxton, like many journalists, is also maligned by government officials and threatened with a slander case if he continues to pursue rumors that the Man from Mars who appeared on 3-D television was a fake. The story, of course, turns out to be quite true.23 Journalists are treated with even more disdain in Atwood's The Heart Goes Last, set in a future gated community in which the population rotates in and out of prison in month-long shifts.
Management at the Positron Project calls "muckraking journalists" the "enemy" for trying to investigate some of the shadier aspects of the community, such as the mysterious deaths, and the customized robots that resemble the people who live there. As the manager of the community says, reporters are: maladjusted misfits who claim to be acting as they do in the interests of so-called press freedom, and in order to restore so-called human rights, and
under the pretense that transparency is a virtue and the people need to know.24 The news for news is not all bad in science fiction. Journalism remains important as a luxury product in the neo-Victorian future in which Neal Stephenson set The Diamond Age: Or, A Young Lady's Illustrated Primer. Some of the upper classes of society have access to a blank paper device that receives transmissions of the news each day, although "the top stratum of New Chusan actually got the Times on paper, printed out by a big antique press that did a run of a hundred or so, every morning at about three A.M."25 But the future of journalism, and the law covering it, isn't just on Earth. The universe already has "news reports brought to you here on the sub-etha wave band, broadcasting around the Galaxy around the clock," in the Hitchhiker's Guide to the Galaxy, with announcers "saying a big hello to all the intelligent life everywhere . . . and to everyone else out there, the secret is to bang the rocks together, guys."26 When Earthers finally make it to space, we will apparently take some of our modern notions of press freedom with us. James S.A. Corey's series The Expanse is set in a distant enough future that humans have developed fast interplanetary travel and colonized Luna and Mars, as well as the asteroid belt, but not so far that we are easily able to travel much further beyond – at least until a portal opens up on the far reaches of the solar system. In the third book in the series, Abaddon's Gate, an Earth crew from United Nations Public Broadcasting led by journalist Monica Stuart helps James Holden, the captain of the renegade ship Rocinante that plays a major role in each of the books, get his ship released from a legal hold arising from pending court claims on Earth and Mars. Stuart invokes free press protections to halt the freeze on the ship in exchange for a chance to document the famous crew at work.
“I am protected by the Freedom of Journalism Act,” Stuart tells Holden as she negotiates a way on board. “I have the right to the reasonable use of hired materials and personnel in the pursuit of a story. Otherwise, anyone could stop any story they didn’t like by malicious use of injunctions like the one on the Roci.”27 Stuart resembles a lot of present day journalists, employing recorders and other technology, agreeing to let Holden go off the record for some sensitive security matters, and even becoming a kidnapping target when terrorists attack Tycho Station in the fifth book in the series.28 Even 20,000 years in the future, when humanity has spread throughout the galaxy, journalists retain some legal protections, at least as depicted in Asimov’s Foundation. The press and public may have been excluded from the trial of psychohistorian Hari Seldon, which is perhaps understandable as any right under the First Amendment for the press to attend criminal trials was not clear until nearly 40 years after Foundation was written.29 Nevertheless, on Terminus, home of the Encyclopedia Galactica, there are still newspapers – Dr. Pirenne tells mayor Salvor Hardin, “Do something about that paper of yours!”30 – and they still have
protections under the law. When Pirenne tells him to back off encouraging a public celebration of the 50th anniversary of the establishment of Foundation, Hardin says he can’t, because “the City Charter guarantees a certain minor matter known as freedom of the press.”31 It is heartening to see futures in which freedom of the press carries on as a core principle, and in which the purpose of journalism as an important method of revealing corruption and improving the human condition continues through the ages. And the warning from dystopian novels teaches us to cherish what we have and not let it fade away. In Fahrenheit 451, Professor Faber recalls what happened when people stopped caring about the news, preferring only entertainment. “I remember the newspapers dying like huge moths. No one wanted them back. No one missed them,” Faber tells Montag, the fireman who is questioning the destruction of books and ideas. “And then the Government, seeing how advantageous it was to have people reading only about passionate lips and the fist in the stomach, circled the situation with your fire-eaters.”32

Conclusion

What began as a thought experiment, using works of science fiction as texts to help explore possibilities of the future of communication law, has been a mind-expanding experience. It is not that the classic approach to legal research – focusing on court cases and precedent and legal scholarship – is inadequate in any way. It serves its purpose of helping judges make decisions about cases in the present, reacting appropriately to shifts in technology and culture as they arrive, and influencing policy and legislative efforts. Science fiction, though, is liberating in a way. It allows great foresight, both of near-term issues and potential long-term consequences. It introduces us not just to plausible future technologies, but also to possible ways we should handle them under the law. And it enables us to have good discussions in the present about the world we live in and the way we communicate, potentially in anticipation of legal issues before they arrive. In many ways, this exercise has opened my eyes to even more potential issues. Space is "vastly, hugely, mind-bogglingly big," after all.33 How should statutes of limitations on actions apply when communications may take months or years to reach their intended recipients, such as when ships pass through a portal or stargate into other galaxies, as in Corey's Expanse series? How should intergalactic law handle jamming transmissions and blocking communications from space, a regular occurrence in the Star Wars films? What happens when the "prime directive" not to interfere with the cultures of other civilizations comes in conflict with the guarantees of free expression and expectations of privacy that are at the heart of much of Earth's liberalized approach to human rights? Those questions are beyond what I set out to study here, and I encourage others to think about them and more wide-ranging legal issues inspired by

186 Law, the Universe, and Everything

science fiction. There is, to be sure, some doubt about the wisdom of letting future policy debates be informed by speculation of creative writers. Law professors Neil Richards and William Smart, for instance, warned of the “Android Fallacy” of allowing policy decisions be influenced based on anthropomorphic understandings of robots and artificial intelligence stemming from popular culture portrayals.34 Legal scholar Yvette Joy Liebesman also questioned the value of “enacting laws based on the unknowable results of our scientific inquiry” after applying modern U.S. copyright law to the moving in the Harry Potter series.35 Liebesman noted the importance of lawmakers being reactionary, adapting to problems as they arise rather than looking forward to the unknowable, and was concerned that “plac[ing] confines on technology that does not yet exist could unwittingly stifle creativity in the same manner that drives the public policy” behind patent law.36 Though the article was based in fantasy – reliant on magic and less driven by plausibility than science fiction – her point remains a caution for us to consider. “To attempt to create a system while the technology is in its infancy is akin to Muggles legislating for magic,” Liebesman concludes.37 That said, all legal scholarship does not need to be uniform, and taking some risks by creating plausible, even if not probable, hypothetical examples based on the visions of science fiction authors offers the opportunity to enhance scholarship in the field of media and communication law. 
In their work exploring potential First Amendment rights for robots and artificial intelligence, law professors Toni Massaro and Helen Norton, for instance, imagined “a Supreme Court case called Robots United” involving a “challenge to the government’s regulation of speech by a robot with strong AI, where the regulation restricts speech on a matter of public concern.” The exercise brought together a Supreme Court case establishing First Amendment rights for a non-human entity, a corporation, with plausible future technology as portrayed in science fiction. Their hypothetical even had Judy Jetson as the robot’s lawyer.38 We researchers are in a position to answer questions that may not be as practical or necessary for judges and legislators today, but may very well be considered by them in the future if and when some of the projections become reality. Thus, legal scholars are urged to take the occasional chance to look ahead – not just one year, but ten, twenty, a hundred years or more – at the potential technological world ahead of us. Consider the following questions raised in the visions of the future outlined in the previous chapters, each of which is deserving of further exploration: 

- The long-term impracticability of copyright law, which increasingly becomes more protective of creator and owner rights and restrictive of secondary uses, especially as new forms of creativity and art are developed, virtual worlds emerge as a new venue with no geographic boundaries that make application of the law difficult, and new kinds of creators – animals, AI, and aliens, to name a few – become capable of creating works as authors independent of human control. Science fiction recommendations include shorter fixed copyright terms and microtransactions for settling minor infringement disputes.
- As surveillance technology threatens to become ever-present, outpacing any efforts in the law to curb its advance, more thoughtful approaches to the design of privacy-invading tools and legal restrictions on the kinds of technologies that are foreseeable before they become a dangerous reality. Designing antispyware and designing private spaces in public are potential countermeasures for the law not adapting to future privacy incursions.
- Thinking about how we will treat our robot and AI creations under the law as they become increasingly sentient and independent in their expressive conduct, a task taken on by some scholars already, but one that could be influenced by science fiction portrayals of the value, and risk, of granting personhood rights to them or to humans enhanced with AI to extend their consciousness beyond their natural lives.
- Addressing harms caused by the disappearance of creative works, either by government or private citizens, especially as authors increasingly move toward digital archiving and preservation that could be hacked or erased without notice or consent, and recognizing that sometimes, speech or code that hacks into our minds in a deadly way may need to be destroyed.

In this book, I have only just begun to explore the possibilities. Even with more than five years of almost exclusively reading science fiction books and watching science fiction movies and television shows, I have consumed but a mere drop in the ocean of work that these amazing artists have created. They have more possibilities to share with us than could possibly be examined in the manner I have attempted, as evidenced by the numerous books and films that have been released since I started this project. As I have been writing this concluding chapter, legal scholars looking at free speech and privacy and AI and technology continue to publish books and articles and host symposia and conferences to address the challenges ahead.39 Cory Doctorow has released a new book featuring a novella about digital rights management and copyright law, and the author of the foreword, Malka Older, has been releasing episodes of a new serial with a group of co-writers.40 Captain Marvel includes alien Kree communication technology that enables a souped-up pager to send signals across the universe,41 and new Avengers and Star Wars films are in line to expand their worlds even further later this year. These and more all create new possibilities for us to imagine new technologies and how they may affect the way we communicate, and what it might mean if they become a reality. I am not calling for legal scholars to abandon our current ways of doing things, but instead, I suggest that some of us take this opportunity to contribute to a richer body of communication law and policy research in the years to come by recognizing the importance of science fiction visions in shaping our world. While most of us would do well to keep our feet on the ground, there is also some value in some of us keeping our eyes on the skies.

Notes

1 The references in this section, in order, are: Blade Runner (Warner Bros. 1982); Douglas Adams, The Restaurant at the End of the Universe (1980); Kurt Vonnegut, Slaughterhouse-Five, or The Children’s Crusade (1969); George Orwell, Nineteen Eighty-Four (1949); 2001: A Space Odyssey (Metro-Goldwyn-Mayer 1968); 2010: The Year We Make Contact (Metro-Goldwyn-Mayer 1984); Louisa Hall, Speak (2015); Katie Williams, Tell the Machine Goodnight (2018); Ernest Cline, Ready Player One (2011); Malka Older, State Tectonics (2018); Blade Runner 2049 (Warner Bros. 2017); Minority Report (20th Century Fox 2002); Annalee Newitz, Autonomous (2017); Star Trek: The Motion Picture (Paramount Pictures 1979); Aldous Huxley, Brave New World (1931); Isaac Asimov, Foundation (1942).
2 The McGuffin invented by Vinge was called “You Gotta Believe Me” (YGBM) technology, “humankind’s only hope for surviving the twenty-first century.” Vernor Vinge, Rainbows End 29 (2006).
3 Thank you, Douglas Adams. You didn’t just get me to read more science fiction. You got me interested in books. And text adventure videogames. And writing. I love the world and laugh at the world because of the way you helped me see it. And I know where my towel is.
4 Suzanne Collins, Catching Fire 61–62 (2009).
5 Margaret Atwood, The Handmaid’s Tale 24 (1986).
6 Margaret Atwood on How She Came to Write The Handmaid’s Tale: The Origin Story of an Iconic Novel, Literary Hub, 2012, lithub.com/margaret-atwood-on-how-she-came-to-write-the-handmaids-tale/.
7 Seth Mydans, Thai Students Held for Using ‘Hunger Games’ Salute, N.Y. Times, Nov. 20, 2014, A14.
8 Laura Bradley, Under Their Eye: The Rise of Handmaid’s Tale-Inspired Protesters, Vanity Fair, Oct. 9, 2018, www.vanityfair.com/hollywood/photos/2018/10/handmaids-tale-protests-kavanaugh-healthcare-womens-march.
9 See Amitava Kumar, The Trump Administration is Stranger Than Fiction, Globe & Mail (Toronto), Sept. 5, 2018, www.theglobeandmail.com/arts/books/article-the-trump-administration-is-stranger-than-fiction/.
10 Cline, supra note 1, at 70.
11 Id. at 284.
12 Malka Older, Infomocracy 53 (2016).
13 Id. at 194.
14 Older, supra note 1, at 60–61. The party is called “AmericaTheGreat” and is described as “a nationalist government that barely hides its white-supremacist platform.” Id.
15 Malka Older, Null States 73 (2017).
16 Id. at 133.
17 Older, supra note 12, at 355.
18 Older, supra note 1, at 167.
19 Robert Heinlein, Stranger in a Strange Land 27 (1961).
20 Star Trek: The Next Generation, The Drumhead (Paramount Television broadcast, April 29, 1991).
21 My worries that Rita Skeeter and the Daily Prophet were poisoning the minds of kids against journalists were thankfully not borne out by the study. Indeed, young readers of the Harry Potter books were more likely to have positive feelings about the trust, credibility, and ethical behavior of journalists than kids who had not read the books. See Daxton R. “Chip” Stewart, Harry Potter and the Exploitative Jackals: How Do J.K. Rowling’s Books About the Boy Wizard Impact the Salience of Media Credibility Attributes in Young Audiences, 2 Image of the Journalist in Popular Culture 1 (2010).
22 Heinlein, supra note 19, at 21.
23 Id. at 46, 49.
24 Margaret Atwood, The Heart Goes Last 146–47 (2015).
25 Neal Stephenson, The Diamond Age: Or, A Young Lady’s Illustrated Primer 36 (Bantam Spectra 2008) (1995).
26 Douglas Adams, The Hitchhiker’s Guide to the Galaxy 96 (Pocket Books 1981) (1979).
27 James S.A. Corey, Abaddon’s Gate 68 (2013).
28 James S.A. Corey, Nemesis Games 350–51 (2015).
29 See Richmond Newspapers Inc. v. Virginia, 448 U.S. 555 (1980) (finding an implicit First Amendment right for the public and press to attend criminal trials).
30 Asimov, supra note 1, at 53. While Hardin does not run the newspapers, he quietly owns a controlling stake in their ownership. Id. at 65.
31 Id. at 53.
32 Ray Bradbury, Fahrenheit 451 85 (Simon & Schuster 2012) (1953).
33 Adams, supra note 26, at 76.
34 Neil M. Richards & William D. Smart, How Should the Law Think About Robots?, in Robot Law 22 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).
35 Yvette Joy Liebesman, The Wisdom of Legislating for Anticipated Technological Advancements, 10 J. Marshall Rev. Intell. Prop. L. 153, 180 (2010).
36 Id.
37 Id. at 181.
38 Toni M. Massaro & Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Northwestern U. L. Rev. 1169, 1173 (2016).
39 See Mary Anne Franks, The Cult of the Constitution (2019) (examining how modern Constitutional interpretation has enabled “our deadly devotion to guns and free speech”); Tim Wu, The Curse of Bigness: Antitrust in the New Gilded Age (2019) (examining the concentration of corporate power in big tech companies that shape the way we communicate and experience the world). Meanwhile, Columbia Law School is hosting a symposium on “Common Law for the Age of AI” just after I send a final draft of this manuscript to the publisher, featuring authors such as Wu, AI researcher Kate Crawford, and tech law scholar Frank Pasquale. See Columbia Law Review, Common Law for the Age of AI, columbialawreview.org/symposium-2019/ (accessed April 3, 2019).
40 See Cory Doctorow, Radicalized (2019); Malka Older, Jacqueline Koyanagi, Fran Wilde & Curtis C. Chen, Ninth Step Station (2019).
41 Captain Marvel (Marvel Studios 2019).

INDEX

2001: A Space Odyssey xv, 7, 109, 112, 178 3D printing 21, 30, 42 AI: Artificial Intelligence 109 AT&T 42 Abaddon’s Gate 184 Adams, Douglas xii, 39, 53, 111, 112, 179, 188 Areopagitica 147 Afghanistan 148–9 Ai Weiwei 159 Aibo 111 Alcott, Louisa May 154 Alderman, Ellen 61–2 Alexa 26 Alexander, George J. xviii algorithms viii, xxiv, 112–3, 115–6, 124, 125, 129, 133, 136 Alien 44 aliens viii, xviii, xxii, xxiii, 1, 25, 29, 35, 37, 39–40, 48, 54–5, 98, 112, 117, 186, 187 Amazon 2, 26, 27, 89, 91 American Civil Liberties Union (ACLU) 18, 76, 89, 91, 102 Anders, Charlie Jane xxvi androids see robots in science fiction Apple xv, 6, 26, 65 Arrival 54–5 Artificial Intelligence xxiv, 7, 14–5, 19, 20, 22, 25, 96, 107–143; regulated in science fiction 12, 16, 19; personhood rights

16–7, 107, 109–10, 116–20, 124–5, 134–5, 186, 187; intellectual property rights 110, 126–31, 186 ASCAP 49 Asimov, Isaac xv, xviii, 31, 53–4, 59, 107–8, 110, 117, 119, 121, 137, 179, 184 Atwell, Hayley 134 Atwood, Margaret xxv, 161, 179, 180, 183–4 augmented reality (AR) 48, 81 Australia 32, 97 automated journalism 128, 129–30 Autonomous xxii, xxiv, 4–5, 16, 23, 25, 28, 36, 109, 110–1, 118, 121, 122, 123, 126–7, 131, 179 Avatar 1, 35 Avengers films 1, 113, 187 Bacigalupi, Paolo 2 Back to the Future 48 Baez, Fernando 147, 161, 171 Bahrani, Ramin 152 Baidu 115 Balkin, Jack 108, 125 Bambauer, Jane 114 Barbas, Samantha 85 Batman 71, 98–9 Baum, L. Frank 110 Bentham, Jeremy 69 Berne Convention 40, 49 Bethe, Hans A. 146–7


big data 133, 138 biometric data 61, 67, 79, 87–91, 93–5, 103 Biometric Information Privacy Act (BIPA) 90–1, 93 Blackstone, William 148 Blackmun, Harry xviii Black Mirror 3, 10, 79–80, 81–2, 94, 95, 96, 111, 134; “Nosedive” 10, 96; “The Entire History of You” 3, 79–80, 94, 95, 96, 111; “Arkangel” 81–2, 94, 95; “Be Right Back” 134 Blackie the Talking Cat 122 Blade Runner xii, xvii, 47, 120, 123, 178 Blade Runner 2049 120, 123, 136, 179 Blasi, Vincent 147 “Blurred Lines” 45 BMI 49 BoingBoing 35, 40 Bollea, Terry see Hulk Hogan Booker, Keith 68 books, destruction of 146, 171, 180 Bork, Robert 62 bots 112–3, 120, 122–4, 132 Bowser, Muriel 163 Bradbury, Ray xii, xvi-xvii, xviii, xxii, xxiv, xxv, 144, 146, 148, 171, 172 Brandeis, Louis 53, 62, 64, 83 Brave New World xxviii, 101, 179 Brenner, Susan 117, 134–35 Brin, David 81 Brown, Kimberly 92 Brown, Nina 128–29 Budiardjo, Luke Ali 128, 129–30 Bujold, Lois McMaster 11 Bush, George W. 166 Bush, Jeb 163 Bustillos, Maria 157 Butler, Octavia 25 California Consumer Privacy Act 63 Calo, Ryan 16, 109–10, 111 Calvert, Clay 155 Cambridge Analytica 63 Cameron, James 1, 88 Canada 97 Čapek, Karel 110 Captain America: The First Avenger 35 Captain Marvel 187 Carmody, Casey 124 Casey, Bryan 111–2, 125 Catching Fire 180 Caves of Steel, The 119

censorship 67, 98, 145–53, 181–2; by private individuals 153–160, 171, 181 chatbots see bots Chiang, Ted 54–5 Chiffons, The 44 China 96, 98, 115, 148, 182 Christopher Robin 148 Civil Rights Act 151 Clarke, Arthur C. xv, xviii, 109, 112 Clarke, Susanna 144, 154 climate change xxvi, 5–6, 9, 22–3, 29, 134 Cline, Ernie xix-xx, xxiii, 11, 32, 35, 45–7, 48, 53 Clinton, Hillary 162, 163, 166 clones 11 Closed-circuit television cameras (CCTV) 27, 67, 70, 79, 94 code 115, 121–2, 168–9, 187 Colbert, Stephen 124 Collins, Suzanne xxiii, xxv, 70, 180 Comey, James 65 Communications Decency Act 86, 124–5 Computer Fraud and Abuse Act (CFAA) 35, 62–63, 158, 160 Confide xxi, 97, 162–3, 165 conversion law 158–9 copyright xxiii, 34–59, 114–5, 150, 157, 159–60, 186–87; fair use 36–8, 48, 52, 56; and virtual reality xix-xx, 46–9; maximalism 34–6, 40, 42, 45, 55, 131; public domain xx, xxiii, 36–7, 43–4, 46, 128, 130–1; length of term xxiii, 35, 36–7, 43–4, 50–1, 131; and code 38; and licensing 49; originality 53; and robots/AI 126–31, 186 Copyright Act of 1976 37 Copyright Alert System 42 Corey, James S.A. xxv, 30, 184, 185 Corley, Eric 114–5, 122 court decisions in science fiction see judicial opinions in science fiction Crawford, Kate 189 Creative Commons 50 Crichton, Michael xiii-xiv “Cyborg and the Cemetery, The” 133–4 cyborg see robots in science fiction Dark Knight, The 71, 98, 102 Darling, Kate 109, 111, 118 Darth Vader 113, 135–6 Data, Lt. Commander xxiv, 107, 116–7, 136–7 Debs, Eugene V. 149


defamation 123–4, 126, 135, 152 Depp, Johnny 134 Diamond Age, The xxiv, 2, 43, 80, 81, 184 Dick, Philip K. xvii, xxii, xxiv, 88, 109, 119–20, 131 Digital Economy Act of 2010 41–42 Digital Millennium Copyright Act (DMCA) 9, 38–39, 43, 48, 115 Digital rights management 2, 9, 30, 38 Disinformation 75 Do Androids Dream of Electric Sheep? xxiv, 109, 119–20 Doctorow, Cory xi, xvii, xxi, xxii, xxiii, 2, 4, 5–6, 8–9, 12, 18–9, 20–1, 24, 29–30, 35, 40–1, 42–3, 45, 53, 70–1, 89, 111, 134, 136, 179, 187 dogs, drug-sniffing 64, 71 Douglas, William O. 63 Drivers Privacy Protection Act 62 drones xxv, 82, 104 Dropping a Han Dynasty Urn 159 dystopia xx, xxiii, xxiv, xxv, 20, 26, 34, 39, 42, 43, 46, 55, 68–9, 71–72, 75, 101, 144, 171, 180–2, 185 Electronic Communications Privacy Act 62, 66 Electronic Frontier Foundation (EFF) 2, 4, 5, 9–10, 34, 38, 92 Electronic Privacy Information Center (EPIC) 91 encryption 21, 25–6, 32, 43, 65, 114–5 enhanced humans 110, 113, 117, 133–4 ELIZA 112–13 “EPIC 2014” 27–28 “EPICAC” 109 ESPN 49 Eternal Sunshine of the Spotless Mind 98 European Convention on Human Rights 121, 148 European Union 35, 66, 121, 159–60 Ex Machina 128–29 Expanse, The xxv, 30, 184, 185 Facebook xix, 10, 33, 51, 63, 75, 79, 86, 88, 95, 134, 164 Facebook Live 79 facial recognition 88–91, 94, 96; and efforts to ban 91–3 Fahrenheit 451 xvi-xvii, xxii, xxiv, xxv, 3, 144, 145, 146, 149, 152, 170, 172, 180–1, 185 Fair Information Practices (FIP) 99

fair use see copyright Falk, Donald 115 Family Educational Rights and Privacy Act (FERPA) 62 Farivar, Cyrus 63–64, 103, 175 Fiesler, Casey 10, 80 Fifth Amendment 63, 73 Fight Online Sex Trafficking Act of 2017 (FOSTA) 86 First Amendment xxiv, xxv, 63, 66, 76–8, 82, 91–3, 95, 96, 97, 148–53, 156, 159, 164, 168–71, 180; and robots 109–10, 114–6, 120–3, 186 Fiser, Harvey 94 Fishman, Joseph 45 Fitbit 74, 79 Flynn, Gillian 154 Ford, Paul 42, 45, 49 Foreign Intelligence Surveillance Act (FISA) 73 Foundation xxii, 53–4, 179, 184–5 Fourth Amendment 63–5, 72, 73, 92, 95 Franklin, Benjamin 146 Frankenstein xxii, 14, 68, 107–8, 137 Franks, Mary Anne 189 Freedom of Information Act (FOIA) 66, 97, 132, 163 Freedom of the Press Foundation 34 Freedom of speech xxv, 17, 148, 164–5 Free Software Foundation 50 Freeman, Bob 163 Frischmann, Brett 79, 133 Fulda, Nancy 133–4 Future Tense 2 Garcetti, Eric 163 Gattaca 10 Gawker 4, 66 Gaye, Marvin 45 General Data Protection Regulation (GDPR) 63 Genetic Information Nondiscrimination Act (GINA) 94 Germany 123, 146, 150, 167 Getty Research Institute 149 Ghostbusters 48 Gibson, William viii, 57 Ginsburg, Jane 127–8, 129–30 Giver, The xxiv, 144, 161 GlaDOS 113, 134 Gleeson, Domhnall 134 Global Positioning System (GPS) 64, 74 GNU General Public License 50


Gone Girl 154 Goebbels, Joseph 146 Google xv, xix, xxiv, 27, 35, 52, 63, 67, 75, 86, 90, 115–6, 148, 172 Google Books 22, 50, 52 Google Glass 80, 81, 82 Google Street View 67, 84–5, 97 Gothamist 157 government destruction of records 160–7 Greece 147, 149, 161 Greeley, Horace 173 Greenbaum, Dov 92 Greitens, Eric 162–3, 164 Griffiths, John xvii Grossman, Austin 32 Guardians of the Galaxy Vol. 2 133 Gulliver’s Travels 126 Gunkel, David 110, 123, 124 Guzman, Andrea xi, 4, 6–7, 26, 113 Hal 9000 7, 109, 112, 178 Hall, Louisa xi, xxii, xxiv, 4, 7, 12, 14, 22, 24–5, 30, 113, 120 Handmaid’s Tale, The xxii, xxv, 161, 180–1 Harari, Yuval Noah 2 Hard Rock Café 150 Harlan, John Marshall xxvii Harris, Kamala xxvi Harrison, George 44 Harry Potter series xvi, 150–51, 183, 186, 188–9 Hartzog, Woodrow xi, xxi, 82, 87, 89, 91, 93, 97, 99, 162 Hawking, Stephen 136 Health Insurance Portability and Accountability Act (HIPAA) 62 Heart Goes Last, The 183–4 Heinlein, Robert 71, 182, 183 Her 26, 112 Higgins, Parker 34, 55 Hitchhiker’s Guide to the Galaxy, The xv, 39, 53, 110, 184 Homeland 5, 70 Hopkins, Patrick 94 Hopkins, W. Wat xi, xxi Hubbard, F. Patrick 117 Hugo Award 5, 43 Hulk Hogan 66 human-machine communication 6, 24–6, 113–4 Hunger Games, The xxiii, xxv, 1, 70, 180, 181

I, Robot stories xxvii, 108, 117 implants 23, 73, 79–82, 87, 136 Incredibles, The 88, 98 Incredibles 2 1, 98 India 121 Infinite Jest xxv, 167–8 Infomocracy xxii, xxiii, 15–6, 20, 23, 26–7, 75, 85, 99, 178, 182 “Insistence of Vision” 81, 82 Instagram viii, 157, 160, 164 intellectual property xx-xxii, xxiii-xxiv, 4, 5, 11, 18, 28–29, 30, 34–6, 39, 40, 43, 47, 48–9, 53–5, 110, 125–31, 136, 150–1, 157, 158, 159 intrusion upon seclusion 67–8, 72, 78, 83–4, 86, 91 iPad xv iPhone 65 jailbreaking 2, 30, 43 Johansson, Scarlett 112 Johnson, Eddie 163 Jonathan Strange & Mr Norrell 144–5, 153–4, 158, 170 Jones, Gwyneth xiv-xv Jones, Meg Leta xvi, 112, 123, 124 journalism in science fiction 27–9, 71, 74, 183–5 judicial opinions in science fiction 13–4, 60, 93 Jurassic Park 1 Kaminski, Margot 109 Karnow, Curtis 123 Kennedy, Caroline 61–2 Khanna, Derek 51 Khoury, Amir 131 killer speech xxv, 145, 167–70, 187 King, Stephen 154 Kirtley, Jane 106 Knight Rider 48 Kreimer, Seth 75–76 Kristof, Nicholas xiii-xiv Kurzweil, Ray xxiv, 137 Kyle, Chris 135 Lafferty, Mur 2 Lakier, Genevieve 105, 150 Lambe, Jennifer 154–5 Lancman, Rory 156 Lanham Act 150, 160 Lauren, Genie 158


laws in science fiction see legislation in science fiction Lawsuits in science fiction 15–6, 39, 181 Leckie, Ann 113, 118 Le Guin, Ursula xiv, 25 legal research xiii, xxvi, 185–8 legislation in science fiction 18–9, 39–41, 42, 43–4, 53, 82, 86, 117, 119, 120, 134, 182–3, 184 Leiber, Fritz 126 Lemley, Mark 48, 111–2, 125 Lessig, Lawrence xx, 19, 50, 51 Levering, Steve xi, xxi Lewis, Seth 124 Lexicon, The 150–1 libel see defamation license plate readers 70, 92, 94 Lidsky, Lyrissa 84 Liebesman, Yvette Joy xvi, 186 Life, the Universe, and Everything 179 Lincoln, Abraham 149 Liptak, Adam xiii-xiv Litman, Jessica 50, 51 Littau, Jeremy xi, xxi, 4, 10 Little Brother xxi, xxii, 3, 5, 21, 24, 70, 72, 89, 95 Little Women xxiv, 154 Liu, Ken 2 live streaming 79 Lowry, Lois 144, 161 machine learning viii, 52, 112, 128, 130, 133, 136 Man in the High Castle, The 161 Mandel, Emily St. John 2, 153 Mapplethorpe, Robert 150 Marine Mammal Protection Act 127 Mars 64, 71, 182–83, 184 Massaro, Toni 109, 120–1, 123, 186 McCarthy, Joseph 146, 181, 182 McClurg, Andrew 84 McNealy, Jasmine 162 Men in Black 98 Meerkat xxi “Melancholy Elephants” 43–5, 50–1, 53 Mickey Mouse 35, 37 Microsoft xv, 112, 124, 127 Milton, John 147, 151 Mink, Thomas 152 Minority Report xvii, 88, 179 misappropriation of likeness or image 62; see also right of publicity Misery 154, 158

Mission: Impossible 162 Missouri Sunshine Act 162–3 mobile phones see Smartphones Montal, Tal 123 Monty Python 48, 160, 167 Moore, Alan xxiii, xxv Moore, Jonathan 162, 175 Monsters University 154 Mother Night 135 Mr. Penumbra’s 24-Hour Bookstore xxii, 7, 22, 52, 130 music downloading 34, 39 Music Modernization Act of 2018 49 NASA xviii, 48 Nader, Ralph 83 Nakar, Sharon 92 “Nanolaw With Daughter” 42, 43, 49 Naruto 127, 128 National Security Agency (NSA) 73 Near, Jay 149 Neuromancer 57 Newitz, Annalee xi, xxii, xxiv, 2, 4, 16–7, 23, 25, 28, 30, 36, 109, 110, 118, 126 New Zealand 42 newsgathering 84 newspaper theft 154–57 Next Rembrandt project 127–28 Nineteen Eighty-Four xxii, xxiii, xxiv, xxv, 6, 68, 70, 72, 75, 95, 144, 160–1, 170, 178, 180–1 Nissenbaum, Helen 61, 66–7, 86 Norton, Helen 109, 120–1, 123, 186 Null States 15, 75, 98 obscenity 93, 150, 169 obscurity 66, 96–7, 99, 101, 161, 162, 169 Older, Malka xi, xxii, xxiii, xxv, 4, 12, 15–6, 18, 20, 23, 26–7, 74, 75, 85, 94, 95, 98, 99, 170, 181–2, 187; foreword vii-x Olmstead, Roy 64 Orwell, George xii, xxii, xxiii, xxv, 6, 68–9, 70, 72, 75, 160–1, 171, 180, 181 Ozma of Oz 110 Pandora 49 Panopticon 69, 72 Pasquale, Frank 123–4, 189 patent law 28, 36, 123, 126, 159 Pentagon Papers 149–50, 151, 169 People for the Ethical Treatment of Animals (PETA) 127 Periscope xxi


Petersen, Jennifer 115 phones see smartphones photography 38, 67, 77–8, 83–4, 85, 87, 91–2, 127 Picard, Rosalind 109 Pirate Cinema xxii, xxiii, 3, 5, 18; “Theft of Intellectual Property Act” 18, 40–1, 43, 51, 70 Plagiarism 44, 45, 53 Plausibility ix, xxii, 21–4, 29–30 Pleo 111 Portal 113, 134 Portal 2 134 post-human age see singularity practical obscurity see obscurity Pratchett, Terry 48 Presidential Records Act 162, 166 Pretties 101 Price, Nicholson xi, 4, 11, 30 Princess Leia 48 “Printcrime” 42, 43 privacy xxiii, 21, 60–106, 117; in public places 61, 64, 75–6, 82–3, 85, 86–7, 93, 99; and democracy 61–2; as a right to be let alone 62, 63; privacy torts 62, 65–6, 67, 84; sexual 63, 87; and employment 66, 73, 86–7; for robots 117 Privacy Act of 1974 62 Progressive, The 169–70 Prokofiev, Sergei 44 Prosser, William 62, 83, 158 Protect Intellectual Property Act (PIPA) 35, 40 public domain see copyright publication of private facts 62, 84–6

Qualified immunity 152 Radio Frequency Identification (RFID) 70, 72, 74 Radiohead 121, 140 Rainbows End 48, 49, 81, 133, 153 Ready Player One xi, xix-xx, xxi, xxiii, 3, 11, 12, 35, 45–7, 49, 51, 73, 92–3, 178, 181; film version 47 recording in public 71–2, 76–7, 152 Rekognition 89, 91 Reddit 35 Reetz, Dan 22, 58–59 Reich, Zvi 123 Reid, Rob xxiii, 37, 39, 45, 49 Reinhardt, Stephen 135 Rembrandt van Rijn 127

Restaurant at the End of the Universe, The 112, 178 retinal scanners 88, 90, 96 Rhapsody 39 Richards, Neil 110, 111, 118, 136, 186 right of publicity 97–8 right to be forgotten 66, 96, 148, 172 “The Right to Repair” 38–39 Return of the Jedi 135 Revenge of the Sith 135 Ring, The xxv, 145, 168 Ringu 168 Roberts, John xiii, xxvi, xxviii, 64, 65 Robinson, Spider 43–44, 45, 53 Robocop 113 Robogenesis 17 Robopocalypse xxii, 17, 19, 108–9, 110, 112, 118–9, 122, 130–1, 133, 137; “Robot Defense Act” 19, 119 robotics xv, 3, 17, 19, 110, 113, 123, 136; Asimov’s Three Laws of 108, 121 robot lawyers 18, 131 robots 108, 187; definition of 110–3; and free speech xxiv, 113; and intellectual property rights xxiv; rights of 17–8, 110, 116–20; and regulation 109–10, 135–7; social robots 111, 118; liability for actions 123–5, 136; and sentience 107, 109–10, 113, 116, 118, 120, 122, 125, 134, 136; see also artificial intelligence Robots and Empire 119 Rome 149, 157, 161 Romney, Mitt 72 Rowling, J.K. xii, xvi, 48, 150, 183 Russell, Mary Doria xxiii, 54 Russia 98, 168, 182

Sanders, Amy Kristin 124 Sanders, Bernie 163 Saturday Night Live 11 Scalia, Antonin 64 Scalzi, John 2, 38–39 Schoenberger, Heather 162 Schroeder, Jared 133 science fiction as genre viii-ix, xvi-xvii, xxvi, 1–4, 29, 179, 185; functional view 8; as limiting form 7; in higher education 10–1 Scientific American 146–47 Search for Extraterrestrial Intelligence (SETI) 54 “Second Variety” 109, 131 Selinger, Evan 91, 93, 133, 162


Shatner, William 11 Shelley, Mary xxii, 14, 68, 107 Signal xxi, 162 Siri 7, 26 Silver, Derigan xi Silver Eggheads, The 126 singularity xxiv, 110, 117, 137 Six Flags 90–1 Slater, David 127 Slaughterhouse-Five xiv, xxii, 178 Sloan, Robin xi, xxii, 2, 4, 7–8, 21–22, 27–8, 52, 130 Smart, William 110, 111, 118, 136, 186 smartphones 64–5, 79, 82, 133 Smolla, Rodney 82, 84 Snapchat 82, 97, 162, 163–5 Snow, C.P. xiv Snow Crash xxi, xxiv, 3, 48, 57, 80, 82, 168, 170 Sobel, Benjamin 128–29 social credit scoring 96 Solo 118 Solove, Daniel 63, 66–7, 82, 85, 97 Solum, Lawrence 117 Sonny Bono Copyright Term Extension Act of 1998 37 Sony 55, 111 South Africa 121 South Korea 42 Soviet Union 146, 168, 180, 181 space travel viii, xviii, xxv, 2, 8, 30, 113, 119, 137, 184, 185 Sparrow, The xxiii, 54 Speak xxii, xxiv, 7, 12, 14, 22, 24–5, 113, 120, 122, 178 Species xviii Spicer, Sean 162 Spielberg, Steven 47, 88, 109 Spotify 49 Stanger, Allison 165 Star Trek xii, xviii, xxii, 11, 38, 179 Star Trek: The Next Generation xxi, xxiv, 12, 25, 107, 116–7, 136–7, 182–3; “Darmok” 25; “Justice” 12, 31; “Measure of a Man” 107, 116–7; “Offspring” 137 Star Trek: Voyager 53 Star Wars films 1, 111, 113, 118, 135–6, 185, 187 stare decisis xvi, xviii, xxviii State Tectonics 15, 75, 95, 170 Station Eleven 153

Stephenson, Neal xiv, xvii, xxi, xxiv, 1–2, 43, 57, 80, 168, 184 Stop Online Piracy Act (SOPA) 35, 40 Stored Communications Act 62 Strahilevitz, Lior 86–87 Stranger in a Strange Land xxii, 71, 74, 96, 182, 183 Stutzman, Fred 97 Sumer 147, 153, 168 surveillance xxiii, 21, 26–7, 61, 63–5, 67, 68–71, 73–4, 75, 78–81, 83–4, 86, 91, 94, 96, 102, 187 Suvin, Darko xvii Swalwell, Eric 163 Swartz, Aaron 35 Swift, Jonathan 126 Swift, Taylor 89 Syria 149 Tay 112, 124 telegraph 67 Tell the Machine Goodnight xxii, 12–3, 19–20, 24, 60, 87, 99, 178 Terminator films 7, 88, 130, 137 terms of service 157–58 Texas improper photography law 78 Texas Open Meetings Act 165, 176 Texas Public Information Act 132, 143 Thierer, Adam 86 Third Amendment 63 Thomas, Anne-Marie 68 Thompson, Matt 27 “three strikes” laws 41–2 Time Warner 42 Tolkien, J.R.R. xii Totalitarianism 72, 144, 180 trademark law 36, 48, 123, 150 Travis, Mitchell xviii Transcendence 134 trespassing 62, 64 Tron xix Trump, Donald J. 162, 188 Tufekci, Zeynep 152–3 Turing, Alan 14 Turing Test 113, 137 Turnbull, Malcolm 32 Turner, Jacob 121, 124 Twelve Tomorrows xxi Twitter xix, 10, 51, 124–5, 134, 152, 157–8, 160 Ugland, Erik 154–55 Uglies xxiii, 69–70


“Unauthorized Bread” 42–3 United Kingdom 41, 42, 79, 121, 180 upskirt photography 77–8 USA Patriot Act 40 utopia xx, 20, 26, 46, 75 V for Vendetta xxiii, xxv, 69, 72, 75, 180–1 Van Houweling, Molly Shaffer 49, 50 Vander Ark, Steven 150 vanishing speech xxiv, 97, 144–72, 187 Velez-Hernandez, Luis Antonio 128 Ventura, Jesse 135 Verne, Jules cv Video Privacy Protection Act of 1988 62 videogames xxiii, 46, 47, 93, 97, 113, 114, 134, 151, 169 Vietnam War 149 Vinge, Vernor 25, 48, 49, 81, 133, 137, 153, 179, 188 virtual reality (VR) xix-xx, 28, 46–7, 57, 153 Visual Artists Rights Act (VARA) 159, 160 Volokh, Eugene xxiv, 48, 115 Vonnegut, Kurt xiv, xxii, xxvi-xxvii, 109, 135, 143 Walgreens 89 Walkaway xxii, 21, 30, 134 Wall-E 111 Wallace, David Foster xxv, 167–8 WarGames 62–3 Warren, Samuel 62, 83 wearable computing xxiii-xxiv, xxv, 23, 48, 61, 69–70, 79–82, 84, 86, 87, 99

Weiland, Morgan 171 Welinder, Yana 88, 91 Wells, H.G. xv Westerfeld, Scott xxiii, 69, 101 Westin, Alan 61, 83 Wetter, Erica xi, xxi WhatsApp 162 Whipple, Marc 158 Whisper 162 White, Vanna 97–98 Wikipedia xv, 35 Wilde, Oscar 38 Williams, Katie xi, xxii, 4, 12–3, 19–20, 24, 30, 60, 99 Wilson, Daniel xi, 4, 17, 19, 108, 110, 121, 130 Wingfield, Thomas xviii Winnie the Pooh 148 wiretapping xxiii, 64, 67, 71, 76, 77 Weizenbaum, Joseph 112–3 Wu, Tim 45, 114, 116, 122, 189 Wykowska, Agnieszka 119 X Prize 2 Xi Jinping 148 Yanisky-Ravid, Shlomit 128, 129 Year Zero xxiii, 37, 39, 43, 49, 50, 51 YikYak 162 Zacchini, Hugo 97 Zuckerberg, Mark 29, 33