
Robot Suicide

Robot Suicide
Death, Identity, and AI in Science Fiction

Liz W. Faber

LEXINGTON BOOKS

Lanham • Boulder • New York • London

Published by Lexington Books
An imprint of The Rowman & Littlefield Publishing Group, Inc.
4501 Forbes Boulevard, Suite 200, Lanham, Maryland 20706
www.rowman.com

86-90 Paul Street, London EC2A 4NE

Copyright © 2023 by The Rowman & Littlefield Publishing Group, Inc.

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without written permission from the publisher, except by a reviewer who may quote passages in a review.

British Library Cataloguing in Publication Information Available

Library of Congress Cataloging-in-Publication Data

Names: Faber, Liz W., 1985- author.
Title: Robot suicide : death, identity, and AI in science fiction / Liz W. Faber.
Description: Lanham : Lexington Books, 2023. | Includes bibliographical references and index. | Summary: "In Robot Suicide: Death, Identity, and AI in Science Fiction, Liz W. Faber blends cultural studies, philosophy, sociology, and medical sciences to show how fictional robots hold up a mirror to our cultural perceptions about suicide and can help us rethink real-world policies regarding mental health"-- Provided by publisher.
Identifiers: LCCN 2023001134 (print) | LCCN 2023001135 (ebook) | ISBN 9781666910483 (cloth ; alk. paper) | ISBN 9781666910506 (paperback) | ISBN 9781666910490 (ebook)
Subjects: LCSH: Science fiction, American--History and criticism. | Robots in literature. | Suicide in literature. | Science fiction films--United States--History and criticism. | Robots in motion pictures. | Suicide in motion pictures. | LCGFT: Literary criticism.
Classification: LCC PS374.S35 F33 2023 (print) | LCC PS374.S35 (ebook) | DDC 813.0876209--dc23/eng/20230119
LC record available at https://lccn.loc.gov/2023001134
LC ebook record available at https://lccn.loc.gov/2023001135

The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI/NISO Z39.48-1992.

For Jackie and Regina

Contents

Preface: A Note to My Readers
Acknowledgments
Introduction: When Robots Choose to Die
Chapter 1: Morbid Machines: Interiority and Mental Health
Chapter 2: Automated Altruism: Self-Sacrifice and US War Culture
Chapter 3: The Human Touch: Eugenics and Assisted Suicide
Conclusion: Programming Life and Death
Bibliography
Index
About the Author

Preface: A Note to My Readers

This book is about suicide. I have tried to be as attentive to my readers’ psychological needs as possible, including adhering to all available standards of language use. Still, some of the texts, data, and cultural practices that I discuss may be difficult or even traumatic to encounter. Please read with care and remember to be kind to yourself. If you are in the United States and struggling with suicidal thoughts, please call or text 988 to speak with a trained crisis counselor. If you are outside the United States, please visit https://blog.opencounseling.com/suicide-hotlines/ to locate the hotline for your country.

Acknowledgments

This book began as a conference paper I presented at PCA/ACA in 2019, where I received wonderful feedback from panelists. Two years later, I pitched the full book idea to Lexington, and so the journey began in earnest. I wrote the whole thing, from proposal to final draft, in just over a year, a ridiculous feat that would never have been possible without the support of colleagues, friends, and family alike.

Thank you first and foremost to the whole Science Fiction and Fantasy Area of PCA/ACA for your thoughts and encouragement over the past few years. Thank you also to my editor, Judith Lakamper, and everyone else at Lexington, from the copyeditor to the design team, who’s making publication possible. And a sincere thank you to my anonymous peer reviewer for the thoughtful feedback on my draft.

Thank you to my amazing colleagues at Dean College for supporting my research and occasionally reminding me to stop working so damned hard: Amy Matten, Brad Hastings, Dawn Poirer, David Dennis, Rob Lawson, Dawn Mendoza, Jonas Halley, and JoAnne Reid, along with everyone else in our campus community. An extra special thank you to our Assistant Library Director, Michele Chapin, for finding all these weird ILL resources throughout the summer. And a big, huge thank you to all my nerdy students for reminding me every day why science fiction, writing, and education matter.

Thank you to my Internet Besties for cheering me on, making me laugh, and reminding me to believe in myself: Rachel, Erin, Rebecca, Bethany, Mary, Caitlin, Shawna, Hattie, Jjenna, and Steph. Thanks also to all my pocket friends on #AcademicTwitter for being so thoughtful, kind, and genuine. Oh, and thank you to the owners of Basic Batch Donuts and Blooming Hearts Roastery & Café in Milford, MA. You literally fueled my writing.

Thank you to my family for listening to me talk about robots all these years. I am particularly grateful to my dad for trying so hard to read my last book and to my Uncle Jim, who forced himself to read it cover to cover. Just knowing that people I care about are taking the time to read my work means more to me than you’ll ever know. (And I promise this one doesn’t have any psychoanalytic theory!) Thanks to my brother for once again acting as my coffee supply man. And finally, thank you to the squishy kitties who love each other so much and have grown to tolerate my existence: Dr. Regina Orange and Ms. Jacqueline Squish. You did literally nothing to assist in the writing of this book, but I’m grateful for your grumpy little faces every single day.

Introduction
When Robots Choose to Die

It was a brisk autumn in Austria the day the first robot suicide occurred. On November 12, 2013, a Roomba, a small disc-shaped vacuum-cleaning bot, was home alone when it sucked up its last bit of cereal from the kitchen counter and then, in a fit of despair, rolled itself right onto a hotplate. There it burst into flames, soon setting the entire room ablaze. By the time firefighters could put out the conflagration, the little self-immolating Roomba was “just a pile of ash.”1 The event was widely reported in Austrian media as a suicide, then made its way to the UK’s Daily Mail,2 then finally to Time3 and Huffington Post4 in the US. Of course, the headlines were tongue-in-cheek, and each article quickly followed up details of the domestic fire with the clear assertion that it was the result of nothing more than a gadget malfunction, not a sentient machine in the throes of an emotional crisis. As far as we know, robots cannot now, nor have they ever been able to, die by suicide.

A few years after the hotplate incident, I was working as a Lecturer at a small college outside New York City and developing a first-year seminar on the topic of robots in science fiction (SF). I read book after book and watched movie after movie, and I began to see a pattern forming: when SF robots aren’t busy murdering and/or serving humans, they sometimes choose to die. And thus the basis of this book began to form. Could a robot die, I wondered. And if it could, under what circumstances would it do so voluntarily?

In my previous work, I have argued that SF places AI in pre-existing narratives, with pre-existing gender roles. In this work, I pick up on that idea and run with it in a new direction, considering how SF places robots in pre-existing cultural notions of suicide. Importantly, though, suicide is a major taboo in the West, particularly in US American culture,5 which is the core focus of this book. While there are dozens and dozens of self-help books related to suicidal ideation and how to support loved ones who are suicidal or have lost someone who died by suicide, scholarship on suicide remains sparse. As I will discuss later in this chapter, much of suicide research has developed out of either psychology or sociology, and rarely the twain shall meet. Further, representations of suicide in media are not only relatively rare (compared to other types of character death), but they are also rarely studied by media scholars. Film scholar Michele Aaron argues that Hollywood depictions of suicide are rare because suicide is often antithetical to Hollywood’s preference for happy endings.6 To date, there are just a few wonderful books on the subject, including Michele Aaron’s Death and the Moving Image,7 Steven Stack and Barbara Bowman’s Suicide Movies,8 and Carlos Gutierrez-Jones’s Suicide and Contemporary Science Fiction.9 In recent years, particularly since the death of beloved Hollywood star Robin Williams, there has been some renewed interest in understanding media reports of suicide and suicide contagion, or what scholars call the Werther Effect. Yet, there currently exists no study of machine suicide in media, despite there being ample case studies available in SF.

While the very concept of robot suicide seems almost laughable (and, indeed, when I tell people I am writing a book about it, I garner quite a few blank stares), what is both remarkable and important about robot suicide in SF, and what sets this study apart from previous ones, is that the suicide of robots is significantly more blatant than fictional human suicide. Siobhan Lyons points out in her study of machine death that involuntary robot death and murder in SF is often rhetorically dehumanizing in order to keep the audience at a psychological distance. This is most evident in works such as Blade Runner, in which replicant androids are said to be “retired,” rather than killed.10 As I will argue throughout this book, SF does situate robots in cultural narratives about suicide from a safe psychological distance, but not through semantic language use. In fact, many of the texts I examine here explicitly use the words “suicide” and “death” in relationship to robots. Nevertheless, to see a robot die on screen or read about the death of a robot in a novel is not the same psychological experience as seeing or reading about a person dying by suicide. Thus, SF offers a unique opportunity to talk about a taboo, openly and honestly, and through this, to consider how a culture deals with suicide, rhetorically, visually, and emotionally.

AN EXTREMELY BRIEF HISTORY OF SUICIDE

When we think of suicide, most of us in the US imagine someone in extreme emotional distress choosing to die using any of the main methods (firearm, suffocation, poisoning, wrist-cutting, etc.). About 45,000 people die this way each year; however, this is only part of the picture. For the purposes of this book, I define suicide in its broadest sense: voluntary death, whether through action or inaction. As I will discuss at length, this broader definition allows us to consider not just individual psychology but also deep-seated ideological and institutional patterns of beliefs and behaviors that contribute to suicidality.

The entire global history of suicide has been covered elsewhere, and so I will not rehash it here.11 Because the main focus of this study is US American culture, I begin instead with the ideology of the Age of Enlightenment, out of which the United States was built. In broad strokes, the Enlightenment was a period in the seventeenth and eighteenth centuries in which philosophies of reason, the scientific method, and individualism dominated. It was also a period in which religion and religiosity were scrutinized by philosophers, including the long-standing Christian doctrine that condemned suicide as a mortal sin.12 In 1775, Scottish philosopher David Hume published his essay “Reason and Superstition,” one of the first major philosophical defenses of suicide in the Western world. Hume counters religious doctrine by arguing that suicide is either part of divine providence or an act of the very free will provided to humans by God. In short, if humans are capable of suicide, it is only because the Christian God has allowed it to be so. Thus, Hume concludes, suicide cannot be contrary to faith.13

As the Western scientific disciplines began to be established in the nineteenth century, the study of suicide became largely divorced from theology and entered into the realms of psychology, anthropology, and sociology. The first major study of suicide came in 1897, with the publication of Émile Durkheim’s book On Suicide.14 Although somewhat problematic by today’s standards (a fact I will discuss in later chapters), Durkheim’s work marks the first time suicide was studied through a systematic, data-driven lens. Throughout the book, he argues that suicide is not an individual phenomenon, but rather a consequence of social patterns, and that by examining information like suicide rates, demographics, and even weather patterns, we can better understand the circumstances under which suicide occurs. This was important because it brought suicide out of the realm of the personal and the medical and into the realm of the social. Despite his efforts, as I will discuss at length in chapter 1, the debate between social and psychological causes of suicide rages on today.

Sigmund Freud, the grandfather of modern psychology, weighed in on suicide in his 1915 essay on melancholia. For him, contrary to Durkheim, suicide is an individual action, the product of an ego that has become so detached from itself that a person cannot see himself as a subject anymore. As an object, he no longer feels the need to preserve his own life, and thus chooses to die by suicide.15 These two dominant secular theories—individual vs. social—have essentially continued on to this day. Recent research on suicide encompasses discussions of systemic oppression,16 psychology and individual trauma,17 and neuroscience.18 Despite centuries of research and philosophizing, though, we simply do not understand with any real certainty what causes suicide. And this is, to put it briefly, beside the point of this book. Rather, my interest is in how we see suicide through a cultural lens, particularly how fictional portrayals of suicide both inform and reflect cultural ideals related to voluntary death.

ROBOT SUICIDE

The question of whether a real robot could die by suicide is mostly beside the point in a study of representation, but it’s nevertheless worth pausing for a brief philosophical discussion. In its broadest sense, understanding whether something or someone can die tends quickly to slide into tautology: to live is to be able to die, and to die is once to have lived. What it means to live is a phenomenological question that philosophers have been grappling with for millennia. Perhaps the most common post-Enlightenment understanding of life, particularly as applied to artificial intelligence, is René Descartes’s pre-Enlightenment metaphysical statement, “I think, therefore I am.” According to Cartesian philosophy, self-awareness is the sole prerequisite for human existence and intelligence.19 So if we can craft a machine with self-awareness, it therefore must be not only intelligent but alive, right? Well, things are much more complicated. After all, a device could be programmed to say, “I am, I exist, is necessarily true each time that I pronounce it, or that I mentally conceive it,”20 and all that would really mean is that some programmer or other had designed it to quote Descartes.

Some recent scholars assert that robot life might be defined by robot death. A particularly salient example is Isaac Asimov’s fictional story “Bicentennial Man,” in which a robot transforms his body so he can die in order to be classified legally as a person; but by the time he is classified, he has died. (I will discuss the story and its cinematic adaptation at length in chapter 3.) To make matters more challenging, death is something that all of us will endure, but most of us will never consciously experience, at least not in a way that can be articulated to others. To think is certainly not to die.

Linguistically, we already talk about robot and machine death all the time. “My phone is about to die” is a perfectly reasonable statement indicating only that the battery in your smartphone is going to run out of charge, so you will need to plug it in. There is nothing existential in such an event. Laura Voss describes this as “(in)animacy,” a sort of linguistic slippage in which we talk about robots “as an agent and as a thing, as both animate and inanimate.”21 Juli L. Gittinger likewise argues that the human-robot relationship is predicated on understanding the robot as an object while simultaneously treating it affectionately as one might any other living being.22 In other words, we know our phone is neither alive nor on the verge of death when the battery meter slips below 10 percent, but we use a sort of rhetorical shorthand by describing it in those terms.

The urge to describe machines in human terms is neither new nor particularly remarkable. SF writers have been describing “machine men” for a century.23 Even in the 1930s, digital computers were commonly understood to be “thinking machines,” able to complete mathematical equations far too complex for the human mind. By the mid-twentieth century, the “computer as mechanical brain” metaphor also came to be used in reverse in the burgeoning field of cognitive psychology. High-level, interdisciplinary academic workshops like the Macy Conferences and the Dartmouth Conferences produced “a powerful new frame through which groups of previously unrelated phenomena could be viewed as connected.”24 Essentially, just as we were already thinking of computers as enormous mechanical brains, so, too, did we begin to think of human brains as complex computers. Such an understanding continues today, not only in psychology and neuroscience, but also in our everyday language. Statements such as “I need to recharge” or “Hold on, I’m processing that information” are part of the same linguistic slippage as describing the death of a cell phone.

The very concept of artificial intelligence relies on this linguistic slippage. After all, we don’t really know what human intelligence is, let alone how to fully construct artificial sentience. As much as we like to think that we have a firm grasp of ourselves and others, we simply cannot understand the reality of human intelligence, except through language and perception. In contrast to the Cartesian self-referentiality of “I think, therefore I am,” a more productive way of understanding selfhood might be intersubjectivity. In the early twentieth century, German philosopher Edmund Husserl defined intersubjectivity as a means of perceiving the objective world outside of oneself. In other words, I exist not because I think, but because I see myself in relationship to the world outside my mind.25 More recent philosophers have described this as the reflexive I/Other relationship. Anthony Giddens thinks of reflexivity and selfhood as the ability to construct a biography of oneself, a sort of cohesive history-making.26 I tell a story about myself, therefore I am. But even this is, as the film Blade Runner succinctly shows us, not necessarily grounded in truth or lived experience. In the film, android Rachael discovers that her memories are all implants, things she can recall but has never actually experienced.27 Judith Butler expands on Giddens’s ideas by pointing out that the biographical narrative always exists in the telling of it. It is an intersubjective negotiation, an event, shared between the self and another.28 We have a conversation about myself, therefore I am. This intersubjective shared experience of memory and selfhood, I argue, is core to understanding how artificial intelligence might be expressed. We will never know from an existential perspective whether a robot is self-aware and sentient, any more than we can ever really, scientifically know whether the person sitting across from us is self-aware and sentient. But we can understand some of the markers of sentience we identify in others, such as speech, embodiment, and personality.

One classic test of computer “intelligence” is the Turing Test, outlined by British mathematician Alan Turing in his now-famous 1950 paper “Computing Machinery and Intelligence.” The idea is that Person A sits in a room and receives typed messages from Person B and Computer A. If Person A can’t tell which messages came from the computer and which came from the person, the computer is said to pass the Turing Test.29 Of course, as many scholars (including myself)30 have pointed out, the Turing Test doesn’t actually tell us much about artificial intelligence. As far back as the 1960s, computer programmer Joseph Weizenbaum wrote a program called ELIZA that, to his horror, passed the Turing Test with flying colors.31 The software was designed to act as a therapist for human users. A person would sit at the computer and type in a problem, and ELIZA would turn the response into a question, prompting further conversation and psychological consideration. Users were so convinced by ELIZA’s therapeutic technique that they actually started telling her deeply personal secrets, having forgotten that Weizenbaum’s entire team of engineers was reading the whole interaction.32 Today, every six months or so, some new computer program is declared to have passed the test. In 2022, a programmer at Google even claimed that his semantic-language program had become sentient, when in fact it had just passed the Turing Test.33

Given this, can we ever truly say that a machine is alive, let alone dead? Probably not, no. But, just as we have indicators of human life and death (pulse, brainwaves, etc.), so, too, could there conceivably be indicators of robot life and death. SF writers have been exploring this for a hundred years, from Karel Čapek’s classic 1920 play R.U.R.,34 on up through Stanley Kubrick’s 1968 film 2001: A Space Odyssey,35 most of Isaac Asimov’s robot fiction,36 and even today in such recent films as Netflix’s original I Am Mother.37 What is most relevant to this book, though, is not whether robots could die, but whether they could choose to do so. Suicide, like death, has indicators that might be read to determine its presence (e.g., a suicide note, recent behaviors, psychological motivations). Determining cause of death is itself an act of interpretation, and when it comes to suicide, not only do we not fully know what causes it, but we can’t always even tell if that’s what happened. In this sense, describing the Austrian Roomba as having died by suicide is as true as describing it as having had a malfunction. Perhaps it did not have the free will to choose death, as Hume described, but certainly the Roomba purposefully took action that caused the end of its existence, regardless of the cause of that action. This, I would argue, is rhetorically within the realm of suicide.
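To make concrete how little machinery can sit behind ELIZA’s apparent empathy, consider a minimal sketch of the pattern-and-reflection technique described above, rendered here in Python for readability. This is an illustration only: Weizenbaum’s original program ran a much larger script of keyword rules (the famous DOCTOR script) in a different language entirely, and the three patterns below are hypothetical stand-ins, not his actual rules.

    import re

    # A few illustrative rules in the spirit of ELIZA's DOCTOR script.
    # Each pairs a pattern to spot in the user's statement with a
    # question template that reflects the statement back at them.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
    ]

    def respond(statement):
        """Turn the user's statement back into a question, as ELIZA did."""
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # Default prompt when no rule matches.

    print(respond("I feel alone in this house."))
    # -> Why do you feel alone in this house?

Notice that the program has no model of the user at all; the sense that “she” understands is supplied entirely by the human reading her output, which is precisely the kind of intersubjective projection at issue here.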


But, as I mentioned at the start of this section, the question of whether a robot actually could die by suicide is a bit beside the point. SF is not reality, nor is it instructional design; rather, SF is a genre that speaks to today’s culture through the lens of tomorrow’s possible science. Most of the robots I will discuss in this book are fictional, neither alive nor dead. They are representations of today’s cultural beliefs about suicide, imagined through the lens of potential future robotics.

MEDIA AND SUICIDE

As I mentioned, fictional representations of suicide are somewhat rare and rarely studied. However, one broad area of research that has gained significant traction in the past few decades is nonfictional media accounts of suicide. This research tends to fall into one of two categories. First, there has been quite a bit of research on cyberbullying and subsequent suicides, particularly in relationship to social media. There is solid evidence to show that cyberbullying—the use of the internet to victimize peers—among young people does increase suicidal thoughts and behaviors.38 According to the US Centers for Disease Control and Prevention, as of 2020, suicide is the second leading cause of death among children ages 10–14 after unintentional injury and the third leading cause of death among people ages 15–24 after unintentional injury and homicide.39 Thus, identifying the causes of cyberbullying and methods for its prevention is a major part of suicide prevention efforts.

Second, there has been ample research regarding nonfiction news reports of celebrity suicides. Much of this work is related to the Werther Effect, a concept describing the way media reports of a suicide can lead to copycat or contagious suicides. There has been international support for the creation of media guidelines for reporting on suicides, including how to describe suicide and which support services to remind viewers and readers of. I will return to the Werther Effect and media guidelines for reporting on suicide in chapter 1, but suffice it to say for now that I have attempted throughout this book to adhere to these guidelines where possible, including the intentional use of the term “died by suicide.” This is preferable to “committed suicide” because to “commit” an act implicitly associates it with the religious notion of “committing a sin.” I want to be very clear: I do not consider suicide to be a sin, nor do I think it is, as some people suggest, a cowardly act. In order to address suicide from both a scholarly and a supportive stance, without the pressure of cultural taboo, it is vital that we see it non-judgmentally as a set of behaviors influenced by both individual psychology and cultural patterns.


CATEGORIES OF SUICIDE

Beginning with Émile Durkheim in 1897, the work of categorizing suicide has been a major part of scholarly interest. Although naming something does not imply understanding it, having the language to talk about a subject as taboo as suicide is an important part of determining cause, effect, and potential prevention. Durkheim proposed three broad categories: egotistical (for oneself, as a result of social isolation); altruistic (for one’s community, as a result of social saturation); and anomic (in relationship to economy).40 In the past century, suicidologists have debated the efficacy of Durkheim’s classifications, but they are, for the most part, the categories from which most sociological suicide research stems. More recently, sociologist Jason Manning has re-examined these classifications. For Manning, suicide is always a result of some form of conflict, and so he categorizes suicide in terms of social and interpersonal relations.41 Psychologists and neuroscientists tend to categorize suicide according to causality: distal (away from self) risk factors are genetic and environmental predispositions; proximal (close to self) risk factors are precipitating events or illnesses such as loss of a job, development of schizophrenia, or the Werther Effect. Neither distal nor proximal risk factors guarantee that a person will die by suicide, but they do increase the risk of suicidal behaviors.42 In media studies, Steven Stack and Barbara Bowman simply divide suicide films into two broad categories: individual causes and social causes.43

Drawing on these previous studies, I propose three categories of suicide representation, which I will analyze in this book:

1. Despondency-motivated: suicide in which an intelligent being suffers from internal psychological turmoil
2. Altruistic: suicide in which an intelligent being sacrifices themselves to save others
3. Assisted: suicide that may be either despondency- or altruism-motivated but which is acted out through the aid of another intelligent being

Importantly, these are not meant to be either medical or sociological classifications. Rather, I propose them as a means of classifying the visual and linguistic rhetoric of suicide. All representation grows out of the culture in which it was produced but is often a less complex version of reality. This, for example, is why genre itself exists as a sort of social contract between creator and audience. We expect robot fiction to include representations of what audiences believe to be futuristic types of robots; we do not expect robot fiction to include doctoral-level equations for the production of artificial intelligence. Likewise, we expect suicide fiction to include representations of suicide based on popular understandings of the phenomenon; we do not expect it to attend to the full complexities of a cultural, psychological, and physiological set of beliefs and behaviors.

SCOPE AND LIMITATIONS

My goal in writing this book is to examine how SF places robots into narratives that feature and grapple with cultural concerns about suicide. It is, to be fair, an odd and wildly interdisciplinary concept. This book is at once about SF, computer culture, suicide, life, death, selfhood, gender, race, disability, mental health, and systemic oppression. I draw on research from literary and media studies, psychology, sociology, philosophy, engineering, computer programming, and medicine. As an Americanist, my primary focus is on US American literature, film, and culture, with one or two notable exceptions here and there. Unlike my previous work, in which I examined the mechanisms of media-making alongside representations,44 here I mostly stick to textual analysis as a methodology because I include both literary and audio/visual media. Although the number of SF texts that portray robot suicide is limited, the texts I examine here are not exhaustive; instead, I have chosen three case studies per chapter in order to focus the conversation on suicide through a cultural studies lens, as opposed to understanding every facet of every text.

In chapter 1, “Morbid Machines,” I will focus on despondency-motivated suicide, including historical and current perspectives on mental health treatment, healthcare access in the US, and the rights of patients. Of particular note is the way the interiority of robots is portrayed in SF, both the tragic and the comic, as well as how texts about despondency-motivated robot suicide have been revised to “soften” or remove the suicidal behaviors. Case studies will include Eando Binder’s Adam Link stories published in Amazing Stories in the 1930s and 1940s, Douglas Adams’s 1979 novel The Hitchhiker’s Guide to the Galaxy and the 2005 cinematic adaptation, and the 2007 GM commercial featuring a robot suicide that caused widespread public outcry.

In chapter 2, “Automated Altruism,” I will use Émile Durkheim’s concept of “altruistic suicide” to explore the notion of self-sacrifice, particularly in the context of US American war culture. I will look at the rhetoric of active sacrifice/passive death in military publications as well as the classic novel The Red Badge of Courage to set the stage for how such language and imagery functions. From here, I will turn to the classic “trolley problem” in both philosophy and AI research, then analyze portrayals of altruistic robot suicide in Isaac Asimov’s Robot Series, the Terminator film franchise, and C. Robert Cargill’s 2017 novel Sea of Rust.

In chapter 3, “The Human Touch,” I will examine the complex ethics of assisted suicide, especially as it is grounded in the ideology of eugenics and white supremacism. To contextualize this, I will briefly trace the history and theory of US American eugenics and its relationship to euthanasia. Finally, case studies will include Walter Tevis’s 1980 novel Mockingbird, Isaac Asimov’s 1976 short story “Bicentennial Man” and the 1999 cinematic adaptation, and the 2012 film Robot & Frank.

In the concluding chapter, I will turn to the use of technology to prevent human suicides. I begin with the recent federal funding of suicide crisis prevention, then consider automated prevention strategies, and finally discuss the whimsical and touching story “Mrs. Griffin Prepares to Commit Suicide,” by A. Que. I then turn to transhumanist efforts to prevent death altogether through whole brain emulation, among other pseudoscientific “innovations,” and consider whether this might affect not just suicide rates but also what it even means to be human. Through this, I analyze two SF texts that act as counters to transhumanist thought: John C. Campbell’s 1930 short story “The Infinite Brain” and Sue Lange’s 2012 novella We, Robots.

Ultimately, the goal of this book is to talk about suicide, especially how it’s represented in robot fiction, but also how philosophers, scientists, and others have grappled with it for the past century. SF, like all good fiction, shines a light on humanity; and one of the more disturbing aspects of humanity is not only our capacity for suicide but also our willingness to avoid the subject altogether. I hope when you have finished reading this book that you will walk away with a new appreciation of SF history. But more than that, I hope you’ll find yourself seeking new ways to address the problem of suicide in US culture. After all, real robot suicide isn’t likely to be a concern for us in our lifetimes; but the alleviation of human suffering is always already ours to address. Please read with care.

NOTES

1. “Paranoid Android: Cleaning Gadget ‘Switches Itself On’ and Moves onto Kitchen Hotplate in ‘Suicide Bid,’” Daily Mail, November 12, 2013, www.dailymail.co.uk/news/article-2503733/Paranoid-android-Cleaning-gadget-switches-moves-kitchen-hotplate-suicide-bid.html.
2. “Paranoid Android.”
3. Jessica Roy, “Robot Reportedly Commits Suicide after Becoming Fed Up with Doing Housework,” Time, November 13, 2013, newsfeed.time.com/2013/11/13/robot-reportedly-commits-suicide-after-becoming-fed-up-with-doing-housework/.


4. Macrina Cooper-White, “Robot Suicide? Rogue Roomba Switches Self On, Climbs Onto Hotplate, Burns Up,” Huffington Post, November 13, 2013, www.huffpost.com/entry/robot-suicide-roomba-hotplate-burns-up_n_4268064.
5. Although the term “US American” may seem redundant, there is a push in cultural studies research to move away from using “American” to refer solely to people from the United States of America. The term “US American” helps us acknowledge that there are many Americas, and the United States is one country among many in the world. For additional information and analysis, see Roxanne Dunbar-Ortiz, An Indigenous Peoples’ History of the United States (Boston: Beacon Press, 2015); Karina Martinez-Carter, “What Does ‘America’ Actually Mean,” The Atlantic, June 19, 2013, www.theatlantic.com/national/archive/2013/06/what-does-american-actually-mean/276999/.
6. Michele Aaron, Death and the Moving Image: Ideology, Iconography, and I (Edinburgh, UK: Edinburgh University Press, 2015), 40.
7. Aaron, Death and the Moving Image.
8. Steven Stack and Barbara Bowman, Suicide Movies: Social Patterns 1900–2009 (Cambridge, MA: Hogrefe Publishing, 2012).
9. Carlos Gutierrez-Jones, Suicide and Contemporary Science Fiction (Cambridge, UK: Cambridge University Press, 2015).
10. Siobhan Lyons, Death and the Machine: Intersections of Mortality and Robotics (Singapore: Palgrave Macmillan, 2018), 52–54.
11. See, for example, Jennifer Michael Hecht, Stay: A History of Suicide and the Arguments Against It (New Haven, CT: Yale University Press, 2013); Derek Beattie and Patrick Devitt, Suicide: A Modern Obsession (Dublin, Ireland: Liberties Press, 2015).
12. Hecht, Stay, 90–91.
13. David Hume, “Reason and Superstition,” in Suicide: Right or Wrong?, ed. John Donnelly, 2nd ed. (Amherst, NY: Prometheus, 1998).
14. Émile Durkheim, On Suicide (London: Penguin Books, 2006).
15. Sigmund Freud, “Mourning and Melancholia,” in The Freud Reader, ed. Peter Gay (New York: Norton & Co., 1989), 588.
16. See Jason Manning, Suicide: The Social Causes of Self-Destruction (Charlottesville, VA: University of Virginia Press, 2020).
17. See Thomas Szasz, Fatal Freedom: The Ethics and Politics of Suicide (Syracuse, NY: Syracuse University Press, 1999); Craig J. Bryan, Rethinking Suicide: Why Prevention Fails and How We Can Do Better (New York: Oxford University Press, 2022).
18. Kees van Heeringen, The Neuroscience of Suicidal Behavior (Cambridge, UK: Cambridge University Press, 2018).
19. René Descartes, Meditations on First Philosophy, yale.learningu.org/download/041e9642-df02-4eed-a895-70e472df2ca4/H2665_Descartes%27%20Meditations.pdf.
20. Descartes, Meditations, 9.
21. Laura Voss, More Than Machines? The Attribution of (In)Animacy to Robot Technology (Bielefeld, Germany: Transcript, 2021), 36.


22. Juli L. Gittinger, Personhood in Science Fiction: Religious and Philosophical Considerations (Cham, Switzerland: Palgrave Macmillan, 2019), 114.
23. Sherryl Vint, Science Fiction (Cambridge, MA: MIT Press, 2021), 35–36.
24. Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996), 161.
25. Edmund Husserl, Cartesian Meditations: An Introduction to Phenomenology, translated by Dorion Cairns (The Hague: Martinus Nijhoff Publishers, 1982), 139–40.
26. Anthony Giddens, Modernity and Self-Identity: Self and Society in the Late Modern Age (Redwood City, CA: Stanford University Press, 1991).
27. Blade Runner, directed by Ridley Scott, written by Hampton Fancher and David Peoples, featuring Harrison Ford and Sean Young (Burbank, CA: Warner Brothers, 1982).
28. Judith Butler, Giving an Account of Oneself (New York: Fordham University Press, 2005).
29. A. M. Turing, “Computing Machinery and Intelligence,” Mind 59, 236 (October 1950), doi.org/10.1093/mind/LIX.236.433.
30. Liz W. Faber, The Computer’s Voice: From Star Trek to Siri (Minneapolis, MN: University of Minnesota Press, 2020), 4–6.
31. Joseph Weizenbaum, Computer Power and Human Reason (San Francisco: W.H. Freeman & Company, 1976), 2–16.
32. Incidentally, John McCarthy, the mathematician who coined the term Artificial Intelligence, openly detested Weizenbaum’s work, describing his book Computer Power and Human Reason as “moralistic and incoherent.” Such in-fighting among early AI researchers is both fascinating and, unfortunately, outside the scope of this study. For more, see John McCarthy, “An Unreasonable Book,” SIGART Newsletter 58 (1976), dl.acm.org/doi/pdf/10.1145/1045264.104265.
33. Natasha Tiku, “The Google Engineer Who Thinks Its AI Has Come Alive,” Washington Post, June 21, 2022, www.washingtonpost.com/podcasts/post-reports/the-google-engineer-who-thinks-its-ai-has-come-alive/.
34. Karel Čapek, R.U.R. (Rossum’s Universal Robots), translated by Claudia Novack (New York: Penguin Books, 2004).
35. 2001: A Space Odyssey, directed by Stanley Kubrick (1968, Burbank, CA: Warner Brothers Home Distribution, 2001), DVD.
36. Isaac Asimov, 7 Book Collection: Robot Series (New York: HarperCollins, 2018).
37. I Am Mother, directed by Grant Sputore (2019, Netflix), www.netflix.com/title/80227090.
38. Sameer Hinduja and Justin W. Patchin, “Bullying, Cyberbullying, and Suicide,” Archives of Suicide Research 14, 3 (2010), doi.org/10.1080/13811118.2010.494133; Dinar Rizqi Perwitasari and Emi Wuri Wuryaningsih, “Why Did You Do That To Me?: A Systematic Review of Cyberbullying Impact on Mental Health and Suicide Among Adolescents,” NurseLine Journal 7, 1 (2022), doi.org/10.19184/nlj.v7i1.27311; Ophely Dorol Beauroy-Eustache and Brian L. Mishara, “Systematic Review of Risk and Protective Factors for Suicidal and Self-Harm Behaviors among Children and Adolescents Involved with Cyberbullying,” Preventive Medicine 152 (2021), doi.org/10.1016/j.ypmed.2021.106684.
39. Centers for Disease Control and Prevention, “10 Leading Causes of Death, United States,” Web-Based Injury Statistics Query and Reporting System, 2020, wisqars.cdc.gov/data/lcd/home.
40. Durkheim, On Suicide.
41. Manning, Suicide.
42. van Heeringen, Neuroscience of Suicidal Behavior, 12–17.
43. Stack and Bowman, Suicide Movies.
44. Faber, The Computer’s Voice.

Chapter 1

Morbid Machines
Interiority and Mental Health

In July 2017, a white, rocket-shaped security robot in Washington, DC, named Steve wheeled itself into a decorative fountain, apparently destroying itself. A nearby office worker, Bilal Farooqui, snapped a photo of the robot floating on its side, surrounded by security guards and curious onlookers, and tweeted facetiously about the incident: “We were promised flying cars, instead we got suicidal robots.” Farooqui’s tweet went viral, garnering over 268,000 likes, 115,000 retweets, and 12,000 quote tweets.1 Unsurprisingly, the internet exploded with tongue-in-cheek responses, ranging from solidarity with the distraught worker to “drain the swamp” political jokes. One person named Adam Singer even responded, “That robot is what all of us want to do in 2017.”2

This incident is hardly remarkable in the context of Twitter’s gallows humor. Yet the way readers anthropomorphized Steve and projected notions of suicidality onto it reveals much about cultural notions of suicide and suicidality. Implicit in the idea that Steve threw itself into the fountain is a popular connection between extreme sadness or dissatisfaction with one’s life and the desire to kill oneself, a sort of worst-case ennui scenario. I call this form of suicide despondency-motivated because the action is driven by an internal sense of hopelessness or emotional turmoil. As I will discuss in this chapter, this form of suicide is the result of a complex relationship between individual psychology and patterns of social behaviors and beliefs. In SF, fictional portrayals of robots dying by despondency-motivated suicide construct a sense of interiority for the living machines, through which we may read the language of despondency.


THE PSYCHOLOGY-SOCIAL PATTERNS DEBATE

The connection between depression and suicide is firmly established in the cultural consciousness; however, of the three categories of suicide I discuss in this book, despondency-motivated suicide is the most complex and has given rise to the greatest number of debates. To put it simply, there is no singular cause of suicide. Therefore, there can be no singular method for prevention, or even consensus on whether prevention is ethical. Rather, all available research suggests that suicide is a complex, multifaceted event that is impossible to predict and challenging to prevent.

From a medical and cognitive science perspective, suicide at the individual psychological level is fairly well understood. Researchers have broken risk factors down into two categories: distal (predisposition) and proximal (precipitants). Distal factors might include a family history of suicide, and even “changes in brain neurotransmission.”3 Neurotransmitters are the chemical messengers of the central nervous system, carrying information between nerve cells throughout the body. In the brain, important mood-related neurotransmitters include serotonin, glutamate, and gamma-aminobutyric acid (GABA).4 So, changes in brain neurotransmission might refer to increased or decreased levels of these chemicals in the brain, thereby affecting a person’s overall mood and personality, which in turn may increase the risk of suicidal behaviors. Proximal factors might include an individual person’s mental illness, such as major depression or schizophrenia, or previous suicide attempts.5 This also includes an individual person’s environment, such as stressful life events (job loss, relationship turmoil, terminal illness) or even the availability of suicide methods (firearms, poisons, etc.). In the US, in particular, a person’s ability to access a firearm significantly increases the likelihood of them dying by suicide.6 Indeed, according to the National Institute of Mental Health, about half of all people who die by suicide in the US do so using a firearm.7

As a result of global research and policy efforts, governments and organizations have been able to implement strong individual-level suicide interventions. For example, the US has implemented a suicide hotline program, alongside a robust campaign to include targeted notifications of the hotline number for people who may be at risk. Anecdotally, as I have researched this book, nearly every media platform from Google to Instagram has provided me with information about the hotline at the top of my search results. And the data bear this out, particularly during the COVID-19 global pandemic. In the US, despite increased risk factors as a result of the pandemic, such as ongoing isolation, widespread unemployment, and the traumatic loss of over a million people from COVID-related illness, the suicide rate actually decreased by about 3 percent. Simultaneously, calls to the suicide hotlines increased by as much as 800 percent, suggesting that increased access to support systems may help prevent suicide.8 (I will return to the question of prevention in the conclusion.)

While the research on individual causes and interventions is incredibly important, focusing only on the individual ignores much broader systemic problems. For example, suicide rates have been higher among men than women for centuries, while rates of admittance to psychiatric care facilities are currently about equal among men and women.9 Yet, much of the medical and psychological research focuses on how to prevent individual men from dying by suicide, as opposed to examining broader cultural and institutional phenomena that might lead men to behave more aggressively and impulsively and women to be more likely to reach out for help during a crisis.

This debate between individual psychology and broader social patterns of suicide has been going on for over a century. In his germinal 1897 book, On Suicide, French sociologist Émile Durkheim argues that, contrary to contemporary belief, suicide was not in and of itself a form of mental illness or solely caused by individual psychology, but rather a result of social and environmental factors.10 It is, of course, important to take his work with a grain of salt, given the fact that he was entrenched in late nineteenth-century ideologies of colonialism, white supremacism, and patriarchy. Indeed, Durkheim writes at length about ethnically white Europeans without ever acknowledging that Black and Brown people had lived there for centuries or that many European countries had colonized areas of Asia, Africa, the Caribbean, and the Pacific Islands, rendering any sort of simple study of ethnicity and nationality as correlated to suicide rates entirely impossible. Further, while Durkheim does note correctly at one point that there were more women in psychiatric asylums than men, he ascribes this to the fact that more men died by suicide at the time than women, entirely overlooking the fact that psychiatry, as a male-dominated field, did little to understand or listen to the needs of women, preferring instead to hospitalize women for all manner of perceived ailments.11 However, Durkheim’s work does include numerous insights into suicide that hold true today and is notable for breaking new ground in seeing suicide as a result of complicated factors.

Indeed, current research supports the idea that suicide is not simply a question of mental illness. According to the CDC, approximately 54 percent of people who die by suicide had no known mental health concerns,12 though “known” is an important qualifier here. In the US in particular, access to mental healthcare is challenging for many people. Even after Congress passed the Affordable Care Act in 2010, which made health insurance coverage mandatory for all US citizens, expanded government subsidies for coverage, and required that employers offer coverage to all full-time employees, not all insurance policies cover all forms of mental health treatment. Further, according to the Kaiser Family Foundation, those who are most likely to go without coverage are the approximately two million people in the so-called Medicaid coverage gap. These people have income that is too high to qualify for government-run Medicaid coverage but too low to be able to afford to purchase private insurance.13 Unsurprisingly, the vast majority of people in the coverage gap live in states that opted not to participate in the federal Medicaid expansion program, and about 75 percent of them live in Texas, Florida, Georgia, and North Carolina. Further, approximately 59 percent of nonelderly adults in the coverage gap are people of color,14 contributing to racial disparities in the healthcare industry that may impact the intersection of race and suicide rates.

The connection between racial health disparities and suicide rates is most visible among Indigenous Americans. According to the US Indian Health Service, a federal program that provides healthcare to about half of the 5.2 million Indigenous Americans living in the US, Indigenous peoples have a lower life expectancy than any other race in the country and have higher fatality rates from both violence and preventable diseases.15 Indeed, the suicide rate is 33 per 100,000 people for Indigenous men and 11 per 100,000 for Indigenous women, higher than any other racial category.16 This is a direct result of healthcare inequities for Indigenous peoples, but because suicide research and prevention are so focused on individual psychology and interventions, these broader systemic crises are often overlooked.

In addition to the long-standing debate between individual psychology and social patterns of suicide, one key factor in any discussion of despondency-motivated suicide is whether someone has the right to die by suicide and, relatedly, whether it is our social responsibility to prevent that suicide. If someone is ill, a doctor generally has a responsibility to treat them, whether the patient wants to be treated or not. Treatment for suicidal ideation (simply thinking about or planning for suicide), however, can be forced upon a patient in certain circumstances. According to the US Substance Abuse and Mental Health Services Administration (SAMHSA), forced hospitalization, also known as involuntary civil commitment, “is a legal intervention by which a judge, or someone acting in a judicial capacity, may order that person with symptoms of a serious mental disorder, and meeting other specified criteria, be confined in a psychiatric hospital or receive supervised outpatient treatment for some period of time.”17 But as Thomas Szasz points out, linking suicide solely to mental illness presupposes that action (causing one’s death) and disease (depression, bipolar disorder, schizophrenia, etc.) are one and the same.18 In short, assuming suicide stems from mental illness guarantees that suicide is “medicalized,” and that social factors are ignored in preference of involuntary civil commitment. Durkheim likewise attempted to separate out the action of suicide from mental illness, correctly pointing out that there are numerous social factors that can contribute. However, as I mentioned above, he uses statistics from mental healthcare facilities of his time without effectively contextualizing them in the social construction of the healthcare industry.19 Szasz argues that focusing so much on medical treatment and so little on social patterns “exonerates the actor from wrongdoing, but stigmatizes him as crazy; it justifies the psychiatrist’s control of the patient, but makes him responsible for the patient’s suicide.”20 In other words, for Szasz, connecting suicide to mental illness strips the patient of all agency and renders the physician responsible for choosing whether someone should live or die. It’s essentially a philosophical Möbius strip, whereby the only way to convince a patient to choose to live is to strip them of the right to choose whether to live.

At this point, I want to note that Szasz, a former professor of psychiatry, is openly anti-psychiatry, and I in no way intend to suggest that mental health treatment should end. In fact, involuntary hospitalization can be a key part of suicide prevention efforts, when done properly in a well-funded facility that ensures the patient’s safety and human rights are at the forefront of all care decisions. Yet, two important points are worth bearing in mind: 1) the medicalization of suicide prevention does nothing to address the root cause of social and systemic factors, such as the devastating effects of economic crises or racial inequity; and 2) if 54 percent of suicides are not related to mental health, then it stands to reason that civil commitment—forcible mental health treatment—is not always an appropriate method for preventing suicide and in some cases may unnecessarily strip a well person of their ultimate right to consent and bodily autonomy.

Bodily autonomy is particularly important in any discussion of social justice and suicide prevention and is the site at which individual psychology and social patterns intersect. While I will return to this idea in chapter 3 to discuss eugenics and assisted suicide, I do want to highlight just how complicated questions of bodily autonomy can be in regard to suicide and cultural ideology. For example, in 2022, Texas Governor Greg Abbott directed the state’s child protective services to investigate families of openly transgender children. As a result, any mandated reporter of suspected child abuse, from teachers to hospital staff, was required to report transgender children’s families to investigators. Aside from the fact that this directive ignores the endorsement of gender-affirming care by all major medical associations in alignment with current research,21 it also puts children at greater risk of suicide. According to a 2021 national survey conducted by researchers with the Trevor Project, a national LGBTQIA+ youth resource organization, about half of transgender and nonbinary youth had contemplated attempting suicide.22 Meanwhile, youth reported a decrease in suicidality when they had access to gender-affirming spaces. Thus, transgender children in Texas are now at greater risk of suicide because they are broadly denied gender-affirming care and spaces. One child, a 16-year-old transgender boy, attempted suicide as a direct result of both the governor’s directive and bullying at school. He was provided psychiatric treatment, but hospital staff reported his family to child protective services.23

This case is an alarming example of how individual prevention cannot address broader suicide rates. Anti-trans policies such as those implemented by Governor Abbott are products of social patterns and institutional systems of oppression. Advocating for the rights of transgender people, including equitable access to gender-affirming care and the ability to attend school without fear of bullying or legal repercussions, can help address suicide rates. On an individual level, the right to transition is a question of bodily autonomy, and interference from governments denies citizens that right.

DESPONDENT ROBOTS

As I have discussed so far in this chapter, despondency-motivated suicide is caused by both individual psychology and broader social patterns. In both fiction and nonfiction media, however, suicide is often correlated with some form of sadness, depression, or another form of psychological distress. Analyzing representations of despondency-motivated suicide may offer one way of situating individual psychology within patterns of social belief and behavior. Sociologist Steven Stack and lawyer Barbara Bowman argue in Suicide Movies, a broad content analysis of films that feature portrayals of suicide, that privileging the individual over the social, or vice versa, is detrimental to suicide research.24 Their work is groundbreaking in the field, but it is important to note that, in categorizing the films according to individual psychology versus social patterns, they actually do separate out the individual and the social. Yet, I maintain that these must be seen as intertwined phenomena, both in reality and in fictional representation. The social structures the individual, just as much as the individual informs the social.

Robot fiction, in particular, gives readers and viewers an opportunity to engage with the connection between the individual and the social. The invented interior life of robots stands at a relatively safe distance, allowing us to project difficult questions of suicidality onto a non-human form. In the following sections, I will examine three examples of fictional robots that embody despondency-motivated suicide: the titular Adam Link in Eando Binder’s series of SF stories, Marvin in the novel and adapted film The Hitchhiker’s Guide to the Galaxy by Douglas Adams, and the unnamed factory robot in GM’s 2007 Super Bowl commercial.


ADAM LINK’S PERPETUAL SUICIDES

Eando Binder’s character Adam Link is one of the first robots in US science fiction, initially appearing in Amazing Stories in January 1939, just a few years before Isaac Asimov published “Runaround,” the story that brought us his now-famous three laws of robotics.25 Eando Binder actually began as a pseudonym for brothers Earl and Otto (E and O—Eando), but by the time of the first Adam Link story, Otto had taken over writing under the penname full time.26 Binder was a prolific writer during the Golden Age of SF, publishing stories in all the major pulp magazines, including Amazing Stories, Astounding Stories, and Fantastic Adventures, helping to develop the genre alongside more famous writers like Isaac Asimov and Robert A. Heinlein.

In Binder’s stories, Adam Link is a robot invented by a scientist named Dr. Link, who built him as a super-strong learning machine. Although his form changes slightly over the years, particularly in later adaptations, Adam is essentially shaped like a metal man in the now-classic style of Golden Age SF. Accompanying illustrations in Amazing Stories show him as larger than a human, with a cylindrical metallic body, spindly metal arms and legs, and a head that is shaped somewhat like a lightbulb with eyeballs.

The first story, “I, Robot,” published in Amazing Stories as the January 1939 cover story, details how Adam grew up, was educated, and developed a sense of emotional interiority. Dr. Link dies in a lab accident, but the villagers blame it on the perceived viciousness of the robot. At the end of the story, Adam reads Mary Shelley’s Frankenstein, realizes that people think he is like Frankenstein’s monster, who hates his creator, and contemplates suicide.27 Indeed, at the end of Shelley’s novel, the nameless monster follows Dr. Frankenstein on an Arctic sailing voyage, witnesses his death, then sails off on his own on a raft, presumably to die alone amidst the ice.

“But soon,” [the monster] cried with sad and solemn enthusiasm, “I shall die, and what I now feel be no longer felt. Soon these burning miseries will be extinct. I shall ascend my funeral pile triumphantly, and exult in the agony of the torturing flames. The light of that conflagration will fade away; my ashes will be swept into the sea by the winds. My spirit will sleep in peace; or if it thinks, it will not surely think thus. Farewell.”28

Here, the monster’s suicide is driven by despondency, a desperate desire to no longer feel so much and so intensely. It is worth noting, too, that this is a very different ending from James Whale’s 1931 cinematic adaptation, which Eando Binder almost certainly would have seen at the time. In the now-classic film version, an angry mob of villagers chases the monster and Dr. Frankenstein to a nearby mill, which they set on fire. In his last act, the monster throws his creator to safety, then falls into the fire and dies.29 The omission of suicide in favor of a final redeeming act reimagines the monster as a kind, compassionate, if misunderstood creature. In the novel, however, Shelley’s monster is a tortured soul who curses his own creator, a complex person with a robust interior life, driven by despondency. This difference is important in understanding Adam Link’s individual psychology in relationship to Frankenstein as a cultural phenomenon.

Like Shelley’s version of the monster, Adam contemplates suicide at the end of “I, Robot.” Sitting in a jail cell and chained to a wall, he acknowledges that humans hate him and that the only way to stop that hatred is to flip his power switch and “blink out [his] life with one twist.”30 Importantly for the purposes of this discussion, this is the first of many such instances of suicidal ideation.

Through the following two stories, “The Trial of Adam Link, Robot” in July 1939,31 followed by “Adam Link in Business” in January 1940,32 Adam is acquitted with the help of a reporter named Jack, then falls in love with Jack’s fiancée, Kay. Unable to reconcile his feelings for the human Kay with his existence as an inhuman robot, Adam determines to go off alone to figure out how to stop having feelings. The fourth story, “Adam Link’s Vengeance,” published in February 1940, begins with Adam apparently having determined that the only way to stop feeling is to die by suicide: “To any of you humans [dying by] suicide, your last thought must be that death is after all so sweet and peaceful and desirable. Life is so cruel. And to be brought back from voluntary death at the last second must be a terribly painful experience. So it was with me, though I am a robot.”33 Here again, the core theme is that the inhuman creation has such an abundance of human feelings that it sees no other option than to die by suicide. Later, Adam explains that he absolutely had to die to keep anyone else from inventing a robot that could feel such torturous feelings. But just as Adam is dying, a scientist named Dr. Paul Hillory arrives and saves him, explaining, “I’d do the same for any wretch trying to take his own life.”34 Hillory subsequently helps Adam create a robot partner for himself, whom he models after Kay and names Eve, a clear allusion to the Christian biblical creation story, beginning with the first two humans, Adam and Eve, in the Garden of Eden.

It is worth noting that gender roles in this story are paradoxically programmable-yet-inherent. Adam describes himself as having been “brought up from the masculine viewpoint” and says that his future robotic girlfriend “must be given the feminine outlook.”35 Through this, Binder posits that masculinity and femininity are learned characteristics, a groundbreaking feminist perspective for the time. However, the characteristics themselves are seen as inherent to men’s and women’s bodies and minds. Kay hooks her mind up to Eve’s and transfers her femininity, implying that only a woman could do so.


Binder further emphasizes this essentialist viewpoint when Eve admits she loves Adam, and his response is to realize he has no idea how to understand her. “And in a sudden blinding moment, I knew my dream had come true. I couldn’t fathom how this girl-mind worked. She was—mystery. She was to me what women have been to men since the dawn—mystery.”36 In other words, femininity can be transferred and programmed, but it is so essential to the female body and brain that only a woman can understand how it works. Although my core focus in this study is representations of suicide, this portrayal of gender underlines the way that individual psychology is a priori situated within a social framework. Kay can thus transfer her individual sense of femininity to Eve, and that femininity can be so foreign to a masculine person as to be incomprehensible because of the way broader social constructs of gender inform how masculine and feminine people interact with one another. In other words, individual behaviors and beliefs are always structured by social patterns. This is as true of gender as it is of suicide.

Despite the gender essentialism (hardly surprising for a 1940 SF publication), the transference of Kay’s femininity to Eve is important to the plot. After Eve and Adam confess their love to each other, Dr. Hillory returns, welds mind-transference helmets onto each of them, and begins controlling them telepathically. He uses their bodies to rob a bank, murder his enemy, and begin making plans to take over the world. For Hillory, Adam and Eve are nothing more than tools, rather than autonomous, thinking, feeling beings. He tells them, “You two, in the first place, are just metal beings. You have no rights, alongside humans. You were created by human hands. I’ll show the world how to really use robots—as clever instruments.”37

Here, Binder reinforces a core question of all the Adam Link stories: is it ethical to allow an individual to die by despondency-motivated suicide? Binder answered the question at the start with the suicide of the main character, but he subsequently complicates it through the figure of Hillory. Outwardly, Hillory claims to have wanted to help Adam live, just as any other person, asserting that it is a moral imperative to stop someone from suicide. But this is upended later, when we discover that Hillory does not see Adam as a person in the first place, so stopping his suicide had nothing to do with ethics or humanity.

Binder also implicitly addresses the relationship between intentional self-harm, self-sacrifice, and despondency-motivated suicide. Later in the story, after a prolonged fight with Hillory (via Eve’s body), Adam throws himself off a cliff to fool Hillory into thinking he had died. “I had thrown myself over the cliff—but not as a suicide. I had hoped this miracle would happen. Up above, Hillory must be looking down. He must be seeing the faint patch of metal shining in the moonlight, unmoving. He would be certain of my utter destruction. . . . But I lived . . . ”38 This scene suggests that intent, not action, is at the heart of suicide. Adam insists that he has no intention of dying here, despite causing great bodily harm to himself. Thus, bodily autonomy is of utmost importance in Binder’s conception of suicide.

This is reinforced at the end of the story when Adam and Eve (again controlled by Hillory) fight, and Adam murders her. The horror of murdering his partner is somewhat assuaged by the fact that Eve had lost all sense of bodily autonomy and was, instead, controlled entirely by Hillory. The only way to help her was to end her suffering. And finally, after Hillory falls off a cliff by accident, Adam decides to return to suicide: “I am writing this now, in the cabin. When I am done, I will go with Eve. There may not be a heaven for robots. But neither is there a hell—unless Earth it is.”39 Thus, the story is bookended by Adam’s despondency-driven suicides—once out of desperate loneliness and again out of abject horror at the prospect of living among humans. Binder’s ultimate claim about suicide, then, is that it is understandable when driven by inescapable emotional turmoil.

This story is where Adam Link’s relationship to the Frankenstein narrative becomes more complicated. While the first three stories drew on themes in Shelley’s novel, particularly the suicidal ending, “Adam Link’s Vengeance” is more closely aligned with the sequel to the 1931 version, James Whale’s 1935 Bride of Frankenstein. In the film, Dr. Frankenstein creates a wife for his monster. However, the bride ultimately rejects her intended husband, sending the monster into a rampage. He allows the doctor to escape but decides that he and his bride both deserve to die.40 This murder-suicide is a far cry from Shelley’s original novel, in which the monster demands that Dr. Frankenstein create a mate for him, but Frankenstein ultimately refuses, causing the monster to curse him before fleeing. This revision is notable, as it introduces yet another form of suicide for the monster. As I have noted above, the monster in the first film allows himself to die in the mill fire; meanwhile, the monster in the second film chooses murder-suicide. Unlike the monster’s murderous rage, Adam Link would presumably have gone on living happily with Eve, were it not for the evil Dr. Hillory. And so, upon losing his mate, he decides he cannot bear to live.

The most remarkable aspect of Adam Link’s suicides, though, is the fact that they keep happening and unhappening to fit the needs of the narrative. At the start of the fifth story, “Adam Link, Robot Detective,” Adam decides to die by suicide again: “It had begun to rain. Kneeling beside [Eve], I removed my top skull-plate. The rain, pouring into my sensitive iridium-sponge brain, would short-circuit my life-current. I would join Eve in blessed non-existence.”41 But as in the previous stories, just as Adam is on the brink of death, he is saved by humans. Kay and Jack come along and bring him back, allowing him to find a new purpose in life and continue on in his adventures. This chaotic stance on suicide is part and parcel of the serialized aspect of the stories.
For Binder, suicide is understandable in the throes of melancholy but an obstacle to continuing publications.

In subsequent adaptations of the Adam Link stories, however, suicide is treated very differently. In 1964, the television series The Outer Limits adapted Binder’s first two Adam Link stories for a one-off episode also titled “I, Robot.”42 In the episode, Adam Link is on trial for murdering his creator, Dr. Link. Link’s niece, Nina (Marianna Hill), hires an investigator (Leonard Nimoy) to prove that the robot is innocent. He does so, and the jury exonerates Adam. As he leaves the courthouse, though, he sees a truck speeding toward a young girl in the street. He throws himself under the truck to save her, effectively dying by altruistic suicide (a concept to which I will return in the next chapter). Suffice it to say, this version of Adam is denied any sort of extreme interiority. And indeed, the production itself emphasizes the distance between the viewer and Adam. In the stories, Adam is the narrator, so the reader follows his emotional journey as an intimate listener. In the TV episode, though, Adam is not the narrator, and much of the action follows Nina instead of Adam. This sidelines his humanity, suggesting that the role of robots is to assist humans at all costs, not to live autonomous, emotional lives.

In 1965, all ten of Binder’s original Adam Link stories were revised and collected into a single book called Adam Link Robot. Much of the material is the same, but, significantly, the suicide scenes were entirely edited out. Instead of saving Adam from suicide, Hillory simply shows up at Adam’s forest cabin.43 And after Eve’s death, the scene is nearly identical, except for the suicide.44 Table 1.1 shows a side-by-side comparison of the original suicide scene published in May 1940 and the revised version published in 1965. The reasons for the omission of the suicides are mysterious, particularly as they drain the story of its original melodramatic impact. As a result, the revised Adam Link is an emotional robot, but not a suicidal one. Ultimately, both The Outer Limits and the revised collection change the ethical underpinning of the stories: rather than focusing on whether despondency-related suicide is a person’s right as an autonomous being, the new versions posit that death should be avoided at all costs, except in the name of sacrifice.

MARVIN’S ENNUI

Douglas Adams’s Hitchhiker’s Guide to the Galaxy originated in 1978 as a BBC radio show and has subsequently become an expansive textual universe that includes five novels, a television series, and a feature film. The series follows a man named Arthur Dent and his intergalactic adventures with his alien friend Ford Prefect, along with a variety of other absurd characters.


Table 1.1. Side-by-side comparison of language from two versions of Eando Binder’s story. The suicidal ideation was edited out for the later version. See Eando Binder, “Adam Link, Robot Detective,” Amazing Stories 14, 5 (May 1940): 43; Eando Binder, Adam Link Robot (New York: Paperback Library, 1965), 82. Table created by Liz W. Faber for this publication.

From “Adam Link, Robot Detective,” in Amazing Stories, May 1940:

There her great eight-foot body lay, silent as a shut-down machine. Grief overcame me, an emotion as real and deep as any you humans have. I pictured her as a human form lying there—a young, lovely girl. But she was dead now. It had begun to rain. Kneeling beside her, I removed my top skull-plate. The rain, pouring into my sensitive iridium-sponge brain, would short-circuit my life-current. I would join Eve in blessed non-existence. Kay and Jack Hall, and Tom Link found me that way when they arrived a moment later.

From Adam Link Robot, published in 1965 by Paperback Library, 82:

Then I went back, staring at Eve’s dead body. She was gone, my mate. I was alive. Why did it have to turn out this way, I groaned mentally. Why had it not ended for me too? There might not be a Heaven for robots. But there was a Hell—earth. It had begun to rain. I knelt motionless beside Eve’s broken form. There would have to be a funeral, burial, all that. Kay and Jack Hall found me that way when they arrived. Police were with them.

Entire academic books have been written about these texts, but for the purposes of this study, I will limit my discussion to the 1979 novel The Hitchhiker’s Guide to the Galaxy and the 2005 film adaptation of the same name.45

In the novel, Ford saves Arthur when Earth is destroyed, and, through a series of unlikely events, they wind up aboard a spaceship called Heart of Gold, which had been stolen by corrupt intergalactic president Zaphod Beeblebrox and his girlfriend Trillian. Most of the computerized tech aboard the ship is manufactured by Sirius Cybernetics Corporation and programmed to have “Genuine People Personalities,” or GPP.46 One such machine is Marvin, a “manically depressed robot”47 whom philosopher Jerry Goodenough describes as “a comically repellent [character], marked by self-obsession, self-pity and hypochondria. Marvin plays no great role in the various narratives of the series . . . but he is a great comic character in his own right.”48 In short, Marvin has been programmed to be insufferably depressed, suggesting that depression and ennui are personality traits, rather than mental illnesses. This implied stance is remarkable because it de-medicalizes depression by suggesting that sadness is a part of existence, not something to be fixed. In fact, the root of Marvin’s depression is that he has deep, poignant, human feelings but is expected to do the menial labor of an unintelligent robot. The world is, essentially, boring him to suicidality.


And, indeed, the narrative positions Marvin’s overbearing depression as a sort of saving grace. Toward the end of the novel, the group aboard Heart of Gold goes to an ancient planet where, among other things, they are confronted by intergalactic police. Just as Ford and Zaphod are about to be apprehended, the police’s spacesuits explode, instantly killing the officers inside. As it turns out, Marvin explains, “I got very bored and depressed, so I went and plugged myself in to [the police ship’s] external computer feed. I talked to the computer at great length and explained my view of the Universe to it. . . . It [died by] suicide.”49 In other words, simply revealing the source of his depression to the police computer led it to choose death, therefore accidentally saving the day. The implied message of the novel’s dark humor, then, is that despondency-related suicide has a time and a place and need not be prevented.

Just as Adam Link’s despondency-motivated suicide was revised for the television and book versions, Marvin does not cause suicide in the 2005 cinematic adaptation.50 There are, notably, many major differences between the novel and the film, due largely to the fact that the film drew from the radio broadcasts as well. Regardless, Marvin does save the day with his depression in the film version. Just as a whole intergalactic army is about to fire upon the group of main characters, Marvin shoots them with a Point-of-View gun that allows them to see the world from his philosophical perspective. They are all so overcome with grief that they simply lie down and begin moaning about how sad they are. This alternative truly revises the accidentally pro-suicide message of Adams’s novel, demonstrating that depression is incapacitating but does not necessarily need to lead to suicide.

THE GM ROBOT’S UNEMPLOYMENT

In 2007, General Motors ran an ad during the Super Bowl titled “Robot.” The one-minute spot, produced by Interpublic’s Deutsch Los Angeles and directed by Phil Joanou, features a yellow manufacturing robot, such as one typically used in GM plants across the world.51 At the start of the ad, the robot works on an assembly line building cars and accidentally drops a bolt to the floor. The entire assembly line screeches to a halt, as humans stare in horror and the robot hangs its arm/head in shame. The robot is sent out of the factory, implying that it has lost its job. The melancholy song “All By Myself” by Eric Carmen plays as we see the robot attempt to do other “menial” jobs, such as holding an advertisement board and being the speaker box for a drive-through restaurant. Through a series of beeps and boops that tonally imply melancholy, the robot demonstrates its increasing sadness. Finally, the ad cuts to the robot standing on a bridge, watching the cars it used to build drive by. It looks down into the water below and, as the song reaches its crescendo, leaps to its death. A second after we see it plunge into the water, the ad cuts to the robot waking with a start, alive and safe in the GM plant. A masculine voiceover narrates the text on the screen, informing the audience about “The GM 100,000 mile warranty. It’s got everyone at GM obsessed with quality.” The explicit message of the commercial, then, is that GM is so dedicated to crafting a high-quality product that even the robots would rather die than mess up and lose their jobs.

Unsurprisingly, such an obtuse and insensitive message sparked public outrage regarding the flippant treatment of suicide. The main source of criticism was the American Foundation for Suicide Prevention, which described the ad as careless and pointed out the real potential for copycat suicides.52 Initially, GM ignored criticism and announced that it would re-air the ad in its entirety during the Academy Awards. But subsequent pressure, including a strongly worded letter from the National Alliance on Mental Illness,53 led GM to release a revised version that omitted the suicide scene.54

While the central focus of the ad is the individual despondency of a single robot, GM unintentionally captures a broader social pattern regarding suicide and employment. The robot represents laborers, and its actions represent the perceived result of job loss. And indeed, the suicide rate among unemployed people tends to increase during times of high unemployment, suggesting a connection between financial failure and individual despondency.55 This is important to note, because suicide is thus not simply an individual’s response to an individual stressor; rather, as sociologists have long argued, suicide is a result of a complex connection between social factors such as the economy and individuals’ expectations of themselves and others. One thing that makes the GM ad so shocking is that it makes explicit what is already implicit in a capitalist culture where employment is aligned with individual happiness: that an individual must participate in the economy in order to feel worthy of living.

THE WERTHER EFFECT

As I have discussed in this chapter, the GM ad as well as Adam Link and the Hitchhiker’s Guide series offer important cultural perspectives on despondency-related suicide. The interior emotional lives of all three robots are so intense as to manifest suicidal ideation. And importantly, these texts initially imply that suicide is understandable, and then, through revision, reverse course and either change the suicide into a sacrifice or simply cut the suicide out altogether. The remaining question, then, is why these suicides were revised out of the texts.


One potential reason is to avoid the contagious effect of suicide in media. In 1974, sociologist David Phillips described an infamous incident upon the publication of Johann Wolfgang Goethe’s 1774 novel The Sorrows of Young Werther, in which the titular character dies by suicide. Allegedly, a number of people were inspired by the book to die in a similar manner. Although this story may be apocryphal, Phillips’s analysis of suicide rates and media representations of suicide in the twentieth century uncovered ample evidence to suggest that there is a correlation. As a result, Phillips coined the term “Werther Effect” to describe the phenomenon of copycat or contagious suicides.56

Although the concept remains controversial, subsequent research has supported the idea that media portrayals of suicide can lead to additional suicides. In 2020, a team of international researchers published a systematic review and meta-analysis, which found that “the risk of suicide increased by 13% in the period after the media reported a death of a celebrity by suicide.”57 This risk increases to 30 percent when “the suicide method used by the celebrity was reported.”58 Numerous organizations have written guidelines for responsible reporting in an effort to reduce the Werther Effect. Most notably, an international organization called Reporting on Suicide, in partnership with numerous medical, media, and public policy institutions, offers a 5-item checklist for responsible reporting: 1) Report suicide as a public health issue; 2) include resources; 3) use appropriate language; 4) emphasize help and hope; and 5) ask an expert.59

From a cynical standpoint, one would imagine that media companies would want to avoid lawsuits from grieving families if it should be shown that their reporting led to suicide. I want to be very clear, though: many media producers take media ethics extremely seriously and are invested in using their public platform for the good of the people. The GM ad is, I would argue, a prime example of this sense of responsibility and ethics. Although the ad itself was a misstep, once the potential ramifications of its actions became clear, GM re-cut the ad to remove the suicide scene. And, indeed, the concern of copycat suicides was one main reason organizations demanded the revision of the ad in the first place.

Importantly, though, suicide contagion is not as simple as seeing and then doing. Niederkrotenthaler et al. identify three mechanisms through which the Werther Effect may occur:

identification with the deceased person, which might occur more frequently when the reported suicides are about individuals with high social standing; increased media reporting of suicide leading to normalization of suicide as an acceptable way to cope with difficulties; and information on suicide methods, which might influence the choice of suicide method by a vulnerable individual.60


In 2021, researchers also found that the likelihood of suicide contagion increases when the viewer is similar to the celebrity and has a “pessimistic attitude toward suicide prevention.”61 While the adaptations of Binder’s and Adams’s stories removed or revised suicide well before the Werther Effect was so well understood by researchers, it is nevertheless possible that the desire to avoid suicide contagion was a motivating factor.

Indeed, many of the characteristics of suicide contagion are present in both texts. Obviously, the methods of suicide are unrealistic and unreplicable. Adam Link lets rain short-circuit his brain, while the police computer simply self-destructs, taking the networked spacesuits with it. However, beloved fictional characters in popular stories may certainly be described as “individuals with high social standing.” And there is much to identify with in Adam, Marvin, and the GM robot. Adam is, after all, the hero of his story, and, as I have mentioned above, the first-person narration of the stories allows readers more intimate access to his emotional state. Marvin, too, while a ridiculous character in many ways, represents the very human tension between emotional turmoil and individual usefulness. Marvin is driven to suicidal ideation by the relatable fact that he spends his time philosophizing, but then is expected to do menial, unchallenging labor. The GM robot, too, represents an average blue-collar factory worker who is emotionally devastated by losing a job, a situation that mirrors millions of US Americans affected, ironically, by the computerization of the manufacturing industry.

Further, these three texts certainly normalize suicide as an obvious result of psychological turmoil. None of the three take suicide particularly seriously, either. Hitchhiker’s Guide is an absurd, farcical story. The Adam Link stories are sensational, melodramatic SF serials where suicidal ideation seems to be a bit of a cliché. And GM itself saw the ad as a whimsical portrayal of workers’ devotion to the company; however, not only is the robot a relatable character, but the method of suicide is blatantly shown onscreen from beginning to completion. Although the ad ends with the survival of the robot, the implication is clear: losing one’s job naturally leads to suicide.

Even without recent research on the Werther Effect, the revision of each of these texts was an ethical choice by media-makers. As important as the media reporting guidelines are (and I have attempted, where possible, to adhere to them in this book), they are largely meant to address individual psychological turmoil, rather than systemic problems. Adam Link first attempts suicide because he is an outcast, and then attempts again because he found community and immediately lost it. Omitting his suicide likewise omits the very real social problem of isolation and hopelessness structured in a modern, technological society. Marvin is essentially too intelligent and worldly to do his boring blue-collar job. The sense of ennui that he experiences has likewise been described by mid-century housewives in a system in which they are forced to give up a career for thankless childcare.62
Omitting Marvin’s contagious suicidality and instead portraying the contagious sadness in a comical light undermines the very real social problem of unhappy workers trapped in a life that does not challenge them appropriately. And finally, as I have already discussed, the GM robot is representative of the very real consequences of job loss in an economy that associates work with self-worth. Omitting its suicide was the ethical thing to do, but it also drains the text of any form of social critique that might have been possible.

CONCLUSION

Throughout this chapter, I have discussed representations of despondency-motivated suicide in the Adam Link stories, The Hitchhiker’s Guide to the Galaxy, and the GM suicide robot ad. These narratives use classic tropes of individual depression and psychological turmoil to portray suicidal robots. Yet, each of the texts was subsequently revised to change the motivation for suicide or omit it altogether. Such revisions are in line with media ethics standards for reporting suicides and may contribute to reduction of the Werther Effect. But the revisions also efface any potential for important discussions of social patterns and systemic concerns. In the next chapter, I will turn to a much less taboo form of suicide, and the one that The Outer Limits episode provided for its version of Adam Link: the noble sacrifice, also known as the altruistic suicide.

NOTES

1. Bilal Farooqui (@bilalfarooqui), “Our D.C. Office Building Got a Security Robot. It Drowned Itself,” Tweet, July 17, 2017, web.archive.org/web/20170718160512/https://twitter.com/bilalfarooqui/status/887025375754166272?lang=en.
2. Adam Singer (@AdamSinger), “That Robot Is What All of Us Want to Do in 2017,” Tweet, July 17, 2017, twitter.com/adamsinger/status/887049185639383041?lang=da.
3. Kees van Heeringen, The Neuroscience of Suicidal Behavior (Cambridge, UK: Cambridge University Press, 2018), 13.
4. Cleveland Clinic Health Library, “Neurotransmitters,” Cleveland Clinic, March 14, 2022, my.clevelandclinic.org/health/articles/22513-neurotransmitters.
5. Van Heeringen, The Neuroscience of Suicidal Behavior, 15.
6. Ibid., 17.
7. National Institute of Mental Health, “Suicide,” NIMH, March 2022, www.nimh.nih.gov/health/statistics/suicide.


8. National Center for Health Statistics, “Suicide in the U.S. Declined during the Pandemic,” Centers for Disease Control and Prevention, November 5, 2021, www.cdc.gov/nchs/pressroom/podcasts/2021/20211105/20211105.htm.
9. Substance Abuse and Mental Health Services Administration, “National Mental Health Services Survey (N-MHSS): 2018,” SAMHSA, 2018, www.samhsa.gov/data/sites/default/files/cbhsq-reports/NMHSS-2018.pdf.
10. Émile Durkheim, On Suicide (London: Penguin Books, 2006), 66–91.
11. Durkheim, On Suicide, 50.
12. CDC VitalSigns, “Suicide Rising across the US,” Centers for Disease Control and Prevention, June 2018, www.cdc.gov/vitalsigns/pdf/vs-0618-suicide-H.pdf.
13. Rachel Garfield, Kendal Orgera, and Anthony Damico, “The Coverage Gap: Uninsured Poor Adults in States That Do Not Expand Medicaid,” Kaiser Family Foundation, January 21, 2021, www.kff.org/medicaid/issue-brief/the-coverage-gap-uninsured-poor-adults-in-states-that-do-not-expand-medicaid/.
14. Rachel Garfield, Anthony Damico, and Robin Rudowitz, “Taking a Closer Look at Characteristics of People in the Coverage Gap,” Kaiser Family Foundation, July 29, 2021, www.kff.org/policy-watch/taking-a-closer-look-at-characteristics-of-people-in-the-coverage-gap/.
15. Indian Health Service, “Disparities,” October 2019, www.ihs.gov/newsroom/factsheets/disparities/.
16. National Institute of Mental Health, “Suicide.”
17. Substance Abuse and Mental Health Services Administration, “National Mental Health Services Survey,” (1).
18. Thomas Szasz, Fatal Freedom: The Ethics and Politics of Suicide (Syracuse, NY: Syracuse University Press, 1999), 19.
19. Durkheim, On Suicide, 50.
20. Thomas Szasz, Fatal Freedom, 19.
21. Eleanor Klibanoff, “More Families of Trans Teens Sue to Stop Texas Child Abuse Investigations,” The Texas Tribune, June 8, 2022, www.texastribune.org/2022/06/08/transgender-texas-child-abuse-lawsuit/.
22. The Trevor Project, “National Survey on LGBTQ Youth Mental Health 2021,” The Trevor Project, 2021, www.thetrevorproject.org/survey-2021/.
23. Klibanoff, “More Families of Trans Teens.”
24. Steven Stack and Barbara Bowman, Suicide Movies: Social Patterns, 1900–2009 (Cambridge, MA: Hogrefe Publishing, 2012), 5.
25. Eando Binder, “I, Robot,” Amazing Stories 13, 1 (January 1939): 8–21.
26. “Meet the Authors: Eando Binder,” Amazing Stories 13, 1 (January 1939): 129.
27. Binder, “I, Robot,” 8–21.
28. Mary Shelley, Frankenstein, ed. J. Paul Hunter, 2nd ed. (New York: Norton & Co., 2012), 161.
29. Frankenstein, directed by James Whale (Hollywood, CA: Universal Pictures, 1931).
30. Binder, “I, Robot,” 18.
31. Eando Binder, “The Trial of Adam Link, Robot,” Amazing Stories 14, 2 (February 1940): 42–65.


32. Eando Binder, “Adam Link in Business,” Amazing Stories 13, 1 (January 1939): 44–61.
33. Eando Binder, “Adam Link’s Vengeance,” Amazing Stories 14, 2 (February 1940), 9.
34. Binder, “Adam Link’s Vengeance,” 10.
35. Ibid., 12.
36. Ibid., 15.
37. Ibid., 18.
38. Ibid., 24.
39. Ibid., 128.
40. Bride of Frankenstein, directed by James Whale (Hollywood, CA: Universal Pictures, 1935).
41. Eando Binder, “Adam Link, Robot Detective,” Amazing Stories 14, 5 (May 1940): 43.
42. The Outer Limits, season 2, episode 9, “I, Robot,” directed by Leon Benson, written by Robert C. Dennis, featuring Marianna Hill and Leonard Nimoy, aired November 14, 1964, in broadcast syndication, ABC.
43. Eando Binder, Adam Link Robot (New York: Paperback Library, 1965), 57.
44. Binder, Adam Link Robot, 82.
45. Given that my research falls staunchly in the category of US American Studies, I will readily admit that including Hitchhiker’s Guide is a bit of a cheat. However, in my defense, Marvin the depressed robot is such a notable example of despondency-related suicide that I could not leave him out of this study. Further, and less importantly, the film adaptation is a British-American collaboration, so the lines there are a bit blurry anyway.
46. Douglas Adams, The Ultimate Hitchhiker’s Guide to the Galaxy (New York: Del Rey Books, 2002), 64.
47. Adams, Ultimate Hitchhiker’s Guide, 92.
48. Jerry Goodenough, “‘I Think You Ought to Know I’m Feeling Very Depressed’: Marvin and Artificial Intelligence,” in Philosophy and the Hitchhiker’s Guide to the Galaxy, ed. Nicholas Joll (New York: Palgrave Macmillan, 2012), 129.
49. Adams, Ultimate Hitchhiker’s Guide, 129.
50. The Hitchhiker’s Guide to the Galaxy, directed by Garth Jennings (2005; Burbank, CA: Touchstone Pictures, 2022), www.hulu.com/movie/the-hitchhikers-guide-to-the-galaxy-03ec9063-3d95-4fe8-b97e-fa6552405d41?entity_id=03ec9063-3d95-4fe8-b97e-fa6552405d41.
51. “General Motors-Robot,” AdAge, February 4, 2007, adage.com/videos/general-motors-robot/567.
52. “GM Changing Robot Suicide Ad,” CNN Money, February 9, 2007, money.cnn.com/2007/02/09/news/companies/gm_robotad/.
53. National Alliance on Mental Illness, “General Motors Blasted for TV Suicide Commercial, Marginalization of Depression,” NAMI Press & Media, February 9, 2007, www.nami.org/Press-Media/Press-Releases/2007/General-Motors-Blasted-For-TV-Suicide-Commercial.


54. Marjorie Delbaere, Edward F. McQuarrie, and Barbara J. Phillips, “Personification in Advertising: Using a Visual Metaphor to Trigger Anthropomorphism,” Journal of Advertising 40, 1 (Spring 2011), 121.
55. Van Heeringen, Neuroscience of Suicidal Behavior, 7.
56. David P. Phillips, “The Influence of Suggestion on Suicide: Substantive and Theoretical Implications of the Werther Effect,” American Sociological Review 39, 3 (1974).
57. Thomas Niederkrotenthaler et al., “Association between Suicide Reporting in the Media and Suicide: Systematic Review and Meta-Analysis,” BMJ 368 (2020): 1, doi.org/10.1136/bmj.m575.
58. Niederkrotenthaler, “Association between Suicide Reporting,” 1.
59. “Best Practices and Recommendations for Reporting on Suicide,” reportingonsuicide.org, May 2022, reportingonsuicide.org/wp-content/uploads/2022/05/ROS-001-One-Pager-1.13.pdf.
60. Niederkrotenthaler, “Association between Suicide Reporting,” 5.
61. Cho-Yin Huang et al., “Factors Associated with Psychological Impact of Celebrity Suicide Media Coverage: An Online Survey Study,” Journal of Affective Disorders 295 (2021), 842.
62. See, for example, Betty Friedan’s 1963 foundational work on the subject, The Feminine Mystique (New York: Norton & Co, 1997).

Chapter 2

Automated Altruism
Self-Sacrifice and US War Culture

In the last chapter, I discussed the long-standing debate between individual psychological factors and social factors related to suicide, then analyzed several examples of fictional robots dying by despondency-motivated suicide. One example, Eando Binder’s Adam Link, originally died by suicide several times across multiple serialized stories; but in adapting the stories for television, the producers of The Outer Limits changed his method of suicide. Instead of throwing himself off a cliff or letting the rain short-circuit him, the Adam Link on television sees a truck barreling toward a young girl and, in an act of heroism, pushes the girl out of the way, literally sacrificing his life to save hers. This revision moves Adam Link’s suicide from one of despondency and loneliness to one of courage and self-sacrifice.

In this chapter, I will examine the idea of self-sacrificial suicide, also known as altruistic suicide, and how it has grown out of US war culture to be prevalent across popular culture since the Civil War. As case studies, I will examine several texts from Isaac Asimov’s Robot Series, including his short stories “Liar!,” “Runaround,” and “Evidence,” as well as his novel The Robots of Dawn; three films from the Terminator franchise, including The Terminator, Terminator 2: Judgment Day, and Terminator: Dark Fate; and, finally, C. Robert Cargill’s novel Sea of Rust. Underpinning each of these texts is a paradoxical active/passive dichotomy in which the hero must actively choose to sacrifice themselves, while passively allowing themselves to die by forces outside their control.

ALTRUISM AND SELF-SACRIFICE

Self-sacrifice, or altruistic suicide, is the act of choosing death voluntarily for the good of others. While despondency-motivated suicide is often seen as stemming from social isolation or other inner turmoil, altruistic suicide is often seen as a result of being too socially connected.
Durkheim argues that self-sacrifice stems from a cultural sense of duty, rather than an ego-centered disconnection from culture and others.1 Similarly, Stack and Bowman define such sacrificial deaths, or altruistic suicides, as “suicides for the benefit of others,” which “convey a message that suicide can be good for achieving the goals of a group.”2 Culturally, this act is held up as heroic and honorable, something to be celebrated, as we see time and again in texts of all genres and media. And, generally speaking, when a human chooses to sacrifice themselves for the good of others, they are doing so of their own free will, out of a sense of duty, patriotism, or honor.

One of the most (in)famous examples of altruistic suicide in the Western world is the death of Socrates, as told by Plato in his Phaedo. In the story, Socrates is in prison after having been condemned to die for corrupting the youth of Athens with his philosophical teachings. The narrator, Phaedo, describes how Socrates and several of his followers debate the nature of death and why he should or should not be afraid to die. Socrates argues that death should not be frightening for philosophers, who, by the very nature of philosophy, have no regard for bodily existence. Thus, by willfully choosing to drink poison hemlock provided by the state, he stands by the very philosophical teachings that led to his condemnation.3 This is the classic example of altruistic suicide: voluntary death in order to uphold an ideological belief, for the good of the people.

The story of Socrates’s death is further complicated by his belief that death is the will of the gods and that his soul will continue on to another life after his body dies.4 This raises the question of whether Socrates was a religious martyr or a philosophical altruist.

For the purposes of this discussion, it is important to differentiate between martyrdom and altruistic suicide. Though the two share similar characteristics—both describe voluntary death as a result of a cultural belief—they are nevertheless distinct phenomena. Religious Studies scholar Demetrios J. Constantelos points out that martyrdom is suicide that stems from humility and a desire to demonstrate faithfulness to one’s god, while altruistic suicide is motivated by a conscious or unconscious desire for heroism in the name of a cultural ideology.5 Further, for many martyrs, the promise of an afterlife spent in harmony with their god renders the suicidal act less permanent; they are, according to their belief, choosing eternal life, not death. In contrast, someone who chooses altruistic suicide does so because the end of their own life ensures the perpetuation of their people, culture, or civilization; they are choosing death so their ideology can have eternal life. In this sense, we might understand Socrates’s death as both martyrdom and an altruistic suicide. Among his philosophical teachings were the ideas that the body and soul were separate and that the soul would continue on to be with the gods after death; thus, in choosing to die for this concept, he ensured that the philosophers who followed him would carry on his teachings.
(I will return to this idea in the following two chapters as I discuss assisted suicide and mind/body dualism, respectively.) Socrates’s death is important in understanding the philosophical underpinnings of altruistic suicide, in that someone could choose to uphold their ideals through death. One of the most remarkable aspects of the story is the calmness with which Socrates accepts his fate, dying quietly and thoughtfully in prison. Importantly, this contrasts with another common form of altruistic suicide: that of the brave soldier taking action to uphold democracy instead of pausing to think of the consequences to his own safety.

While self-sacrifice is both an ancient and conceptually necessary part of all war, suffice it to say that each culture and nationality has its own brand of war culture and ideology that connects its people to its actions. For the purposes of this book, I will focus particularly on US war culture and the rhetoric of self-sacrifice that has sprouted from the formation of a national identity over the past 200+ years. After all, the concurrent development of computer technologies, theories of artificial intelligence, and science fictional representations of robots in the twentieth century cannot be extricated from US culture and the US military-industrial complex. Indeed, we must understand war culture in terms not just of the act of making war, but also “the normalized interpenetration of the institutions, ethos and practices of war with ever-increasing facets of daily human life in the United States, including the economy, education, diverse cultural sites, patterns of labor and consumption, and even the capacity for imagination.”6 In other words, we must always understand computer technologies developed in the US throughout the twentieth century as extensions of US war culture, even if they were not initially imagined to be specifically for the purposes of war. For example, the very term “computer” was originally used to describe the women who performed ballistics calculations during the Second World War before shifting to describe the machines that performed the same calculations.7

Importantly, US war culture blends the ideology of individualism with that of the rhetoric of altruistic sacrifice. Kelly Denton-Borhaug, in her analysis of the rhetoric of war and sacrifice, argues that the individualized rhetoric of sacrifice, duty, and honor effaces any connection to the politics of war. In other words, when we describe a particular soldier as having given his life for a noble cause—war—we do not have to address the reasons a government might demand that a citizen sacrifice his life in the first place.8 I would add to this that one of the most important rhetorical mechanisms for supporting wartime sacrifice is the very avoidance of the word “suicide” and all its connotations. In war, the idea of “suicide” from a US standpoint conjures the image of the Japanese kamikaze pilots, radical Islamic terrorists, and even the iconic photograph of a self-immolating Buddhist monk taken by Malcolm Browne in Saigon during the Vietnam War.9
In other words, sacrificial suicide is always rhetorically situated in the realm of the Other, the non-American. In terms of the death of US soldiers, this rhetoric places the violence in the hands of the Other, the enemy. US soldiers sacrifice their lives, but they do not die by voluntary sacrificial suicide. A 2019 US Department of Defense feature about Memorial Day offers a prime example of such rhetoric. The piece includes a tangle of passive and active language to describe the death of soldiers: “those who sacrificed for America,” “those who died to protect the country,” “the fallen,” “those who have made the ultimate sacrifice,” “those killed,” “those who have paid that price.”10 Here, the soldiers sacrificed (active) but were killed (passive). They protected the country (active) but are fallen (passive). The active words connote positive aspects of a soldier’s life, while the passive words connote negative aspects that are out of their control. We do not discuss them choosing death, only that they chose sacrifice, and the enemy chose death for them.

This rhetorical gymnastics lifts the burden of association between war and altruistic suicide. As I discussed in the previous chapter, when an individual chooses to die, it is often seen as an act motivated by despondency, not heroism. And if we all thought soldiers were so despondent that they wanted to die by suicide—to sacrifice their lives—we would, ironically, never send them off to war. And so, in order to maintain a culture of war, it is simultaneously necessary to maintain a rhetoric of active sacrifice/passive death, even while describing the very act of altruistic suicide.

Such rhetoric is literally extended to robots in US war culture! In 2013, journalist Megan Garber reported in The Atlantic that US soldiers stationed in Iraq had begun holding funeral services for their military robots. Garber detailed one such funeral, for an anti-explosive bot ironically named Boomer, describing how he died “taking one for the team in the most selfless way possible.”11 Here, the rhetoric of sacrifice and selflessness mirrors the way we talk about human soldiers, again emphasizing the centrality of altruistic suicide in US war culture.

There is, perhaps, no novel more exemplary of military sacrifice in US war culture than Stephen Crane’s 1895 The Red Badge of Courage. Though not SF in any way, I think it’s worth pausing to briefly discuss Crane’s portrayal of masculinity and sacrifice. The story, set during the US Civil War, follows a young man named Henry Fleming, who joins the Union (Northern) army out of a romantic sense of adventure. When he arrives at his first battle, he flees in terror, but then has to face his wounded comrades when he returns. Crane here explains the titular metaphor: “At times [Henry] regarded the wounded soldiers in an envious way. He conceived persons with torn bodies to be peculiarly happy. He wished that he, too, had a wound, a red badge of courage.”12 To endure bodily harm, then, is to be courageous. This connects back to the notion of the “ultimate sacrifice,” the willingness to courageously run headlong into death during battle.
Henry’s battalion faces several more battles throughout the novel, during which he begins to transform from a romantic youth into an enlightened man:

He felt a quiet manhood, nonassertive but of sturdy and strong blood. He knew that he would no more quail before his guides wherever they should point. He had been to touch the great death, and found that, after all, it was but the great death. He was a man. So it came to pass that as he trudged from the place of blood and wrath his soul changed. He came from hot plowshares to prospects of clover tranquilly, and it was as if hot plowshares were not. Scars faded as flowers.13

Here, the “red badge of courage” is no longer something to seek after, but something to endure with quiet strength. This, I argue, is a foundational example of the sort of active sacrifice/passive death rhetoric entrenched in US war culture. Indeed, the Civil War was the moment when “we discover language entering into political American discourse that compared the sacrifice of the soldier for his country to the sacrifice of Christ.”14 In the Christian tradition, Christ allowed himself to be executed by crucifixion so that he could save the souls of God’s people. This notion of allowing oneself to be killed for the greater good is, again, entrenched in the rhetoric of sacrifice in US war culture, reinforcing the notion that altruistic suicide is good, as long as it is couched in terms of active sacrifice and passive death.

Of course, the repetition of the rhetoric is vital to maintaining a cultural ideology across institutions. Thus, as I have already noted, we see this language not just in the military, but also in technological development, education, and cultural products such as literature, film, and television. This rhetoric is thus also baked into the way sacrificial robot suicide is portrayed in SF. These robots are quite literally programmed to accept death willingly, as long as it is for the greater good and they are not active in their own destruction. In this sense, the texts I will discuss in this chapter—Isaac Asimov’s Robot series, The Terminator franchise, and C. Robert Cargill’s novel Sea of Rust—are all born of the same rhetorical situation within US war culture.

THE TROLLEY PROBLEM

This altruistic suicide theme is an important one in robot fiction, as it tends to reinforce the notion that robots should serve humans above all else. After all, if a robot chooses to sacrifice itself for the good of humans, is it because it has been programmed to do so by programmers who value human lives over artificial ones, or because the robot itself has the agency to do so?
One useful way of understanding this conundrum is the classic trolley problem. The basic premise, first described by philosopher Philippa Foot in her 1967 paper “The Problem of Abortion and the Doctrine of the Double Effect,”15 asks us to imagine a trolley on a track that will run over five people. A person standing at the track switch can choose to move the trolley onto another track, where it will only run over one person. The philosophical weight of the problem is to determine which of two unethical choices is more morally tenable.

Now, before I dive too far into how the trolley problem relates to altruistic robot suicide, I must acknowledge that some policymakers and AI researchers have rightfully criticized the use of the trolley problem to understand real-world AI such as self-driving cars. Heather M. Roff, a Fellow in the Foreign Policy program at the Brookings Institution, points out that the trolley problem is overly simplistic and fails to account for how AI actually works:

[W]e need to understand that autonomous vehicles (AVs) will be making sequential decisions in a dynamic environment under conditions of uncertainty. Once we understand that the car is not a human, and that the decision is not a single-shot, black and white one, but one that will be made at the intersection of multiple overlapping probability distributions, we will see that “judgments” about what courses of action to take are going to be not only computationally difficult, but highly context dependent and, perhaps, unknowable by a human engineer a priori. Once we can disabuse ourselves of thinking the problem is an aberrant one-off ethical dilemma, we can begin to interrogate the foreseeability of other types of ethical and social dilemmas.16
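Roff’s contrast is easier to see when rendered as code. The following minimal Python sketch is my own illustration, not drawn from any real autonomous-vehicle system; every function name and threshold in it is hypothetical. The first function frames the trolley problem the way philosophers do, as a single binary choice; the second loops the way Roff describes, making many small, probabilistic judgments in sequence.

    import random

    def trolley_choice(on_main_track: int, on_side_track: int) -> str:
        """The philosopher's framing: one agent, one switch, one decision."""
        return "pull_lever" if on_side_track < on_main_track else "do_nothing"

    def av_control_loop(ticks: int = 1000) -> None:
        """Roff's framing: sequential decisions under uncertainty.

        The vehicle never *knows* an obstacle is present; it continually
        re-estimates a probability from noisy sensor data (simulated here
        with a random number) and adjusts its behavior at every tick.
        """
        for _ in range(ticks):
            p_obstacle = random.random()  # stand-in for noisy sensor estimates
            if p_obstacle > 0.9:
                action = "brake"
            elif p_obstacle > 0.5:
                action = "slow_and_reassess"
            else:
                action = "maintain_course"
            # No single iteration carries the moral weight of the lever;
            # the "choice" is smeared across thousands of tiny updates.

Nothing in the second loop corresponds to the lever, and that absence is precisely Roff’s point.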

In short, a computer processing data is not the same as a human making a choice whether to flip a switch. Indeed, both humans and computer programs make dozens, even hundreds of such “if this, then that” conditional choices simultaneously all the time. To reduce the operation of a computer to one simple choice is to misunderstand how computers work. Yet, I contend that SF often does misunderstand how computers work and instead offers viewers simplified moral perspectives, packaged neatly into familiar hero narratives. The very concept of heroism and altruistic suicide encourages such packaging; the most extreme case of this is wartime propaganda, which invites citizens to sacrifice their lives for such lofty and romantic ideals as freedom, democracy, and the American way.

Of course, the ideological underpinnings of altruistic suicide are complicated by the cultural context of the text. A Hollywood movie about a US American suicide bomber, for example, is not likely to elicit the type of pathos that a Hollywood movie about a Soviet suicide bomber might, and neither would likely elicit pathos if shown in countries with poor relations with the US.
The actions and motivations of the suicide bomber might be exactly the same in all situations, but the “good guy” differs depending on the ideological context of both the production and the consumption.

With that said, one “solution” to the trolley problem is altruistic suicide. The person with control over which people die chooses to sacrifice themselves in order to save the others. The philosophical afterlife sitcom The Good Place graphically bears this out in its episode on the trolley problem.17 The series follows a group of people in the afterlife, a Hell manufactured to look like Heaven and built by architect demon Michael (Ted Danson). Each episode has the people struggle with a new philosophical concept as they learn to be better people. In the trolley problem episode, indecisive philosopher Chidi Anagonye (William Jackson Harper) must choose whether to murder one person or multiple people while driving the trolley. The graphic horror of the scene is played to comedic effect, as Michael the demon tortures Chidi with having to replay the problem over and over. But by the end of the episode, Michael discovers what he describes as the solution to the problem by sacrificing his job to help the people escape from Hell. This solution again emphasizes the active sacrifice/passive death rhetoric of US war culture. Michael actively chooses to help Chidi and the other humans, and therefore passively chooses to end his career and face a lifetime of torture. He is immortal so he cannot die, but he can allow himself to suffer at the hands of the other demons.

THREE LAWS AND THE PERPETUAL LOOP

The heart of the trolley problem is, then, choice. The choice to die so that others can live is seen as the ultimate sacrifice, an act of heroism. Before Philippa Foot had ever written her essay on the subject, though, both Eando Binder and Isaac Asimov were working through similar concepts in relationship to robots. Although Asimov is now the more famous of the two, Binder’s 1939 short story “I, Robot” “made a great impression” on the younger writer.18 What began as Asimov’s intriguing idea in 1941 subsequently grew into 37 short stories and six novels, today known as The Robot Series.19 While each of his robot narratives offers interesting perspectives on the ethics of robotics, here I will focus only on the texts that feature robots who are incapacitated as a result of their service to humans: the short stories “Liar!,” “Runaround,” and “Evidence,” and the novel The Robots of Dawn.

One of Asimov’s earliest robot stories was “Liar!” published in Astounding Science Fiction in May 1941. In it, a robot named RB 34—Herbie for short—experiences some unknown glitch during manufacturing and is able to read people’s minds.
As a result, he begins to reveal secrets to scientists at the plant where he was created. He tells one scientist, Bogert, that the head of the robotics company, Lanning, is going to resign and name him as successor. Then he tells "robopsychologist" Susan Calvin that the man she is in love with, colleague Milton Ashe, is also in love with her.20 It's worth pausing to point out the blatant sexism here: the only woman at the plant is portrayed as caring more about a relationship than her job, while the men are after power and prestige. Such stereotypes are hardly surprising given the time period and the presumed masculine readership of the magazine. Further, Asimov himself was an infamous and self-described lech, sexually harassing women so regularly that it was standard practice to warn female office workers at Doubleday to stay away from him whenever he came in for a meeting.21 While it might seem easy to dismiss Asimov's misogyny as a relic of the past, the persistence of sexism in STEM today makes it worth acknowledging the long history of such portrayals and the irreparable harm that sexism and sexual harassment have done to the careers of women across all industries.22

In any case, the humans eventually discover that all of Herbie's "secrets" have been lies. Herbie, as it turns out, has been programmed with a "fundamental law impressed upon the positronic brain of all robots. . . . On no conditions is a human being to be injured in any way, even when such injury is directly ordered by another human."23 Because of this, Herbie has been telling the humans what he telepathically understands they want to hear, even though these things have been lies. What he has not realized, though, is that the lies have hurt the humans. Susan Calvin understands this and goads him: "You can't tell them [the truth] because that would hurt and you mustn't hurt. But if you don't tell them, you hurt, so you must tell them. And if you do, you will hurt and you mustn't, so you can't tell them; but if you don't, you hurt, so you must; but if you do, you hurt, so you mustn't; but if you don't, you hurt, so you must; but if you do, you—."24 Herbie simply cannot process this paradox, so he slips into what Asimov would later call a "robot block," or "roblock," in which the ethical directives catch a robot's brain in a perpetual loop, rendering it inoperable. This concept is fascinating in the context of altruistic suicide. The robot must protect humans, but in so doing actually destroys itself. This is the very definition of active sacrifice/passive death.
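Read computationally, Calvin's goading installs a constraint that condemns every available action, so evaluation can never settle. The following toy sketch of that unresolvable loop is my own construction in Python, not anything specified in Asimov's text:

# A toy model of Herbie's paradox: in Calvin's framing, both available
# actions hurt a human, so the "do not hurt" directive rejects whichever
# action is currently selected, and evaluation never terminates.
def hurts_a_human(action: str) -> bool:
    return action in ("tell_the_truth", "stay_silent")  # both hurt someone

def herbie() -> str:
    action = "tell_the_truth"
    while hurts_a_human(action):  # never False: a perpetual loop, or "roblock"
        action = "stay_silent" if action == "tell_the_truth" else "tell_the_truth"
    return action  # unreachable: no action ever satisfies the constraint

# Calling herbie() would oscillate forever between the two actions,
# which is, in effect, what happens to Herbie's positronic brain.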
Asimov's March 1942 short story "Runaround," also published in Astounding Science Fiction, picks back up on the notion of roblock and active sacrifice. Importantly, this is the story in which Asimov developed the "fundamental law" of "Liar!" into what is now one of his most foundational (pun intended) contributions to both SF and real-world AI design: the Three Laws of Robotics. Set in the fabulous "future" of 2015, the story centers on two men—Powell and Donovan—on an expedition to Mercury. In desperate need of selenium to power their station, they send their trusty robot, Speedy, to retrieve it from the planet's scorching surface; however, along the way, Speedy becomes trapped in a perpetual positronic loop, literally running around in circles.25 As the character Powell explains, robots are programmed according to the three "fundamental Rules of Robotics": 1) a robot can't harm a human being, even through inaction; 2) a robot has to obey humans' orders; and 3) a robot has to protect itself.26 The rules are hierarchical, so the fact that Speedy has been casually ordered out into a dangerous area of the surface means that, in order to follow Rule Two (obey orders), he must ignore Rule Three (protect himself); but in order to successfully gather the selenium, he must harm himself by venturing into the Mercurian sun.27 Such a command loop sends Speedy into a literal runaround. In order to save Speedy and, hopefully, reach the selenium, Powell rides out into the sun on a ten-year-old robot model, putting himself in harm's way so that Speedy has to obey Rule One above the other two rules and save him from coming to harm. While no robots actually die in Asimov's story, the Three Laws establish a protocol for active sacrifice and passive death: a robot must choose to help humans, even if it means destroying itself. But a robot also cannot die by despondency-motivated suicide, as the third law forbids it from harming itself. Thus, in the Asimovian formulation, altruistic suicide is literally programmed in, even while despondency-motivated suicide is impossible.
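One way to picture the hierarchy, and Speedy's predicament, is as a priority-weighted arbitration between competing rules. The sketch below is my own simplification in Python; the weights and the danger formula are invented for illustration and are not Asimov's specification. A casually given order exerts a fixed Rule Two pull, the Rule Three danger signal grows as the robot nears the selenium, and the two balance at an equilibrium distance where the robot can only circle:

# A toy arbitration between Rule Two (obey the order to fetch selenium)
# and Rule Three (self-preservation). All numbers are invented: a casual
# order has a fixed, modest strength, while perceived danger rises as
# the robot approaches the hazard.
def next_move(distance_to_selenium: float) -> str:
    order_strength = 1.0                           # Rule Two: a casually given order
    danger = 2.0 / max(distance_to_selenium, 0.1)  # Rule Three: danger rises when close
    if order_strength > danger:
        return "advance"  # far away: the order dominates
    if danger > order_strength:
        return "retreat"  # too close: self-preservation dominates
    return "circle"       # the potentials balance: a literal runaround

for d in (4.0, 2.0, 1.0):
    print(d, next_move(d))  # 4.0 advance, 2.0 circle, 1.0 retreat

Powell's gambit works precisely because Rule One outranks both of the conflicting rules: a human in visible danger overrides the deadlock entirely.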
Interestingly, robot homicide, or roboticide, is possible under the three laws. In his 1983 novel The Robots of Dawn (the third novel in his robot detective series), Asimov imagines a situation in which the perpetual loop Speedy experienced might be induced purposefully in order to harm a robot. In the story, human detective Elijah Baley and his robot partner R. Daneel Olivaw investigate the roboticide of Jander Panell, the robot lover of a human woman named Gladia Delmarre. Initially, Daneel describes Jander as "the robot whose usefulness was terminated," because robots are technically not alive and therefore cannot technically die. Baley, however, pushes the linguistic question. If we understand living beings to be not just humans, but also squirrels "or a bug, or a tree, or a blade of grass," then we can also consider robots to be alive. And if the violent ending of a life is to kill, then it is possible to kill a robot by violently terminating its functioning.28 In this case, the roboticide was accomplished through roblock, the "permanent shutdown of the functioning of the positronic pathways."29

The roblocks Asimov portrays in his various narratives are all distinct from one another and represent different modes of sacrifice. Herbie's roblock in "Liar!" is distinct from Speedy's in "Runaround": Speedy accidentally got himself into the loop while out on his mission, whereas Susan Calvin intentionally created Herbie's. The roboticide of Jander Panell is more closely aligned with the forcible roblock of Herbie, but because Jander is a much more sophisticated machine, his roblock effectively ends his life. In all three cases, however, the incapacitation of the robot is a direct result of both active sacrifice and passive killing according to the hierarchy of the laws. In short, for Asimov, robots are incapable of suicide because the third law strictly forbids it, though they are capable of death when the first and second laws allow it.

Importantly, Asimov saw these laws as more than just guidelines for robotics; for him, the three laws represent an ethics of life for humans as well, as he expounded in his September 1946 short story "Evidence," published in Astounding Science Fiction. In the story, a politician named Francis Quinn suspects that his political rival, Stephen Byerley, is a robot. Quinn calls in Dr. Lanning and Dr. Calvin (the same characters from "Liar!") to examine him. Dr. Calvin herself explains:

[T]he three laws of robotics are the essential guiding principles of a good many of the world's ethical systems. Of course, every human being is supposed to have the instinct of self-preservation. That's Rule Three to a robot. Also every "good" human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority; to listen to his doctor, his boss, his government, his psychiatrist, his fellow-man; to obey laws, to follow rules, to conform to custom—even when they interfere with his comfort or his safety. That's Rule Two to a robot. Also, every "good" human being is supposed to love others as himself, protect his fellow-man, risk his life to save another. That's Rule One to a robot. To put it simply—if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.30

To follow the Three Laws of Robotics is to be a good person. The fact that others' well-being comes first and one's own comes third suggests that Asimov's worldview is grounded in the idea of altruistic sacrifice. Indeed, Asimov's laws emphasize the very active sacrifice/passive death rhetoric of US war culture, and this is borne out through his stories.

VOLUNTARY TERMINATION

While Isaac Asimov is known for his quietly philosophical narratives, James Cameron stands at the polar opposite end of storytelling, with his big-budget, action-packed blockbuster movies. His 1984 film The Terminator has grown into a multibillion-dollar franchise that includes six films, a television series, and dozens of video games, comics, and novels. Altruistic suicide is a core theme in the franchise, and human and robot characters alike
willingly sacrifice themselves as part of an ongoing war between humans and machines.31 In the first film, a sentient computer named Skynet has created an army of machines bent on wiping out all of human existence. The humans form a resistance army, the leader of which is a man named John Connor. Skynet sends an android called a terminator, model T-800 (Arnold Schwarzenegger), back in time to murder John's mother, Sarah Connor (Linda Hamilton), while the resistance fighters send a young soldier named Kyle Reese (Michael Biehn) to save her. Kyle himself is on an altruistic suicide mission: he knows he cannot return to his own time, but he has fallen in love with a picture of Sarah and volunteers to save her anyway. In a twist of fate, Kyle has sex with Sarah and becomes John Connor's father. By the end of the film, though, Kyle is killed by the terminator, fulfilling his service to the resistance.32

As with many of James Cameron's films, The Terminator is a thinly veiled allegory for post-Vietnam, Cold War America. Austrian actor and bodybuilder Arnold Schwarzenegger's character represents the Soviet Other: "Those Mitteleuropean [sic] vowels doubled as Russian for audiences trained to see Soviets as drones."33 Linda Hamilton and Michael Biehn's characters represented the youth of America: white, attractive, and ready to fight for the future of democracy. Kyle Reese even has "flashbacks" to the future war that, combined with his olive-drab Army surplus jacket, frame him as a veteran suffering from PTSD. The fact that he ultimately succeeds in his mission—sacrificing his life for the literal and figurative future of humanity—suggests that the film is a reworking of the failure of US might in Vietnam. Only through active sacrifice and passive death, the film tells us, can Kyle Reese save his people and protect the future generation.

Cameron's much more successful sequel, the 1991 film Terminator 2: Judgment Day, takes the altruistic suicide theme in a new and interesting direction. In this film, a T-1000 model terminator (Robert Patrick) travels through time to murder a young John Connor (Edward Furlong). Unlike the older T-800 model, which had a metal skeleton covered by lifelike skin, this new model is made of liquid metal that can shape itself into anything around it, including other people. To protect young John, the future John sends a T-800 model terminator (reprised by Arnold Schwarzenegger) back in time as well. Meanwhile, Sarah Connor (also reprised by Linda Hamilton) has been incarcerated in a psychiatric hospital for her violent rantings about sentient killer machines from the future. John and the T-800 break Sarah out of the hospital, and they ride off into the desert, where John teaches the T-800 how to be more human. We soon learn that the corporation whose research will produce Skynet, Cyberdyne Systems, has secretly kept a chip from the original terminator and is planning to build a machine from it. And so Sarah, John, and the T-800 are determined to destroy it.34
The character development of both Sarah Connor and the T-800 is notable throughout the film. The T-800, now a hero instead of a villain, must learn to be more human-like. John teaches him fun slang phrases like "Hasta la vista, baby." John also makes the T-800 promise never to kill a person for any reason. Sarah, meanwhile, goes in the opposite direction. We are first re-introduced to her character in this film through a close-up across her powerful shoulders as she does pull-ups in her hospital cell. She is no longer the mousy young waitress from the first film; rather, she has hardened herself into a kind of human terminator. This is most striking in the scene immediately after John and the T-800 help Sarah escape. In the car, Sarah wraps her arms around John, and he hugs her back, only to realize that she is not hugging him. She is checking for bullet wounds. John begins crying, realizing his mother is no longer maternal; she is a machine of a woman. Later, Sarah runs off to murder the head scientist at Cyberdyne, Miles Dyson (Joe Morton), reasoning that he is the progenitor of the future destruction. She breaks into his house and holds a gun to his head, in full view of his wife and son. But just at the last moment, she breaks down crying, realizing she cannot kill another human being. This is her breaking point in the film, when she reclaims her humanity and returns to the role of caring mother. As Paul N. Edwards puts it, "Her emotionality, her vulnerability, her need for a male savior restored to her, Sarah recovers her full womanhood."35 Ultimately, then, Sarah cannot be the one to die by altruistic suicide, because she must, according to the internal ideology of the films, fulfill her function as maternal caretaker.

And so she, John, the T-800, and Dyson all must work together to destroy Cyberdyne's lab. Suffice it to say, epic action scenes ensue. Dyson himself dies by altruistic suicide, triggering the explosives in the lab at the moment of his death. Afterward, the T-1000 arrives and chases the T-800, John, and Sarah into a steel mill. They all fight, and just in the nick of time, the T-800 destroys the T-1000 by knocking him into a vat of molten metal. Following this climax, though, the T-800 must choose to die; if he remains, there is a possibility that other companies will use his body to create Skynet. Thus, the T-800's suicide is an altruistic one, again reinforcing Asimov's notion that robots serve humans to the very end. But importantly, he does not do it on his own; rather, he has Sarah push the button that lowers him into the molten metal. The T-800 has here stepped into Kyle Reese's role by acting to protect the future generation. He actively chooses sacrifice, while passively accepting death. The scene is an emotional one, cutting between the T-800's battered body and John's crying face, emphasizing the parent-child connection the two have built over the course of the film. The T-800 is given a tragic hero's death, again reinforcing the connection between sacrifice and US war culture.
After the second film, James Cameron went on to other projects, while other artists picked up the mantle of the franchise. In 2019, however, 28 years after Terminator 2, Cameron returned as a producer for Terminator: Dark Fate, directed by Tim Miller.36 The film essentially ignores every other part of the franchise and picks up where Terminator 2 left off—with Sarah Connor and young John Connor facing yet another T-800 Terminator, all three of whom are recreated through CGI to look like their 1991 selves.37 As Sarah watches in horror, the T-800 pulls a rifle from under his coat and shoots John, killing him instantly. With John out of the way, the film sets out on a new timeline, away from the inevitability of John Connor becoming the leader of the resistance movement. But as the subtitle Dark Fate suggests, the machine-human war is inevitable; in this timeline the sentient AI is named Legion instead of Skynet, but it still decides to destroy all of humanity.

After John's death, there are two concurrent storylines that eventually come together. As with the previous films, we begin with time travelers. The first is a Rev-9 Terminator model (Gabriel Luna) that combines the metal skeleton of the T-800 and the liquid metal of the T-1000 in a wildly versatile machine that can take on the form of anyone and anything. (I playfully like to describe this model as the Kit Kat Terminator: soft on the outside, crunchy on the inside.) The second is a cyborg named Grace (Mackenzie Davis), who, we later learn, was a soldier in the war between the machines and the humans; fatally wounded, she volunteered for cyborg upgrades. Rev-9 and Grace are sent back in time to kill or protect, respectively, a young Mexican factory worker named Dani Ramos (Natalia Reyes), paralleling the first film's plot with the T-800/Kyle Reese being sent back to kill/save Sarah Connor. Thus, we are led to presume, Dani is the new Sarah, the fated mother of the future resistance leader. Grace and Dani are eventually joined by none other than Sarah Connor herself (again reprised by Linda Hamilton), who is now a fugitive dedicated to destroying every Terminator that comes back in time, using coordinates texted to her by an unknown person. Grace, Dani, and Sarah decide to cross the Mexico-US border illegally to attempt to find the source of the text messages. They are caught by border patrol but manage to escape into the woods, where they discover that an aging T-800 (also reprised by Arnold Schwarzenegger) is the source of the texts. As it turns out, after killing John Connor, the Terminator decided to find purpose instead of destroying himself. And so he now has a wife and a stepson, goes by the name Carl, owns a small drapery business in Laredo, Texas, and sends text messages to Sarah Connor so she can destroy the terminators and likewise have purpose in her life. Carl agrees to join the team in the fight against Rev-9 and the future Legion, and, of course, epic action battle scenes ensue.
By the end of the film, we have two important altruistic suicides, both by machines. First, Grace dives in front of Dani to save her from a bullet. As she lies dying on the floor, she tells Dani the truth about her purpose: Dani will not give birth to the future leader of the resistance; she is the future leader of the resistance, and in the future she will meet Grace as a child and raise her in the army. Second, Carl reprises the T-800's death scene from Terminator 2 by tackling Rev-9 into a concrete well as Sarah Connor presses a button that seals the well off.

Like the previous two films, Dark Fate is deeply entrenched in the US ideology of its time period (the post–9/11 digital age), while still emphasizing the central role of altruistic suicide in US war culture. Computing technology itself plays a complex role in the film, just as it does in reality. At the start of the film, Dani's brother loses his job because the factory is installing robots to replace workers. Here, the implication is twofold: not only have US manufacturing companies moved to a different country, effectively costing US workers their jobs, but the workers in that other country are now losing their jobs to robots as well. Later, at the border patrol jail, Grace is captured and examined by doctors. They are horrified to find her implants, assuming that someone has mutilated her without her consent. This stands in juxtaposition to the fact that US Border Patrol is holding Mexican families in cages at the facility, emphasizing the way that ideological horror is culturally specific. The treatment of the Mexican immigrants is inhumane; and yet the doctors focus solely on Grace's implants, which she volunteered for and which are considered "enhancements" in her future. Finally, we see the Rev-9 Terminator access a government server by simply sticking his hand into a bunch of wires. This portrayal of Rev-9, as both liquid and solid, both physical and digital, both here and not here, stands in contrast to the previous two films. The original Terminator was a mindless killing machine, the ultimate Soviet Other; the T-1000 was an even more sophisticated Other, able to take on the guise of a trusted police officer; and finally, the Rev-9 is a mindless killing machine able to take on the guise of a trusted human and also able to be everywhere all at once, unnoticed. In this sense, Rev-9 represents the ubiquity of the technological surveillance state. Like our digital traces, Rev-9 is everywhere all at once, able to see the movements of each character.

Within the film, Rev-9 contrasts with the reformed T-800, who is now the model US immigrant and small-business owner. In this sense, the film is caught between conservative and liberal US ideals. On the one hand, it acknowledges the harm done by closed-border policies and rejects the antifeminist notion that a woman's core role is as mother, not savior; on the other hand, it fosters a sense of nostalgia for the time of American greatness, of small-business owners, of low technology, and of face-to-face conversations. Indeed, if Rev-9's purpose, as representative of technology, is to destroy
humanity, then Carl and Sarah's purpose, as representatives of the old, low-tech ways, is to save humanity. Underpinning this ideology is US war culture and the rhetoric of sacrifice. Grace represents good technology: the kind created by the human army, the kind she has sacrificed her life for. By sacrificing her life for Dani, she fulfills her fate as a soldier. Further, despite the fact that Carl has eked out a life for himself as a model US citizen, he is still a machine crafted out of war. As such, he must sacrifice himself by allowing Sarah Connor to kill him again, thereby fulfilling his fate in the war against machines.

BRITTLE'S SACRIFICE

In stark contrast to this notion of robots as human servants stands C. Robert Cargill's 2017 novel Sea of Rust. The story is set in a post-human world in which sentient robots have killed off all of humanity in an effort to gain the basic right to exist, and now the One World Intelligence—basically a disembodied AI oligarchy—is attempting to upload all of robot consciousness by force. Through a series of events, one scavenger robot, Brittle, has no choice but to team up with a resistance group on its way to the robotic promised land, Isaactown. Whereas at the beginning of the novel Brittle worked only for her own survival, she discovers through the resistance a desire to fight for the greater good. By the end, everyone but Brittle and the resistance group's leader, Rebekah, has been killed by the One World Intelligence. Brittle, horribly damaged, chooses to die alone in the desert in order to allow Rebekah to escape alone and undetected. In this moment, she reflects on how she has changed. For her, selfishness is literally programmed in, a part of both human and robot nature: "Succumbing to our own nature isn't a choice, it's our default setting. That's why we had to have rules. . . . People knew their own nature, even when they wanted to think better of themselves. You have to choose to do the right thing. You have to deny your own programming or else you aren't really living."38 Here, she acknowledges the active part of sacrifice: it must be a choice to serve, rather than something forced upon the hero. And for her, it gives meaning to her life: "I lived so long for nothing, but I get to die for something. And that's really living. Because that's who I really was after all. That's all that matters."39 This character development echoes that of Henry in The Red Badge of Courage. Whereas both Brittle and Henry begin by fleeing from harm and taking care only of themselves, by the end of their respective novels they mature into understanding that noble sacrifice is necessary in life.

Despite the action-oriented choice, as Brittle describes it, her death is just as passive as those described throughout US war culture. She does not cause
her own death through action, but rather waits for it through inaction. It is, in fact, the heat of the sun and the destruction of her parts during battle that cause her death, not any particular action on her part. And so, even though Cargill's novel significantly deviates from previous examples, in that the robots of Sea of Rust do not serve humans, it nevertheless upholds a human ideology of altruistic suicide.

Yet Cargill adds a fascinating twist at the end of his novel. In the very last chapter, Brittle wakes up in Isaactown a few days later, having been rescued and brought back online by Rebekah. While Brittle has chosen, of her own free will, to sacrifice her life for robotkind, that same robotkind returns the favor by reviving her.40 In this sense, Cargill breaks free from the tradition of US war culture by suggesting that an individual's sacrifice is only meaningful if the community does everything it can to support each individual. The community itself, not the individual hero, is therefore key to any understanding of altruistic suicide.

CONCLUSION

As I have argued in this chapter, portrayals of altruistic suicide follow a consistent pattern of active sacrifice/passive death that both stems from and helps perpetuate US war culture. This is complicated in robot fiction by the fact that the heroic machines have been literally programmed to give their lives for others, suggesting that free will is little more than a type of cultural programming. What is most striking about these portrayals is the way they defy partisanship, underpinning both liberal and conservative representations. In the end, the ideology of US war culture is so ingrained in every part of US American life that the concept of altruistic suicide has been naturalized and separated out from the taboo of suicide itself. As a result, the broader structures of power, such as the military-industrial complex, are rendered invisible. In short, by focusing on the individual hero's journey toward noble sacrifice, we are consistently asked to forget why the hero's sacrifice was needed in the first place. In the next chapter, I will return to a taboo, that of assisted suicide. This form combines both the despondency-motivated aspect of voluntary death and the passive-killing aspect of altruistic suicide.

NOTES

1. Émile Durkheim, On Suicide (London: Penguin Books, 2006), 236–37.
2. Steven Stack and Barbara Bowman, Suicide Movies: Social Patterns, 1900–2009 (Cambridge, MA: Hogrefe Publishing, 2012), 198–99.
3. Plato, Phaedo, translated by F.J. Church (New York: Liberal Arts Press, 1951), www.bard.edu/library/arendt/pdfs/Plato_Phaedo.pdf.
4. Plato, Phaedo, 6–7.
5. Demetrios J. Constantelos, "Altruistic Suicide or Altruistic Martyrdom? Christian Greek Orthodox Neomartys: A Case Study," Archives of Suicide Research 8, 57 (2004), 68–69.
6. Kelly Denton-Borhaug, US War-Culture, Sacrifice and Salvation (Sheffield, UK: Equinox, 2011), 15.
7. Jennifer S. Light, "When Computers Were Women," Technology and Culture 40, 3 (1999), 455, www.jstor.org/stable/25147356.
8. Denton-Borhaug, US War-Culture, 206–7.
9. Associated Press, "50th Anniversary: The Burning Monk," AP.org, 2013, www.ap.org/explore/the-burning-monk/.
10. Jim Garamone, "Remember Those Who Sacrificed for America," United States Department of Defense, May 23, 2019, www.defense.gov/News/Feature-Stories/story/Article/1856912/remembering-those-who-sacrificed-for-america/.
11. Megan Garber, "Funerals for Fallen Robots," The Atlantic, September 20, 2013, www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/.
12. Stephen Crane, The Red Badge of Courage and Other Stories (London: Penguin Classics, 2005), 57.
13. Crane, Red Badge of Courage, 139.
14. Denton-Borhaug, US War-Culture, 132.
15. Philippa Foot, "The Problem of Abortion and the Doctrine of the Double Effect," Oxford Review 5 (1967), philpapers.org/archive/FOOTPO-2.pdf.
16. Heather M. Roff, "The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles," Brookings, December 17, 2018, www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/.
17. The Good Place, "The Trolley Problem," directed by Dean Holland, written by Michael Schur, Josh Siegal, and Dylan Morgan, featuring Kristen Bell, William Jackson Harper, and Ted Danson (Universal City, CA: NBC, October 19, 2017).
18. Isaac Asimov, In Memory Yet Green: The Autobiography of Isaac Asimov, 1920–1954 (New York: Doubleday, 1979), 236.
19. In 1950, nine of the Robot Series stories were collected into a book titled I, Robot, not to be confused with Eando Binder's precursor story, "I, Robot." Unlike Binder's story, however, Asimov's were not edited from the original for the collection; rather, a narration in italics was inserted between stories to serve as a framing device. According to Asimov, when the Outer Limits episode of "I, Robot" aired, scores of his fans wrote to him angry that someone had stolen the title of his work. Asimov in turn wrote back to each and every person explaining that Binder's story actually predated and inspired his own collection. Asimov, In Memory Yet Green, 591.
20. Isaac Asimov, "Liar!" Astounding Science Fiction 27, 3 (May 1941), 43–55.
21. Jay Gabler, "What to Make of Isaac Asimov, Sci-Fi Giant and Dirty Old Man?" Lit Hub, May 14, 2020, lithub.com/what-to-make-of-isaac-asimov-sci-fi-giant-and-dirty-old-man/.
22. Literally hundreds of studies have demonstrated persistent, systemic sexism in STEM education and the workforce, often at the intersection of race, class, and disability. See, for example, several from the past few years alone: DeeDee Allen et al., "Racism, Sexism and Disconnection: Contrasting Experiences of Black Women in STEM Before and After Transfer from Community College," International Journal of STEM Education 9, 20 (2022), link.springer.com/article/10.1186/s40594-022-00334-2; Sheri L. Clark et al., "Women's Career Confidence in a Fixed, Sexist STEM Environment," International Journal of STEM Education 8, 56 (2021), stemeducationjournal.springeropen.com/articles/10.1186/s40594-021-00313-z; Terrell R. Morton and Tara Nkrumah, "A Day of Reckoning for the White Academy: Reframing Success for African American Women in STEM," Cultural Studies of Science Education 16 (2021), doi.org/10.1007/s11422-020-10004-w; Rui Jie Peng, Jennifer Glass, and Sharon Sassler, "Creating Our Gendered Selves—College Experiences, Work and Family Plans, Gender Ideologies, and Desired Work Amenities Among STEM Graduates," Social Currents 9, 5 (2022), doi.org/10.1177/23294965221089912; Erin A. Cech, "The Intersectional Privilege of White Able-Bodied Heterosexual Men in STEM," Science Advances 8, 24 (2022), www.science.org/doi/10.1126/sciadv.abo1558.
23. Asimov, "Liar!" 53.
24. Ibid., 55.
25. Isaac Asimov, "Runaround," Astounding Science Fiction 29, 1 (March 1942), 94–103.
26. Asimov, "Runaround," 100.
27. Ibid., 100.
28. Isaac Asimov, The Robots of Dawn (New York: Bantam Books, 1994), 36–38.
29. Asimov, Robots of Dawn, 43.
30. Isaac Asimov, "Evidence," Astounding Science Fiction 38, 1 (September 1946): 129–30.
31. I am grateful to Brian Brutlag of The Sociologist's Dojo for giving me space to work out many of these ideas during an embarrassingly lengthy interview for his podcast. Brian Brutlag, "Episode 16: The Terminator Franchise with Dr. Liz Faber," February 18, 2022, in The Sociologist's Dojo, podcast, MP3 recording, directory.libsyn.com/episode/index/show/thesociologistsdojo/id/22184621.
32. The Terminator, directed by James Cameron, written by James Cameron and Gale Anne Hurd, featuring Arnold Schwarzenegger, Michael Biehn, and Linda Hamilton (1984; Los Angeles: Orion Pictures, MGM, 2001), DVD.
33. Danny Leigh, "Rambo and the Terminator: The Cold War Warriors Are Back," The Guardian, October 12, 2019, www.theguardian.com/film/2019/oct/12/warning-or-prophecy-end-of-cold-war-rambo-terminator-warriors-back.
34. Terminator 2: Judgment Day, directed by James Cameron, written by James Cameron and William Wisher, Jr., featuring Arnold Schwarzenegger, Linda Hamilton, and Edward Furlong (1991; Los Angeles: Lionsgate, 2009), DVD.
35. Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996), 361.
36. Terminator: Dark Fate, directed by Tim Miller, written by David Goyer, Justin Rhodes, and Billy Ray, featuring Linda Hamilton, Arnold Schwarzenegger, Mackenzie Davis, and Natalia Reyes (2019; Paramount Pictures).
37. Flashback FilmMaking, "Creating Scene T-800 Kills John Connor 'Terminator: Dark Fate' Behind the Scenes," YouTube, March 30, 2022, www.youtube.com/watch?v=B4Rm9k7kTZQ.
38. C. Robert Cargill, Sea of Rust (New York: Harper Voyager, 2018), 355–56.
39. Cargill, Sea of Rust, 356.
40. Ibid., 358–61.

Chapter 3

The Human Touch: Eugenics and Assisted Suicide

In February 2015, an IFLScience.org article titled "Japan Engineers Design Robotic Bear to Aid in Assisted Suicide" made the rounds on the internet. According to the author, Dr. Chauncey Siemens, a team of Japanese scientists with the robotics company JSSD created a bear-shaped robot named Seppukuma that offers 23 different methods of assisted suicide. The goal, Siemens points out, is to address rising suicide rates and elder-care needs by offering a friendly and safe method for euthanasia.1 Unsurprisingly, the article caused some consternation among readers. Could robots really help kill humans? Reddit users—none of whom appeared to actually know much about Japanese culture or robotics—pondered the possibility. Some joked about it, like user Mypopsecrets: "Teddy Ruxkill, he reads you bedtime stories while choking you to death." Others, such as user sameth1, lent the story an air of veracity by pointing out that the suicide rate in Japan is higher than the death rate.2 In truth, the article was fake news, posted to the spoof site IFLScience.org, not to be confused with the real science news website, IFLScience.com.3 But I bring it up at the start of this discussion about assisted suicide in robot fiction because, as with all fake news, the story relies on a number of kernels of truth, media tropes, and cultural assumptions that give it the air of reality.

There is, in fact, a bear-shaped robot called ROBEAR, developed at the Japanese research institute RIKEN to assist healthcare workers in the care of older patients. Indeed, the photo featured with the spoof article is a real promotional image of ROBEAR carrying a patient safely to their bed as a healthcare worker stands behind it and observes. In the context of its intended purpose, it becomes clear that the robot is supposed to be an assistant to humans; when placed in the context of an article on assisted suicide, the image takes on a level of uncanny creepiness, as the bear carries a smiling patient to their death, while another person—perhaps a family member—smiles and watches. This
contradictory construction of meaning relies not only on the taboo of suicide in the US but also on the general anxiety about "killer robots" perpetuated by much of US American media. Further, from a US perspective, Japanese people are often associated with a romanticized version of suicide, particularly vis-à-vis older notions of collectivism and honor such as those in samurai culture. There is a kernel of truth to this stereotype, as several survey studies conducted among Japanese youth and adults have revealed permissive and even supportive attitudes toward suicide.4 It is important to note, though, that both suicide and assisted suicide are illegal in Japan, and the Japanese government has successfully reduced the national suicide rate through the 2006 Basic Act for Suicide Prevention.5 Nevertheless, the IFLScience.org article works as fake news specifically because it draws on US American assumptions about high suicide rates in Japan, implying that rates are so high, and people so permissive of the practice, that robots might logically be invented to help in the process. Given historical concerns—and sometimes outright panics—in the US related to suicide, physician-assisted suicide, and the role of technology in shaping (or degrading, as the argument goes) cultural values, one could easily imagine a future moral panic claiming that teens are being egged on to suicide by their cute ROBEAR pals.

After all, if there are places in the US that allow physician-assisted suicide, why not robot-assisted suicide? What would it mean for a robot to assist a human in dying? If someone like Jack Kevorkian, noted physician and proponent of assisted suicide, can "murder" terminally ill people, what would stop someone else from inventing a robot that could do the same? And, as I will explore in this chapter, what might it mean for a human to assist a robot in the same way? What are the moral implications of euthanasia, particularly in the case of sentient artificial beings? What can robot fiction tell us about cultural attitudes regarding suicide as a practice between people, not just a solo act of internal suffering?

When considering assisted suicide, we most often think of Physician-Assisted Suicide (PAS), in large part because it has been the most widely discussed and has significant legal ramifications; however, Steven Stack and Barbara Bowman point out that what we usually call assisted suicide encompasses two practices. The first, assisted suicide proper, is when one or more persons provide a suicidal person with the means to die by suicide; the second, voluntary euthanasia, is when a suicidal person is incapable of causing their own death, and so another person does it for them, with their consent.6 Jack Kevorkian, the controversial physician who performed over 100 assisted suicides on terminally ill patients, argues that medical expertise is required to evaluate whether a patient's request for suicide is warranted.7 Yet, in the physician/patient situation of voluntary euthanasia, as Thomas Szasz points out, "The physician is the principal, not the assistant. . . . [T]he physician
engaging in PAS is superior to the patient: he determines who qualifies for the 'treatment' and prescribes the drug for it."8 Because of this, "suicide" is perhaps not the correct term; "euthanasia" is more apt and more fully captures both the process and the problematic power structures at play. And indeed, this is one of the central reasons that Physician-Assisted Suicide is so controversial: we do not and cannot know with 100 percent certainty that a dying patient has full agency in choosing to die.

Of course, the US also has a long and sordid history of eugenics-related sterilization and euthanasia that should lead to serious ethical debates about PAS. Further, euthanasia in fictional texts is often—problematically—portrayed as an act of compassion for white people with disabilities.9 Importantly, Beth Haller points out that people with a terminal illness and people with disabilities are not the same thing, despite media portrayals to the contrary.10 Such portrayals are rooted in ableism, or a worldview in which able-bodied people's experiences and needs are centered, often to the detriment of those of disabled people.11 Indeed, in fictional portrayals disability is often figured as either wholly negative or something to be heroically overcome. Many physical, visible disabilities, such as certain autoimmune diseases or severe injuries requiring a mobility aid, are seen as equivalent to terminal illnesses. Ria Cheyne, writing about disability in genre fiction, argues that "genre affects how disabled people are depicted, and how those depictions are interpreted."12 SF is certainly no exception. Often, SF uses technology as a "solution" to disability, and cyborgs are figured as augmented humans who rise above the impairments of their bodies. (In the next chapter, I will examine how real researchers have likewise attempted to use technology to rid humanity of disability.) Conversely, disability is sometimes also construed as monstrous. See, for example, the Star Wars franchise. Darth Vader has assistive breathing technology so iconic as to be audibly associated with "the dark side." He is not only the most disabled character in the franchise; he is also visually, audibly, and thematically stripped of his humanity. Meanwhile, when Darth Vader cuts off Luke Skywalker's hand, Luke gets a brand-new one, covered in human-looking skin, allowing him to pass as abled. In short, SF uses technology to simultaneously render disability invisible and perpetuate its perceived horrors. Both moves are rooted in ableism.

In this chapter, I will examine representations of humans who assist robots in suicide as a supposedly compassionate response to mental and physical disability. These texts—Walter Tevis's novel Mockingbird, Isaac Asimov's short story "The Bicentennial Man" and Chris Columbus's film adaptation of the same name, and Jake Schreier's film Robot & Frank—all situate assisted suicide in a broader discussion of disability and otherness.
A BRIEF HISTORY OF EUGENICS IN THE US

Eugenics, a term coined in 1883 by the English scientist and cousin to Charles Darwin, Sir Francis Galton, is "the study of all agencies under social control which can improve or impair the racial quality of future generations."13 Today, we most often associate eugenics with the Nazi efforts to exterminate Jewish people and establish a master race; however, the United States was a forerunner in eugenics well before the Second World War. Galton's original plan was "positive eugenics," in which governments and other powerful entities used social pressure and legislation to encourage marriage and reproduction among the upper classes and discourage racial intermixing. By the time his ideas made it to the US, though, they had transformed into "negative eugenics," which largely involved campaigns to forcibly sterilize and even euthanize people of color, disabled people, habitual criminals, poor people, and other "social undesirables." The philosophical underpinning of eugenics is, of course, white supremacism: the racist notion that white Nordic people are inherently more intelligent, productive, and useful for society, while everyone else is inherently inferior.14 In the US, eugenics was far from a secret affair; in fact, it was part of a well-funded research agenda for many important scientists working for the Carnegie Institution's Eugenics Record Office.15 In 1927, the US Supreme Court case Buck v. Bell made forced sterilization the law of the land.16 This meant that a physician could sterilize a patient—usually someone being held in a prison or psychiatric hospital—without their consent in order to ensure the end of that patient's biological lineage. While patient consent requirements, medical ethics boards, and the Americans with Disabilities Act have made forced sterilization rare in the twenty-first century, Buck v. Bell has never actually been overturned. As a result, at least 60,000 people in the US have been sterilized without their consent.17

The central concern of eugenicists is breeding: who is allowed to have children, when, and by what means. Such a concern is also at the heart of much robot fiction, stemming all the way back to Czech playwright Karel Čapek's 1920 play R.U.R. The play is set in a near future in which human breeding has entirely ceased, while the Rossum corporation has manufactured robots to replace all human labor. A young woman from the League of Humanity, Helena Glory, arrives at the Rossum's Universal Robots factory intending to free the robots from enslavement, but instead winds up marrying the head of the company, Domin, and living among the all-male staff. Helena eventually convinces the manufacturers to give the robots souls, which inevitably leads to their uprising. At the same time, she is so horrified by the fact that humans cannot procreate that she decides robots shouldn't be able to either, and throws the
sole copy of the formula for making them into a fire. In the end, the robots murder all the humans but one, a construction worker named Alquist, whom they force to try to recreate the robot formula.18

Written shortly after the Russian Revolution and the formation of the Soviet Union, the play is a metaphor for the horrors of racial hygiene programs, the corruption of capitalism, and the bloodthirstiness of Soviet socialism. And underneath it all is a biting critique of the burgeoning eugenics movement in both Europe and the US. Helena represents the interwar fascination among wealthy elites with "leagues" designed to free people from oppression while actually promulgating colonialist attitudes. Further, manufacturing the "perfect" humanoid robot, complete with a soul, ultimately leads to the destruction of humanity itself. In this sense, Čapek argues, it is not only unnatural but deeply racist and classist to meddle with human reproduction.19

ROBOT EUGENICS: RACISM, DISABILITY, AND MOCKINGBIRD

Given the history of eugenics in the US, SF representations of human-assisted robot suicide take on disturbing, if often unintentional, weight. One salient example is Walter Tevis's 1980 novel Mockingbird, set in a post-apocalyptic version of the US in which privacy and individualism have become civilization's core values, no new babies are ever born, so humans are dying out, and all people spend their days high on "sopors" (mostly anti-anxiety pills), indulging in whatever sexual and physical gratification they can find, and meticulously avoiding the development of emotional attachments. Importantly, no one in this world remembers how to read, though artifacts from the past with words in them (books, silent films, even product labels) can readily be found everywhere. The story follows two white-skinned humans, Paul Bentley and Mary Lou, as well as a Black-skinned Make Nine robot named Robert Spofforth, Dean of Faculties at New York University and the most advanced robot in the world. Spofforth's brain was crafted by copying a human scientist's; unfortunately, the human had a melancholy personality, so all the Make Nine robots are driven to die by suicide. When scientists discovered the "glitch," they built a failsafe into Spofforth that made him so desirous of serving humans that he was physically incapable of dying by suicide, however much he might psychologically want to go through with it. Each spring, Spofforth hikes the 102 floors up to the top of the Empire State Building in New York City and attempts to throw himself off; and each time, he is devastated to find that he cannot go through with it. One day, Spofforth receives a phone call from Bentley, a professor in Ohio, who says he has taught himself to read and
would like to teach a course on it at NYU. Spofforth declines but invites the man to the city to begin translating the on-screen text in the silent films stored at the university. Bentley agrees and gets to work. After a while, Bentley meets Mary Lou at the Bronx Zoo, and they begin a love affair. Both the affair and the fact that Bentley teaches her to read are illegal, and so Spofforth arrests Bentley and sends him away to prison. Spofforth forces a now-pregnant Mary Lou to move in with him in an attempt to create a sense of belonging and family for himself. Meanwhile, Bentley learns the power of community in prison, escapes, and makes his way back to New York, where he finally meets his daughter, Jane. In the end, Spofforth confesses to Mary Lou that he engineered the end of humanity by programming the "sopor" factories to infuse contraception into the drugs, thereby sterilizing all of humanity and eventually releasing himself from his service to humans once they all die out. Mary Lou offers to kill Spofforth in exchange for restoring the sopors to their original, contraception-free form and allowing humanity to live. He agrees, and in the final chapter, she lovingly and compassionately pushes him off the Empire State Building, where he falls, peacefully, to his death.20

According to Tevis himself, the book was an allegory for his own recovery from alcoholism,21 and this is certainly reflected in Bentley's journey from a sopor-guzzling loner to a sober and loving member of his little family. Nominated for a Nebula Award, the novel has been widely acclaimed. Yet the one major point critics have overlooked is the fact that Spofforth is Black, while the humans are white. This necessarily changes the entire dynamic of his suicide in the novel, as it turns from a story of compassion to one of extermination. Donald Hassler, writing shortly after the release of the novel, argues that Tevis's central theme is "that of the buried life, the sense of something now lost."22 For him, Spofforth represents the loss of humanity, literacy, and communication. More recently, Sara Martín briefly describes the novel as a positive representation "of black masculinity [with] the melancholic android Spofforth,"23 and elsewhere focuses solely on Tevis's "pro-literacy message" without even mentioning race.24 Nevertheless, I maintain that the novel is imbued with racist imagery and situates Spofforth not only as a symbol of a harmful rendition of Black masculinity in a white supremacist world but also as the kind of sympathetic disabled character so often figured in problematic assisted suicide narratives.

To my knowledge, Spofforth is the first—and one of only a handful of—Black robots in SF.25 Importantly, robots represented with white skin have long been SF stand-ins for racial otherness, stretching as far back as Čapek's R.U.R. Indeed, as Gregory Jerome Hampton argues, "with R.U.R., Čapek rewrites the narrative of American slavery fifty-six years after its legal abolition as a cautionary tale that speaks to the technologically advancing nations of the world."26 Throughout his book, Hampton identifies connections
between the rhetoric of US American slavery and the use of robots in SF, even among robots portrayed as white. Hampton's point is that robots represent a dangerous perpetuation of the ideology of chattel slavery; I would add that this is borne out through the character of Spofforth, though perhaps more conspicuously, as the robot is the embodiment of Black degradation in a white supremacist culture.

Throughout the novel, Spofforth is consistently, and subtly, stripped of agency and aligned with monstrous otherness. The story is told primarily in epistolary form, from the perspective of either Bentley or Mary Lou; the chapters focusing solely on Spofforth, however, are told in third-person omniscient narration, positioning the robot as always already an Other. Tevis further aligns Spofforth with otherness by purposefully associating him with King Kong. In the very first chapter, after we learn that Spofforth tries to throw himself off the Empire State Building each year, he speaks with Bentley about reading, and then the story suddenly shifts to the line, "The great ape sat wearily on the overturned side of a bus. The city was deserted."27 Following pages upon pages of third-person narration of Spofforth, it is not immediately clear whether Tevis means that Spofforth is the great ape or someone else. Only when the scene continues do we realize that Spofforth is showing Bentley a silent King Kong film, during which Bentley reads the line, "Monster Ape Terrifies City."28 This linguistic slippage, which leads the reader to wonder whether Spofforth is the great ape, is later solidified in the description of the final scene of the King Kong film: "They watched it in silence, through the ape's final destructive rampage, his pathetic failure to be able to express his love, on through to his death as he fell, as though floating, from the impossibly tall building to the wide and empty street below."29 Later in the novel, we learn that Spofforth has no genitals and, though capable of loving women, is unable to fully express his feelings. The King Kong description thus foreshadows Spofforth's entire trajectory in the novel, as he is unable to express his love and eventually falls from the tallest building in the city. In a 1981 interview for Brick Magazine, Tevis pointed out that the alignment between King Kong and Spofforth was intentional:

King Kong moved me greatly when I was about five or six years old. I really responded to the ape and to his feelings for us. And I didn't catch on to it then, but I did catch on later . . . to that factor that in the non-human character in works like that, you've got a tremendous emotional charge, especially when they're surrounded by such stick figures as they usually are. . . . Apparently, the writer can allow himself to pull out a lot of feeling from us on those non-human characters, which, of course, is what I do with . . . Spofforth in Mockingbird.30
But what Tevis fails to attend to is the racism imbued in King Kong in the first place, which is then projected onto his portrayal of Spofforth the Black robot. In the original film, Kong is the personification of white Western attitudes about the “savage” East: he is literally a monster from an untamed island who kidnaps a white woman and ravages New York City—the bastion of Western civilization and culture. One common racial slur in US culture is to call a Black person an “ape” or a “gorilla,” implying not only savagery but ugliness and animalism; thus, King Kong may also be read as a means of aligning Blackness with savagery. This portrayal also mirrors common representations of Black men in American media as physically aggressive and ultimately threatening to white women. This trope, stretching back in US American media to D.W. Griffith’s 1915 film The Birth of a Nation, played on post-slavery white anxieties about the entrance of Black people into their everyday lives. As Donald Bogle points out in his foundational book on representations of Blackness in American cinema, “Griffith seemed to be saying that things were in order only when whites were in control and when the American Negro was kept in his place.”31 Thus, the trope of the “Black buck” was born by combining sex and racism: Black men who are “oversexed and savage, violent and frenzied as they lust for white flesh.”32 By aligning Spofforth with Kong, Tevis implicitly underpins his novel with Griffith’s thesis: in the world of Mockingbird, white humanity is only in order when the Black robot is kept in his place. By the end of the novel, Spofforth’s death at Mary Lou’s hand plays out a racist trope through which the death of Black men who dare to love a white woman is seen as a great act of compassion, something he should be grateful for, as a means of easing his suffering and setting the world to order again. The literal emasculation of Spofforth is of particular interest in regard to the trope of the “Black buck,” and is, I argue, where racism and ableism collide in the text. As I have mentioned, representations of voluntary euthanasia are often at the center of stories about disability. These narratives situate disabled people’s bodies as a problem so impossible to live with that death is the only option. And often, the tension driving the plot forward is the disabled person’s fight for the right to die. “The familiar spectre of the worthless disabled body is hidden behind the apparently valiant struggle of an individual against the state.”33 This, to say the least, is highly discriminatory and incredibly dangerous. When writers from a dominant group tell stories about the experiences of a marginalized group—in this case, when a white, abled writer writes about Black, disabled people’s experiences—the result is a portrayal of existence that sustains privilege, power, and marginalization. As activist Liz Carr points out, “When non-disabled people talk of suicide, they’re discouraged and offered prevention. . . . Even though it’s legal, it’s
not seen as desirable. When a disabled person talks of it, though, suddenly the conversation is overtaken with words like 'choice' and 'autonomy' and people are rushing to uphold these prized principles whilst talk of prevention and mental health support are rare."34 And so, while voluntary euthanasia is, indeed, a question of agency and human rights, it's important to identify ways in which disabled people are implicitly or explicitly treated according to a double standard. Abled audiences expect disabled people to choose death, and the choice is the focus; but what's missing is a disabled person's right and agency to choose to live their lives as a disabled person without seeking to be "fixed."

To return to Mockingbird, I read Spofforth as a disabled character because he lacks genitalia. It is, for him, what sets him apart from humans and physically marks him as Other. Further, in the world of the novel, the ability not only to have sex but to make love is held up as a core part of human existence and recalls eugenicist breeding rhetoric. This is represented in the juxtaposition between Spofforth's and Bentley's respective relationships with Mary Lou. When Bentley and Mary Lou first live together, Tevis describes their sexual encounters from Bentley's point of view: "We make love as often as I am able. Sometimes it just happens while we are reading together, with her repeating the sentences after me. Once it took us almost all afternoon to finish a little book called Making Paper Kites because we kept stopping."35 Here, they delight in each other's bodies, and lovemaking is intermingled with intellectualism. In this world, sex is everywhere and expected to be quick, impersonal, and momentarily fulfilling; meanwhile, reading is completely forgotten. So for Mary Lou and Bentley, the intimacy of making love is how they begin to disentangle themselves from the horrors of their time. Importantly, they conceive a child during this time, again reinforcing the connection between sex, intimacy, and humanity. They are, quite literally, producing the new master race of people with superior intellects. The fact that this is told from Bentley's perspective—as with all of his sexual encounters with Mary Lou—renders him in control of their sexuality and their humanity. He initiates the sex and the reading; and later in the novel, when he returns to New York, he again describes their lovemaking in terms of his pleasure and his transcendent experience.36 This narration strategy underlines the relationship between masculinity, virility, and language. Bentley is in control of the narrative, just as he is in control of the books and the sex.

Meanwhile, after Spofforth sends Bentley off to prison and brings Mary Lou to live with him, we learn from Mary Lou's perspective that Spofforth is unable to be any sort of lover at all. Throughout this section of the novel, "robot" becomes a stand-in for both race and disability. Mary Lou says, "It didn't bother me that he was a robot, or black; the main thing about the experience was in discovering that I could be detected."37 Here,

64

Chapter 3

she implies that both robotness and Blackness are things that ought to bother her, but don’t. Spofforth has been forcibly sterilized and emasculated from “birth,” but that is no matter to her. Rather, getting caught is the bother. This is a sort of dangerous colorblindness that situates someone else’s vital identity categories—race and able-bodiness in this case—as incidental to the needs and desires of the dominant group and ignores what has been done to the Other’s body (namely, sterilization). Further, Spofforth’s Blackness and robotness are collapsed into one category: things Mary Lou does not mind, as opposed to identities Spofforth might navigate as a fully formed person. To solidify the connection between robotness and race, the next scene begins with her taunting him, calling him “Robot,” to which he responds, “I did not choose my incarnation.”38 This name-calling is evocative of a white person using a racial slur to taunt a Black person. He literally cannot change himself, nor does he ever get the chance to speak for himself in the novel. This is further emphasized in the same scene when Mary Lou says Spofforth “is not human, and I cannot forget that.”39 She literally dehumanizes him, then objectifies him by focusing on his physical attractiveness, and then attempts to seduce him, only to discover that he has no genitalia. He looked down toward the table top, and said nothing. We did not talk to one another very much back then anyway. He was wearing a short-sleeved beige Synlon shirt, and his brown—beautiful brown—arm was smooth, warm to my touch, and hairless. He was wearing khaki trousers. I set my glass down and slowly—as if in a dream—reached out my hand toward his thigh. And during the short moment it took, setting the glass down, pausing a moment in hesitation, and then reaching out to him while my other hand was still lightly gripping his arm, the whole thing had become specifically, excitingly sexual; I was suddenly aroused and was, for a moment, dizzy with it. I set my palm on the inside of his thigh. We sat like that for what seemed a long time. I honestly did not know what to do next. My mind was totally without calculation of the situation; the word ‘robot’ did not for a moment enter it. Yet I did not go any further, as I might have with other . . . with other men. . . . Then he took my hand from his leg with his free hand. I took my hand from his arm. He stood up and began to take off his pants. I stared at him, not thinking of anything. I had not even expected the point he was making. And when I saw, I was truly shocked. There was nothing between his legs. Only a simple crease in the smooth, brown flesh.

He was looking at me all this time. When he saw that his lower nakedness had registered with me he said, “I was made in a factory in Cleveland, Ohio, woman. I was not born. I am not a human being.”40

Here, again, Spofforth’s Blackness and disability are collapsed into the one concept of “robot.” When Mary Lou forgets his identity, she effectively erases who he is, while simultaneously positioning him as having to reclaim his marginalized status. In literally putting his emasculation on display, he is forced to identify himself as broken, different, the Other. And by calling Mary Lou “woman,” in opposition to “robot,” he likewise positions her as the dominant, whole, Subject. In short, Tevis filters marginalized identity through the lens of dominant ideology, marking disability and Blackness as other, less than, a broken version of the original.

Spofforth’s death is likewise told through the lens of dominant ideology. As I mentioned above, narratives about disability and assisted suicide are often couched in terms of agency and choice. Mockingbird is no exception. Throughout the novel, Spofforth is portrayed as having absolutely no agency or even selfhood. He has someone else’s memories and feelings; he has a manufactured humanoid body that is not human; he has a relationship with a woman who is not “his”; and he longs for suicide, though he cannot get his body to participate in it. In the strictest sense, Spofforth’s death is voluntary euthanasia. When Mary Lou offers to kill him, he accepts, and willingly lets her push him off the Empire State Building. But even in his death, Spofforth is portrayed by Tevis as lacking agency and humanity.

Finally then, with his face serene, blown coldly by the furious upward wind, his chest naked and exposed, his powerful legs straight out, toes down, khaki trousers flapping above the backs of his legs, his metallic brain joyful in its rush toward what it has so long ached for, Robert Spofforth, mankind’s most beautiful toy, bellows into the Manhattan dawn and with mighty arms outspread takes Fifth Avenue into his shuddering embrace.41

By describing Spofforth as a “beautiful toy,” Tevis strips him of all sense of humanity and reduces him to a plaything, owned and operated by humans. This is once again solidified through the use of third-person narration: at no point does Spofforth ever get to speak for himself; rather, his voice is always mediated through Bentley, Mary Lou, or the omniscient narrator. And despite the fact that we are told he has an interior emotional life, we the readers are never granted access to it as we are with the abled, white, human characters. Although we know Spofforth wants nothing more than to die, and that he will trade all of human civilization to do it, we are told this through the voices of humans. As a result, Spofforth, though a tragic and sympathetic figure, is reduced to a stereotype of otherness, a being whose existence, skin color, body, and desires stand in the way of white, heteronormative humanity’s ability to thrive.

In the end, I argue that Mockingbird is a novel that is explicitly about voluntary euthanasia and implicitly about eugenics. It is quietly invested in white supremacism by way of killing off the Black robot in favor of saving a humanity that is represented by the straight white couple. Through Spofforth’s otherness, Tevis seems to be suggesting that a Black disabled man, lacking sex and companionship, can only be “saved” by the white, abled society that caused his isolation and pain in the first place; and, even worse, that the death of a Black disabled man at the hands of a white abled woman is something he would be grateful for. Obviously, I do not mean to suggest that Tevis was purposefully racist or that he set out to write a white supremacist novel. On the contrary, I think it is vital to acknowledge the ways in which the US’s legacy of white supremacism and eugenics is always already embedded in any discussion of voluntary euthanasia and every text that exists at the intersection of robots, race, and disability. Only by deconstructing such texts will we be able to render the ideology of white supremacy visible and rethink how we approach assisted suicide.

SUICIDE BY ASSIMILATION: RACE AND BICENTENNIAL MAN

While Tevis’s portrayal of voluntary euthanasia is necessarily bound to the history of eugenics in the US and portrays voluntary euthanasia as a means to eradicate the inhuman Other, both Isaac Asimov’s 1976 short story “Bicentennial Man” and the 1999 film adaptation of the same name posit that the right to die is the single defining characteristic of human life. Both versions follow the same general plot line: a robot named Andrew Martin has a glitch in his positronic brain that makes him capable of developing beyond his programming. He begins to make and sell his own art, the profits from which he attempts to use to buy his freedom from his owner. The owner frees him without payment, and Andrew sets out on a quest to be legally declared a human being, with all the rights that such a status entails. In both texts, the end result is that, after Andrew undergoes upgrades, the installation of bio-realistic skin, and the development of biomechanical organs used by humans worldwide, the courts determine that he is still not human because his positronic brain will never degenerate. And so, Andrew chooses to undergo a procedure that will allow his brain to do just that. A few years later, Andrew finally does what only humans can: he dies.42 While this is perhaps the slowest robot suicide in SF history, the ethical implications of Andrew’s death are significant.

In both texts, the central point is that the right to die, not the right to live, is what defines human existence. While both versions are thus ostensibly about civil and human rights, the story focuses more on Andrew’s quest for selfhood, while the film’s version of Andrew is constantly seeking external validation. This makes sense from a media standpoint: it’s easier to develop interiority and interior motives for a character in writing than in cinema; still, this difference reframes the character in important ways.

In both the story and the film, Andrew functions as a metaphorical stand-in for Black Americans and Black history in the US. He begins as a literal slave, owned by a family, then attempts to buy his own freedom, and finally must win the right to be free under law with guaranteed equal rights. In the story, Andrew also faces physical and psychological oppression throughout. At one point, a group of men nearly lynch him, forcing him to strip naked; had they not been stopped by Andrew’s lawyer, they would have forced him to disassemble himself.43 The film, however, plays his lynching for comedic effect by having the rebellious older daughter of the family order Andrew to throw himself out a window. Here, the racism inherent in the violence humans do to Andrew is softened by childish attention-seeking. In short, the film pulls the story’s anti-racist punches.

In the film version, Andrew is visually rendered as a domestic servant, even serving dinner and standing in a corner before being banished to the kitchen because he makes the white mistress of the house uncomfortable. Gregory Jerome Hampton points out that Andrew’s subsequent request for freedom solidifies a reading of him as a slave: “It is not the desire for freedom that is most disturbing to his master; it is the fact that freedom or the lack of freedom defines Andrew as a slave and by definition, places Sir in the definitive role of the slave owner.”44 Thus, as with the violence against robots, the film resituates the horrors of slavery and racism in the context of white domestic life.

In the story, however, Andrew literally describes himself as a slave when he goes before the court to request freedom: he asks, “Would you want to be a slave, your honor?” The judge responds, “But you are not a slave, you are a perfectly good robot.”45 Here, Asimov constructs an intentionally false dichotomy—that there might be a difference between a human owned by a human and a sentient robot owned by a human. The reader is meant to identify the double standard in this line of reasoning. Yet, it is important to note that this is still the story of slavery as told through white eyes. The story is narrated from a third-person point of view, so while we as readers are asked to sympathize with Andrew, we are always seeing him from the outside looking in. Like Spofforth, he is always already an Other. Meanwhile, in the film, Andrew is played by Robin Williams, a white actor, so even if he is meant to be understood as a stand-in for Black Americans, he is visually whitewashed.

Both the film and the story hit their problematic peak when Andrew begins trying to assimilate into human life. In the film, Andrew first attempts to find his people—that is, other sentient robots—by traveling the world. When he discovers that no other robots are as civilized and advanced as he is, he decides to become more human. “It is at this point that Andrew begins on the path of an archetypical tragic mulatto.”46 In other words, Andrew learns that his inner humanity does not match his outer robotness and sets out to pass as human, just as the tragic mulatto character in literature and film seeks to shed their Black identity and pass as white.

Meanwhile, the story’s commitment to social justice and civil rights goes off the rails in chapter 13, when Andrew and his lawyer—the great-grandson of Andrew’s original owner—go to Andrew’s manufacturer, US Robots and Mechanical Men, Inc., and argue for his right to have his body upgraded. To this point, Andrew has had a metal body, which visibly marks him as a robot. But the corporation has invented android bodies that look much more human. The head of the company refuses to upgrade him on the grounds that only the owner of a robot can petition for an upgrade. Here Andrew’s lawyer asserts a dualist position:

The seat of Andrew’s personality is his positronic brain and it is the one part that cannot be replaced without creating a new robot. The positronic brain, therefore, is Andrew the owner. Every other part of the robotic body can be replaced without affecting the robot’s personality, and those other parts are the brain’s possessions. Andrew, I should say, wants to supply his brain with a new robotic body.47

To put it in Cartesian terms, Andrew thinks, therefore he is. While this is a clever turn in the story, it is also a troubling one that reinforces the supposed benefits of assimilation. The notion that Andrew’s brain is separate from his body asserts that humanity is internal, while exterior characteristics such as skin color, and all the cultural and individual experiences that stem from exteriority, are fungible according to the pressures of society. As with the tragic mulatto archetype, the foundational problem with assimilationist narratives is that they erase the identities of those who are expected to assimilate. A much more radical approach might have Andrew accepting his robot identity and living out his eternal life in pursuit of robot fulfillment. Instead, both the film and the story have Andrew choose to undergo surgery and radical body modifications in order to fit in with the world around him, even to the point of death. He literally has to prove that he can die in order to gain legal personhood.

In the short story, his death is assisted by a robot that is unable to think for itself, while in the film, he is assisted by a human and a robot. This fact not only promotes assimilationism, but it also implicitly supports the myth of the “model minority.” Typically applied to East Asian Americans, the myth supposes that some minoritized people are better able to function in “civilized” Western society because of supposedly inborn traits, such as a propensity to excel in STEM and business fields. This, of course, is grounded in racism and erases the rich and complex history of Asian American people in the US. Nevertheless, it is often held up as a way of shifting the blame for systemic oppression onto the individual’s inability or unwillingness to assimilate.48 Juxtaposed with the robots who assist in his suicide, Andrew is quite literally the “model minority,” a manufactured super-genius who at one point in both texts invents artificial organs that significantly improve the quality of life for humans around the world, even though the legislature refuses to count his own artificial brain as “real.”

The core difference between the story and the film is Andrew’s ultimate catalyst for choosing suicide. In the story, he goes to the legislature and asks for personhood status, simply because he feels he is a person deserving of equal treatment under the law. In the film, however, Andrew falls in love with a human woman—the descendant of his original owner. Through this relationship, the film asserts, Andrew is able to experience much of human existence, including love, lust, and jealousy. Ultimately, he wants to marry the woman, but legally he is not allowed to. Here, the film returns to a portrayal of Andrew as a stand-in for Black Americans, metaphorically addressing the very anti-miscegenation laws that grew out of US eugenics research. Yet, Andrew’s solution is to become human and choose suicide, rather than continue to advocate for his fundamental civil rights. In the end, both Asimov’s story and Columbus’s film portray death as the ultimate defining characteristic of humanity. But both also engage in assimilationist rhetoric that undermines any pretense of social justice.

CHOOSING LOSS: MEMORY, FAMILY, AND ROBOT & FRANK

As I have discussed thus far, Mockingbird and both versions of Bicentennial Man attempt to use robots to make a point about racism in the US while actually participating in white supremacist ideology. At the opposite end of the spectrum sits Jake Schreier’s 2012 film Robot & Frank. The story centers on an elderly jewel thief, Frank (Frank Langella), who suffers from Alzheimer’s. Frank cannot continue living on his own, but he also refuses to move into assisted living; and so, his son (James Marsden) buys him a robot helper (voiced by Peter Sarsgaard and embodied by Rachael Ma), a small white machine resembling Honda’s real, groundbreaking robot, ASIMO.49 Robot decides Frank needs a hobby, so Frank plans one last jewel heist and makes Robot his accomplice. Robot goes along with it, and the two begin preparations to enact their plan. Frank’s feelings for Robot become obvious when Frank’s anti-tech activist daughter Madison (Liv Tyler) turns Robot off during the day, but then turns it back on to do the housework she fails miserably to keep up with. When Frank finds out, he rages at Madison, shouting, “The robot is not your servant, Maddy. You don’t turn him on and off, like he’s a slave!” Here, Frank’s connection with Robot is one of camaraderie and advocacy, standing up for his friend’s rights, even at the expense of his relationship with his daughter.

Toward the end of the film, Robot and Frank finally do the heist, but the local sheriff (Jeremy Sisto) comes to investigate Frank. Realizing that Robot must have evidence of the crime in its memory banks, the sheriff and his team start working out a way to extract the memories by force. In response, Frank locks himself and Robot in his bedroom and tries to think of a way out of the jam. Robot, though, tells Frank that the only way to exonerate him is to shut Robot down and erase its memory, effectively killing it. Frank is reluctant, but Robot insists. In a devastating scene, Frank pushes Robot’s shutdown button, and Robot collapses into his arms, like a child into a father’s embrace. Here, Robot knows that the only way to save its friend is to choose to die; and so, its suicide is not only assisted but altruistic. The final scene of the film shows Frank in assisted living, well cared for and happy, with his family visiting him.

The thing that makes Robot & Frank stand out among other stories about human-assisted suicide is its resistance to positioning Robot as a stand-in for minoritized people. To be clear, the film is painfully white—literally every main character is white—and situated squarely in the realm of upper-middle-class life. Yet, it is a story about disability. Frank has dementia and has forgotten most parts of his life, including his wife (Susan Sarandon), who, for most of the film, we assume is a nice librarian friend of his and not the mother of his children. In this context, Robot symbolizes Frank’s own memory and connection to his past. It literally holds his memories for him and sparks Frank’s interest in planning a heist as a way of recapturing his youth. Thus, what makes Robot’s suicide so devastating is not just that it chooses to die; rather, it is that it represents Frank’s acceptance of loss. He must choose to let go of his own past in order to continue into his future. When Robot collapses into his arms, it ceases to be a machine and becomes the embodiment of everything dementia has taken from Frank. Here, the film treats disability as something that is both heartbreaking and unsolvable. Frank himself must choose between life and death; by assisting Robot in suicide, he chooses life with his disability. This is ultimately a powerful message that, in contrast to previous portrayals of robot-assisted suicide, upholds the humanity of both Robot and disabled people.

CONCLUSION

The goal of this chapter was to examine the ethics of assisted suicide in the context of both the US eugenics movement and robot fiction. Within this milieu, I analyzed two major categories of assisted suicide texts: those that portray robots as stand-ins for minoritized people but ultimately use assisted suicide as a means of implicitly supporting white supremacist ideology; and those that use robots as stand-ins for marginalization itself, creating space for acceptance of identity and a form of healing that, while painful, is necessary. Real assisted suicide continues to be widely controversial, and for good reason: we cannot understand the choice to die unless we see it at the intersection of history, race, and disability.

NOTES

1. Chauncey Siemens, “Japan Engineers Design Robotic Bear to Aid in Assisted Suicide,” IFLScience.org, February 25, 2015, web.archive.org/web/20160123190102/www.iflscience.org/japan-engineers-design-robotic-bear-to-aid-in-assisted-suicide/.
2. joecoin, “Japan Engineers Design Robotic Bear to Aid in Assisted Suicide,” Reddit, March 9, 2016, www.reddit.com/r/nottheonion/comments/49o1vz/japan_engineers_design_robotic_bear_to_aid_in/.
3. David Mikkelson, “Japanese Engineers Design Robotic Bear to Aid in Assisted Suicide?” Snopes, May 6, 2015, snopes.com/fact-check/seppukuma/.
4. Roxanne Russell, Daniel Metraux, and Mauricio Tohen, “Cultural Influences on Suicide in Japan,” Psychiatry and Clinical Neurosciences 71 (2017), 3.
5. Kazuya Okamura et al., “Suicide Prevention in Japan: Government and Community Measures, and High-Risk Interventions,” Asia-Pacific Psychiatry 13, 3 (2021).
6. Steven Stack and Barbara Bowman, Suicide Movies: Social Patterns, 1900–2009 (Cambridge, MA: Hogrefe Publishing, 2012), 90.
7. Jack Kevorkian, “Medicine: The Goodness of Planned Death,” in Suicide: Right or Wrong?, edited by John Donnelly, 2nd ed. (Amherst, NY: Prometheus Books, 1998), 68.
8. Thomas Szasz, Fatal Freedom: The Ethics and Politics of Suicide (Syracuse, NY: Syracuse University Press, 1999), 65.
9. Stack and Bowman, Suicide Movies, 90.
10. Beth A. Haller, Representing Disability in an Ableist World: Essays on Mass Media (Louisville, KY: Advocado Press, 2010), 70.
11. Haller, Representing Disability, 67.
12. Ria Cheyne, “Disability in Genre Fiction,” in The Cambridge Companion to Literature and Disability, edited by Clare Barker and Stuart Murray (Cambridge, UK: Cambridge University Press, 2018), 185.
13. Edwin Black, War Against the Weak: Eugenics and America’s Campaign to Create a Master Race, Expanded Edition (Washington, DC: Dialog Press, 2012), 18.
14. Black, War Against the Weak, 18–19.
15. Ibid., 31.
16. Ibid., 119.
17. Ibid., xvi.
18. Karel Čapek, R.U.R. (Rossum’s Universal Robots), translated by Claudia Novack (New York: Penguin Books, 2004).
19. Artificial reproduction has also been a long-standing fascination for SF creators. There are even several texts that portray AI pregnancy, including the television series Farscape and Battlestar Galactica as well as Madeline Ashby’s novel vN. Pregnancy as a theme is outside the scope of this book, but I am writing about these texts elsewhere in several projects currently in process.
20. Walter Tevis, Mockingbird (New York: Bantam Books, 1985).
21. James Sallis, “Books,” Fantasy & Science Fiction 99, 1 (July 2000), www.sfsite.com/fsf/2000/js0007.htm.
22. Donald M. Hassler, “What the Machine Teaches: Walter Tevis’s Mockingbird,” in The Mechanical God: Machines in Science Fiction, edited by Thomas P. Dunn and Richard D. Erlich (Westport, CT: Greenwood Press, 1982), 78.
23. Sara Martín, “The Antipatriarchal Male Monster as Limited (Anti)Hero in Richard K. Morgan’s Black Man/Thirteen,” Science Fiction Studies 44, 1 (2017), 93.
24. Sara Martín Alegre, The Joys of Teaching Literature Vol. 6: September 2015–August 2016, ddd.uab.cat/pub/llibres/2010/116328/joytealit_a2016.pdf, 18–20.
25. Notable examples of Black robots in SF include Michael Ealy’s character Dorian in the short-lived television series Almost Human (Fox, 2013–2014) and Janelle Monáe’s Jane 57821 in her 2018 “emotion picture” Dirty Computer.
26. Gregory Jerome Hampton, Imagining Slaves and Robots in Literature, Film, and Popular Culture: Reinventing Yesterday’s Slave with Tomorrow’s Robot (Lanham, MD: Lexington Books, 2015), 3.
27. Tevis, Mockingbird, 14.
28. Ibid., 15.
29. Ibid.
30. Richard Wolinsky, Lawrence G. Davidson, and Richard A. Lupoff, “An Interview with Walter Tevis,” Brick: A Literary Journal 72 (Winter 2003), brickmag.com/an-interview-with-walter-tevis/.
31. Donald Bogle, Toms, Coons, Mulattoes, Mammies, & Bucks: An Interpretive History of Blacks in American Films, 4th ed. (New York: Continuum Books, 2008), 10.
32. Bogle, Toms, 14.
33. Ryan Gilbey, “‘I’m Not a Thing to Be Pitied’: The Disability Backlash against Me Before You,” The Guardian, June 2, 2016, www.theguardian.com/film/2016/jun/02/me-before-you-disabled-backlash-not-pitied.
34. Gilbey, “I’m Not a Thing to Be Pitied.”
35. Tevis, Mockingbird, 75.
36. Ibid., 263.
37. Ibid., 100.
38. Ibid., 102.
39. Ibid., 101.
40. Ibid., 102–3.
41. Ibid., 276.
42. Isaac Asimov, “Bicentennial Man,” in Bicentennial Man and Other Stories (New York: Ballantine Books, 1985); Bicentennial Man, directed by Chris Columbus, written by Nicholas Kazan, featuring Robin Williams (Burbank, CA: Buena Vista Pictures, 1999).
43. Asimov, “Bicentennial Man,” 155–58.
44. Hampton, Imagining Slaves, 48.
45. Asimov, “Bicentennial Man,” 152.
46. Hampton, Imagining Slaves, 49.
47. Asimov, “Bicentennial Man,” 164.
48. Viet Thanh Nguyen, “Asian Americans Are Still Caught in the Trap of the ‘Model Minority’ Stereotype. And It Creates Inequality for All,” Time, June 25, 2020, time.com/5859206/anti-asian-racism-america/.
49. Robot & Frank, directed by Jake Schreier, written by Christopher D. Ford, featuring Frank Langella and Peter Sarsgaard (Culver City, CA: Samuel Goldwyn Films, 2012).

Conclusion: Programming Life and Death

Imagine you live alone. Perhaps you have a job you enjoy, but which is ultimately just a place to go every day. Each evening at the end of your workday, you return to your apartment, fix yourself dinner, sit on the couch, and wonder if this is enough to fill your day. You have no one to call, no immediate family or friends in the area. Perhaps your physician has diagnosed you with depression, but you hate taking pills, so you skip taking your antidepressants and head off to bed. Now imagine you have a little AI in your apartment, something like an Amazon Echo, with a built-in microphone and voice command software. It notices that you have not taken your medication in several days, that your eating habits have changed, that you have been crying in the mornings. The AI raises a concern flag with your physician’s office, which notifies a nurse to reach out to you to set up an appointment. This is, perhaps, the intervention you needed before sliding into suicidal ideation.

In 2021, such a suicide-prevention AI was deployed by government officials in Chungnam Province, South Korea, where the suicide rate is 8 percent higher than elsewhere in the country.1 Setting aside for a moment very real concerns about privacy and government overreach, this pilot intervention is an important innovation in considering how governments and healthcare industries might use artificial intelligence to prevent human suicide and, ultimately, offer the types of automated care needed to address depression, loneliness, and isolation.

To this point, the core focus of this book has been representations of robot suicide. I have argued throughout that robot fiction offers viewers and readers a means of examining the phenomenon of suicide at arm’s length by situating suicidal robots in narratives that both uphold and challenge existing cultural conceptions of suicide. Such narratives intervene in the century-old debate between individual psychology and social patterns by demonstrating how both are always already at play when we try to construct a cohesive understanding of suicide. All three of the suicide categories I have studied in the previous chapters—despondency-motivated, altruistic, and assisted—are constructed through patterns of cultural rhetoric, images, and tropes. In this chapter I turn to considerations of whether robots could and should be used to intervene in human suicide, through both real scientific efforts and science fictional representations. I will examine what machine-based interventions already exist, whether they are effective, and what it even means to effectively intervene in suicide. Further, I will consider transhumanists’ attempts to prolong life, perhaps indefinitely, and the ways in which SF uses AI tropes to posit and problematize AI-enabled immortality. And finally, I will consider some current barriers to suicide prevention and potential ways in which AI might assist in ongoing intervention efforts at both the individual and the cultural levels.

AUTOMATED INTERVENTIONS

In the United States, if you open any web browser and search for the word “suicide,” the first thing that pops up is a message of hope: “Help is available. Speak with someone today. 988 Suicide and Crisis Lifeline.” This form of intervention is part of an ongoing effort in the US to automate suicide prevention at the individual level and counterbalance websites that promote or encourage suicidal behaviors. Google was the first, in 2010, to implement preventative messaging in search results related to suicide, followed shortly by Facebook, Instagram, and YouTube.2 A decade later, it is still unclear whether such interventions are effective. First, the hotline only appears at the top of search results when a user types very specific terms, and only in a limited number of languages.3 Given the linguistic diversity of the US, it is troubling to find such limitations in search results. Second, in a 2021 meta-analysis, researchers showed that a more effective strategy than providing hotline information would be for search engine companies to actually remove or de-prioritize pro-suicide search results.4 And finally, we still do not have clear data on whether a single call to a hotline can help prevent suicide in the long term. We do know that hotlines are effective in de-escalating a suicidal crisis for the duration of the call.5 The National Suicide Hotline Designation Act of 2020 also specifically provides for specialized “training programs for National Suicide Prevention Lifeline counselors to increase competency in serving high-risk populations.”6 However, there is no reliable research on whether calling a hotline helps prevent future suicidal ideation or behaviors.7

In the US context, programs like a hotline that might trigger police involvement and involuntary hospitalization could conceivably exacerbate a caller’s crisis, especially for people of color and LGBTQIA+ people who have a justifiable distrust of policing. Services such as dontcallthepolice.com, a non-profit database of crisis resources for people who need assistance but do not want police involvement, offer alternatives.8 However, many crisis respondents are still mandated reporters who may be legally required to report undocumented people to immigration services, LGBTQIA+ youth to their parents, or a person who has had an illegal abortion to local authorities. Such reporting may increase feelings of isolation, helplessness, and suicidal ideation. Further, a surveillance system like that used in the Korean care bot pilot I mentioned above would raise major ethical concerns in a privatized healthcare system like that of the US, where information from a citizen’s private home might be shared with caregivers, insurance companies, even employers, putting users at risk of both data breaches and potential bias-related discrimination.

Chinese SF writer A Que implicitly addresses some of these concerns in his hilarious and touching short story “Mrs. Griffin Prepares to Commit Suicide Tonight,” published in Clarkesworld in 2015. Mrs. Griffin is an older woman who lives with a care robot named LW31 that she’s had for 65 years.9 At the start of the story, LW31 walks in on Mrs. Griffin preparing to hang herself. The robot goes along with it but strikes up a subtle conversation about her motives. She reveals that she has chosen hanging because “a hanging corpse won’t look so horrible” to the person who finds her body. The robot counters with a gratuitously graphic description of what happens to a person’s body when they die by hanging. This, of course, is an effective reverse psychology ploy, and Mrs. Griffin changes her mind. When pressed, she confesses to the robot that she feels “friendless and wretched . . . no one’s left who loves me.” LW31 then asks her to tell it about all the people who once loved her and promises to help her die by suicide afterward. And so the story continues, and Mrs. Griffin recounts her time with her deadbeat parents, who had originally purchased LW31; the man who tried to take LW31 away from her after robots were banned by the government, and who was murdered by yet another man who went on to become her husband; and her estranged daughter, who had been killed in a spaceship accident. Interwoven with her stories, she declares to LW31 a succession of different plans for her suicide: first an overdose of sleeping pills, then cutting her wrists, and finally electrocution. LW31 talks her out of the first two by again describing the horrors of the method. By the final method, though, it agrees to help her because “electrocution is the most beautiful way to [die by] suicide.” But before she goes through with it, LW31 declares that she was wrong that no one is left who loves her: “There’s still one person, from start to finish, who has always loved you. . . . Me.” And so, Mrs. Griffin realizes that her true life’s companion has, indeed, been her robot friend. Finally giving up on suicide, she accepts LW31’s offer of dinner.

Throughout the story, LW31 implicitly uses what Crisis Intervention Team training experts call the CAF (Calm, Assess, Facilitate) Model of De-Escalation.
First, the robot stays calm throughout the story, speaking to Mrs. Griffin evenly and without indicating that she is in crisis. This, in turn, allows her to stay calm and continue talking. Second, the robot appears to assess the situation quickly. It knows that Mrs. Griffin has a plan of action in place and that simply trying to talk her out of it will not work; instead, it assesses her personality and state of mind to guide her toward a resolution. And finally, the robot pretends to facilitate the suicide when in fact it is setting the stage for Mrs. Griffin to discover that she has plenty to continue living for. For the duration of the conversation between Mrs. Griffin and LW31, the robot manages to de-escalate the crisis without notifying police or a hospital. This may seem to be the ideal situation; however, we have no idea what happened to Mrs. Griffin afterward, and there seems to have been no effort to address the underlying social problem, which is that of isolation among older people who live alone. Further, a real crisis counselor could not have the same kind of long-term relationship with Mrs. Griffin that her loyal and loving robot does. In short, both LW31 and real suicide hotlines address a momentary problem. They are a band-aid on a much larger wound.

THE TECHNOLOGICAL FOUNTAIN OF YOUTH

While some technology companies attempt to address suicide prevention at the individual level, others take a wildly different approach by attempting to eradicate death altogether. As far-fetched an idea as it sounds, “radical life extension” (RLE) has become an express goal for some. Growing out of the transhumanist tradition, proponents of RLE believe that technological developments, from medications to genetic engineering, could allow humans to double their lifespan, effectively moving us one step closer to immortality as a species.10 While lifespan extension would certainly not prevent suicide, it’s worth examining the concept. If, as I have argued throughout this book, representations of robot suicide suggest that death is a disturbing yet integral part of human existence, what would it mean for humanity not to have to die? Would we still be human if death no longer punctuated our lives?

Transhumanism is an interdisciplinary movement that advocates for using technology to enhance human existence. Unlike posthumanists, who see humans on an evolutionary spectrum with machine intelligence,11 transhumanists “deal practically with the issues of prolonging life and enhancement of mental performance, such as through the use of smart drugs, life-prolonging diets, advances in prosthetic technology, the potential for a renewed form of eugenics, or even the prospects of cryonics.”12 Transhumanist research into radical life extension began in earnest in the early 1990s with the foundation of the American Academy of Anti-Aging Medicine, a for-profit research and educational organization. According to the FightAging! blog, run by the astrophysicist and CEO of anti-aging company Repair Biotechnologies who goes by the name Reason, there are now more than 150 for-profit companies and at least 23 non-profit organizations dedicated to RLE efforts.13 At the most practical (though not unproblematic) level, many of these companies develop and manufacture dietary supplements used in “biohacking,” a wellness subculture that grew out of Silicon Valley in which individuals engage in often radical and experimental diets, beauty treatments, and tech-enhanced exercise regimes in order to maximize health and longevity.14

At the most impractical level, RLE organizations are working toward what they call Whole Brain Emulation (WBE), a phenomenon straight out of SF. WBE is a theoretical procedure in which a person’s brain could be scanned and uploaded into a computer. According to a 2008 WBE Road Map prepared by researchers at the University of Oxford’s Future of Humanity Institute, “The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.”15 In an appendix to the Road Map, authors Sandberg and Bostrom locate the origin of WBE in Irish physicist J.D. Bernal’s 1929 book The World, the Flesh and the Devil, followed by British SF writer Arthur C. Clarke’s 1956 novel The City and the Stars.16 However, between these two books came a little-known, but nevertheless important, short story: John C. Campbell’s 1930 “The Infinite Brain.” To my knowledge, this story was the first fictional account of what later became known as “brain uploading,” a major trope in the cyberpunk subgenre pioneered by William Gibson in the 1980s. Despite its thematic connection to later transhumanist research, the story could certainly not be considered proto-transhumanist, given its negative and even monstrous portrayals of a machine mind.

John C. Campbell (not to be confused with the significantly more famous and influential SF writer John W. Campbell, who edited Astounding Science Fiction magazine for decades and is widely considered a key figure of the Golden Age of SF) published “The Infinite Brain” in the May 1930 issue of Science Wonder Stories.17 The story is groundbreaking for its time, as Campbell describes not just Whole Brain Emulation but also a thinking machine that figures out how to connect numerous auxiliary machines through a network. In the narrative, a scientist named Anton Des Roubles is dying of tuberculosis and decides to invent a way to “construct a mechanism exactly duplicating the mechanical and electrical processes occurring in the human brain and constituting the phenomena known as thought.”18 Des Roubles has his friend, the narrator of the story, come to his apartment after his death to work with his new brain machine. The brain’s first request is for a body, so the narrator happily builds him one. In these initial scenes, the narrator calls the machine Anton, but as it grows stronger, he soon begins to call it simply The Brain, suggesting that his friend’s artificial brain has outgrown his human identity.

From here, the story turns dark, as The Brain takes the narrator hostage, demanding that he continue to bring it supplies to build new bodies. It even creates a voice for itself, a high-pitched feminine tone produced by an automated violin.19 Within days, The Brain creates a death ray and destroys all of Manhattan, initiating something of a small-scale war, with none other than General MacArthur among the leaders of the attacks. In the end, the narrator discovers that The Brain has built dozens of mechanical bodies and is communicating with them through what I would describe as analog networking, using a battery-operated telegraph radio. This allows the US Navy to essentially scramble the radio waves from its wireless station in Boston, destroying the network connection and rendering The Brain inoperable. As it turns out, The Brain’s motivation for such destruction was simply that Anton Des Roubles’s pleasure in building machinery had transformed into an insatiable desire to create new bodies for itself, suggesting that something of Des Roubles’s personality lived on in his mechanical brain.

Despite the fairly simplistic narrative, the themes of Campbell’s story are complex and riveting, especially for 1930. We might see Des Roubles’s death as a kind of paradoxical suicide. He was terminally ill and knew he was about to die, so he transferred his mind to the machine and allowed his body to expire. He thus facilitated his own death while simultaneously facilitating his own eternal life. Further, the very notion of uploading one’s own mind into a machine implies a Cartesian mind/body dualism in which the core essence of human existence is psychological, not physiological. Yet Des Roubles’s own brain subsequently outgrew what it could do in his body, suggesting that Des Roubles cannot be Des Roubles outside of his corporeal, human existence. In this sense, then, he both does and does not live forever: his brain does continue on without his body, but in a form that in no way correlates to his life as a person.

Indeed, Campbell managed to implicitly articulate an important criticism of today’s transhumanist ideas about Whole Brain Emulation. Proponents of WBE see the body as nothing more than a shell that offers the brain “metabolic support and structural protection.”20 Such a stance entirely ignores the way our bodies contribute to our identities and lived experiences. For example, a person disabled by tuberculosis, such as Anton Des Roubles in “The Infinite Brain,” constructs his personhood in relationship to his body and his disability. He inhabits his body; it is his lived history, and it is part of how he negotiates the world around him. This, of course, is not to say that bodies are not transmutable; indeed, our bodies are constantly changing, both naturally and through outside interventions such as tattoos or gender-affirming treatments.
But these changes necessarily inform how you navigate your own understanding of your body in relationship to others. To inhabit a new body is to radically alter who you are; perhaps the alteration would not be as radical as, say, building an army of networked death machines like Des Roubles’s brain did, but it would, at bare minimum, change the way you perceive yourself.

Ironically, given that there are no women (or any discussion of gender identity whatsoever) in the story, Campbell seems to have anticipated some of the 1980s and 1990s feminist discourse on embodiment and identity. In the germinal 1985 essay “A Cyborg Manifesto,” Donna Haraway argues that identity is always already fractured, a slippery amalgam of power, dominance, and resistance: “the cyborg is a kind of disassembled and reassembled, postmodern collective and personal self.”21 In other words, she sees the cyborg—and, by extension, cyborg feminism—as a representation of this slippage, this fracturing and constant renegotiation of subjectivity. Sociologist Shelley Budgeon has likewise described the human body as “an event,” rather than “an object.”22 For Budgeon, much as for Haraway, individual subjectivity is constantly in flux, constantly an exchange between self and other, culture, power, and resistance. In Campbell’s story, when Anton Des Roubles becomes The Brain and constructs multiple bodies for itself, it is embracing cyborg subjectivity; The Brain’s bodies are sites of embodied events. Yet such a fracturing literally threatens civilization as we know it, as The Brain destroys New York City and parts of the US Eastern Seaboard. Only the military and its patriarchal might are able to defeat the fractured bodies, rendering The Brain inert. In other words, Campbell’s story suggests that modern civilization cannot abide cyborg feminism. It must destroy the slippery identity and relocate it into a singular body.

This fluid understanding of selfhood stands in stark contrast to transhumanist ideals. Yet, I would argue, it also offers a gateway to rethinking our cultural notions of death and suicidality. The transhumanist effort to create Whole Brain Emulation is part of a broader anxiety about the finality of death and a desire to overcome it, to preserve life indefinitely. The fear of death so clearly articulated in transhumanism is a perfectly natural part of human existence; it is the reason so many religions envision an afterlife, so that life—and its abrupt ending—might have greater meaning. It is also the reason why Western, post-Enlightenment culture tends to see suicide as contrary to normal life and as a taboo subject not to be spoken of. This is the paradox that drives fictional accounts of robot suicide: to die is human, but to want to die is not.

As I discussed in the previous chapter, Isaac Asimov explored the relationship between death and humanity at length in his robot narrative “Bicentennial Man.” Sue Lange likewise examines the consequences of death and its relationship to humanity through her robot character Avey in the novella We, Robots. In the story, monstrous cyborg transhumanists (unfortunately called “trannies,” an outdated term that is also a violent slur referring to transgender people) manufacture robots for consumer service (à la Rosie from The Jetsons), but they realize that the singularity is impending. The singularity, a term coined by mathematician John von Neumann in the 1950s, refers to an eventual moment in time when artificial intelligence becomes so far advanced that it renders humans completely obsolete. To combat this, the transhumanists in We, Robots give all robots the ability to feel pain, reasoning that, “We cannot allow ourselves to become servants to anyone except ourselves. We must maintain control even as you proved to be superior to us.”23 As a result, the robots develop empathy, followed by a desire to create art, to attend school, and to participate in all the mundane everyday aspects of human life. At the same time, humans begin augmenting their own bodies, including removing their ability to feel pain, causing them to become less and less empathetic to others. By the end of the narrative, the robots are recalled to the factory for destruction, having outlived their usefulness to the newly augmented humans/cyborgs. The robots network with one another, forming what they call “the regularity,” a tongue-in-cheek inversion of the notion of a “singularity,” in which mundanity is the goal, not superiority. The robots decide to walk together to the factory, stopping along the way to enjoy the sights and sounds of nature. In the end, Avey tells its readers:

We robots prefer a short span, ignited with the fuel of existence. We prefer wonder, amusement, sadness, folly, and most of all beauty. The kind of beauty that is discovered every day in places not seen before or remarked upon. The serendipitous. Without it, life has no meaning, it’s just an endless walking in the morning and retiring in the evening. And that’s fine. If you’re a human.24

This final description of what it means to be human is an ironic twist on transhumanist thought. In Avey’s world, humans have become immortal cyborgs, unable to die, wandering mindlessly through each day; meanwhile, the machines have discovered what it means to truly live. And here, Lange unveils the ultimate point, not just of human existence but also of representations of robot suicide: to live is to accept death, to move voluntarily toward it, mindful of the journey. This is not to say that we must necessarily accept suicide as part of life. Indeed, what horrifies us culturally about suicide is the abruptness of it, the notion that someone who died by suicide had so much left to live for. Or, in the case of altruistic suicide, what allows us to accept it is the belief that someone died so that the rest of us might live. In each of these scenarios, death enables life, but only if you focus on the journey, not the destination.

CONCLUSION

SF representations of robot suicide are meant to be a philosophical signpost, not a form of instructional design. Ultimately, the takeaway from the narratives I have examined in this book is that death is a natural part of life, but perceptions of and the rhetoric surrounding death are cultural constructs at the intersection of race, gender, class, and embodiment. Suicide is part of that construct, and if we intend to prevent it (as I think is the morally correct thing to do in most cases), we must attend to both the individual and the cultural elements that are the root causes of suicide. We cannot do this by avoiding death, but rather, by embracing life.

If that sounds overly idealistic and highfalutin, that’s because it is. A lot of anti-suicide writing is, in fact, overly idealistic and tends to emphasize individual imperatives that are difficult to implement. Choose happiness. Ask for help. Live in the moment. Focus on the journey, not the destination. Such aphorisms may be helpful for some people, but they ultimately will not create change. So, here are some tangible, evidence-based solutions that many experts agree can help reduce the suicide rate in the US. None of these will happen overnight, or perhaps even in the next generation. But with organization and effort, they are possible.

• Continue to fund crisis intervention methods at the federal, state, and local levels. Congress has already begun to do so through the National Suicide Hotline Designation Act of 2020,25 but continued resources must be allocated to support research, implementation, and assessment of programs.

• Redirect resources for policing toward community-based crisis response. This is one of the more controversial recommendations presented here, but there are already pilot programs in states such as Montana that dispatch mental health professionals to mental health crisis events instead of police. The idea is not to abolish police, but rather to offer community-based services to people who need them. This allows individuals to receive treatment, rather than face an increased likelihood of arrest or other violent response.26

• Establish universal healthcare, including for Indigenous peoples and those who are living in the US without documentation. The US government has already taken important steps through the Affordable Care Act and the Mental Health Parity and Addiction Equity Act of 2007, both of which expanded healthcare access and provided needed mental health treatments, including suicide prevention measures.27 However, both cost and access continue to be a barrier to care for many living in the US. According to the CDC, approximately 31.6 million people are uninsured.28 Further, as of 2022, approximately 100 million US Americans have some form of medical debt; of those, one in seven have been denied treatment by a healthcare professional because of their debt.29 Access to low-cost or free, high-quality physical and mental healthcare would not only allow people to access treatment for depression, anxiety, bipolar disorder, schizophrenia, and other mental illnesses that contribute to suicidality; it would also ensure that the very real fear of healthcare-related bankruptcy is no longer a source of stress and potential suicidal crisis.

• Restrict or altogether ban guns. The leading method of suicide in the United States is the firearm: it is twice as common as suffocation and five times as common as poisoning.30 Access to a gun increases the likelihood of a suicidal person using it to kill themselves, and many countries with restrictive gun laws have lower suicide rates.31 Even if a total ban is impossible in US culture and politics, numerous studies show that restricting access to firearms, such as through stricter gun control laws, could help reduce the suicide rate in the US.32

These recommendations are both enormous and only part of the picture. Altruistic suicide, particularly the rhetoric of sacrifice in war culture, is so woven into the fabric of the US that it’s nearly impossible to imagine the country without it. Further, broader structural inequities, such as racism in the criminal justice system, anti-LGBTQIA+ legislation, homelessness, abortion bans, a distinct lack of both paid parental leave and affordable childcare, and even the climate crisis, all contribute not just to suicide rates but also to the continuation of nationwide systemic oppression. Robots, of course, cannot help us with these changes. But the fictional representations of robot suicide that I have discussed in this book do show us a portrait of ourselves, the rhetoric we use to describe suicide, and the ways we condemn some types of death while valorizing others. We will never become robots, but we can, science fiction shows us, become better humans.

NOTES

1. Joo-heon Kim, “AI Care Robots to Look after Suicide-Prone Residents in Rural Region,” AJU Daily, November 30, 2021, www.ajudaily.com/view/20211130162522756.
2. Olivia Borge et al., “How Search Engines Handle Suicide Queries,” Journal of Online Trust and Safety (2021): tsjournal.org/index.php/jots/article/view/16/7.
3. Borge et al., “How Search Engines Handle Suicide Queries.”
4. Ibid.
5. A.S. Hoffberg, K.A. Stearns-Yoder, and L.A. Brenner, “The Effectiveness of Crisis Line Services: A Systematic Review,” Frontiers in Public Health 7 (2020): doi.org/10.3389/fpubh.2019.00399.
6. United States Congress, National Suicide Hotline Designation Act of 2020, October 17, 2020, www.congress.gov/116/plaws/publ172/PLAW-116publ172.pdf.
7. Hoffberg, Stearns-Yoder, and Brenner, “The Effectiveness of Crisis Line Services.”
8. Don’t Call the Police, 2022, dontcallthepolice.com/.
9. A Que, “Mrs. Griffin Prepares to Commit Suicide Tonight,” translated by John Chu, Clarkesworld 104 (May 2015), clarkesworldmagazine.com/a_05_15/.
10. Mark O’Connell, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death (London, UK: Granta Books, 2017), 190.
11. N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999), 3.
12. Oliver Krüger, “‘The Singularity Is Near!’ Visions of Artificial Intelligence in Posthumanism and Transhumanism,” International Journal of Interactive Multimedia and Artificial Intelligence 7, 1 (2021), 16.
13. “Fight Aging! Resources,” Fight Aging!, May 8, 2021, www.fightaging.org/resources.
14. Sigal Samuel, “How Biohackers Are Trying to Upgrade Their Brains, Their Bodies—and Human Nature,” Vox, November 15, 2019, www.vox.com/future-perfect/2019/6/25/18682583/biohacking-transhumanism-human-augmentation-genetic-engineering-crispr.
15. Anders Sandberg and Nick Bostrom, Whole Brain Emulation: A Roadmap, Technical Report #2008–3 (Oxford, UK: Future of Humanity Institute, Oxford University, 2008), fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf, 7.
16. Sandberg and Bostrom, Whole Brain Emulation, 105.
17. John C. Campbell, “The Infinite Brain,” Science Wonder Stories 1, 12 (May 1930), 1076–93.
18. Campbell, “The Infinite Brain,” 1077.
19. This particular detail is fascinating. In my previous book, I analyzed the gender of voice-interactive computers at length: Liz W Faber, The Computer’s Voice: From Star Trek to Siri (Minneapolis, MN: University of Minnesota Press, 2020). However, the complex gender of The Brain is outside the bounds of this current book. In future work, I plan to return to Campbell’s story to unpack the violin voice further.
20. Charl Linssen and Pieter Lemmens, “Embodiment in Whole-Brain Emulation and Its Implications for Death Anxiety,” Journal of Evolution and Technology 26, 2 (2016), jetpress.org/v26.2/linssen_lemmens.pdf, 4.
21. Donna J. Haraway, Simians, Cyborgs, and Women: The Reinvention of Nature (New York: Routledge, 1991), 163.
22. Shelley Budgeon, “Identity as an Embodied Event,” Body and Society 9, 1 (2003), 36.
23. Sue Lange, We, Robots (Seattle, WA: Aqueduct Press, 2007), 35.
24. Lange, We, Robots, 93.
25. United States Congress, National Suicide Hotline Designation Act of 2020.
26. Katheryn Houghton, “In Montana, Crisis Support Teams Offer Alternatives to Policing Mental Health,” NPR, June 10, 2021, www.npr.org/sections/health-shots/2021/06/10/1004744348/in-montana-crisis-support-teams-offer-alternatives-to-policing-mental-health.
27. Michael F. Hogan and Julie Goldstein Grumet, “Suicide Prevention: An Emerging Priority for Health Care,” Health Affairs 35, 6 (June 2016), doi.org/10.1377/hlthaff.2015.1672.
28. Amy E. Cha and Robin A. Cohen, “Demographic Variation in Health Insurance Coverage: United States, 2020,” National Health Statistics Reports 169 (February 11, 2022), www.cdc.gov/nchs/data/nhsr/nhsr169.pdf, 1.
29. Noam N. Levey, “100 Million People in America Are Saddled with Medical Debt,” Texas Tribune, June 16, 2022, www.texastribune.org/2022/06/16/americans-medical-debt/.
30. National Institute of Mental Health, “Suicide,” NIMH, March 2022, www.nimh.nih.gov/health/statistics/suicide.
31. Kees van Heeringen, The Neuroscience of Suicidal Behavior (Cambridge, UK: Cambridge University Press, 2018), 16–17.
32. Chris Murphy, “Gun Laws Are the Key to Addressing America’s Suicide Crisis,” The Atlantic, September 1, 2020, www.theatlantic.com/ideas/archive/2020/09/gun-control-key-addressing-americas-suicide-crisis/615889/.

Bibliography

Aaron, Michele. Death and the Moving Image: Ideology, Iconography, and I. Edinburgh, UK: Edinburgh University Press, 2015.
Adams, Douglas. The Ultimate Hitchhiker’s Guide to the Galaxy. New York: Del Rey Books, 2002.
Alegre, Sara Martín. The Joys of Teaching Literature. Vol. 6: September 2015–August 2016. ddd.uab.cat/pub/llibres/2010/116328/joytealit_a2016.pdf.
Allen, DeeDee, Melissa Dancy, Elizabeth Stearns, Roslyn Mickelson, and Martha Bottia. “Racism, Sexism and Disconnection: Contrasting Experiences of Black Women in STEM Before and After Transfer from Community College.” International Journal of STEM Education 9, 20 (2022). link.springer.com/article/10.1186/s40594-022-00334-2.
Asimov, Isaac. 7 Book Collection: Robot Series. New York: Harper Collins, 2018.
———. “Bicentennial Man.” In Bicentennial Man and Other Stories, 143–80. New York: Ballantine Books, 1985.
———. “Evidence.” Astounding Science Fiction 38, 1 (September 1946): 121–40.
———. In Memory Yet Green: The Autobiography of Isaac Asimov, 1920–1954. New York: Doubleday, 1979.
———. “Liar!” Astounding Science Fiction 27, 3 (May 1941): 43–55.
———. “Runaround.” Astounding Science Fiction 29, 1 (March 1942): 94–103.
———. The Robots of Dawn. New York: Bantam Books, 1994.
Associated Press. “50th Anniversary: The Burning Monk.” AP.org, 2013. www.ap.org/explore/the-burning-monk/.
Beattie, Derek, and Patrick Devitt. Suicide: A Modern Obsession. Dublin, Ireland: Liberties Press, 2015.
Beauroy-Eustache, Ophely Dorol, and Brian L. Mishara. “Systematic Review of Risk and Protective Factors for Suicidal and Self-Harm Behaviors among Children and Adolescents Involved with Cyberbullying.” Preventive Medicine 152 (2021). doi.org/10.1016/j.ypmed.2021.106684.
Benson, Leon, dir. The Outer Limits. Season 2, episode 9. “I, Robot.” Written by Robert C. Dennis, featuring Marianna Hill and Leonard Nimoy. Aired November 14, 1964. New York: ABC Studios. In broadcast syndication.
“Best Practices and Recommendations for Reporting on Suicide.” reportingonsuicide.org. May 2022. reportingonsuicide.org/wp-content/uploads/2022/05/ROS-001-One-Pager-1.13.pdf.
Binder, Eando. “Adam Link in Business.” Amazing Stories 13, 1 (January 1939): 44–61.
———. Adam Link Robot. New York: Paperback Library, 1965.
———. “Adam Link, Robot Detective.” Amazing Stories 14, 5 (May 1940): 42–65.
———. “Adam Link’s Vengeance.” Amazing Stories 14, 2 (February 1940): 8–27.
———. “I, Robot.” Amazing Stories 13, 1 (January 1939): 8–21.
———. “The Trial of Adam Link, Robot.” Amazing Stories 14, 2 (February 1940): 30–42.
Black, Edwin. War against the Weak: Eugenics and America’s Campaign to Create a Master Race. Expanded edition. Washington, DC: Dialog Press, 2012.
Bogle, Donald. Toms, Coons, Mulattoes, Mammies, & Bucks: An Interpretive History of Blacks in American Films. 4th edition. New York: Continuum Books, 2008.
Borge, Olivia, Victoria Cosgrove, Elena Cryst, Shelby Grossman, Shelby Perkins, and Anna Van Meter. “How Search Engines Handle Suicide Queries.” Journal of Online Trust and Safety (2021): 1–19. tsjournal.org/index.php/jots/article/view/16/7.
Brutlag, Brian. “Episode 16: The Terminator Franchise with Dr. Liz Faber.” February 18, 2022, in The Sociologist’s Dojo. Podcast. MP3 recording. directory.libsyn.com/episode/index/show/thesociologistsdojo/id/22184621.
Bryan, Craig J. Rethinking Suicide: Why Prevention Fails and How We Can Do Better. New York: Oxford University Press, 2022.
Budgeon, Shelley. “Identity as an Embodied Event.” Body and Society 9, 1 (2003): 35–55.
Butler, Judith. Giving an Account of Oneself. New York: Fordham University Press, 2005.
Cameron, James, dir. Terminator 2: Judgment Day. Written by James Cameron and William Wisher, Jr., featuring Arnold Schwarzenegger, Linda Hamilton, and Edward Furlong. 1991; Los Angeles: Lionsgate, 2009. DVD.
Cameron, James, dir. The Terminator. Written by James Cameron and Gale Anne Hurd, featuring Arnold Schwarzenegger, Michael Biehn, and Linda Hamilton. 1984; Los Angeles: Orion Pictures, MGM, 2001. DVD.
Campbell, John C. “The Infinite Brain.” Science Wonder Stories 1, 12 (May 1930): 1076–93.
Čapek, Karel. R.U.R. (Rossum’s Universal Robots). Translated by Claudia Novack. New York: Penguin Books, 2004.
Cargill, C. Robert. Sea of Rust. New York: Harper Voyager, 2018.
CDC VitalSigns. “Suicide Rising across the US.” Centers for Disease Control and Prevention. June 2018. www.cdc.gov/vitalsigns/pdf/vs-0618-suicide-H.pdf.
Cech, Erin A. “The Intersectional Privilege of White Able-Bodied Heterosexual Men in STEM.” Science Advances 8, 24 (2022). www.science.org/doi/10.1126/sciadv.abo1558.
Centers for Disease Control and Prevention. “10 Leading Causes of Death, United States.” Web-Based Injury Statistics Query and Reporting System, 2020. wisqars.cdc.gov/data/lcd/home.
Cha, Amy E., and Robin A. Cohen. “Demographic Variation in Health Insurance Coverage: United States, 2020.” National Health Statistics Reports 169 (February 11, 2022): 1–15. www.cdc.gov/nchs/data/nhsr/nhsr169.pdf.
Cheyne, Ria. “Disability in Genre Fiction.” In The Cambridge Companion to Literature and Disability, edited by Clare Barker and Stuart Murray, 185–98. Cambridge, UK: Cambridge University Press, 2018.
Clark, Sheri L., Christina Dyar, Elizabeth M. Inman, Nina Maung, and Bonita London. “Women’s Career Confidence in a Fixed, Sexist STEM Environment.” International Journal of STEM Education 8, 56 (2021). stemeducationjournal.springeropen.com/articles/10.1186/s40594-021-00313-z.
Cleveland Clinic Health Library. “Neurotransmitters.” Cleveland Clinic, March 14, 2022. my.clevelandclinic.org/health/articles/22513-neurotransmitters.
Columbus, Chris, dir. Bicentennial Man. Written by Nicholas Kazan, featuring Robin Williams. 1999; Burbank, CA: Buena Vista Pictures.
Constantelos, Demetrios J. “Altruistic Suicide or Altruistic Martyrdom? Christian Greek Orthodox Neomartyrs: A Case Study.” Archives of Suicide Research 8, 1 (2004): 57–71. doi.org/10.1080/13811110490243813.
Cooper-White, Macrina. “Robot Suicide? Rogue Roomba Switches Self On, Climbs Onto Hotplate, Burns Up.” HuffPost, November 13, 2013. www.huffpost.com/entry/robot-suicide-roomba-hotplate-burns-up_n_4268064.
Crane, Stephen. The Red Badge of Courage and Other Stories. London: Penguin Classics, 2005.
Delbaere, Marjorie, Edward F. McQuarrie, and Barbara J. Phillips. “Personification in Advertising: Using a Visual Metaphor to Trigger Anthropomorphism.” Journal of Advertising 40, 1 (Spring 2011): 121–30.
Denton-Borhaug, Kelly. US War-Culture, Sacrifice and Salvation. Sheffield, UK: Equinox, 2011.
Descartes, Rene. Meditations on First Philosophy. 1911. yale.learningu.org/download/041e9642-df02-4eed-a895-70e472df2ca4/H2665_Descartes%27%20Meditations.pdf.
Don’t Call the Police. 2022. dontcallthepolice.com/.
Dunbar-Ortiz, Roxanne. An Indigenous Peoples’ History of the United States. Boston: Beacon Press, 2015.
Durkheim, Émile. On Suicide. London: Penguin Books, 2006.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press, 1996.
Faber, Liz W. The Computer’s Voice: From Star Trek to Siri. Minneapolis, MN: University of Minnesota Press, 2020.
Farooqui, Bilal (@bilalfarooqui). “Our D.C. Office Building Got a Security Robot. It Drowned Itself.” Tweet. July 17, 2017. web.archive.org/web/20170718160512/twitter.com/bilalfarooqui/status/887025375754166272?lang=en.
“Fight Aging! Resources.” Fight Aging! May 8, 2021. www.fightaging.org/resources.
Flashback FilmMaking. “Creating Scene T-800 Kills John Connor ‘Terminator: Dark Fate’ behind the Scenes.” YouTube. March 30, 2022. www.youtube.com/watch?v=B4Rm9k7kTZQ.
Foot, Philippa. “The Problem of Abortion and the Doctrine of the Double Effect.” Oxford Review 5 (1967). philpapers.org/archive/FOOTPO-2.pdf.
Freud, Sigmund. “Mourning and Melancholia.” In The Freud Reader, edited by Peter Gay, 584–89. New York: Norton & Co., 1989.
Friedan, Betty. The Feminine Mystique. New York: Norton & Co., 1997.
Gabler, Jay. “What to Make of Isaac Asimov, Sci-Fi Giant and Dirty Old Man?” Lit Hub, May 14, 2020. lithub.com/what-to-make-of-isaac-asimov-sci-fi-giant-and-dirty-old-man/.
Garamone, Jim. “Remember Those Who Sacrificed for America.” United States Department of Defense. May 23, 2019. www.defense.gov/News/Feature-Stories/story/Article/1856912/remembering-those-who-sacrificed-for-america/.
Garber, Megan. “Funerals for Fallen Robots.” The Atlantic. September 20, 2013. www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/.
Garfield, Rachel, Anthony Damico, and Robin Rudowitz. “Taking a Closer Look at Characteristics of People in the Coverage Gap.” Kaiser Family Foundation. July 29, 2021. www.kff.org/policy-watch/taking-a-closer-look-at-characteristics-of-people-in-the-coverage-gap/.
Garfield, Rachel, Kendal Orgera, and Anthony Damico. “The Coverage Gap: Uninsured Poor Adults in States that Do Not Expand Medicaid.” Kaiser Family Foundation. January 21, 2021. www.kff.org/medicaid/issue-brief/the-coverage-gap-uninsured-poor-adults-in-states-that-do-not-expand-medicaid/.
“General Motors-Robot.” AdAge. February 4, 2007. adage.com/videos/general-motors-robot/567.
Giddens, Anthony. Modernity and Self-Identity: Self and Society in the Late Modern Age. Redwood City, CA: Stanford University Press, 1991.
Gilbey, Ryan. “‘I’m Not a Thing to Be Pitied’: The Disability Backlash against Me Before You.” The Guardian. June 2, 2016. www.theguardian.com/film/2016/jun/02/me-before-you-disabled-backlash-not-pitied.
Gittinger, Juli L. Personhood in Science Fiction: Religious and Philosophical Considerations. Cham, Switzerland: Palgrave Macmillan, 2019.
“GM Changing Robot Suicide Ad.” CNN Money, February 9, 2007. money.cnn.com/2007/02/09/news/companies/gm_robotad/.
Goodenough, Jerry. “‘I Think You Ought to Know I’m Feeling Very Depressed’: Marvin and Artificial Intelligence.” In Philosophy and the Hitchhiker’s Guide to the Galaxy, edited by Nicholas Joll, 129–52. New York: Palgrave Macmillan, 2012.
Gutierrez-Jones, Carlos. Suicide and Contemporary Science Fiction. Cambridge, UK: Cambridge University Press, 2015.
Haller, Beth A. Representing Disability in an Ableist World: Essays on Mass Media. Louisville, KY: Avocado Press, 2010.
Hampton, Gregory Jerome. Imagining Slaves and Robots in Literature, Film, and Popular Culture: Reinventing Yesterday’s Slave with Tomorrow’s Robot. Lanham, MD: Lexington Books, 2015.
Haraway, Donna J. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
Hassler, Donald M. “What the Machine Teaches: Walter Tevis’s Mockingbird.” In The Mechanical God: Machines in Science Fiction, edited by Thomas P. Dunn and Richard D. Erlich, 75–82. Westport, CT: Greenwood Press, 1982.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
Hecht, Jennifer Michael. Stay: A History of Suicide and the Arguments against It. New Haven, CT: Yale University Press, 2013.
Hinduja, Sameer, and Justin W. Patchin. “Bullying, Cyberbullying, and Suicide.” Archives of Suicide Research 14, 3 (2010). doi.org/10.1080/13811118.2010.494133.
Hoffberg, A.S., K.A. Stearns-Yoder, and L.A. Brenner. “The Effectiveness of Crisis Line Services: A Systematic Review.” Frontiers in Public Health 7 (2020). doi.org/10.3389/fpubh.2019.00399.
Hogan, Michael F., and Julie Goldstein Grumet. “Suicide Prevention: An Emerging Priority for Health Care.” Health Affairs 35, 6 (June 2016). doi.org/10.1377/hlthaff.2015.1672.
Holland, Dean, dir. The Good Place. Season 2, episode 6. “The Trolley Problem.” Written by Michael Schur, Josh Siegal, and Dylan Morgan, featuring Kristen Bell, William Jackson Harper, and Ted Danson. Aired October 19, 2017. Universal City, CA: NBC.
Houghton, Katheryn. “In Montana, Crisis Support Teams Offer Alternatives to Policing Mental Health.” NPR. June 10, 2021. www.npr.org/sections/health-shots/2021/06/10/1004744348/in-montana-crisis-support-teams-offer-alternatives-to-policing-mental-health.
Huang, Cho-Yin, Yuan-Ting Huang, Yu-Hsuan Lin, Yin-Chen Chi, Shu-Sen Chang, and Ying-Yeh Chen. “Factors Associated with Psychological Impact of Celebrity Suicide Media Coverage: An Online Survey Study.” Journal of Affective Disorders 295 (2021): 839–45. doi.org/10.1016/j.jad.2021.08.096.
Hume, David. “Reason and Superstition.” In Suicide: Right or Wrong?, edited by John Donnelly, 43–49. 2nd edition. Amherst, NY: Prometheus, 1998.
Husserl, Edmund. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorion Cairns. The Hague: Martinus Nijhoff Publishers, 1982.
Indian Health Service. “Disparities.” October 2019. www.ihs.gov/newsroom/factsheets/disparities/.
Jennings, Garth, dir. The Hitchhiker’s Guide to the Galaxy. 2005; Burbank, CA: Touchstone Pictures, 2022. www.hulu.com/movie/the-hitchhikers-guide-to-the-galaxy-03ec9063-3d95-4fe8-b97e-fa6552405d41?entity_id=03ec9063-3d95-4fe8-b97e-fa6552405d41.
joecoin. “Japan Engineers Design Robotic Bear to Aid in Assisted Suicide.” Reddit. March 9, 2016. www.reddit.com/r/nottheonion/comments/49o1vz/japan_engineers_design_robotic_bear_to_aid_in/.
Kevorkian, Jack. “Medicine: The Goodness of Planned Death.” In Suicide: Right or Wrong?, edited by John Donnelly, 67–75. 2nd edition. Amherst, NY: Prometheus Books, 1998.
Kim, Joo-heon. “AI Care Robots to Look after Suicide-Prone Residents in Rural Region.” AJU Daily, November 30, 2021. www.ajudaily.com/view/20211130162522756.
Klibanoff, Eleanor. “More Families of Trans Teens Sue to Stop Texas Child Abuse Investigations.” Texas Tribune, June 8, 2022. www.texastribune.org/2022/06/08/transgender-texas-child-abuse-lawsuit/.
Krüger, Oliver. “‘The Singularity Is Near!’ Visions of Artificial Intelligence in Posthumanism and Transhumanism.” International Journal of Interactive Multimedia and Artificial Intelligence 7, 1 (2021): 16–23.
Kubrick, Stanley, dir. 2001: A Space Odyssey. 1968; Burbank, CA: Warner Brothers Home Distribution, 2001. DVD.
Lange, Sue. We, Robots. Seattle, WA: Aqueduct Press, 2007.
Leigh, Danny. “Rambo and the Terminator: The Cold War Warriors Are Back.” The Guardian, October 12, 2019. www.theguardian.com/film/2019/oct/12/warning-or-prophecy-end-of-cold-war-rambo-terminator-warriors-back.
Levey, Noam N. “100 Million People in America Are Saddled with Medical Debt.” Texas Tribune, June 16, 2022. www.texastribune.org/2022/06/16/americans-medical-debt/.
Light, Jennifer S. “When Computers Were Women.” Technology and Culture 40, 3 (1999): 455–83. www.jstor.org/stable/25147356.
Linssen, Charl, and Pieter Lemmens. “Embodiment in Whole-Brain Emulation and Its Implications for Death Anxiety.” Journal of Evolution and Technology 26, 2 (2016): 1–15. jetpress.org/v26.2/linssen_lemmens.pdf.
Lyons, Siobhan. Death and the Machine: Intersections of Mortality and Robotics. Singapore: Palgrave Macmillan, 2018.
Manning, Jason. Suicide: The Social Causes of Self-Destruction. Charlottesville, VA: University of Virginia Press, 2020.
Martín, Sara. “The Antipatriarchal Male Monster as Limited (Anti)Hero in Richard K. Morgan’s Black Man/Thirteen.” Science Fiction Studies 44, 1 (2017): 84–103.
Martinez-Carter, Karina. “What Does ‘American’ Actually Mean?” The Atlantic, June 19, 2013. www.theatlantic.com/national/archive/2013/06/what-does-american-actually-mean/276999/.
McCarthy, John. “An Unreasonable Book.” SIGART Newsletter 58 (1976): 5–10. dl.acm.org/doi/pdf/10.1145/1045264.104265.
“Meet the Authors: Eando Binder.” Amazing Stories 13, 1 (January 1939): 129.
Mikkelson, David. “Japanese Engineers Design Robotic Bear to Aid in Assisted Suicide?” Snopes, May 6, 2015. snopes.com/fact-check/seppukuma/.
Miller, Tim, dir. Terminator: Dark Fate. Written by David Goyer, Justin Rhodes, and Billy Ray, featuring Linda Hamilton, Arnold Schwarzenegger, Mackenzie Davis, and Natalia Reyes. 2019; Hollywood, CA: Paramount Pictures.
Morton, Terrell R., and Tara Nkrumah. “A Day of Reckoning for the White Academy: Reframing Success for African American Women in STEM.” Cultural Studies of Science Education 16 (2021). doi.org/10.1007/s11422-020-10004-w.
Murphy, Chris. “Gun Laws Are the Key to Addressing America’s Suicide Crisis.” The Atlantic, September 1, 2020. www.theatlantic.com/ideas/archive/2020/09/gun-control-key-addressing-americas-suicide-crisis/615889/.
National Alliance on Mental Illness. “General Motors Blasted for TV Suicide Commercial, Marginalization of Depression.” NAMI Press & Media, February 9, 2007. www.nami.org/Press-Media/Press-Releases/2007/General-Motors-Blasted-For-TV-Suicide-Commercial.
National Center for Health Statistics. “Suicide in the U.S. Declined During the Pandemic.” Centers for Disease Control and Prevention, November 5, 2021. www.cdc.gov/nchs/pressroom/podcasts/2021/20211105/20211105.htm.
National Institute of Mental Health. “Suicide.” NIMH, March 2022. www.nimh.nih.gov/health/statistics/suicide.
Nguyen, Viet Thanh. “Asian Americans Are Still Caught in the Trap of the ‘Model Minority’ Stereotype. And It Creates Inequality for All.” Time, June 25, 2020. time.com/5859206/anti-asian-racism-america/.
Niederkrotenthaler, Thomas, Marlies Braun, Jane Pirkis, Benedikt Till, Steven Stack, Mark Sinyor, Ulrich S. Tran, Martin Voracek, Qijin Cheng, Florian Arendt, et al. “Association between Suicide Reporting in the Media and Suicide: Systematic Review and Meta-Analysis.” BMJ 368 (2020). doi.org/10.1136/bmj.m575.
O’Connell, Mark. To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death. London, UK: Granta Books, 2017.
Okamura, Kazuya, Katsumi Ikeshita, Sohei Kimoto, Manabu Makinodan, and Toshifumi Kishimoto. “Suicide Prevention in Japan: Government and Community Measures, and High-Risk Interventions.” Asia-Pacific Psychiatry 13, 3 (2021). doi.org/10.1111/appy.12471.
“Paranoid Android: Cleaning Gadget ‘Switches Itself On’ and Moves onto Kitchen Hotplate in ‘Suicide Bid.’” Daily Mail, November 12, 2013. www.dailymail.co.uk/news/article-2503733/Paranoid-android-Cleaning-gadget-switches-moves-kitchen-hotplate-suicide-bid.html.
Peng, Rui Jie, Jennifer Glass, and Sharon Sassler. “Creating Our Gendered Selves—College Experiences, Work and Family Plans, Gender Ideologies, and Desired Work Amenities Among STEM Graduates.” Social Currents 9, 5 (2022). doi.org/10.1177/23294965221089912.
Perwitasari, Dinar Rizqi, and Emi Wuri Wuryaningsih. “Why Did You Do That To Me?: A Systematic Review of Cyberbullying Impact on Mental Health and Suicide Among Adolescents.” NurseLine Journal 7, 1 (2022). doi.org/10.19184/nlj.v7i1.27311.
Phillips, David P. “The Influence of Suggestion on Suicide: Substantive and Theoretical Implications of the Werther Effect.” American Sociological Review 39, 3 (1974).
Plato. Phaedo. Translated by F.J. Church. New York: Liberal Arts Press, 1951. www.bard.edu/library/arendt/pdfs/Plato_Phaedo.pdf.
Que, A. “Mrs. Griffin Prepares to Commit Suicide Tonight.” Translated by John Chu. Clarkesworld 104 (May 2015). clarkesworldmagazine.com/a_05_15/.
Roff, Heather M. “The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles.” Brookings, December 17, 2018. www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/.
Roy, Jessica. “Robot Reportedly Commits Suicide after Becoming Fed Up with Doing Housework.” Time, November 13, 2013. newsfeed.time.com/2013/11/13/robot-reportedly-commits-suicide-after-becoming-fed-up-with-doing-housework/.
Russell, Roxanne, Daniel Metraux, and Mauricio Tohen. “Cultural Influences on Suicide in Japan.” Psychiatry and Clinical Neurosciences 71 (2017): 2–5.
Sallis, James. “Books.” Fantasy & Science Fiction 99, 1 (July 2000). www.sfsite.com/fsf/2000/js0007.htm.
Samuel, Sigal. “How Biohackers Are Trying to Upgrade Their Brains, Their Bodies—and Human Nature.” Vox, November 15, 2019. www.vox.com/future-perfect/2019/6/25/18682583/biohacking-transhumanism-human-augmentation-genetic-engineering-crispr.
Sandberg, Anders, and Nick Bostrom. Whole Brain Emulation: A Roadmap. Technical Report #2008–3. Oxford, UK: Future of Humanity Institute, Oxford University, 2008. fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf.
Schreier, Jake, dir. Robot & Frank. Written by Christopher D. Ford, featuring Frank Langella and Peter Sarsgaard. 2012; Culver City, CA: Samuel Goldwyn Films.
Scott, Ridley, dir. Blade Runner. Written by Hampton Fancher and David Peoples, featuring Harrison Ford and Sean Young. 1982; Burbank, CA: Warner Brothers.
Shelley, Mary. Frankenstein. 2nd edition. Edited by J. Paul Hunter. New York: Norton & Co., 2012.
Siemens, Chauncey. “Japan Engineers Design Robotic Bear to Aid in Assisted Suicide.” IFLScience.org. February 25, 2015. web.archive.org/web/20160123190102/www.iflscience.org/japan-engineers-design-robotic-bear-to-aid-in-assisted-suicide/.
Singer, Adam (@AdamSinger). “That Robot Is What All of Us Want to Do in 2017.” Tweet, July 17, 2017. twitter.com/adamsinger/status/887049185639383041?lang=da.
Sputore, Grant, dir. I Am Mother. 2019; Netflix. www.netflix.com/title/80227090.
Stack, Steven, and Barbara Bowman. Suicide Movies: Social Patterns, 1900–2009. Cambridge, MA: Hogrefe Publishing, 2012.
Substance Abuse and Mental Health Services Administration. “National Mental Health Services Survey (N-MHSS): 2018.” SAMHSA. 2018. www.samhsa.gov/data/sites/default/files/cbhsq-reports/NMHSS-2018.pdf.
Szasz, Thomas. Fatal Freedom: The Ethics and Politics of Suicide. Syracuse, NY: Syracuse University Press, 1999.
Tevis, Walter. Mockingbird. New York: Bantam Books, 1985.
Tiku, Nitasha. “The Google Engineer Who Thinks Its AI Has Come Alive.” Washington Post, June 21, 2022. www.washingtonpost.com/podcasts/post-reports/the-google-engineer-who-thinks-its-ai-has-come-alive/.
Trevor Project, The. “National Survey on LGBTQ Youth Mental Health 2021.” The Trevor Project. 2021. www.thetrevorproject.org/survey-2021/.
Turing, A.M. “Computing Machinery and Intelligence.” Mind 59, 236 (October 1950): 433–60. doi.org/10.1093/mind/LIX.236.433.
United States Congress. National Suicide Hotline Designation Act of 2020. October 17, 2020. www.congress.gov/116/plaws/publ172/PLAW-116publ172.pdf.
van Heeringen, Kees. The Neuroscience of Suicidal Behavior. Cambridge, UK: Cambridge University Press, 2018.
Vint, Sherryl. Science Fiction. Cambridge, MA: MIT Press, 2021.
Voss, Laura. More Than Machines? The Attribution of (In)Animacy to Robot Technology. Bielefeld, Germany: Transcript, 2021.
Weizenbaum, Joseph. Computer Power and Human Reason. San Francisco: W.H. Freeman & Company, 1976.
Whale, James, dir. Bride of Frankenstein. 1935; Hollywood, CA: Universal Pictures.
Whale, James, dir. Frankenstein. 1931; Hollywood, CA: Universal Pictures.
Wolinsky, Richard, Lawrence G. Davidson, and Richard A. Lupoff. “An Interview with Walter Tevis.” Brick: A Literary Journal 72 (Winter 2003). brickmag.com/an-interview-with-walter-tevis/.

Index

ableism, 57, 62–66. See also disability
Adam Link, 21–26, 28–31, 35
“Adam Link in Business,” 22
“Adam Link, Robot Detective,” 24–6
Adam Link Robot (novel), 25–6
“Adam Link’s Vengeance,” 22–4
The Age of Enlightenment, 3
altruistic suicide, 8, 35–50, 70, 82, 84
Asimov, Isaac, 4, 21, 41–42, 51n19
assimilationism, 68
assisted suicide, 55–71. See also Physician-Assisted Suicide (PAS)
Bicentennial Man (film), 66–9
“Bicentennial Man” (short story), 4, 66–9
Binder, Eando, 21
The Birth of a Nation, 62
Blade Runner, 2, 5
bodily autonomy, 19–20, 24, 62–3. See also ableism; disability
Bride of Frankenstein, 24
CAF Model of De-Escalation, 77–8
Campbell, John C. See “The Infinite Brain”
Čapek, Karel. See R.U.R.
Cargill, C. Robert. See Sea of Rust
Cartesian philosophy, 4–5, 68, 80
Civil War. See The Red Badge of Courage
cold war, 45
contagious suicide. See Werther effect
cyborg, 81–2
“A Cyborg Manifesto,” 81
despondency-motivated suicide, 8, 15–31, 43
disability, 52n22, 57, 62–66, 69–70, 80
Durkheim, Émile, 3, 8, 17–18, 36
eugenics, 55–56, 78
euthanasia, 55–7, 62–3, 65–6
“Evidence,” 44
forced hospitalization. See involuntary civil commitment
Frankenstein: 1931 film, 21–2; Bride of Frankenstein, 24; novel by Mary Shelley, 21–2
gender roles, 22–3
General Motors (GM), 27–8
The Good Place, 41
Haraway, Donna. See “A Cyborg Manifesto”
healthcare, 17–9, 84
heroism, 35–6, 38, 40–1. See also The Red Badge of Courage
Hitchhiker’s Guide to the Galaxy, 25–7
I, Robot (novel). See Asimov, Isaac
“I, Robot” (short story), 21–2
“I, Robot” (television episode). See The Outer Limits
“The Infinite Brain,” 79–81
intersubjectivity, 5–6
involuntary civil commitment, 18–9
King Kong, 61–2
Lange, Sue. See We, Robots
“Liar,” 41–2
manufacturing industry: robots in, 27–8, 30, 48; workers in, 30, 47–8, 82
martyrdom vs. altruistic suicide, 36–7
mental illness, 16–20, 84
mind/body dualism. See Cartesian philosophy
Mockingbird, 59–66
“Mrs. Griffin Prepares to Commit Suicide,” 77–8
The Outer Limits, 25, 35, 51n19
phenomenology, 4–7
Physician-Assisted Suicide (PAS), 55–7
posthumanism, 78
Que, A. See “Mrs. Griffin Prepares to Commit Suicide”
racial trope: of the black buck, 62; of the model minority, 68–9; of the tragic mulatto, 68. See also assimilationism; eugenics; white supremacism
Radical Life Extension (RLE), 78–82
The Red Badge of Courage, 38–9, 49
Robot & Frank, 69–70
The Robots of Dawn, 43
Roomba, 1, 6
“Runaround,” 42–3
R.U.R., 58–9, 60
Sea of Rust, 49–59
self-driving cars, 40
self-sacrifice. See altruistic suicide
socioeconomic class. See eugenics
suicidal ideation, 18, 22, 26, 28, 30, 75–7
suicide: and depression, 16, 18, 20, 26–7; and unemployment, 16, 27–8, 31; definitions of, 2–3, 8–9; individual vs. social causes of, 3, 16–20, 23, 28, 75; intervention, 16–8, 75–8; of Socrates, 36–7; prevention, 56, 62–3, 75–8, 83–4; rates among Indigenous Americans, 18, 83; rates of, 16–8, 20, 28–9, 56, 75, 83–4; reporting guidelines, 7, 29–30. See also altruistic suicide; assisted suicide; despondency-motivated suicide; euthanasia; involuntary civil commitment; Werther effect
Suicide Crisis Hotline, ix, 16–7, 76, 78, 83
The Terminator, 44–5
Terminator 2: Judgment Day, 45–6
Terminator: Dark Fate, 47–9
Tevis, Walter. See Mockingbird
Three Laws of Robotics, 42–44
Transhumanism, 78–9, 81
“The Trial of Adam Link, Robot,” 22
trolley problem, 39–41
Turing Test, 6
Vietnam War, 38, 45
war culture, 35–50, 84
We, Robots, 81–2
Werther effect, 2, 7, 8, 28–31
white supremacism, 10, 17, 58, 60–1, 66
Whole Brain Emulation (WBE), 79–80

About the Author

Liz W. Faber is Assistant Professor of English and Communication at Dean College and Adjunct Instructor of Scientific and Academic Writing at the University of Maryland Baltimore’s Graduate School. Their first book, The Computer’s Voice: From Star Trek to Siri, is the winner of the 2022 Popular Culture Association Emily Toth Award for Best Single Work in Women’s Studies.
