
THE ETHICS OF NEUROSCIENCE AND NATIONAL SECURITY


New advances in neuroscience promise innovations in national security, especially in the areas of law enforcement, intelligence collection, and armed conflict. But ethical questions emerge about how we can, and should, use these innovations. This book draws on the open literature to map the development of neuroscience, particularly through funding by the Defense Advanced Research Projects Agency, in certain areas like behavior prediction, behavior modification, and neuroenhancement, and its use in the creation of novel weapons. It shows how advances in neuroscience and new technologies raise ethical issues that challenge the norms of law enforcement, intelligence collection, and armed conflict, broadly grouped under the term "national security." Increasing technological sophistication without attention to ethics, this book argues, risks creating conditions for the development of "dual-use" technologies that may be prone to misuse, are grounded in an incomplete understanding of the brain, or are based on a limited view of the political contexts in which these technologies arise. A concluding section looks at policy and regulatory options that might promote the benefits of emerging neuroscience, while mitigating attendant risks.

Key Features

• First broad survey of neuroscience as it applies to national security
• Innovative ethical analysis over a range of cross-cutting technologies including behavior prediction and modification tools, human enhancement, and novel lethal and nonlethal weapons
• Ethical analysis covering all stages from the development and testing to the use (or misuse) of these technologies, and decisions from the individual scientist to the nation state
• Strong policy focus at multiple levels, from self-governance to international regulation
• Combination of philosophical analysis with grounded, practical recommendations

Nicholas G. Evans is Assistant Professor in the Department of Philosophy at the University of Massachusetts, Lowell. He is the co-editor of Ebola’s Message: Public Health and Medicine in the Twenty-First Century (2016).

“Advances in neuroscience will raise increasing problems in relation to national security in the post COVID-19 world. This book demonstrates just how complex these problems will be and stresses the collective role that scientists could play in dealing with them. I hope that his ethical analysis of neuroscience and national security will be widely read, particularly by neuroscientists.”


— Malcolm Dando, University of Bradford, U.K.

THE ETHICS OF NEUROSCIENCE AND NATIONAL SECURITY


Nicholas G. Evans

First published 2022 by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 Taylor & Francis

The right of Nicholas G. Evans to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this title has been requested

ISBN: 978-1-138-33152-5 (hbk)
ISBN: 978-1-138-33153-2 (pbk)
ISBN: 978-0-429-44725-9 (ebk)


Typeset in Bembo by codeMantra

CONTENTS

Acknowledgments
List of Acronyms

1 Introduction

PART I
BRAINs in Battle

2 Predicting the Future
3 The Science of Persuasion
4 Building a Better Warfighter
5 Neuroweapons

PART II
Neuroethics and National Security

6 Whither Neuroethics?
7 Translation
8 Dual-Use
9 Corruption
10 Neurosupremacy

PART III
Policy

11 Self-Regulation
12 Organizations
13 Nations
14 Global Governance
15 Restructuring Science

List of References
Appendix: A working bibliography of neuroethics and national security
Index


ACKNOWLEDGMENTS

This work is a first for me: my first sole-authored monograph. It would not have been possible without the support and guidance of a great number of people.

First, my thanks to the Greenwall Foundation, and in particular President Bernard Lo and Chief Operating Officer Michelle Groman, for their support in the research that informed this work. That support came in the form of two grants, "Dual-Use Neurotechnologies and International Governance Arrangements" (2016–18) and "Neurotechnological Candidates for Consideration in Periodic Revisions of the Biological and Toxin Weapons Convention and Chemical Weapons Convention" (2016). My sincere thanks to my coinvestigators on the former, Professors Jonathan Moreno and Claire Finkelstein, at the University of Pennsylvania. At the time of publishing, I am now a 2020–2023 Greenwall Faculty Scholar, and final adjustments to this manuscript were made possible with such funding.

My particular thanks to Jonathan, who mentored me through my postdoctoral research at the University of Pennsylvania from 2014 to 2016. Jonathan's Mind Wars was an inspiration in writing this book. While my skill at history is nowhere near his equal, it is my hope that I can provide something novel in terms of the philosophical analysis described here.

This work has also been supported by separate funding on the ethics of autonomous vehicles under National Science Foundation grant #1734521, "Ethical Algorithms in Autonomous Vehicles." A considerable amount of the work on institutions and on the relationship between neuroscience and artificial intelligence came from conversations with participants at the 2019 Autonomous Vehicles Symposium, during which we hosted a workshop under the auspices of the grant. I am grateful to those participants, and those at the 2019 workshop at the University of Florida, "Promise and Problems in Emerging Technology," and to Duncan Purves, for conversations related to this book. Most of all, I am grateful to Yuanchang Xie and Liming Yang, my coinvestigator and research assistant, respectively, for their help in understanding some of the technical aspects used in this book.

Portions of this work were presented at the 2018 workshop on Philosophical Issues in Research Ethics at Carnegie Mellon University. I am grateful to Danielle Wenner for inviting me to present my work on soldier enhancement, and to David DeGrazia, Rebecca Dresser, Spencer Phillips Hey, Alex John London, Ruth Macklin, Stephanie Morain, Laura Specker Sullivan, and Charles Weijer for their comments on that draft. Other portions of this work were presented at the 2017 International Society for Military Ethics and the 2017 American Society for Bioethics and Humanities conferences. I am grateful to participants from both conferences for their contributions to my thinking on this work.

This work has also benefited from conversations with a number of other scholars during writing and editing over the last couple of years. I am grateful to Malcolm Dando, Filippa Lentzos, Amanda Moodie, Brett Edwards, Jo Husbands, Rocco Casagrande, Tom Hobson, Luke Kemp, Shahar Avin, Aerin Commins, Chris Park, Lorna Miller, Piers Millet, Megan Palmer, James Revill, Caitríona McLeish, Emily Kelley, Alex Wakeford, and Neil Shortland for their help in thinking through some of the issues described within this book. I am especially grateful to Josh Griton and Lisa Sanders for their thoughts on some of these issues. Any outstanding errors are solely my own.

Finally, I am immensely thankful for the support and patience of Kelly Hills: spouse, business partner, and co-conspirator. I am also grateful to Harley, Lexi, and Inky, our cats, for sleeping in the spots I place them when I'm writing (which is more of a gift than it sounds).

ACRONYMS

AAA American Anthropological Association
ADF Australian Defense Force
AFM Army Field Manual
AGI Artificial General Intelligence
AI Artificial Intelligence
APA American Psychological Association
BCI Brain computer interface
BOLD Blood oxygen level dependent
BRAIN Brain Research through Advancing Innovative Neurotechnologies (DARPA)
BTO Biological Technologies Office
BTWC Biological and Toxin Weapons Convention
CCW Convention on Certain Conventional Weapons
CIA Central Intelligence Agency
CNS Central nervous system
CWC Chemical Weapons Convention
DARPA Defense Advanced Research Projects Agency
DBS Deep brain stimulation
DOD (US) Department of Defense
EEG Electroencephalogram
EIT Enhanced interrogation technique
EN-MEM Experience-Based Narrative Memory Workshop
FAS Federation of American Scientists
FBI (US) Federal Bureau of Investigation
fMRI Functional magnetic resonance imaging
GTMO Guantanamo Bay; officially, Naval Station Guantanamo Bay
HTS Human Terrain System
IARPA Intelligence Advanced Research Projects Agency
LAWS Lethal Autonomous Weapons Systems
LSD Lysergic acid diethylamide
MOD (UK) Ministry of Defence
N2 Narrative Networks Program
NAS National Academy of Sciences
NASA National Aeronautics and Space Administration
NSA National Security Agency
NSABB (US) National Science Advisory Board for Biosecurity
POW Prisoner of war
PTSD Post-traumatic stress disorder
SOCOM Special Operations Command
SOF Special Operations Forces
TIA Total Information Awareness program
TRADOC US Army Training and Doctrine Command
UN United Nations
US United States of America
USABWL United States Army Biological Warfare Laboratories
USAF US Air Force
USAMRIID United States Army Medical Research Institute of Infectious Diseases

1 INTRODUCTION

In late 2014, I was asked by Jonathan Moreno to develop a small grant on neuroethics and national security. At first, this was simply a step on the ladder from postdoctoral fellow to assistant professor: a small amount of funding from the Greenwall Foundation to display on my CV. The money would largely go into the University of Pennsylvania's coffers, but in exchange I'd have greater status on the academic job market.

I was familiar, I thought, with concerns about security and neuroethics. With rare exception, I thought, that literature was overwhelmingly concerned with functional magnetic resonance imaging (fMRI) and "lie detection." The first paper I wrote in graduate school had been on a technology referred to as "brain computer interfaces" (White, 2008), or BCIs, and their implications for military ethics (Evans, 2011), but I hadn't seen much on it since. And, of course, there was the human enhancement literature, but even the literature on military enhancement seemed largely concerned with questions of what was "human"—questions in which I had no interest.

What I wasn't expecting, as an Australian recently moved to the US, was meeting folks at the pointy end of neuroscience and national security. Not long after, I was invited to a talk at Penn by William Casebeer. Casebeer is a former student of Philip Kitcher, and like many of Philip's students he is a careful, rigorous, and heavily naturalistic philosopher (Casebeer, 2003). But, in addition to those credentials, Casebeer, a retired Air Force Lieutenant Colonel, had taught philosophy at military academies around the country. In 2014, he was an outgoing program manager at the Defense Advanced Research Projects Agency (DARPA), the blue-sky research arm of the US Department of Defense (DOD). And what Bill had to say had nothing to do with lie detectors.

Rather, Casebeer's talk centered on a program he had developed at DARPA that sought to understand the neurobiological basis of narrative, and how the stories we tell—how we convey information, not simply what information we convey—influence us. This had promising applications in improving and streamlining complex warfighter training in the twenty-first century. But, Casebeer said, there was another application: detecting radicalization online by piecing together the kind, order, and method of delivery of propaganda.

I would later learn that the programs run by Casebeer during his tenure (DARPA Program Officers are moved on rapidly to keep the agency fresh) were only the tip of the iceberg. The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, launched during the Obama administration to rapidly develop America's capacity in neuroscience, is heavily involved with the DOD. Of the $110 million or so budgeted at the outset of the BRAIN Initiative in 2013, almost one half was committed by DARPA (White House, 2013). That number has only grown over time, and DARPA's involvement in neuroscience has deepened from the fruits of the BRAIN Initiative. The national security applications of neuroscience are now diverse, and include research into narratives, pharmacology, and the BCIs I'd written on years before.

Neuroscience is booming in national security, and its reach into the institution of national security is broad and deep. Neuroscience is attractive in an operational sense for its potential capacity to detect terrorists, help the wounded walk again, and perhaps one day find a cure for post-traumatic stress disorder. It is also a point of "convergence" among other sciences, including artificial intelligence, synthetic biology, and nanoscience. These convergent sciences and their technological applications promise a range of fantastic possibilities, including cures for a wide range of diseases, sentient machines, the end of manual labor, and extremely long (or even indefinite) lifespans.

Yet, as with all new discoveries, these possibilities have deep ethical implications. By "ethical," I mean that the decisions made before, during, and after the development of these technologies bear on questions about what kinds of values are important, and tradeoffs between values including but not limited to human and nonhuman welfare, equality, justice, and freedom. As national security applications, moreover, these technologies exist in a space in which otherwise impermissible acts, such as killing, may be available to states and their proxies in securing their interests. It is that intersection, between neuroscience and national security, that is the subject of this book.

1.1 National Security

Before moving into the meat of this work, some definitional concerns need to be addressed. The first is what I mean by "national security." National security, I take here, is a social institution of the modern nation state. By "social institution," I mean one of a collection of organizations, policies, laws, and norms that fulfill an important moral end (Miller, 2010). Other institutions include healthcare, education, journalism, and the academy. Social institutions are an important level of analysis in ethics and political philosophy as contributors to, and instantiations of, a moral society. They are also the drivers in important decisions that influence the lives of millions of people.

It might be objected that national security isn't an institution so much as a collection of institutions. Criminal justice, one could argue, is a separate institution that protects the rights of citizens; state militaries are an institution that protects state sovereignty against external threats. This is a common distinction, but treating national security as a broad level of analysis is useful here for a couple of reasons. Importantly, national security is much broader than just these two organizations. It includes, for example, transnational law enforcement and intelligence operations within its conceptual bounds. What links all of these organizations together is a common telos, or end: the use of force to maintain the structure of a society.

Whether the structure of a society is ultimately justified is another question, and one I won't answer in detail here. Assuming, however, that our current society—and here, I am primarily concerned with the US—is in part justified (e.g. as a liberal democracy), even if parts of it are decidedly immoral (e.g. as a nation with an unaccounted-for colonial past, or a history of slavery it has yet to fully reckon with), national security is the basic social institution charged with ensuring that the moral project of a society is maintained. Importantly, moreover, national security is empowered to use force, including lethal force, to achieve that end.

Throughout, I will distinguish between different parts of the social institution of national security, as they achieve the central telos of the institution in different ways. However, I take this as partly a coordination problem, given that national security interacts with different populations who may have different claims against it. In particular, the moral claims of other nation states and their resident populations are different from the claims of the local population of a state. However, the central aim of national security is maintaining the moral project of a particular nation state, and the organizations and roles within it can be taken as sharing a common moral end even if they approach this end in different ways. This is broadly analogous to the way an institutional account of healthcare (Miller, 2010) must necessarily approach the roles of, inter alia, public health and clinical medicine, which interact in important ways but involve distinct moral commitments (Childress et al., 2002; Childress and Bernheim, 2003).

An important additional reason to treat national security writ large as the subject of analysis is that, in the aftermath of the attacks on the World Trade Center and Pentagon, and the crash of UA 93 in 2001, the organizations under the umbrella of national security have become progressively less distinct. The Federal Bureau of Investigation (FBI, 2020), for example, is charged with intelligence collection, national security, and law enforcement operations under its current public information page. The Department of Homeland Security, formed in the aftermath of the attacks of 2001, concerns itself with threats both foreign and domestic. Even the Department of Health and Human Services includes offices devoted to external threats, particularly those concerned with biological terrorism.

Morally, there is a difference between the acts of armed conflict, intelligence collection, and law enforcement (Evans et al., 2014). However, the prosecution of these acts is spread among a series of organizations that, here, I understand collectively as the institution of national security. This serves the important role of clarifying the different moral limits on these acts (and thus the organizations that perform them), and the historical story of how—for better and for worse—these organizations came to be. To understand the ethical issues neuroscience poses, we should treat all three broad classes of national security organization—military, intelligence, and criminal justice—together.


1.2 Neuroscience

Neuroscience, like national security, requires some stipulation. By "neuroscience," I am concerned with scientific inquiry into the functioning of the brain, and its relation to individual and collective human behavior. This includes inquiries we would conventionally mean by "neuroscience," such as studying brains with large diagnostic devices such as fMRIs. But it also involves aspects of cognitive science, microbiology, clinical psychology, medicine, forensics, and even computer science.

Importantly, neuroscience is concerned not simply with brains, but also with minds, mental states, and cognition. The relation between these categories is famously contentious, especially in a world as interconnected and mediated by technology as our own. Neuroethics, in particular, has engaged substantively with the relationship between technologies that interact with human cognition with a range of degrees of directness. So a little more should be said here.

In general, I am skeptical of theories that posit the brain as the sole location of cognition, and the mind in general. It has been a very long time since humans relied exclusively on their brains for cognition, and that has only become more true in recent years, as information has exploded in both quantity and variety. I am thus a proponent of Clark and Chalmers' extended mind thesis (1998), but moreover of Neil Levy's extended cognition thesis, which holds that the structure of cognition is not exclusively located in the brain, but also in other objects (Levy, 2007). I will not attempt to defend either of these theses, which have been interrogated at length by other authors.

This view, however, has one particular advantage to it. Because the mind and cognition are not located exclusively in the brain, specific concerns about neuroscience and technology as "impacting the mind" are largely eschewed in this work. While this means that I have to ultimately answer questions about what makes these technologies worthy of distinct ethical concern (which I address in Chapter 6), it does mean that I am not terribly concerned about the mere fact that these technologies impact the mind. This is important, as some of the emerging insights from neuroscience that apply to national security do not express themselves as "neurotechnologies" qua technologies that use direct chemical or electrical intervention with neurons to influence the mind. Some technologies, such as BCIs or certain chemical weapons, do this; others, such as propaganda developed through neuroscientific insights into group behavior, do not. I consider both worthy of exploration, and indeed will show how more and less direct applications of neuroscience interact with national security, and each other.


1.3 Reality versus Hype

Many of the technologies I discuss have been in development for years or even decades, but have yet to be deployed as fully functional technologies. Concerns about suggestion and "mind control," for example, date back half a century into the Cold War, but are being resurrected with new insights into the brain (Seed, 2011). That said, we haven't really achieved what we might call the cinematic version of mind control just yet. It's not clear if, or when, we will achieve such a capability. So a skeptic might say in response to novel neuroscience research: "so what?"

We should take these technologies seriously, however, in a very specific sense. First, we know that groups want this technology and are willing to spend hundreds of millions of dollars to get it. Intelligence services, domestic and foreign-looking, want to know how to manipulate people's thoughts to find actionable intelligence (Wurzman and Giordano, 2014). There are those in the criminal justice system who want new technologies that can verify whether a person is lying, or not, for purposes of securing a conviction (Dresser, 2008; Morse, 2018). This means that there is incentive, and action, to get to some of these technologies even if the science is still very much out.

A common refrain is that "when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming." This is the so-called "Collingridge dilemma," named after technologist David Collingridge (1980), which tends to dog those who think about the ethics of technology. The purpose, then, of attending to the aims of these national security organizations is to attempt to decide what change is needed, while it is still easy to make that change. This is an important development, I take it, in approaching the ethics of technology even under conditions of great uncertainty.

But the other reason, however, is that we're closer than you think. In 2009, I was writing about BCIs when the focus of the technology was still primarily animal studies (Evans, 2011). By 2015, there were people driving wheelchairs with BCIs, and one who has even flown a fighter jet (in simulation). We're not living in Ghost in the Shell yet, but our cyborg future is closer than it might seem in the news. There is thus more than enough science out there to start forming some interesting normative conclusions, and hopefully begin acting on those conclusions.


1.4 Structure of This Book

With that in mind, this book proceeds in three parts. The first part, bolstering this short introduction, deals with four classes of emerging neuroscience and technology that have applications in national security. The first of these is advances in behavior prediction. It is there that I will deal with, among other things, Casebeer's brainchild, the N2, and its promise in detecting terrorists before they are radicalized.

Predicting behavior gives way to the possibility of controlling and modifying behavior. This has clear implications in forming a scientific basis for the interrogation and rehabilitation of detainees in armed conflict, counterterrorism, and law enforcement scenarios. It also has important implications for training a new generation of soldiers, and curing them of the psychological ills they often return with from battle. This chapter covers the state of the art and the aspirational ends of military neuroscience's foray into the science of persuasion.

In Chapter 4, I turn to enhancement. Professional militaries worldwide are becoming older, and the demands of twenty-first century conflicts have extended expectations for military forces, and Special Operations Forces in particular. This has created a new incentive, in the age-old quest for a better warrior, to seek soldiers with enhanced cognition in addition to enhanced physiology. In this chapter, the promise of soldier enhancement is explored—from the mundane and soon-to-be-used to the blue sky and far in the future. While the focus here is on enhancements achieved through advances in neuroscience and related disciplines, the strong links between body and mind are also explored.

The final chapter of Part I concludes our investigation with a view of new weapons technologies in principle achievable through advances in neuroscience. Two important areas are considered: lethal and nonlethal biochemical agents, and attacks on BCIs. In each case, the current development of these technologies is canvassed, and their future potential is outlined.

Part II begins with a discussion of how and why neuroethics has largely neglected concerns of national security. While there has been considerable attention paid to legal concerns around the domestic use of neuroscientific findings in criminal proceedings, comparatively little has been paid to direct use in law enforcement, and almost none to unique challenges in counterterrorism, intelligence collection, and armed conflict. I draw on previous work on ethics and national security to provide a program of work for Parts II and III.

One reason such a large investment in neuroscience has occurred is the potential to translate military discoveries into civilian gains. DARPA's chief claim to fame in this regard is the creation of the foundations of what we now call the internet. But how translation from military to civilian spheres actually happens, and how the civilian world benefits from military science writ large, is less clear. In Chapter 7, I identify challenges that face the translation of basic neuroscience conducted through military funding into civilian applications.

Even if military and/or civilian applications are achieved, the central concern for any ostensibly beneficial technology with the capacity for harm is that it will ultimately be misused by bad actors. This "dual-use dilemma" in the life sciences has received some attention in the context of neuroscience, primarily from the peace studies literature focusing on the Biological and Toxin Weapons Convention (BTWC). In Chapter 8, I review the broader ethical literature on dual-use research, with an eye toward neuroscience. I argue that neuroscience presents an important divergence from classical debates about dual-use research for two reasons. First, its applications are by and large not capable of causing mass casualty events (as opposed to, say, nuclear or biological weapons), though they may be broadly pernicious in other ways. Second, the kinds of actors about which we ought to be concerned in dual-use are often role holders in national security institutions, where the debate in the life sciences has focused on terrorists or other actors presumed to have ill intent.

Dual-use typically considers individual instances of scientific research, without much in the way of the larger context in which science arises. In national security contexts, particularly post-2001, this is a mistake. In Chapter 9, I deal with issues of corruption. In particular, I argue that for many neurotechnologies, the permissibility of use is contingent upon institutions that are, at this moment, fundamentally corrupt. I demonstrate this through an analysis of the role cognitive scientists have had in the US torture program, and use this to draw broader conclusions about the permissibility of compliance and persuasion strategies in neuroscience.

The final issue I examine is the increasingly popular "argument from supremacy" when it comes to regulating novel technologies. This argument claims that in order to maintain their strategic position—and in the US, its perceived strategic dominance in science and technology—nations need to pursue technological innovation at the fastest pace possible. The corollary to this is that any limits on science and technological innovation are perforce unethical just in case they compromise the strategic position of the country in which that innovation takes place. I argue that the strongest form of this argument is one that justifies supremacy arguments in terms of a broad appeal to the well-being and security of citizens. I then show that, in keeping with the original framing of the supremacy argument, this form of argument fails to justify dominant expenditure in national security settings when compared to other broad risks to society.

The issues raised in Part II are in principle solvable with the appropriate political will. I begin Part III, combining the findings of the first two parts with practical considerations, by examining policy questions at the level of individual scientists. I draw on work from previous attempts to regulate emerging technologies, including genetic engineering, to outline the contours of the argument for scientific self-regulation. I argue that even if we were justified in believing self-regulation was sufficiently robust to address the challenges posed by other forms of emerging technologies, we should not rely on it in the case of neuroscience. This is because the relationship between neuroscience and the state undermines the professional norms of science that serve as the foundation of scientific self-governance.

Next, I address potential measures for dealing with the moral challenges posed by emerging neurosciences at the institutional level: scientific journals, universities and corporations, and professional institutions. This work draws on previous suggestions to regulate emerging science and technology with implications for national security at the institutional level. I argue ultimately that while institutions may be able to address ethical issues posed by neuroscience and national security in limited ways, the powerful incentives brought around by national security funding of scientific projects undermine the possibility of effective institutional regulation. I then suggest how institutions can work to guarantee their independence from these incentives, and the role of institutions in holding the national security establishment accountable.

In terms of legal or other formal regulations, the nation state remains the locus of much attention in scientific governance. Here, I address challenges that face the nation state in preventing the deleterious effects of emerging neuroscience. I canvass laws and regulations governing surveillance, armed conflict, and transparency as a means to address the challenges posed by neuroscience. By reframing these laws and regulations in terms of their effect on human cognition, I argue, nation states can impose barriers against the misuse of neurotechnologies; these changes, however, are only stopgaps in search of a broader change in scientific governance.

The highest levels of policy I address are global governance issues. In this chapter, I identify existing segments of global governance structures that might be easily leveraged to address the challenges posed by neuroscience and national security. I argue that while there is often a temptation—such as the efforts of some in the Campaign to Stop Killer Robots—to argue for some new convention against an emerging technology with national security applications, the potential ethical challenges posed by neuroscience in Part II can be addressed with tractable and concrete changes to existing global governance structures.

I conclude with an exploration of how we might alter science as a crosscutting endeavor—from individual practice to global governance—to address the challenges posed by emerging neurotechnologies. I argue that one of the challenges faced by neuroscience is a lack of international harmonization in managing both security and scientific concerns. Building on proposed international health governance arrangements as a model for responsible innovation, I identify key aspirations for scientific governance to prevent the misuse of emerging science and technology.


PART I

BRAINs in Battle


2 PREDICTING THE FUTURE

2.1 Chapter Summary

This chapter deals with the use of neuroscience to predict individual and group behaviors. First, the motivations behind this broad aim are described. Then, the use of neuroscience to inform these aims is discussed, dealing with two central cases. The first is the role of neuroscience in informing efforts in artificial intelligence. The second is in understanding how narratives affect behavior, using as a central case DARPA's Narrative Networks Program (N2). The possible future of these projects, and its connection to other themes in this work, is then described.


2.2 Introduction

There is a longstanding belief that conflict is won and lost on information—and, in particular, deception, or controlling the flow of information between you and your adversary (Musashi, 2005; Tzu, 2012). The best case scenario is that you hold all the cards, and your enemy holds none. They can't deceive you, but you can deceive them. Asymmetries in information can be leveraged to great effect, moreover, by less powerful adversaries.

A central concern in the wake of the airplane and later anthrax attacks of 2001 was the capacity for small groups of relatively anonymous individuals to plan and conduct mass casualty attacks. That capacity was further enhanced by a corresponding lack of capacity for intelligence and law enforcement agencies in the US to share—and act on—information related to terrorist activity.


For the CIA, the possibility that a small group of non-state actors could produce biological weapons led to a surge in intelligence collection efforts motivated by concerns for national security. The report by the agency based on a meeting of the Strategic Assessments Group of the National Academy of Sciences (NAS), The Darker Bioweapons Future, noted that terrorists now had the potential capability to create biological weapons using the tools of the modern life sciences, and that, unlike nuclear weapons, these biological weapons would be exceedingly difficult to detect (CIA, 2003). The combination of the failures of intelligence to detect and stop "Amerithrax," or the plane attacks on the World Trade Center and Pentagon, triggered a program to radically increase the capabilities of the US Intelligence Community (IC) to detect and respond to terrorists.


In this chapter, I’ll cover the prehistory that informs the use of neuroscience in intelligence collection. I’ll then move on to the three dominant ways neuroscience is informing this collection, before diving into my central case study for this chapter, the Narrative Networks Program (N2), which deals primarily with the frst and third of the above contributions. I’ll then look forward to how these kinds of technologies might be deployed in the future.


2.3 Shots in the Dark

Intelligence collection is arguably the art of induction. The capacity to reach a conclusion based on inference from a limited set of previous information is, both logically and historically, a fraught exercise. While this book is in part a work of philosophy, I'm not going to rehash the problems with induction in logic and science (e.g. Godfrey-Smith, 2009), except to note that despite the well-known problems with induction, it is a common and in many ways necessary part of neuroscience and national security: one relies on inferences clinching the relationship between brain states and mental states; the other, between signals and the behaviors of adversaries.

In some cases, induction in national security is relatively straightforward. Consider a state military attempting to determine if an adversary has developed the capabilities for nuclear weapons. Satellite imagery can give us the outlines of centrifuge plants and assembly factories, machine shops, and silos. A "smoking gun" would be, for example, a clear signal of the decay profile of highly enriched uranium-235 (U-235), especially a strong signal relative to other decay profiles, demonstrating that amounts of U-235 are being created well in excess of what is required for nuclear power. This is a difficult intelligence activity in terms of actually attaining such data, but once acquired the data itself can be fairly unequivocal. While there isn't just one thing someone can be doing with that much U-235, the number of things is close enough that the confidence we have from previous decay signatures observed in other places and times gives us good reason to believe a novel case is the same (a toy version of this inference is sketched at the end of this section).

Less clear is determining if an adversary is about to become a belligerent. The ongoing dispute between Arab nations and Israel about the onset of the 1967 Arab-Israeli War turns on whether Egypt was preparing to attack, making Israeli actions a preemptive strike; or whether Egypt's assertion holds that it was simply undertaking standard and justifiable exercises near the Israeli border, in which case Israel's actions were at best preventative, unjustified aggression. What this turns on is a complex of electronic signals (including intercepted communications) collection, pictures, and inferences based on the strategic postures of both nations. As a historical exercise it might be possible to answer; as an ex ante decision, however, the task is very difficult (Mueller et al., 2006; Kurtulus, 2007).
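To make the inductive step in the U-235 example concrete, here is a toy Bayesian update. The numbers are invented for illustration and do not come from the book; the point is only that a signal's evidential weight depends on how much likelier it is under one hypothesis than the other.

```python
# Illustrative Bayesian update for the U-235 scenario above.
# All probabilities are invented for the sake of the example.

prior = 0.10                      # P(weapons program) before any signal
p_signal_given_program = 0.90     # strong decay signature if a program exists
p_signal_given_no_program = 0.05  # false-positive rate of the signature

# Bayes' theorem: P(program | signal)
numerator = p_signal_given_program * prior
denominator = numerator + p_signal_given_no_program * (1 - prior)
posterior = numerator / denominator

print(f"Posterior probability of a weapons program: {posterior:.2f}")  # ~0.67
```

A single strong signature moves a 10% prior to roughly 67%; it rarely settles the question outright, which is one reason the belligerence case, built from many weak and conflicting signals, is so much harder.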


Intelligence collection in service of counterterrorism gained renewed energy in the wake of the World Trade Center attacks of 2001. Over the intervening decades, terrorist networks have become more sophisticated, more distributed, and less obvious in their activities. They are increasingly contiguous with organized crime, which means military and intelligence communities need to know more about domestic activities that support terrorist groups, such as the drug trade. Terrorist organizations also make heavy use of the internet—terrorist groups would become early adopters of Twitter—as a recruitment tool, communications network, and surveillance platform (Goodman, 2016).

The contemporary problems with terrorism are even more extreme than 20 years ago. Terrorist activity previously attributed to "lone wolf attackers" is now attributed to general climates of radicalization. A combination of propaganda by active terrorist organizations, conspiracy theories, online forums, and a sympathetic (or at least not overtly hostile) media landscape creates the conditions that lead to terrorist action. While the phrase "stochastic terrorism" is relatively new, its antecedents can be found in decades of maneuvering by extremist groups to create terrorists without the infrastructure of a traditional network. These actions, moreover, are seen in radical movements from the more traditional targets of contemporary counterterrorism, that of Islamic terrorism (Munoz, 2018), but also in white nationalist and Christian identity movements (Zeskind, 2009), and novel right-wing extremist movements in the US (Neiwert, 2019).

The obvious solution to the problem, at least following 2001, was to simply monitor everything. An early attempt, DARPA's Total Information Awareness (TIA) program in 2003, attempted to bring together all publicly available information under a single agency. The authoritarian implications of this program, however—assuredly not helped by the TIA's logo, a panopticon—meant that it received considerable pressure from both legislators and the concerned public (Stevens, 2003), which would ultimately lead to its shuttering (Federation of American Scientists (FAS), 2003). The program also suffered from a lack of tools to process information: something, in retrospect, DARPA almost certainly had foreseen. A presentation at a 1999 DARPA technology symposium that heralded the TIA acknowledged that machine learning was still in a relatively immature state, and so lacked the capacity to facilitate the analysis of huge amounts of information (Fernandez, 1999).

TIA, however, did not disappear entirely. Even as the project was defunded, the program's core technologies were transferred from the DOD to the National Security Agency (NSA) (Pontin, 2006). What came next was out of sight. Programs such as XKeyscore and PRISM, among others, were TIA's successors. They utilized the physical nature of bandwidth to tap into the vast majority of the internet's traffic. The internet is designed, in principle, so that information takes the fastest path to its destination. The cables linking South America and the Middle East directly, however, are tiny compared to those linking the Middle East to Europe, Europe to North America, and North to South America. So much traffic across the globe moves through North America—physically—in order to reach its destination. And so the US, in conjunction with its allies (in particular the "Five Eyes" intelligence alliance comprising the US, the UK, Canada, Australia, and New Zealand), was able to collect broad swathes of internet data and metadata. The latter would only become a common term after 2013, through leaked information on the intelligence collection efforts of the US and its allies (Sottek, 2013).

It is hard to assess the success of these mass information collection efforts. On the one hand, the CIA maintained, in its report to Congress, that more than a dozen programs were undermined by Snowden's leaks. However, the exact nature of these programs is still classified, and unknown to the public (Sottek, 2013). Importantly, it isn't clear that the bulk of the mass surveillance program contributed to these efforts in any significant way. The programs uncovered by Snowden also included the collection of specific, actionable data on identifiable individuals. An important concern was that the noise of mass surveillance was just that, and that the signals that constituted grounds for action by the national security establishment, from Federal Bureau of Investigation raids at home to drone strikes abroad, were signals that could have been otherwise detected through older collection efforts. As Panayotis Yannakogeorgos (2013) has put it, "most of cyberforensics is really just forensics"; so too, most cyberintelligence may really be just the use of older intelligence collection, albeit behind a computer.

Nonetheless, the desire to (a) collect as much data as possible from the vast quantity of communications brought about by the information revolution; and (b) process all of that to discern intelligence patterns remains. This desire has a new target, however: less what existing terrorist networks will do in an operational sense, and more detecting the rise of new terrorist actors. Online spaces provide a rich environment for terrorist recruitment, and determining who is going to enter into terrorist activity, and when, is a difficult new task for intelligence agencies.

2.4 Insights from Neuroscience

Neuroscience's greatest contribution to information technology is perhaps so obvious as to escape attention. The "neural net," a subset of a deep learning algorithm, is modeled on the analogy of a neuron as a basic but flexible unit of information processing. To understand how these work, and their connection to neuroscience, we need to understand deep learning.

AI, in many ways, is already here. These intelligences, however, are incredibly narrow and purpose built. Moreover, there are a variety of subspecies of AI, which use different methods and achieve different results. The field of AI is specialized enough that it may not be practical, in many cases, to refer to AI simpliciter: at a 2019 conference on the ethics of AI, a panelist complained that it is rare to see computer scientists talk directly of "artificial intelligence" (Goodman et al., 2019).

Deep learning algorithms, in the most basic of terms, tend to operate in similar ways. A basic structure for the algorithm is described, listing what kinds of properties the algorithm ought to look for in the data it is provided. The general structure of the connections between these properties and the outcomes the algorithm should print is then described, though not in great detail. The magic happens when historical data is added. This data serves to guide the algorithm by showing how particular instances of relations between basic properties and outcomes are set up. In large, complicated data sets, these relations appear stochastic to humans. Over many iterations, the algorithm begins to take a shape such that a particular set of inputs will generate a particular set of outputs. Once it has been fed the historical data, new data describing only properties can be provided, and the algorithm will predict a set of outcomes based on those properties.

Neural nets are a subtype of these algorithms that take as their starting point the structure of human neural connections.1 In the brain, each neuron is connected to multiple others, allowing for the parallel processing of information, and the reinforcement of particular patterns as the brain learns. Neural networks start off as relatively homogeneous sets of weightings, but as information is added the weightings between each connection begin to take the shape of the data provided, creating or eliminating pathways between multiple sets of properties in response to stimuli.

These kinds of algorithms are behavioral in nature, but are often of very limited scope. AlphaGo was a deep learning algorithm designed to play the game Go, indigenous to China and Japan. Go is a game involving placing stones on a 19×19 grid in order to capture territory. The rules are very simple, but the large board size means that a game of Go has 2×10^170 potential configurations. It was initially thought impossible for a machine to play Go at the same level as a human, but neural nets allowed for a machine, fed historical matches, to identify patterns and create strategies that would ultimately lead to the defeat of Ke Jie, the top Go player in the world (Brundage et al., 2018). But AlphaGo can't itself do anything but play Go.

Neuroscience has been an often silent partner in the development of these algorithms. Much like Go, human behavior is very complex. Unlike Go, however, intelligence collection is highly multi-modal: it involves a very broad range of signals including social media, metadata, photographic, and audio recordings. The task is to synthesize these into actionable data, determine the connections between them, and decide what matters. Here, neuroscience aids the processing of data through the creation of neural nets, as the sketch below illustrates.
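To make the training loop just described concrete, the following is a minimal sketch of a neural network in plain NumPy. It is purely illustrative, not drawn from the book or from any deployed system: a small set of weighted connections starts out near-homogeneous and, over many iterations on "historical" data (here the XOR function, a standard toy task), takes the shape of that data well enough to predict outcomes from properties alone.

```python
import numpy as np

# Minimal two-layer neural network (illustrative sketch only).
# "Historical data": four labeled examples of the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# Weightings begin as small, near-homogeneous random values;
# training reshapes them to fit the data (cf. synaptic reinforcement).
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: input properties -> hidden layer -> predicted outcome.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: nudge each weighted connection to reduce error,
    # strengthening pathways that fit the historical data.
    d_pred = (pred - y) * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_pred)
    b2 -= lr * d_pred.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# New inputs described only by their properties now yield predictions;
# for XOR the outputs should approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```

Scaled up by many orders of magnitude and fed signals rather than toy binary properties, this same weight-reshaping loop underwrites the pattern-recognition systems discussed in this chapter.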


This relationship between neuroscience and AI is self-reinforcing. On the one hand, insights from neuroscience provide a basis for thinking about designing algorithms to process data and make decisions. On the other, these algorithms can be trained to predict neural function. The Leifer Lab at Princeton University, for example, studies the dynamics of neural systems in Caenorhabditis elegans, the nematode worm (Nguyen et al., 2016). The lab has constructed computational models, using optogenetics (in which genetically engineered worms' brains emit light as they process information), to describe all 302 neurons of a worm's brain. In case this seems trivial, let's put the kind of experiment in context: using computer science, the lab has created a near-perfect model of the neurology of a worm. This is a highly accurate model of a very simple brain, contrasted with human neuroscience, in which we tend to provide only rough models of one of the most complex brains on the planet.

What this collaboration between neuroscience and AI provides to national security is a set of tools to describe, and predict, the behavior of adversaries. A 2018 issue of The Next Wave, the NSA's technology review, noted that as researchers incorporate insights from neuroscience and AI into successive versions of machine learning algorithms, they hope to devise solutions to complex information processing tasks, with the goal of training machines to human-like proficiency and beyond (McLean and Kreiger, 2018). The goal here is to train machines to perform much of the work of human analysts, but on scales that are too time-consuming and/or complex for older forms of intelligence collection.
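The worm-scale modeling described above can be caricatured in a few lines of code. The sketch below is an invented toy, not the Leifer Lab's actual model: it represents a tiny circuit as a weight matrix and steps its activity forward in time, which is the basic shape of rate-based models of small nervous systems.

```python
import numpy as np

# Toy rate-based model of a three-neuron circuit (illustrative only).
# Each neuron's next activity is a squashed, weighted sum of the
# current activities plus any external input.
W = np.array([
    [0.0, 0.9, -0.5],   # neuron 0: excited by 1, inhibited by 2
    [0.8, 0.0,  0.0],   # neuron 1: excited by 0
    [0.0, 0.7,  0.0],   # neuron 2: excited by 1
])

x = np.zeros(3)                    # initial activity
stimulus = np.array([1.0, 0, 0])   # constant drive to neuron 0

for t in range(20):
    x = np.tanh(W @ x + stimulus)  # step the circuit forward in time

print(np.round(x, 3))  # settled activity pattern of the circuit
```

The contrast in the text holds here too: with three (or 302) units the dynamics can be written down and predicted explicitly, while nothing comparable exists for the tens of billions of neurons in a human brain.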


2.5 Contemporary Trends: Narrative Networks

One of the most interesting applications of neuroscience in intelligence collection is in the study of narrative as a means of surveillance and social behavior prediction. This subfield has a number of interesting characteristics. The first is its intimate connection to DARPA, a central funder of work in the field. The second is the relationship between neuroscience insights into human behavior and those of AI and machine learning.

There are a multitude of competing conceptions of narratives (Paruchabutr, 2012), but here I broadly mean the way information is encoded into stories that provide not merely a collection of propositions or facts, but an explanatory arc that makes sense of those facts. This sense-making is key to the study of narratives, particularly for national security purposes, because it frames features of the world in the context of stories in which readers play a role (Finlayson and Corman, 2013).

The focus on narratives follows from their relationship to behavior. People tell stories about themselves and others as a way to make sense of the world. These stories can be broad, public, and enduring, such as religious texts that present everything from the genesis of the universe to specific moral lessons in narrative form. But stories can also be particular and personal: we are the heroes of our own story, and construct internal narratives to make sense of our own lives (McCarthy, 1961).

What links narrative to national security is the way public stories shape personal narratives. Take, for example, the case of Hoda Muthana, a woman born in the US to Yemeni parents and known for being an "American bride of ISIS." After her father was fired from the Yemeni embassy, Muthana grew up in Alabama. In an interview with the New York Times, Muthana recounted a strict religious upbringing in which she was socially isolated from (non-Muslim) peers in her community, and her turn to online message boards and communities. Muslim-majority online communities (described by Muthana as "Muslim Twitter") contained sub-communities that were engaged in group exegesis of the Qur'an, with increasingly radical interpretations that were supportive of the Islamic State of Iraq and the Levant (ISIL or IS, popularly known as ISIS). Encouraged by this community, Muthana left her family for Syria, where she became a wife to a series of ISIS fighters. In turn, Muthana's contribution to ISIS' war effort was to work as a social media propagandist, constructing narratives with the intention to recruit other Muslims living in the West (Callimachi and Yuhas, 2019).2

This episode illustrates the concerns of senior national security officials, doctrine writers, and scholars (Paruchabutr, 2012). Casebeer (2018) notes that groups such as Al Qaeda, Al Shabaab, and Daesh were engaged in online propagandizing for the purpose of recruitment, and were highly effective in doing so. The challenge, they claimed, was (a) identifying radicalizing material, and (b) determining what combination of material would pull particular individuals into involvement with terrorist groups.

These are two nontrivial technical challenges. On the one hand, we need to distinguish between radical and radicalizing material. Lest we risk excessive focus on acts of terrorism committed by Muslims, let us look instead at white nationalist terror organizations, and the rise of mass shootings in the US in which the perpetrators held white nationalist or adjacent ideologies. Not all ideology on the political right, or within white nationalist discourse, is, by itself, radicalizing or evidence that an individual is a terrorist. John Bolton, the 27th National Security Advisor of the US, is a radically right-wing political figure whose comments on Muslims, and Iran in particular, are well documented, and who ran a think tank that, under his leadership, produced articles that contained white nationalist talking points about a "Great White Death" among other anti-Muslim conspiracies (Przbyla, 2018). His words are assuredly dangerous. But it would be a stretch to say that, in isolation, John Bolton is a terrorist or responsible for the recruitment or creation of domestic terrorists in the US. In particular, John Bolton's work is radical, and maybe even radicalizing in the broader context of white nationalism. But it is not radicalizing to the same degree as, for example, The Turner Diaries, an explicitly white nationalist and anti-Semitic work of fiction by William Luther Pierce. That novel describes

Copyright © 2021. Taylor & Francis Group. All rights reserved.

Predicting the Future

19

a violent revolution in the US which, among other things, leads to the global extermination of non-whites. The Turner Diaries is a cornerstone of American white nationalism that was a direct inspiration for the Oklahoma City bombing: the book describes, in close detail, a terrorist act that mirrors the bombing, and is attributed as a source of inspiration for Timothy McVeigh (Berger, 2016). Likewise, Neo-Nazi David Lane, who coined the "14 words" common to white nationalist and Neo-Nazi groups, was influenced by The Turner Diaries; the 14 words would later become iconography used by Brenton Tarrant, the alleged perpetrator of the Christchurch massacre in March 2019 (Evans, 2019). The degree to which each is a causal element in terrorist recruitment, however, is crucial to the development of neuropsychological accounts of how terrorism arises, leading to the second claim.

It has been established for some time that serious psychological harms can arise when a person is exposed to large quantities of objectionable content over a period of time. We know this from detailed accounts and social scientific literature examining the role of pornography censors, law enforcement officers who work in sex crimes units, and—recently—content moderation staff for companies such as Facebook (e.g. Arsht and Etcovitch, 2018). What is less clear, however, is how the same content can motivate a person to commit acts of harm themselves, and precisely who is at risk of being radicalized. While the words of John Bolton might not be causal in radicalizing a particular individual, their context in a right-wing media landscape—including but not limited to mainstream media outlets, political institutions that can communicate directly with constituents, and social media platforms—might lead to the rise of violent individuals. We know that not everyone exposed to such language, even among those sympathetic to the claims conveyed, is moved to violence. So what constitutes radicalizing content, individually and in aggregate, and to whom, is a serious challenge for intelligence collection in national security.

This kind of challenge has received considerable attention, particularly among intelligence and military arms of the national security landscape. In 2009, DARPA held a workshop titled Experience-based Narrative Memory (EN-MEM) (Finlayson, 2013), in an attempt to begin to grapple with the question of how narrative influenced cognition. This blossomed, from 2011 to 2014, into the Narrative Networks (N2) program, including the Stories, Neuroscience and Experimental Technologies (STORyNET) workshop in 2011 (DSO, 2011). Outside of DARPA, the Army's Asymmetric Warfare Group published the 2016 white paper Maneuver and Engagement in the Narrative Space, detailing the effects of narrative on warfighter engagement in occupied territories (DeGennaro and Munch, 2018). Less obvious, but adjacent to the focus on narrative, was the Human Terrain System (HTS), which utilized anthropologists to provide accounts of local knowledge for use by warfighters in understanding and defusing potential conflicts in the field (Lucas, 2009).


Of all of these, N2 provides the clearest relationship between neuroscience and narrative, albeit not the only one. Commissioned by DARPA's Biological Technologies Office (BTO), under the direction of Casebeer, N2 sought to understand narratives from a neuroscientific basis. Program announcements, in this case, are instructive:

Narratives exert a powerful influence on human thoughts and behavior. They consolidate memory, shape emotions, cue heuristics and biases in judgment, influence in-group/out-group distinctions, and may affect the fundamental contents of personal identity. It comes as no surprise that because of these influences stories are important in security contexts: for example, they change the course of insurgencies, frame negotiations, play a role in political radicalization, influence the methods and goals of violent social movements, and likely play a role in clinical conditions important to the military such as post-traumatic stress disorder. Therefore, understanding the role stories play in a security context and the spatial and temporal dimensions of that role is especially important (reprinted in Sterling, 2011).

Assuming (as good physicalists will) that mental states are located in neural structure, narratives will have an influence on the structure of the brain. This structure, moreover, is plastic. It follows that if we can know how narratives, individually and together, influence the brain, we can know how they influence the mind. The task of N2 was first to determine the effects of particular narratives on particular brains. This requires specifying exactly what narratives look like, and which narratives exist. Casebeer, in 2014, provided Freytag's triangle as an example of the development of a narrative (Casebeer, 2014). Freytag (2018), writing in 1863, described a five-part structure to narrative involving exposition (of a story and its context), rising action, climax, falling action, and dénouement (in which the resolution is revealed and catharsis obtained). This applied, in Freytag's work, to Greek and Shakespearean dramatic structure, and to plays in particular. Another narrative structure that might be described is the monomyth, also known as the "Hero's journey" described by Joseph Campbell (1990), and attributed to (mostly male, see Estes, 1992) protagonists from Luke Skywalker to the historical Buddha.

With these structures specified, N2 could then determine what effect such narratives had on brains, typically using fMRI. Bruneau and colleagues (2013), for example, noted that particular narratives of the suffering of others had a significant impact on regions of the brain associated with empathy. The hypothesis, then, is that some narratives are better than others at eliciting empathy. The second task is determining relationships between particular pre-narrative neural structures and their post-narrative states, to determine which kinds of brains (and thus minds) were particularly receptive to certain narrative structures. That is, to explain why particular narratives affect only certain brains, and cause radicalization.
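To make the specification task concrete, the following is a minimal sketch, in Python, of how canonical narrative structures of the kind just described might be encoded for analysis. The stage labels come from Freytag and Campbell as discussed above; everything else (the names, the toy segmentation, the conformance check) is my own illustrative scaffolding, not a description of N2's actual representations, which are not public at this level of detail.

    from dataclasses import dataclass

    # Canonical stage sequences, taken from the structures discussed above.
    FREYTAG = ["exposition", "rising_action", "climax", "falling_action", "denouement"]
    MONOMYTH = ["ordinary_world", "call_to_adventure", "trials", "transformation", "return"]

    @dataclass
    class NarrativeSegment:
        stage: str  # the structural stage this span of the story occupies
        text: str   # the span of the story itself

    def conforms_to(segments, structure):
        """Check whether a segmented story realizes a stage sequence in order."""
        stages = iter(segment.stage for segment in segments)
        return all(stage in stages for stage in structure)

    # A toy, hand-segmented story. Automatic segmentation (deciding where one
    # stage ends and the next begins) is itself an open research problem.
    story = [
        NarrativeSegment("exposition", "A farm boy lives on a desert planet."),
        NarrativeSegment("rising_action", "A message draws him away from home."),
        NarrativeSegment("climax", "He confronts the enemy's battle station."),
        NarrativeSegment("falling_action", "The station is destroyed."),
        NarrativeSegment("denouement", "He is honored; a new order begins."),
    ]

    print(conforms_to(story, FREYTAG))   # True
    print(conforms_to(story, MONOMYTH))  # False

The point of the sketch is only that "narrative structure" can be operationalized as an ordered sequence of stages against which a story is checked; that is a precondition for asking, as N2 did, which structures do what to which brains.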

Work by psychologists such as Jonathan Haidt has claimed, for example, that there are five basic kinds of value commitments (harm, fairness, in-group, authority, and purity) that distinguish individuals; progressives (termed "liberals" by Haidt) are more likely to care about harm and fairness, where Haidt claimed conservatives tend to have proportionally more even sets of concerns (Haidt, 2012, though on Haidt's theories and ethics cf. Kennett and Fine, 2009).

Once this preliminary work is done, it follows that particularly dangerous narratives can be identified. The risks with which we might be concerned could be described in terms of both the breadth of effect and the reliability of effect. An example of the first kind is a narrative that affects a large number of minds—a narrative to which a large number of minds are receptive, although perhaps not especially receptive. The second kind might be a narrative to which one very particular kind of mind is especially receptive. These exist on a spectrum—there are in principle very persuasive narratives that are widely acceptable to large numbers of minds. We might be concerned about both, but particularly concerned about the latter in terms of their capacity to quickly radicalize individuals or groups.

The potential of N2 is still emerging, and so the trajectory from current research to ultimate application is speculative. Further, in keeping with its mission, the results of the project focused on basic science. Research under the N2 demonstrated the neuropsychological effects of narratives on subjects, and attempted to correlate them to behavioral responses. For example, subjects shown a range of suspenseful scenes from movies demonstrated a narrow attentional focus. Subjects were less likely to recall peripheral images from the scene, and fMRI measured the blood-oxygen-level-dependent (BOLD) response in visual centers of the brain. It was theorized, further, that subjects had increased empathy with characters in a narrative as suspense built and subsided. Narrative, and particularly suspenseful narrative, can narrow an individual's perception of the world outside that narrative—in the case of this study, particularly in the visual field (Bezdek et al., 2015).

Another study, again measuring BOLD responses through fMRI, demonstrated that inter-subject correlation in neural response to a naturalistic—that is, a real-world—stimulus in a small sample of viewers was strongly predictive of the behavior of large groups (of thousands or more) to the same stimuli. Researchers utilized advertisements and popular television, recruiting subjects who had not seen particular episodes or advertisements but for whom social media provided large population-level evidence of time-specific reactions to the stimuli. Inter-subject correlation in subjects' neural responses predicted 34% of the reactions to stimuli such as an episode of The Walking Dead, or Super Bowl ads, that had elicited large popular reactions. This prediction was stronger than the prediction from individual subject reactions, which researchers claimed established the BOLD response as a good predictor of a mass reaction controlled less by individual values or local peer effects (Dmochowski et al., 2014).
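To give a sense of the computation underlying that result, here is a minimal Python sketch of inter-subject correlation (ISC) over evoked time courses. The pairwise-mean estimator and the synthetic data are my own simplifying assumptions; the study itself used a more sophisticated estimator than this toy version.

    import numpy as np

    def intersubject_correlation(responses):
        """Mean pairwise Pearson correlation between subjects' time courses.

        responses: array of shape (n_subjects, n_timepoints), one evoked time
        course per subject for the same stimulus (e.g., a BOLD signal from a
        region of interest). This is a common, simple ISC estimator.
        """
        n = responses.shape[0]
        corr = np.corrcoef(responses)          # (n, n) correlation matrix
        upper = corr[np.triu_indices(n, k=1)]  # unique subject pairs only
        return float(upper.mean())

    # Toy data: 10 subjects watching the same 200-timepoint stimulus. The
    # "engaging" stimulus drives a strong shared response; the "boring" one
    # leaves each subject's signal dominated by idiosyncratic noise.
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(200)
    noise = rng.standard_normal((10, 200))
    engaging = 0.8 * shared + noise
    boring = 0.1 * shared + noise

    print(f"ISC, engaging stimulus: {intersubject_correlation(engaging):.2f}")
    print(f"ISC, boring stimulus:   {intersubject_correlation(boring):.2f}")

The reported hypothesis is then that stimuli driving higher ISC in a small laboratory sample also drive larger aggregate audience reactions.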


2.6 Surveillance and Counterpropaganda

The strategic aims of the N2 extend predominantly into surveillance. One expected application is being able to examine narratives and accurately predict the emergence of security threats. The particular focus of former DARPA program managers is predicting the radicalization of future terrorists, and Islamic terrorists in particular (Casebeer and Russell, 2005; Casebeer, 2014). The central goal here is to develop a comprehensive account of the mechanism by which particular narrative structures affect neural states and behaviors. Current intelligence analysis focuses on propositional content to determine intent, i.e. looking for particular speech acts that signal a threat. What N2 proposes to do is step back in a potential terrorist's timeline, and look at the structure of stories people are receiving to determine how and when terrorists are being recruited. This structure of stories, so the theoretical underpinning of narrative networks goes, is as important as the content itself. The way that content is arranged into a narrative that transports the reader to a conclusion is a strong predictor of their chances of identifying with the conclusion, and then acting on it.

The added, in-principle advantage of such a system is that it can identify and leverage a communicative medium bad actors can never make secret: recruitment propaganda. By reducing propaganda to a scientific, causally determined (or at least strongly predictive) enterprise, the N2 project could use propaganda as an insight into terrorist activities where analysts might not be able to access traditional signals intelligence such as wire transfers, emails, and phone calls. In cases where terrorists are increasingly distributed, such as white nationalist terror or the latter-day successors to Al Qaeda, this propaganda can be understood as a syndromic surveillance system. In public health, syndromic surveillance uses the appearance of symptoms from geographically diverse healthcare institutions to infer the appearance of particular disease states, particularly infectious disease. The appearance of propaganda or social media statements that have a high radicalizing potential can be understood as the terrorist analogue of a series of unexpected upper respiratory infections and high fever signaling the emergence of an epidemic flu strain outside of seasonal variation.
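The analogy can be made concrete with a toy detector. The Python sketch below flags days on which counts of high-risk content exceed a trailing baseline, just as a basic syndromic system flags unexpected clusters of symptoms; the window, threshold, and data are assumptions for illustration, not parameters of any fielded system.

    import numpy as np

    def flag_spikes(daily_counts, window=14, z_threshold=3.0):
        """Flag days where a count series jumps well above its recent baseline.

        Mirrors a basic syndromic-surveillance rule: estimate the expected
        level from a trailing window, then alert when today's count sits
        several standard deviations above it.
        """
        counts = np.asarray(daily_counts, dtype=float)
        alerts = []
        for t in range(window, len(counts)):
            baseline = counts[t - window:t]
            mu, sigma = baseline.mean(), baseline.std(ddof=1)
            if sigma > 0 and (counts[t] - mu) / sigma > z_threshold:
                alerts.append(t)
        return alerts

    # Toy data: stable background chatter, then a surge starting at day 40.
    rng = np.random.default_rng(1)
    series = rng.poisson(lam=5, size=60)
    series[40:44] += 25
    print(flag_spikes(series))  # expect a cluster of alerts starting at day 40

In the propaganda case, the counts would be model-derived scores for radicalizing content rather than symptom reports.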

We should not underestimate the scope of the task here. A complete construction of an intelligence apparatus would require the following (a schematic sketch appears after the list):

1 a broad understanding of the narrative structures humans use to communicate information;
2 collection of narratives common to insurgent or radical groups;
3 an understanding of the relationship between the narratives in (1) and (2) and their effect on neural states; and
4 a predictive algorithm of how (3) causes radicalization and/or terrorist behavior.
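As a way of seeing how much each numbered stage conceals, the apparatus can be sketched as a software pipeline in Python. Every function below is a deliberate placeholder: the names and interfaces are my own hypothetical scaffolding, not a description of any DARPA system, and each stage stands in for a hard, partly unsolved research program.

    def stage1_structures(corpus):
        """(1) Induce the narrative structures used across a broad corpus."""
        raise NotImplementedError("narrative-structure induction")

    def stage2_group_narratives(group_media):
        """(2) Collect narratives common to insurgent or radical groups."""
        raise NotImplementedError("targeted narrative collection")

    def stage3_neural_effects(narrative):
        """(3) Map a narrative to its measured effects on neural states,
        e.g., features derived from BOLD or EEG responses to the narrative."""
        raise NotImplementedError("narrative-to-neural-state mapping")

    def stage4_risk(neural_effects):
        """(4) Predict radicalization risk from those neural effects; this is
        where neural nets or similar models would likely sit."""
        raise NotImplementedError("radicalization prediction")

    def apparatus(group_media):
        """End-to-end composition: collect narratives, estimate their neural
        effects, and score each narrative's radicalizing potential. Stage (1)
        would inform the representations used in stages (2) and (3)."""
        narratives = stage2_group_narratives(group_media)
        return [stage4_risk(stage3_neural_effects(n)) for n in narratives]

That the body of apparatus is three lines while every stage raises NotImplementedError is, in a sense, the point.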


Aim (1) is well-studied in psychology, rhetoric and literature studies, and political science; (2) has been covered at length by the social sciences, including the predecessors of N2 such as project AGILE: a broad-spectrum program launched during the Vietnam War that included psychological warfare, but is more famous as the project that created Agent Orange (Unknown, 1964; Weinberger, 2017). N2 has made inroads into (3), albeit primarily regarding (1); the application comes in the form of (4). Note also that given the large datasets and potentially very complex evidence sets at hand, the other aspects of neuroscience-informed behavior prediction, such as neural nets, may play a role in (4) also.

An additional need in the realm of stage (3) is naturalistic experimentation on the effects of (2). This presents some logistical and ethical issues in itself, as exposing individuals to radicalizing material—in addition to the risk of radicalization—can harm otherwise unsuspecting human subjects of research. Displaying, for example, the manifesto of Brenton Tarrant, or a Daesh-produced beheading video, could be traumatic for research subjects. Nonetheless, measuring neural correlates of these particular narratives may be necessary, if we care to fulfill the strategic aims of the project, to ensure that the model systems used (films, Super Bowl ads) are valid for narratives with similar structure but more extreme content.

The final implementation of N2 will ideally solve the problem we began with. Terrorists communicate in narratives; so do intelligence agencies. Part of the decision to use, for example, law enforcement or military force against someone is the story we can tell about who they are, what they are doing (or about to do), and why they are doing it. In a world in which a potential terrorist actor's paper trail may include thousands of social media posts, video content, and personal communications, this story can be hard for analysts to recognize, and harder to communicate to decision makers. N2 solves both problems: it provides a basis to understand the kinds of effects narratives have on potential terrorists, and also how to communicate data in stories. An acknowledged use case of N2, then, is in the development of stories to communicate to analysts and their bosses (Miranda et al., 2015).

N2 thus gives an end-to-end view of modern surveillance and intelligence collection. It provides insight into the wide range of communications humans have with each other; a mechanism to understand how those can lead to violent action; and a way to package that analysis into a story analysts and decision makers can understand. The goals of intelligence collection are not changed by neuroscience, but rather reframed through the use of narratives.

2.7 Conclusion

Neuroscience gives us an insight into human behavior. This insight carries with it the intuitive appeal of being able to predict behavior by knowing the neural
correlates of stimuli that prompt significant responses. For the national security establishment, significant behavior includes radicalizing events, or planned attacks. For others, such as corporate entities, significant behavior might be purchasing goods or services. Narrative networks offer the capacity, in principle, to develop an account of both of these phenomena.

The ethical story of narrative networks, and behavior prediction in general, continues in the following chapter with a view to the development of behavior prediction tools into behavior modification strategies. The limitations of the N2 are discussed in Chapter 7, "Translation." The central ethical concerns with N2 as a surveillance program are discussed first in terms of dual use in Chapter 8, and then in terms of corruption in Chapter 9. After reading Chapter 9, it will become clear that ethical issues raised by N2 cross jurisdictions, but I place particular emphasis on individual and national approaches to those issues in Chapters 11 and 13.

Notes


1 I hesitate to say that neural nets take the human brain as their starting point, as it isn't clear their designers are looking for the kinds of function and structure—including mental states—that we typically associate as being unique (or at least distinct) about the human brain (Evans, 2021).
2 In addition to the New York Times print reporting on Muthana, there is a long audio interview with Muthana, "The American Women Who Joined ISIS", Feb 22, 2019. However, in the interest of future readers, it is worth noting that a similar, more famous story by that same group, "Caliphate," turned out to be a fabrication. Muthana's story appears to hold up at the time of printing, but I note this in case other interviews by that same group turn out to rest on inaccuracy or fabrication. See Wemple (2021).

3 THE SCIENCE OF PERSUASION

3.1 Chapter Summary


In this chapter, I detail strategies of persuasion and compliance inspired by modern neuroscience, and their connection to broader national security aims. I begin with a historical sketch of drug use and attempts to engineer forms of "mind control," before turning to the (more prosaic, and much darker) use of torture as a compliance strategy in contemporary intelligence collection in the US. This forms the basis of the appeal to neuroscience for a better form of persuasion and compliance. I detail three forms these innovations might arrive in: counterpropaganda; truth telling and lie detection; and nonviolent behavior modification. I then look at the development of countermeasures to these methods as a foreseeable outgrowth of the field, before looking at future applications and development trajectories.

3.2 Introduction

In the previous chapter, I dealt with advances in surveillance and behavior prediction in which neuroscience had a key role. I described the Narrative Networks Program (N2) in detail, and its potential future uses. We now turn to contributions neuroscience is making to persuasion, compliance, and control. At the risk of stretching an analogy too far, once prediction is completed, the next step in most scientific activities is to seek to control the subject of inquiry. This is true of nuclear physics, of biology, and now of neuroscience. The ultimate strategic aim for national security, here, is to develop an account of persuasion that can apply to both individuals and groups. On the level of individuals, "persuasion" is the most charitable term to use for this kind of
activity. No one is manipulating anyone; they are simply trying to find the best possible way to express content that will make it palatable to the listener. If logic is an important component of good speech, rhetoric is certainly also permissible. A less charitable interpretation of this kind of project is "manipulation." This manipulation may be simply linguistic, produced by lies of omission or a careful selection of facts to provide the most promising reading of a situation or claim. But manipulation can also have invasive or interventional properties. At the social level, persuasion can give way to "propaganda," the development and deployment of materials designed to influence communities and make them susceptible to control (Blank, 2017). Propaganda is an ancient tool of both armed conflict and statecraft, but the current surge in neuroscience research has lessons for propagandists as well. The study of propaganda has experienced a resurgence in the last two years, however, with revelations that Russian intelligence services sought to undermine the 2016 US presidential election with online propaganda, including but not limited to the dissemination of misinformation and the creation of fictional individuals to disseminate that misinformation.

This chapter deals with the contribution of modern neuroscience to the science of persuasion, in both its benign and unsavory forms. As with the previous chapter, I leave the question of ethics open for now, in order to better capture the scientific landscape, its historical context, and the aims of the programs that support the research and development activities with which this book is concerned. There is no question that ethical issues abound. But, as with many of the examples in this book, the ethical issues that arise in the science of propaganda are connected to, or parallel, seemingly unrelated issues around artificial intelligence and chemical warfare, among others.


3.3 Mind Control

Humans on the whole, it seems, cannot not partake in drugs. In fact, mammals as a class seem not only to share in common experiences of intoxication and inebriation, but many mammals have been observed actively seeking their own highs. Some of these are reassuring and familiar, such as reindeer who seek out psychedelic mushrooms. Others are, to us, bizarre, such as goats who eat poisonous lichen, or monkeys who intentionally envenomate themselves with millipede venom that has hallucinogenic properties. Beyond mammals, even bees and parrots can get drunk (Evans, 2016). Messing with mental states is clearly not a solely human affair; as far as I am concerned, the idea that animals get high is as close to an argument for the existence of mental states in animals as any.

In national security contexts, the most immediate connection to drugs is in armed conflict. The use of pharmaceuticals to enhance soldier strength, resilience, or alertness dates back to antiquity: the Greeks took opium mixed with
wine to calm their nerves both before and after battle, while inhabitants of the North Asian steppe would consume dried, psychoactive toadstools to enhance stamina and inure themselves against pain (Kamienski, 2016). In modern militaries, long operations prompted the prescription of amphetamines, and later compounds like modafinil, to increase warfighter alertness (Repantis et al., 2010).

Another, more unfortunate use of drugs in national security concerns returning service personnel. Returning personnel self-medicate, in the absence of adequate facilities and—in conflicts with strong anti-war sentiments such as Vietnam—civilian hostility. In popular culture, the Australian band Cold Chisel's 1978 song "Khe Sanh" was banned from Australian radio for a time allegedly for confronting the sex lives and drug use of veterans, including the acknowledgment of a "growing need for speed and Novocaine" (see also Trochowska, 2018). Empirical work bears out the song's assertion: veterans have a higher prevalence of substance abuse disorder and alcohol abuse disorder, though the use of illicit drugs tends to be at the same rates as civilians (Teeters et al., 2017). Prisoners of war (POWs), however, have been reported to have a higher incidence of use of psychoactive drugs than civilians (Ursano and Benedek, 2003).

There is one use, however, that is particularly relevant to the issue of persuasion. Popularized as "mind control," the use of psychoactive drugs to influence prisoners for information, to encourage defection, and—in the minds of some military planners—to plant double agents has been a persistent concern for state militaries. This concern reached alarming proportions during the Cold War, where the idea that adversaries of the US were pursuing mind control proliferated in public and policy circles. This idea was not limited to the US, and certainly its adversaries had similar concerns at one point. The concern that US citizens were being brainwashed to aid the enemy, particularly as the number of soldiers taken prisoner by the North in Korea skyrocketed, pushed concerns about mind control to new heights (Seed, 2011).

The result is the now infamous MKUltra program, and its lesser-known sister ARTICHOKE. MKUltra was a program devised by the Central Intelligence Agency (CIA), in collaboration with elements of the US Army Biological Warfare Laboratories (USABWL), to test the efficacy of psychoactive drugs as a means to control the behavior of individuals. The drug of choice for the CIA was lysergic acid diethylamide (LSD), though the CIA also made extensive use of barbiturates administered serially with amphetamines, heroin, psilocybin, and other compounds. The CIA Inspector General, writing in 1963, described the program in broad terms as "concerned with research and development of chemical, biological, and radiological materials capable of employment in the clandestine operations to control human behavior" (Faden, 1994).

Importantly, US and foreign cognitive science, including the predecessors to neuroscience, were heavily involved in the MKUltra program. Henry Beecher,
arguably one of the fathers of biomedical ethics in the US (Beecher, 1966), had traveled to Europe in the late 1940s to understand "ego depressing" drugs and the existence of a "truth serum" developed by the Nazis, including Beecher's own subject of interest: the use of LSD. While there, Beecher would visit the CIA headquarters in Germany, the British Ministry of Defence (MOD), and the Allied headquarters at Marly-le-Roi. The Allies were already interested in mind control, and truth sera for the purposes of interrogating POWs. While Beecher would ultimately never work with the CIA (despite his efforts), his and others' work informed the already burgeoning interest in mind control, and certainly the work of other, more intimately connected scientists (Marks, 1988; Moreno, 2016).

MKUltra was put on hold in 1973 after a series of deaths—notably, of a soldier in New York who threw himself out of a window—but to our knowledge, no results came out of the program (Moreno, 2012). This didn't signal the end of US efforts to attempt to harness insights from cognitive science for the purpose of controlling behavior. The latest and most infamous incarnation of these programs, again with individuals detained by the national security establishment, came in the form of the US torture program in the twenty-first century.


3.4 Torture

The US torture program is by now widely known, and I won't spend much time rehashing the blow-by-blow of the program itself. We know that the US medical establishment was deeply involved in both the administration of torture and the supportive care of torture patients in between sessions (Miles, 2004, 2009). What is less explored by scholars is the role of cognitive scientists—including psychologists, psychiatrists, and neuroscientists—in the US torture program. Reports in 2015 confirmed that psychologists helped the Department of Defense (DoD) develop and implement the now infamous "enhanced interrogation techniques" (EITs). Keeping the above in mind, let's not mince words: EITs are forms of torture, disguised under a US legal fiction despite their status as torture under the United Nations (UN) Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (hereafter, "The Convention against Torture"). They include acts of humiliation, audio and visual harassment of a detainee, and pain compliance methods that fall just short of more classical torture methods (pulling nails, crushing, etc.).

In other words, the American Psychological Association (APA)—one of America's largest, most prestigious healthcare guilds—was accused of being co-opted by the intelligence community and military to assist in torture. Especially disturbing was the identification of the APA's Ethics Director, attorney and psychologist Stephen Behnke, as a participant in the development of the EIT program.

This development did not come overnight, nor was it straightforward. Beginning with Executive Order 13440, the George W. Bush administration revoked
US support for provisions in the Geneva Conventions that prohibit the use of torture. At the same time, and over a period of years following, legal and policy documents revised what constituted a permissible form of interrogation. The 2006 updates to Appendix M of the Army Field Manual (AFM), to which the CIA subscribes for the purpose of interrogation, allowed for interrogation of subjects to include, for example, incommunicado detention or "forced separation" of detainees (Department of the Army, 2006). This kind of measure has been subject to critique as potentially violating the Convention against Torture, to which the US is a party, because sensory deprivation and social deprivation can deeply and permanently traumatize detainees. The complexity of implementing these methods in a way that does not constitute torture, or cruel and unusual punishment, has also been identified as a source of what former General Counsel to the US Navy Alberto Mora refers to as "force drift," in which the moderate or permissible use of force is used as a stepping stone to other, impermissible uses of force (Siems, 2012). The institutional barriers in national security—former Director Pompeo noting in 2017 that the CIA uses the AFM for guidance on interrogation (US Senate, 2017), linking the military and intelligence communities in how they understand torture—are thus weaker than they were prior to the commencement of torture activities in the early years of the twenty-first century. While the CIA is now theoretically committed to an established set of principles on the interrogation of subjects, those rules are broader than they once were (and arguably ought to be), and the same 2017 comments by Director Pompeo included mention of a desire to adopt a parallel, broader set of guidelines governing permissible interrogation for the intelligence community (US Senate, 2017). The institutions that guard against torture have been damaged, in addition to the moral crime of torture itself.

At the center of this controversy was a psychologist by the name of Stephen Behnke. According to the Report to the Special Committee of the Board of Directors of the APA (the Hoffman Report), Behnke orchestrated a set of subtle but profound changes to the ethics guidelines that provided psychologists cover to participate in torture. These changes were based not in a set of reasonable ethical arguments but in political concerns for public relations. Investigators found that ethics "positions were taken to please DOD based on confidential behind-the-scenes discussion and with an eye toward PR strategy" (Austin, 2015, pp. 31, 208–209). As director of ethics, Behnke was charged with reviewing and consulting on cases related to the clinical practice of psychologists. He was to be involved with the education of group members. At the level of the APA, what seemed to happen was personal and political ideology—including, according to the Hoffman Report, financial motive (Austin, 2015, p. 46)—creating a toxic stew wherein an ethicist stopped doing ethics and started supporting clandestine military work.


Of greater concern, Behnke seemed to go rogue when the APA Board attempted to rein in his behavior. Behnke allegedly worked secretly with the DOD as an informant, consultant, and paid interrogation trainer behind the backs of APA leadership. He assured his DOD contacts that "[n]othing could diminish…my commitment to continue to support all of your efforts, and the efforts of the great men and women who protect our country and our freedoms" (Austin, 2015, pp. 38–39). Behnke, the report charged, changed the use of ethics in the APA by assuming the appropriateness of psychologists participating in interrogations, and designing an ethical framework with that assumption in mind (Austin, 2015, pp. 195–196).

As a result, the APA, the largest organizational body in the field of psychology, is in damage control. While promises of structural improvements and increased transparency may help prevent a catastrophe of this magnitude in the future, the damage has already been done. As a society we are more likely to remember a single egregious event rather than years of effectiveness and progress. Such is the APA's new dilemma: how to continue to represent and improve the field in the face of the malfeasance of a select few.


3.5 Contemporary Trends

The use of torture, however, is recognized—even within the national security establishment—as an ineffective strategy. Empirically, psychological literature on the role of pain in compliance behavior has determined that the information we can expect from torturing detainees is not simply of poor quality: the incentives we create for the tortured are such that even if a detainee believes what they say, they do so because they are primed to believe whatever string of words stops the torture, eliminating the possibility of knowing whether what a detainee believes is really actionable intelligence (O'Mara, 2015). The CIA Office of Medical Services has claimed that its interrogations (without specifying what kinds of methods were used) provided actionable information, and that medical and psychological aftereffects were not evident in detainees (CIA, 2004). This, however, conflicts with the longstanding empirical literature that suggests torture is ineffective (Blakeley, 2011; O'Mara, 2015), and findings by the CIA Inspector General that cast doubt on the efficacy of any particular EIT in generating actionable intelligence on a pressing and immediate national security threat (CIA, 2004). Moreover, torture is strategically problematic in a world in which "hearts and minds" are frequently the issue at stake in asymmetric conflict involving actors who blend into, and may even receive the support of, a local populace.

With this in mind, neuroscience-informed military strategy has sought to develop better techniques of persuasion that can assist in intelligence collection without either crossing the line into torture or requiring the development of an
account of interrogation that deals with goalpost-moving definitions of "enhanced" or other interrogation.


3.5.1 Counterpropaganda

The first development in this field derives from the N2 and its successors. The intuition here is fairly simple—if you can predict what narratives will do to people, you can presumably counter the effects of propaganda with new propaganda. Counterpropaganda has existed as long as its counterpart, and N2 purports to offer a scientific basis for both.

The connection between neuroscience, persuasion, and interrogation is at times explicit. In 2007, Canli and colleagues discussed the use of neuroscience in national security terms, and noted that the merits of persuasion were that they obviated the apparent strategic advantage of other forms of interrogation (Canli et al., 2007). The rationale here is that while there is a perceived need for strategic intelligence collection, if this can be done without the need for interrogation techniques, then any pro tanto justification for torture evaporates. We could consider this an extension of the least harm principle, by which liberty-limiting measures are unjustified just in case there is an efficacious, proportionate response that is less interfering (Childress et al., 2002; Allen and Selgelid, 2017). That is, if there is a pro tanto reason for pursuing torture, it can only be if there is no less infringing option available that achieves the same ends. If persuasion techniques really can work as well as or better than torture or EITs, then there is a reason to pursue persuasion.

A key opportunity recognized by Casebeer and Russell (2005), in defending the use of narratives in national security, is the idea that if we understand narratives we can engage in counter-radicalization programs in lieu of extended detainment of terrorists. A purported issue behind the detention of terrorists in places like Guantanamo Bay (GTMO) is that once incarcerated, it is incredibly difficult for them to be released. Part of the reason is tied to the gray jurisdictional properties of GTMO and other bases; insurgents are neither POWs nor being held as civilian criminals. But another factor (and why even home nations of detainees are reluctant to accept them back) is the prospect that these individuals will simply turn around and commit other acts of terror.

Counterpropaganda utilizing neuroscience involves the steps outlined in my discussion of the N2 in Chapter 2. This is then extended to the idea that if we can understand the neurological basis of certain narratives, we can reverse engineer those narratives and populate them with our own countervailing message. Alternately, we can create a novel narrative that is more powerful than existing narratives. These can then be disseminated either by US forces or by their proxies in the civilian media landscape, to counter extremist narratives or foreign interference.


3.5.2 Truth Telling

Something that has intrigued neuroethicists, and perhaps the most discussed potential use of neuroscience in the national security context, is the possible use of neuroscience in giving an account of truth and falsehood that is more accurate than standard "lie detection" methods such as the polygraph. These technologies primarily utilize the electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) to determine regions of the brain associated with lying and truth telling. As these devices miniaturize (EEG in particular), suspects or detainees can be determined to be telling the truth, or lying. This led to a brief spate in the early 2000s of companies purporting to provide "truth telling" fMRI technologies, but currently only one company, NoLieMRI, commercially markets neuroscience-inspired lie detection services (Choudhury et al., 2010).

A key advantage purported by these companies is a higher accuracy than other lie detection modalities. The polygraph, arguably the most famous form of lie detection and still widely used in law enforcement, measures blood pressure, pulse, perspiration, and skin conductivity to determine whether a person is telling a lie. The polygraph, however, is infamous for its lack of accuracy. The polygraph is fairly easy to fool, if one can remain calm and keep track of the pattern of questions asked by the interviewer (Lykken, 1998). Moreover, being nervous in an interview involving a polygraph can generate significant false positives. Given that lie detection at the hands of law enforcement is undoubtedly a stressful event, regardless of one's guilt, this is a particularly serious technical problem for polygraphs.

Lie detection involving neuroscience purports to make use of neural correlates of lying and truth telling. If lying is understood as an intentional act, then one cannot lie without some kind of intention. If that particular intention—to represent one statement as truth, knowing it is false (Bok, 2011; Jenkins, 2016)—is neurologically distinct, and can be isolated through neurological imaging, then it can be tracked. In principle, then, we can determine when someone intends to lie. A second signal denoting that someone has actually lied would give us the data we need to know that what was said is a lie.

This kind of technology has yet to be accepted as admissible evidence in court. It has, however, been used in interrogation or for questioning suspects or detainees (Thomsen, 2015). If lies can be detected, then they can be responded to appropriately, either leading a respondent to reveal their deception or confronting them on their falsehood as required. This has the purported benefit of reducing the need to engage suspects in lengthy interrogations by quickly sorting truth from falsehood in real time, and using that (in conjunction, for example, with narrative study) to generate a productive resolution to an interrogation (Marks, 2007).
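For a sense of the signal processing such systems rely on, here is a minimal Python sketch of EEG-based detection via epoch averaging: epochs time-locked to probe stimuli are averaged, and the mean amplitude in the window around 300 ms post-stimulus (the P300 signal discussed in the countermeasures section below) is compared across item types. The synthetic data, sampling rate, and window are my illustrative assumptions (real protocols add filtering, artifact rejection, baseline correction, and statistical controls), and this is not any vendor's method.

    import numpy as np

    def p300_amplitude(eeg, stim_onsets, sfreq=250.0):
        """Average epochs time-locked to stimuli, then return the mean
        amplitude in a 250-450 ms post-stimulus window (the P300 range).

        eeg: 1-D array for a single channel, in microvolts.
        stim_onsets: sample indices at which stimuli were presented.
        """
        epoch_len = int(0.8 * sfreq)
        start, stop = int(0.25 * sfreq), int(0.45 * sfreq)
        epochs = [eeg[t:t + epoch_len] for t in stim_onsets
                  if t + epoch_len <= len(eeg)]
        erp = np.mean(epochs, axis=0)         # event-related potential
        return float(erp[start:stop].mean())  # amplitude in the P300 window

    # Synthetic comparison: a "recognized" probe item evokes a P300-like bump
    # at 300 ms; a neutral item evokes nothing beyond background noise.
    rng = np.random.default_rng(2)
    sfreq, epoch_len, n_trials = 250.0, 200, 40

    def simulate(evoked_amp):
        onsets = np.arange(n_trials) * 300
        eeg = rng.standard_normal(onsets[-1] + epoch_len) * 5.0
        t = np.arange(epoch_len) / sfreq
        bump = evoked_amp * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        for onset in onsets:
            eeg[onset:onset + epoch_len] += bump
        return eeg, onsets

    probe_eeg, onsets = simulate(evoked_amp=8.0)
    neutral_eeg, _ = simulate(evoked_amp=0.0)
    print(f"probe window amplitude:   {p300_amplitude(probe_eeg, onsets):.1f} uV")
    print(f"neutral window amplitude: {p300_amplitude(neutral_eeg, onsets):.1f} uV")

The countermeasures discussed in the next section work precisely by corrupting this contrast, for example by producing covert responses to irrelevant items so that probe and irrelevant stimuli no longer differ.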


3.5.3 Behavior Modification


At the edges of current technology, mind control has begun to return to the fore as a potential avenue for the development of neuroscience. In its most benign form, this involves the alteration of pathological mental states that arise, for example, in (but not only in) the line of duty in violent occupations. Post-traumatic stress disorder (PTSD) is not unique to armed conflict or law enforcement, but the return of service personnel from Iraq and Afghanistan regenerated interest in the use of cognitive science to disrupt recurring thoughts that can lead to panic attacks, substance abuse, and suicide.

A number of interventions have arisen in this context. Two that are already familiar are the tools of the MKUltra program: LSD and psilocybin. Both have been found to have strong potential as clinical tools in treating PTSD, as does the medical use of cannabis. While these drugs were placed on Schedule I lists under the Nixon administration as part of the 1960s culture wars (Gasser, 1994), they are increasingly viewed as important tools for those coping with trauma that cannot be resolved, either in principle or efficaciously, with therapy.

A similar application has been found for brain stimulation. Transcranial magnetic stimulation (TCMS) and deep brain stimulation (DBS) have both been used in clinical cases of PTSD resistant to other forms of treatment. In TCMS, a coil placed against the patient's scalp applies a magnetic field across the skull. DBS uses electrical current rather than magnetic flux, and also involves an implantable device that is placed inside the skull. Both work in similar ways, by stimulating electrical signals in the brain. Experimental clinical attempts have been made to treat PTSD using these modalities, with some success. However, as yet there have not been clinical trials to determine the efficacy of TCMS or DBS in treating PTSD in concert with or compared to other treatments (Tennison and Moreno, 2012; CADTH, 2014; Lavano et al., 2018).

3.6 Countermeasures

Counternarratives, lie detection, and behavior modification are particularly attractive to those engaged in intelligence collection outside of domestic law enforcement. The purpose of those techniques there is not to secure a conviction, but to form the basis for future intelligence collection, diplomatic maneuvering, and armed action. The knowledge, however, that these techniques exist immediately creates the concern that the same technologies might be used against intelligence operatives or warfighters captured by adversaries. Moreover, even the possibility of these techniques existing raises the specter that others might have already found a way to master them. Countermeasures are thus also immediately attractive.


Countermeasures are a strong driver of activity in the science of persuasion. The P300 signal in EEG, identified as a potential signal for detecting truth and deception (Farwell, 2001) and the subject of a successful startup marketed to law enforcement and intelligence, Cephos,1 has been subject to the development of successful countermeasures (Rosenfeld et al., 2004; Kathikeyan and Sabarigiri, 2012). It is likely the race between detecting deception and countering detection methods will continue apace.

This is true of many of the technologies I discuss here, but the race might be most stark in the area of countermeasures to detection. The reason for this is that countermeasures in information security are some of the most hotly contested areas of innovation. In the field of cryptography, for example, the creation of countermeasures to encryption has been so productive that the standing policy—following a failed attempt at censorship—of an intelligence agency is to work in more or less total transparency. The idea is that encryption we know can be broken is better than encryption that can be broken without our knowing it (NRC, 2004). The brain is, in this way, no different from other information systems. Measures to detect certain kinds of process—in this case, deception, or radicalizing tendencies or behaviors—are vulnerable to countermeasures. In the case of persuasion, itself a form of counterpropaganda, we can and should expect the formation of counter-counterpropaganda.

To see this process in action, consider the evolution of white nationalist terrorism over the last half century. It is widely regarded—even by those somewhat sympathetic to his vision—that George Lincoln Rockwell's attempt to create a thriving American Nazi Party ultimately failed. However, the realization that the American polity lacked the sensibilities for an overtly national socialist government did not dissuade those who came after Rockwell. Rather, American Nazism evolved into more mainstream versions under figures such as William Pierce. An Atlantic article on the rise of the new right notes that where American Nazism failed, its successors increasingly saw value in normalizing rhetoric, and appealing to less extreme ideological positions as an intermediary for their own views. Berger, the author, notes that in writing The Turner Diaries, Pierce targeted an audience he knew was already racist, focusing "less on the 'why,' than the 'what' and… 'how,'" to propel readers into more extreme views (Berger, 2016). The logical extension of this, in a world post-Oklahoma City, in which Pierce's views were the inspiration for a terrorist attack, is the "alt-right" of recent years, which again puts a more respectable face on the same views, taking them to major conservative conferences and displaying them in popular media. These are counternarratives to what white nationalists regard as the "propaganda" of multicultural, democratic liberal societies. The task viewed by the DOD and other organizations, though typically with Islamic terrorism in mind, is to reframe narratives away from the talking points of terrorists. In this way, they might succeed, but almost certainly with the result that their adversaries will continue to change tactics.


3.7 The Future of Mind Control


Serious questions remain about how these techniques will hold up to strong scientific and ethical scrutiny—assuming they work at all. Many of these techniques will, at the time of writing, be on their way out, to be replaced by novel techniques. This replacement will be either technical in nature, as the efficacy of a technique becomes disputed, or strategic in nature, as countermeasures are developed.

The kinds of technology covered here, however, are important, and reflect the changing nature of strategic priorities around armed conflict. These priorities, moreover, point to two increasingly significant applications for this technology. The first is in the move toward "countering violent extremism" as a source of intervention, where the aim is to use narrative to stop citizens from being turned into terrorists. This is the extension of existing efforts to combat terrorism as a central priority of states, evolving to deal with the stochastic nature of terrorist activity. The second is in state-on-state propagandistic efforts. The use of propaganda by foreign governments to influence domestic politics is not new, but revelations that Russian-backed groups created significant amounts of content during the 2016 US federal elections have refocused interest on propaganda. In particular, and in light of the very permissive stance American institutions take toward speech, there are new pressures on developing ways for governments to counter propaganda efforts with their own messaging.

This concludes our foray into persuasion and compliance. Understandably, much of the normative concern around these technologies centers on the acts of governments, torture among them, and thus finds a place in Chapter 9. This leads organically to national implementation in Chapter 13; however, the place and role of professional organizations in combating these issues will also be discussed in Chapter 12.

Note

1 These days, if you Google Cephos, you will find a consultancy whose founder has an alleged "rich history of forging breakthroughs in molecular testing and product development, clinical studies, and brain-based lie detection." However, there is no further mention of their lie detection services, but rather DNA profiling—another grift, for another book.

4 BUILDING A BETTER WARFIGHTER

4.1 Chapter Summary

In this chapter, I deal with the contribution neuroscience has made to warfighter enhancements. I start by introducing historical approaches to warfighter enhancement, and their motivation. I then describe what might be considered the paradigm of neuroscience-inspired enhancement, the fatigue countermeasure modafinil. I then address brain-computer interfaces (BCIs) as one of the most sophisticated forms of future enhancement.


4.2 Introduction

In the previous chapter, I noted that humanity's love of drugs is a persistent feature of our species, and a common trait we share with diverse animal communities. I then discussed, among other things, the use of neuroscientific advances, both pharmacological and electronic, to influence behavior and make adversaries more susceptible to persuasion or interrogation. The third important way in which neuroscience has contributed to national security, however, concerns the consenting—or at least assenting, depending on how you understand military service—warfighter. Here, I'll talk about human enhancement in the military, a crucial element of emerging doctrine in modern national security. While the majority of human enhancement literature is focused on physical enhancements, cognitive enhancement is a feature in US national security.

While surveillance and persuasion are important to national security broadly, human enhancement is critically important to state militaries in particular. The first reason is that training competent warfighters is an increasingly resource-intensive task. The wars of the twenty-first century have been long, and heavily
integrated with civilian populations overseas. The strategy of "winning hearts and minds," however, has largely been a failure, in part because the knowledge required to win over such hearts and minds has been lacking, in a doctrinal capacity, within state militaries. The Human Terrain System (HTS), the US Army's Training and Doctrine Command program designed to give troops access to anthropological data for use in peacekeeping and reconstruction efforts in the wars in Iraq and Afghanistan, was largely deemed a failure (Dixon, 2009; Lucas, 2009; Zehfuss, 2012). Part of this was that the data and skill needed to interpret anthropology could not, as the HTS attempted, be represented in flash-card-like information packets accessible to enlisted personnel. Inducting all personnel in months or even years of education for a single theater of war seems absurd on its face. At least, it does with current human capabilities.

This is a particular concern for the Special Operations Forces (SOF). At a meeting in November 2017 at Special Operations Command (SOCOM) in Tampa, FL, I was lucky enough to participate in a "Far Ridgeline" exercise, a horizon scan on biological technologies that, once mature and vetted for use in military operations, could be used by SOCOM.1 What was made clear from the outset was that SOF is a rapidly aging cohort of warfighters. The average age of an operator in SOF, one participant reported, was 38 in 2017, and growing. Operators often speak multiple languages, and in addition to their skills in lethal force are trained—formally, or on the job—to interact or integrate with communities to build trust and identify insurgents. Moreover, while the US military has relaxed its recruitment standards in order to make up a shortfall in enlistment over the two ongoing wars in the Middle East, SOF has not: as such, it is drawing from an increasingly small pool of recruits, and using those recruits for longer.

The skills of warfighters have long been under-recognized as the value, in real terms, of warfighters to the military. I suspect part of this is the impression that in mechanized war, individual soldiers are replaceable parts. Robert Heinlein's 1959 Starship Troopers begins with the protagonist, Johnny Rico, listening to his commanding officer list the cost of their training and deployment, finishing with the reprimand "we can replace you, but we can't replace [your gear.] Bring it back!" There's a lie to this, however: the cost of creating a warfighter is high, and growing as warfighting becomes a more complicated profession.

As a central tool for US foreign policy, SOF has been heavily utilized, but that utilization comes with a personal cost to operators. Beyond the training demands, warfighting is a physically demanding job, and a job that frequently compromises executive function. Warfighters are required to function in high-stress situations, often on little sleep, food, and hydration, overburdened by equipment. In these situations, the capacity to make decisions on anything but the most basic level of reasoning can be inhibited. This is particularly concerning in irregular conflicts in which enemy forces
may wear plain clothes, or be embedded in civilian populations, or where the application of force is justified based on complex intelligence information.

Enter enhancement, or depending on the year and controversy surrounding the term, "maintaining human performance and capacity." The ability to technologically enhance a warfighter's skills, to prevent them from losing those skills in either crisis situations or the long term, and to help them adapt to society after their deployment are all related. In this last category, prevention of the onset of PTSD might be claimed, somewhat controversially, as relevantly "enhancement." As others have observed, the status of vaccines as either therapy (against a future disease) or an enhancement (of the immune capacities of an individual or community) is likewise contested (Buchanan et al., 2001). So inoculating against mental illness could be enhancement in the same way as inoculation against an infectious agent. I and others have argued elsewhere that the so-called "treatment-enhancement" distinction is spurious from a philosophical perspective (Evans and Moreno, 2015; Evans et al., 2020), though I will return to it as a regulatory and legal fiction that presents challenges for medicine in national security contexts.

In this chapter, I will canvass enhancement technologies. After a brief history of cognitive enhancement (also sometimes called "neuroenhancement," e.g. Repantis et al., 2010; Fecteau, 2013; Levasseur-Moreau et al., 2013) in national security contexts, I will turn to current trends. I start with the poster child of cognitive enhancements: modafinil, the wakefulness drug and treatment for narcolepsy. The remainder of the chapter will focus on what is arguably the holy grail of cognitive enhancements: the BCI. This class of devices, in which information systems and their peripherals are connected to human minds through neural interfaces, is a key if aspirational element of future national security operations. I canvass what kinds of applications are already in the wings for BCI, before finishing up with potential future national security applications outside of armed conflict.

4.3 History

Performance enhancement is an ancient tool of war. If alcohol is humanity's most enduring chemical relationship, caffeine is a close second, and in war it has seen use since the beginning of recorded oral history to maintain alertness and activity for soldiers. The often fictionalized Viking "berserkers" are now believed in Nordic historical circles to have been warriors intoxicated by Amanita muscaria, a psychoactive mushroom. A. muscaria has been used in a variety of cultures, and similar stories are told about individuals under its effects imagining they have become wild animals; with other forms of conditioning these lead to the purported superhuman abilities and ferocity of the Viking berserkers (Kamienski, 2016).

In World War Two, the Nazis made copious use of Pervitin, an early form of methamphetamine. It is now acknowledged that, not only in the Blitzkrieg of the early war but during the Nazis' losing campaigns into Russia, soldiers used methamphetamine to stave off cold and maintain fighting strength (Kamienski, 2016). A 2013 article in Spiegel Online argues that the ongoing methamphetamine epidemic in developed and developing nations today has its origins in the German military's use of amphetamine in the war (Hurst, 2013). However, this undersells the scope of the use of pharmaceuticals as a tool of enhancement in war, and in World War Two in particular. In all theaters, and on both sides of the conflict, drug use was an essential part of the war effort.

Arguably, the greatest pharmacological enhancement to force strength is not commonly recognized as an enhancement at all. Penicillin, discovered in 1928 by Alexander Fleming, was rapidly developed by the US government during the war, with 2.3 million doses produced by 1945. One-third of casualties in World War One were from disease, and while we do not know how many were saved by penicillin, it is likely that hundreds of thousands of service personnel were saved from, or prophylactically spared, death and disability from bacterial infections (Quinn, 2013; Gaynes, 2017).

More conventionally understood "enhancement" was perhaps most pronounced in—perhaps unexpectedly—Finland, through what might be an unlikely source. Even after the war, Finland recorded as much heroin use as the UK despite having one-tenth its population, and the majority of it was legally available until changes in European politics and approaches to harm reduction led to its discontinuation. During the war, Finnish soldiers were outfitted with heroin, opium, and Pervitin as part of their standard equipment. While heroin's analgesic properties were of course useful, one account describes the Finns as using heroin much like aspirin, for colds or even headaches. Opioids are vasodilators and in small quantities can increase respiratory capacity, so in the extreme cold of the eastern front Finnish commandos were able to operate by using heroin to increase circulation and fend off the symptoms of the upper respiratory infections common to fighting in the cold (Kamienski, 2016).

In the Gulf War, US air power made extensive use of amphetamines. Air missions during Desert Storm/Desert Shield were typically long, and air power was deployed continuously day and night. Of air crew surveyed in the aftermath of the war, 65% reported the occasional use of amphetamines for long missions or during periods of fatigue. That research demonstrated effectiveness to the tune of about 5 mg of amphetamine every 4 hours, with few side effects (Emonson and Vanderbeek, 1995). Similar results were found in Operation Iraqi Freedom, with pilots preferring dextroamphetamine to short in-flight naps, and reporting higher levels of effectiveness with the former (Kenagy et al., 2004).

Arguably, enhancement has been at the core of modern warfighting. Joanna Bourke, in her Killing in War (2000), identified that a key issue for the emergence of the military as part of the modern nation state was the lack of desire regular soldiers had to engage the enemy and fire their weapons, much less do so with intent to kill. The task for modern militaries has been to create a person capable of following orders, participating in battle, and using lethal force in technically and professionally sophisticated ways (Henschke and Evans, 2012). It is almost certain this has its costs, and thus might be both enhancing and disabling in some contexts (Evans et al., 2020). Nonetheless, it identifies a central part of enhancement in national security: soldiers must be created.

4.4 Modafinil

If there is a candidate for "poster child of cognitive enhancement," it is surely modafinil. Proponents of widespread cognitive enhancement such as Nick Bostrom and Anders Sandberg (2009) reference modafinil as a paradigm of the way humans could effectively—and safely—pharmacologically enhance themselves and increase global welfare. The condition modafinil purports to cure is sleep. Discovered in the late 1970s (Kamienski, 2016) and approved for use in narcoleptic patients in the US in 1998, modafinil's central marketable feature is its capacity to promote wakefulness in those who take it, even under conditions of extreme sleep deprivation. The exact mechanism by which modafinil works is unclear—it appears to involve a combination of effects on neurotransmitters including serotonin and dopamine (Gerrard and Malcolm, 2007), the moderation of free radicals in the brain, and action on particular groups of neurons associated with the wake-sleep cycle (Lin et al., 2018)—but modafinil allows individuals to operate in excess of 40 hours with comparably little reduction in executive function or alertness (Pigeau et al., 1995).

This kind of staying power had, understandably, huge appeal to the DOD. I've already discussed the ongoing war on sleep in the military, but previous methods came with serious drawbacks. The mainstay of the military, and the US Air Force (USAF) in particular, was dextroamphetamine until 2017 (USAF, 2017). However, amphetamines can have serious side effects that presented a series of risks in operations. Amphetamines can improve response time, but not necessarily response accuracy. Amphetamines and other stimulant "go-pills" can require doses of "no-go pills," usually hypnotics or sedatives (or both), to get pilots to sleep after operations on which they were dosed (Caldwell et al., 2009). And, of course, amphetamines carry with them a risk of addiction.

In what will become a familiar story, the precise nature of this risk is not well characterized in the publicly available literature. An air crash in 2012 was attributed in the media to the use of amphetamine, or co-prescribed Ambien (Drummond, 2013), though later investigation would reveal that the report on that crash ignored mechanical difficulties inherent to the kind of vehicle involved, a V-22 Osprey that takes off like a helicopter but flies like a fixed-wing aircraft (Axe, 2012).

Likewise, one of the US servicemen charged in the Tarnak Farms friendly-fire incident in Afghanistan that killed four and wounded eight Canadian soldiers blamed the "forcing" of dextroamphetamine on service personnel (Jedick, 2014), though the Canadian report on the killings made no mention of the effects of drugs on the pilots in question (Canadian Government, 2002). It is unclear how safe amphetamines are in combat, and until recently the USAF authorized both dextroamphetamine and modafinil for fighter and bomber missions only; use of either modality, moreover, requires the exhaustion of non-pharmacologic approaches, continued adherence to crew rest requirements, and written approval in advance from the commander and senior flight surgeon (Meadows, 2005). The USAF ultimately ended official authorization for dextroamphetamine in favor of modafinil in 2017 (USAF, 2017).

Modafinil—whatever its mechanism—has a different clinical effect than amphetamines. It does not function as an "upper" and provide the feeling of a stimulant like amphetamine or caffeine. While there are side effects and adverse reactions to modafinil, they are rare, and nowhere near as severe as with other stimulants. Modafinil, being (at least in part) a dopamine reuptake inhibitor like cocaine or amphetamines, has in principle the potential for addiction; thus far, addiction has presented as a rare side effect (Volkow et al., 2009). Modafinil's net positives are so great that it is sometimes classed among the "eugeroics," or drugs that create "good arousal" relative to other options (Kamienski, 2016).

Modafinil's effectiveness, its additional benefits beyond amphetamines, and its low risks are a central reason the drug is now the primary and potentially only official go-pill of the USAF. The drug allowed flight crews to operate for up to 60 hours with relatively little reduction in performance through maneuvers, and without the anxiety, tolerance buildup, or dependence of caffeine or amphetamines (Caldwell et al., 2009). A further benefit was that it appeared to reduce the need for no-go pills, as studies demonstrated that combining modafinil with hypnotics did not increase performance over time (Storm, 2008). Modafinil is reported, in fact, not to interfere with the natural circadian rhythms of users, suppressing them temporarily but allowing their natural return (Kim, 2012). The USAF (2012, 2017) maintains its requirements for ground trials, but the use of modafinil in the USAF has increased steadily as its reputation for benefits without side effects has cemented.

That said, there are good reasons the USAF retains a ground trial policy before dispensing modafinil, and officially restricts its use to select (typically high-risk and/or combat) operations. While there is a growing body of literature on modafinil, these data are frequently collected from samples that are quite distinct from combatants, such as students or chess players (Battleday and Brem, 2015). Moreover, studies of modafinil among combat populations rely on self-reported effects by crew, which raises the question of the degree to which modafinil's reputation may enhance its perceived effects.

Double-blind controlled trials have shown that while modafinil does exert some effect on wakefulness, its effects are less than its proponents have claimed (Repantis et al., 2010). If we do take seriously the benefits of modafinil demonstrated through research with alternate populations, moreover, we should think about the risks and limitations of the drug in those same populations. In a study of chess players, German researchers found that modafinil increases alertness and reflective decision-making, but only in untimed games. Timed games not only demonstrated no benefit, but in some cases worse performance than controls (Franke et al., 2017). In a study of students—who take modafinil as a study drug—it was found that taking modafinil for study was associated with increased rates of drug abuse (Teter et al., 2018). It seems unlikely that modafinil is the cause of this, but in high-pressure contexts where drug abuse is more likely (such as the military), modafinil may be part of a cocktail of abused compounds and is thus worthy of caution. Importantly, modafinil's interactions are less well known than its primary mechanisms.

4.5 Brain Computer Interfaces

If modafinil is the poster child of cognitive enhancement, the BCI—sometimes referred to as the brain-machine interface2—is the holy grail. Still in their infancy, BCIs present the prospect of a "platform technology" that could allow warfighters, as well as intelligence analysts and law enforcement, to perform duties with a direct neural connection to a range of devices. While the implications of BCIs are typically cashed out in terms of weapons technology, treating the BCI as a mere weapon is likely an error (albeit one I've committed in the past; see Evans, 2011). The true capacity of the BCI, I will contend, lies in its connection to information and human cognition. The basic idea behind a BCI is simple: with sufficient information about a person's neural states, a device can be constructed that

1 Reads a person's neural states and converts them to machine-readable code; and
2 Converts code into signals that activate parts of the human brain and induce mental states that correspond to a digital signal.

Currently, these devices are typically implanted in the human skull, and are often highly invasive. However, improvements in neuroscience have generated new, noninvasive forms of BCI (Sellers et al., 2014). These noninvasive BCIs are typically better at reading neural states than writing to them, but they signal the potential for efficacious read-write devices. This noninvasiveness could come in two forms. The one the Defense Advanced Research Projects Agency (DARPA) tends to suggest is the use of electromagnetic induction to detect brain waves (DARPA, 2019).
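The "read" half of such a device turns on extracting usable features from externally recorded signals. As a minimal illustration, using a synthetic trace and an arbitrary choice of frequency band rather than any fielded system, the following estimates how much of a signal's power sits in the alpha band:

```python
# Minimal sketch, not a working BCI: the kind of signal processing a
# noninvasive "read" device relies on, estimating band power from an
# externally recorded trace. The signal and band choice are illustrative.
import numpy as np

fs = 256                                  # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)             # two seconds of signal
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

alpha = (freqs >= 8) & (freqs <= 12)      # alpha band, 8-12 Hz
alpha_power = spectrum[alpha].sum() / spectrum.sum()
print(f"Fraction of power in the alpha band: {alpha_power:.2f}")
```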

This would be truly noninvasive in the sense that it does not involve any surgical requirements at all. A pseudo-noninvasive option might be the prototype implantation device recently produced by Neuralink, a venture by Elon Musk. This device is designed to implant very fine wires into the skull with high degrees of precision, using miniature laparoscopic incisions. This is not noninvasive, but it is considerably less invasive—and thus, in principle, less risky—than open surgery for the purpose of implanting a BCI (Hamilton, 2019).

The first proofs of concept of BCIs emerged in the late 1990s (see Lebedev and Nicolelis, 2006), but a useful early illustration of the promise of BCIs is "Belle," an owl monkey who was one of the first test pilots of a BCI (Brower, 2005). Scientists attached a BCI to Belle's brain and instructed her to play a game for pieces of fruit, using a joystick. As Belle played, her brain signals were read via EEG and the neural correlates of her decisions to move the joystick one way or another were determined. The joystick was then disconnected, and fruit dispensed based on neural correlates rather than the (sham) joystick movements. Finally, the joystick was removed altogether, and Belle was able to play her game in exchange for fruit using only her thoughts—or, at least, the neural correlates of those thoughts.

BCIs have since been applied to patients suffering from extreme restrictions on mobility, such as tetraplegia or amyotrophic lateral sclerosis. Implantable BCIs have been connected to wheelchairs (Lebedev and Nicolelis, 2006; Ng et al., 2014), among other devices. Recent work has shown the development of proprioceptive feedback, so that patients can receive input from their BCIs as well as direct output (Ramos-Murguialday et al., 2012). With the conclusion of long-term follow-ups from randomized controlled trials, the BCI is beginning to show relative maturity as a therapeutic device (Abiri et al., 2019; Ramos-Murguialday et al., 2019).

Foundational work on BCIs was funded by DARPA's Revolutionizing Prosthetics program, and BCIs are hoped to one day be complex enough to allow individuals to control jointed prosthetic arms with the same (or even better) control as their flesh-and-blood equivalents. But, of course, it doesn't stop there. In 2015, Wired reported that Jan Scheuermann, a woman with tetraplegia, piloted an F-35 Joint Strike Fighter in simulation using her BCI. DARPA denied to Wired that Scheuermann is "a test pilot for a new generation of mind-controlled drones," but the implication is fairly clear: BCIs are a tool from which to control technology, including technologies that can project lethal force (Stockton, 2015).

The appeal of BCIs for use in national security applications when connected to physical devices—that is, devices that perform some electro-mechanical task in the world—is twofold. If BCIs can one day both read and write, they could in principle allow for direct control of devices entirely through neural connections. Right now, the connection is only one way, and so a hypothetical drone pilot would still need a screen and other forms of feedback through their other sense data in order to complete a task.
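The logic of the Belle experiment (fit a mapping from recorded activity to intended movement, then act on decoded intent alone) can be caricatured in a few lines. The data below are synthetic and the decoder is a stand-in; nothing here reflects the actual recordings or methods used:

```python
# Toy version of the "Belle" paradigm: learn neural features -> intended
# direction, then drive the task from predictions with the joystick removed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_units = 200, 32
directions = rng.integers(0, 2, n_trials)           # 0 = left, 1 = right
tuning = rng.normal(size=n_units)                   # each unit's directional preference
rates = rng.normal(size=(n_trials, n_units)) + np.outer(2 * directions - 1, tuning)

decoder = LogisticRegression().fit(rates[:150], directions[:150])
print("Held-out accuracy:", decoder.score(rates[150:], directions[150:]))

# Stage two: the joystick is disconnected; we act on decoded intent instead.
new_trial = rng.normal(size=(1, n_units)) + tuning  # activity for a "right" intention
print("Decoded action:", "right" if decoder.predict(new_trial)[0] else "left")
```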

One day, however, that could change, and the full range of data required to use a device could be fed directly into a brain. With enough tuning, BCIs could give individuals very fine control, and very fast control, as the kinetic reactions of humans typically lag their neural correlates by up to half a second (Evans, 2011). This lag is the time between a human brain producing a signal and the body interpreting that signal and producing action. While there are and will be latency issues with devices, once outside the human brain the limitations on information transfer are solvable engineering issues. The speed of light is much faster than an individual human brain signal, and so transnational actions might be accomplished by a user of a BCI with very little delay.

In terms of reading machine code and writing to brain states, we can envisage a future in which peripheral devices feed a variety of information into a person. A common point of discussion in meetings I have attended is the use of BCIs to feed visual data into the optic nerve. For example, a BCI connected to a transceiver could allow a drone at 30,000 ft with a long-range lens to "see" on behalf of a warfighter. We could also imagine this in terms of law enforcement accessing surveillance footage directly during operations, or at crime scenes. This capacity to handle a range of data, and devices, makes the BCI a "platform technology" that is highly desirable because it can be used in a range of ways that nontrivially enhance human capacities.

There has been a long discussion in performance enhancement circles, for example, about how we might make individuals see beyond the visual spectrum. Some arachnids, and crustaceans like the mantis shrimp, have a more diverse set of cones in their eyes, allowing them to see in infrared and ultraviolet (and potentially beyond) ranges of the electromagnetic spectrum. It is possible—indeed, it is possible now—to genetically engineer an eye to replace lost cones. But implanting novel kinds of cones, much less deciphering the optic nerve, is a much greater challenge. Moreover, these changes are more or less irreversible: once you give a person infrared vision, taking it away is a costly medical procedure. A more convenient way would be to install a BCI: this is only a single procedure, and one that has many more functions to boot. Moreover, there exists a huge range of devices that detect information across spectra and frequencies, at levels of accuracy higher than the human eye, and with the ability to have their gain tuned to much more specific levels. Representing all this as sense data to an individual is not an easy technical feat, but it presents an attractive way forward in the race to provide warfighters with better information. Instead of attempting to detect a radiation source with a Geiger counter by reading different levels as an operator turns, a BCI could convert that data into visual sense data—and using multiple detectors, much like binocular vision, could provide directional data straight to an individual's optic nerve. The same could be done for magnetic flux or infrasound, converting data we otherwise only experience as numbers or complex plots into optical data we can understand like we understand space through vision.
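As a toy version of the two-detector idea, here is how a pair of count rates might be reduced to a single bearing that could, in principle, be rendered as visual sense data. The inverse-square model and the geometry are simplifying assumptions invented for the example, nothing like a fielded design:

```python
# Hypothetical illustration: estimate a bearing to a radiation source from
# two detector count rates, assuming intensity falls off with distance squared.
import math

def bearing_from_pair(left_counts: float, right_counts: float) -> float:
    """Return an angle in degrees: 0 is dead ahead, positive is to the right."""
    ratio = math.sqrt(right_counts / left_counts)   # implied d_left / d_right
    return math.degrees(math.atan((ratio - 1) / (ratio + 1)))

print(bearing_from_pair(100, 100))   # 0.0: equal counts, source straight ahead
print(bearing_from_pair(80, 120))    # positive: source off to the right
```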

This, then, is where the BCI could come into its own. It is widely recognized that the future of national security lies in the capacity to process large amounts of information. We saw this in Chapter 2 through the TIA; in the efforts of the CIA and the NSA to conduct mass surveillance online; and in recent turns in law enforcement toward predictive policing, sentencing, and facial recognition. As infectious disease has increasingly overlapped with national security efforts (Weir, 2012; Evans, 2016; Evans and Inglesby, 2019), the tools of epidemiology have become essential to monitoring conflict, and to predicting the onset of infectious disease as a result of conflict. These data, however, are often disparate, and interpreting them is not straightforward. The next step for BCIs after simple devices is to enable individuals, or BCI-augmented groups, to interface with artificial intelligence (AI) to access and analyze these complex data.

In its earliest form, DARPA's image of this interaction is simply a neural version of what we have now. The BCI will form the basis for delivering commands to AI, which will then initiate searches or analysis on behalf of the user. As AI—or more likely artificial general intelligences (AGIs)—and BCIs both become more sophisticated, however, this relationship is anticipated to deepen. By AGI I mean AI that is capable of a broad range of functions, beyond the narrow foci of the AI I described in Chapter 2. An AGI is not necessarily an AI with mental states, but it is capable of devising and acting on its own reasons. What is projected is that in future scenarios, warfighters (or law enforcement, or intelligence officers) will enter into sophisticated, cooperative relationships with AI that is not only able to assist with the analysis of data, but is personalized to the user or group in a way that allows for better collaboration and optimization of functions. In these contexts, the AGI is less a simple data retrieval or analysis tool and more like a partner that responds, through the BCI, to the needs of the individual or group in a tailored way. This is where the technologies of this and the previous chapters intersect at the operational level, with a future operator in law enforcement, the military, or intelligence.

If this sounds like something out of science fiction, it is. The idea of a cooperative (or antagonistic) AI exists in a good number of stories. Perhaps the most famous AGIs from fiction are HAL 9000 of 2001: A Space Odyssey and Skynet of the Terminator franchise. But these weren't in the kind of relationship DARPA envisions. Rather, in the 2000s, the bestselling computer game franchise Halo featured an AGI named Cortana who forms a cooperative relationship with the story's protagonist, and ultimately becomes a key figure in the events of the story of Halo. That AGI is shown to interface with the protagonist, in one of the early scenes of the game, through a BCI. The lore of Halo shows that this BCI also functions as a "friend or foe" indicator for the protagonist, the means by which you tell fictional allies from enemies. (It is not a coincidence that Cortana is also the name of Microsoft's personal AI assistant—their Siri or Alexa.)

DARPA's connection to modern games and movies is not an accident, but a full account of that relationship would take us too far afield. What is important is to note, again, the strategic aims of modern national security agencies, and their anticipated outcomes. The technologies that will inform these relationships, moreover, are coming from the same companies that make these games: Microsoft—which owns the Halo property, and the real Cortana—was contracted in 2018 to create augmented reality technology for the US Army, to allow warfighters to view data overlaid onto their vision of terrain (Hollister, 2019). It is almost certain that this image processing and information retrieval will involve the rudiments of the AI that will, if DARPA is to be believed, form the basis of cooperative relationships between AGI and humans through BCIs.

I have mentioned groups, and it should be noted that these groups are not simply anticipated to be connected through their BCIs to AI/AGI, but to each other. This is the final and perhaps most technically challenging element of BCI technology—facilitating mind-to-mind communication between individuals. Depending on the way encoding works, it is possible that this will form the basis of future, secure communication between individuals, by piping signals directly into the nerves and neurons that interpret audio, visual, and even proprioceptive information. This gives one the capacity not simply to convey information, but to show another member of one's team what one sees, and even what one feels about what one sees.

Importantly, these applications extend past armed conflict. In the previous chapters, I discussed at length the reorganization of terrorist networks and transnational criminal activity around the internet, and the distributed nature of these enterprises. BCIs and their interaction with AI/AGI offer the promise of allowing information analysts to better interpret the relationships between different groups. One can envisage an AGI "pushing" (in the sense that my phone "pushes" email to its lock screen) a novel relationship into an analyst's mind, raising it to the level of awareness and preventing them from missing a connection. Alternately, multiple experts could collaborate (including with AI/AGI) to bring together real-time information from a wide array of sources, and potentially disciplinary areas. These may sound like very expensive versions of social media; the real innovation, in principle, would be establishing a common mental language that allowed subject matter experts to work with each other's minds to interpret data.
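The "push" pattern here is the familiar publish-subscribe one from ordinary software. In this toy sketch, the entity names, the confidence threshold, and the entire interface are invented; it only illustrates the difference between an analyst querying for relationships and having them pushed unasked:

```python
# Hypothetical sketch of "push" semantics: an AGI publishes inferred
# relationships, and only high-confidence ones interrupt the analyst.
from typing import Callable, List, Tuple

Relationship = Tuple[str, str, float]   # (entity_a, entity_b, confidence)

class AnalystFeed:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.subscribers: List[Callable[[Relationship], None]] = []

    def subscribe(self, callback: Callable[[Relationship], None]) -> None:
        self.subscribers.append(callback)

    def publish(self, rel: Relationship) -> None:
        if rel[2] >= self.threshold:        # below threshold: logged, not pushed
            for cb in self.subscribers:
                cb(rel)

feed = AnalystFeed()
feed.subscribe(lambda r: print(f"PUSH: {r[0]} <-> {r[1]} (confidence {r[2]:.2f})"))
feed.publish(("shell company A", "network B", 0.95))   # pushed to the analyst
feed.publish(("vendor C", "network B", 0.40))          # stays silent
```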

4.6 Conclusion

Here, I've discussed the potential for human enhancement.

This is not a complete survey of the human cognitive enhancements in which the Army is interested, and I have purposefully set aside enhancements that are purported to apply only to training (Tennison and Moreno, 2012). These are surely important, but I do not think they raise quite the same questions as some of the others. Where they do, however, many of their issues coincide with those of modafinil and future drugs of its kind, which are common study aids among students (particularly at elite universities). That is, we have questions about how efficacious these techniques are, what risk-benefit ratio they provide, and whether there are any adverse events or drawbacks to their use. With that in mind, the primary treatment of these interventions comes in Chapter 7. That is, the elephant in the room around enhancement is the issue of translation, from promising candidate to reliable intervention.

I am not, as I stated in the introduction, someone who is particularly interested in the capacity of enhancement to "change human nature"—or, as a high-ranking member of the National Institutes of Health said to me recently, "change our essence." I find such a concept particularly suspect in the context of national security, and especially when we consider armed conflict, where the explicit purpose of military training is to rewrite a human's intuitions about fear and danger to better serve the state (Bourke, 2000). Rather, the question is how much and in what ways we ought to pursue such a practice, and what risks are acceptable to expose others to in the process.

However, BCIs will make a return in the following chapter. This is because, as connected devices, BCIs are vulnerable to attack by outside forces in the same way as our phones, or our laptops, or—increasingly—our refrigerators. BCIs are the final frontier for human enhancement, but a new landscape for cyberattacks. Their deployment raises issues for translational neuroscience, which I address in Chapter 7. They also have the potential to be new avenues of attack against individuals, which I cover in Chapter 8. The ensuing conflict between state and non-state groups to secure their minds against intrusion will thus make BCIs an important technology as we move into Chapter 10, and Chapters 13 and 14 on governance.

Notes
1 This meeting was run at the unclassified level but under the Chatham House Rule, meaning information can be used but not attributed to participants at the meeting. For the remainder of this paragraph, all information is communicated under that rule.
2 The separation of these terms comes about largely from the linguistic problem that computers are popularly conceived of as the things that sit on our work desks, rather than the processors that exist in a range of devices including our phones, watches, and, increasingly, household appliances. To acknowledge the potential of BCIs, some prefer the use of "machine." I see no reason, at least here, to prefer either, and so stick with the older term in the interest of consistency.

5 NEUROWEAPONS

5.1 Chapter Summary

I conclude Part I with a discussion of neuroscience and weapons. I begin by discussing what in particular constitutes a neuroweapon, and how neuroweapons might differ from other weapons. I then address two kinds of neuroweapons. First, I discuss advances that may lead to pharmacological agents that attack the central nervous system, and in particular their application in nonlethal chemical and biological weapons. I then address attacks on neural systems through, among other things, brain computer interfaces (BCIs). I conclude with a view to how these two areas might eventually intersect, as an example of convergence in different areas of neuroscience creating novel effects.

5.2 Introduction

The last set of technologies I will cover concerns the use of technologies to attack the brain. In a 2009 report, the NAS referred to this as the transition from enhancement—the subject of the last chapter—to the degradation of cognitive capacities (NRC, 2009). This is a concern not just for the weaponeer, but for those who seek countermeasures to potential future threats.

To begin, it is again important to describe the frame of this chapter. All forms of violence target or at least affect mental states in some way. When you are struck, you might feel fear, or anger, or some other strong emotion. If you are shot, you may experience the cognitive effects of shock. If you are shot at, you might experience an acute stress or "fight or flight" response. My colleagues in sociology or criminology will claim, at times, that in cases of violent abuse it is the fear and control that abusers aim for, not the physical harm of violence.

This is also why some violence, properly defined, need not be kinetic: abusers might use emotional or financial means to traumatize their victims as surely as physical violence. The same is true of kinetic and non-kinetic operations in national security contexts. In armed conflict, the phrase "shock and awe" isn't simply metaphorical, but a description of the kind of affect an attack is intended to accomplish. In law enforcement, riot police are not simply armored in a way that protects them from rioters, but are dressed to look imposing and intimidating by design. In the area of nuclear deterrence, Israel's policy of deliberate ambiguity surrounding its alleged nuclear arsenal is designed to cause uncertainty in adversaries. Perhaps most importantly, the last few chapters were pretty violent in and of themselves. That goes especially for Chapter 3, in which I discussed brainwashing and torture. So for us to talk about weapons, and in particular weapons that arise from the neurosciences, as a distinct entity requires some clarity.

The term "neuroweapons," while a term of art in the literature, is often unclear as to what counts (Giordano and Wurzman, 2011; Tracey and Flower, 2014; Wurzman and Giordano, 2014). For example, directed energy weapons such as the US Active Denial System, which uses microwave radiation to stimulate water molecules on a target's skin, causing intense discomfort but no permanent damage, are sometimes referred to as "neuroweapons." But it is not clear what the "neuro" is doing here. Moreover, in "Neuroscience goes to war," Tracey and Flower describe both neuroscience-coupled weapons and the BTWC, but are not precise about what is a weapon, and what is a neuroscientific device used in armed conflict.

My concern here is considerably more specific. In this chapter, I refer to weapons as those technologies that intentionally, specifically, and directly target the brain or central nervous system for the purpose of causing harm, or securing military advantage. By intentionally, I mean that unlike kinetic force, which may or may not cause fear, for neuroweapons the aim of the attack is to cause a cognitive change in the target. By specifically, I mean that unlike the use of intimidation tactics, a neuroweapon is designed to target the brain and/or nervous system but not necessarily other parts of the body. And unlike the Active Denial System, neuroweapons are direct in that the agents or means they use act on the brain or central nervous system, and not through a proxy.

This narrows the scope of inquiry considerably. In particular, I will consider two kinds of neuroweapons. The first already exist and are being refined in the present day: biochemical compounds designed to affect the central nervous system. In particular, I will look toward the trend of weaponizing calmatives and hallucinogens for "nonlethal" biological and chemical warfare. The second are forthcoming, in that their mode of attack is yet to be widespread. These are attacks on brain states using, inter alia, BCIs as methods of intrusion.

Cyberattacks on physical systems are now widespread as a means of attack by state and non-state actors. They are mature enough that cyberwarfare was part of what, under the Obama administration, the DOD referred to as the Third Offset Strategy (where the first and second offsets are conventional forces and nuclear deterrents, respectively). A future iteration of cyberwarfare arises where connected devices that interface directly with human brains are vulnerable to attack. Here, I will describe what such attacks might look like.

One might object, utilizing the extended cognition thesis, that a crucial piece of the puzzle I have omitted is the use of propaganda to disrupt adversaries. The proliferation of "fake news" sites by Russian operatives (among others, including domestic actors) in the 2016 US presidential election constituted attacks that intentionally targeted people's mental states, were specific to those states, and directly accessed them in the sense that they impacted information processing directly (or as directly as possible, through sense data). Why not count these as neuroweapons? I concede that, at least insofar as we consider it "direct," propaganda is surely an old-fashioned but very reliable way to attack the mental states of a person. I've already discussed propaganda in Chapter 2, however, so I will set it aside here while acknowledging that it is a potential candidate, in principle, for a neuroweapon. I'll return to this issue in Chapter 8 when I consider the dual-use implications of emerging neuroscience and its application in national security.

5.3 Biological and Chemical Neuroweapons

Cognitive science has long been a handmaiden of the military—as I discussed in Chapter 3, in the middle of the twentieth century the emergence of whole classes of psychoactive compounds into modern medical research signaled a new relationship between the emerging fields of neuroscience and cognitive psychology, and the military and intelligence services of the US (and a comparable relationship in the Soviet Union). Until the late 1960s and the 1990s, respectively, this also included the development of biological and chemical weapons. The relationship between cognitive science and the biochemical weapons establishment mirrors the logic of DARPA's funding of BCI research: potential uses in both national security and therapeutic interventions.

Consider, for example, the drugs known as α-2 adrenergic receptor agonists. Adrenergic receptor agonists are a class of compounds that act on a number of receptor sites in the brain associated with cardiac function, and with pain. Discovered in the 1940s, a subtype of these, the α-2 receptor agonists, mimic the action of natural transmitters that inhibit the firing of neurons in the locus coeruleus, a site in the brain stem that synthesizes norepinephrine, a hormone central to human cognitive activity and expressed in high amounts during times of stress. Inhibiting the firing of these neurons also suppresses the expression of norepinephrine; the combined effect is sedation.

Upon their discovery, these agents were used in the management of hyperactivity disorders, and as sedatives and anesthetics (Giovannitti, Jr., et al., 2015). In the 1980s, their effect on human cognition lent them to the management of tic disorders such as Tourette's (Weisman et al., 2013). At the same time, however, the US Army explored the role of α-2 adrenergic receptor agonists as incapacitating agents (ICAs). As calmatives, α-2 adrenergic receptor agonists meet the definition of ICAs, producing temporary physiological and/or mental effects via action on the central nervous system. Effects may persist for hours or days, but victims usually do not require medical treatment, although treatment may speed recovery. ICAs include, but are not coextensive with, riot-control agents (RCAs) such as tear gas or pepper spray, which typically produce irritating or painful physiological effects as their mechanism of action. However, this distinction is far from set in stone, and differs from jurisdiction to jurisdiction: the Chemical Weapons Convention (CWC) only uses the latter term, but it includes any compound which disables a person physically, whether or not it is irritating (OPCW, 2005).

In the 1980s, ICAs were seen as a useful form of potential warfare. World War One saw the extensive use of chemical weapons on ground troops, in particular mustard gas. In World War Two, agents such as Zyklon B had been used in the Holocaust, and in the Eastern Theater the Japanese had used both biological and chemical agents against the Chinese in their invasion of Manchuria. The agents used, however, were frequently lethal and persisted in the soil for a relatively long time—a few days for mustard gas (ATSDR, 2003), but longer for biological agents such as anthrax. ICAs based on volatile compounds, if weaponizable, were a means to capture territory and hold it by rendering combatants unable to resist.

The purportedly nonlethal dimensions of compounds like the α-2 adrenergic receptor agonists make them an attractive form of ICA. If weaponized appropriately, nonlethal ICAs could be deployed such that even if civilians or other noncombatants were affected, the likelihood they would die from exposure would be relatively low compared to, e.g., an airstrike. In principle, proponents of nonlethal weapons claim, this lethality could be reduced to near zero, meaning territory could be captured without loss of life (Gross, 2010a).

The desirability of nonlethal ICAs continues today. In 2002, a National Research Council report noted "the theoretical possibility of peacefully incapacitating combatants/agitators, reducing the need for the violence that is frequently associated with many of the current methods" (NRC, 2002). A US multi-service document on how to deploy nonlethal weapons notes that using RCAs in war is prohibited by both the CWC and a 1975 US executive order, but contends that these agents may nonetheless be employed as "defensive means of warfare" for such tasks as controlling riots, dispersing civilians who are being used as shields, and conducting rescue missions (United States Armed Forces, 2007).

Calmatives have two major applications in national security. First, domestic law enforcement may find them attractive in crowd control scenarios. Tear gas, while not uncommon in riot control, has a significant capacity for harm. It is also, in an online age, terrifying for the reaction it provokes in those exposed; video footage of crowds of protestors in pain can reduce public confidence in police. Nonlethal solutions that don't rely on pain compliance are thus an attractive possibility for law enforcement looking to accomplish crowd control without the morbidity and mortality attached to tear gas or pepper spray.

Second, it is easy enough to see why some state militaries find the idea of using nonlethal neuroweapons, including calmatives, in armed conflict appealing. There is, in principle, a case for using them in "unconventional" conflict scenarios, such as hostage situations, when an enemy is using human shields, or when insurgents are occupying civilian buildings such as hospitals. The thought that a gas weapon could put everyone to sleep and allow belligerents to be apprehended is seductive, because it neutralizes the capacities of asymmetric belligerents who use civilian areas as staging grounds.

Of course, there are complications. The most famous use of a novel, purportedly nonlethal ICA in recent memory occurred in 2002, when Russian special forces used an unidentified gas (later named as a derivative of the anesthetic fentanyl) to end a hostage crisis in a Moscow theater (Riches et al., 2012). The siege was indeed lifted, and all 50 of the separatists who initiated the crisis were killed. But in the process, between 117 and 200 hostages were also killed, with another 150 placed into intensive care (Satter, 2013).

The main reason the hostages died is that while the Russians classified their agent as nonlethal, "lethality" is hard to define. The dose makes the poison, and in practice we know too little about the diffusion of gaseous weapons in combat environments to be able to state confidently that a weapon is nonlethal. An allergic reaction, hyperventilation, or simply being too old or too young could render a neuroweapon fatal, particularly for noncombatants.

Context matters, further, when we think about what counts as lethal or nonlethal force. Cancer drugs can be "effective," where effective might mean statistically significant rates of cancer remission in ideal circumstances. Neuroweapons, in a parallel sense, could be "nonlethal" in the sense that they have killed fewer than 0.1% of exposed individuals under ideal conditions. By "ideal conditions" one might refer to specific, well-defined, and stable atmospheric conditions including temperature, pressure, humidity, and the concentration of other particulates in the air. However, idealization might also refer to things like "the exposed individual is standing on solid, level ground." We can say that gently shoving an able-bodied, conscious person on a flat surface is nonlethal, while still acknowledging that a gentle shove off a cliff is fatal for most people, even if it isn't the shove but the fall that ultimately kills. Neuroweapons in pragmatic contexts could still kill, and kill reliably if in roundabout ways—knocking a person unconscious while they are behind the wheel of a car, in stairwells where a gas weapon pools in excess concentrations, or in the middle of combat operations.
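To see how quickly "nonlethal under ideal conditions" comes apart in the field, consider a back-of-envelope calculation. Every number below is invented for illustration, not an empirical estimate:

```python
# Illustrative arithmetic only: context-dependent deaths can dominate the
# "ideal conditions" fatality rate of a purportedly nonlethal agent.
p_direct = 0.001            # 0.1% fatality under ideal conditions
p_hazard = 0.05             # share of exposed people in a hazardous context
                            # (driving, on stairs, mid-operation)
p_death_given_hazard = 0.2  # chance that incapacitation in that context kills

p_total = p_direct + (1 - p_direct) * p_hazard * p_death_given_hazard
print(f"Effective fatality rate: {p_total:.3%}")  # ~1.1%, over ten times the ideal figure
```

On these made-up numbers, the roundabout deaths outnumber the direct ones roughly tenfold; which of them we attribute to "the weapon" is precisely the counting question taken up below.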

In all these cases, how we count whether or not "the weapon killed you" matters when we measure nonlethality. For some, even if it was the rapidly accelerating engine block of your car that stopped your heart, the fact that your vehicle was out of control because you were knocked unconscious by a weaponized calmative might be the deciding factor.

In the same way that precision munitions in the wrong context can cause significant civilian casualties, the ideal of nonlethality can slip once it comes into contact with reality. The history of chemical and biological weapons suggests that commanders often vastly overestimate their efficacy and underestimate the confusion they can wreak in managing the battlefield. Their tactical advantages over other options are limited at best, yet they have led to expensive and error-prone arms races—part of the reason US President Richard Nixon renounced the first use of chemical weapons and all methods of biological warfare in 1969 (Dando, 2015).

Conventional ICAs are clearly a suboptimal means of controlling a space in a way that doesn't harm bystanders. This is equally a problem in contemporary law enforcement, where public approval of RCAs has dipped precipitously in the aftermath of protests in places like Ferguson, Missouri. Claims that tear gas is a "banned weapon of war" dominated headlines and thinkpieces following these incidents, with further concern as RCAs were used against migrants at the southern US border in 2018 (Horton, 2020). Increasing scrutiny of the long-term effects of exposure to RCAs has, moreover, drawn attention to the harms to civilians and bystanders (in the case of law enforcement operations) posed by their use, and has raised the question of whether indiscriminate but ostensibly nonlethal operations really are preferable to a higher barrier to the use of force resulting in a more targeted, lethal response (Dimitroglou et al., 2015).

Novel ICAs thus present an attractive option for states looking to operate in asymmetric, civilian-heavy conflicts, or for law enforcement organizations looking for alternatives to disperse crowds or defuse conflicts between large groups of citizens and law enforcement officers. In particular, agents that target very specific sites in the brain, or produce particular cognitive effects, are preferable to the broad-spectrum effects of legacy ICAs. This is especially true if novel agents can cause strategically useful mental states that are less problematic than simply rendering individuals unconscious. Novel techniques in neuroscience are of particular interest for leveraging a better understanding of existing compounds into agents that alter mental states to reduce aggression without sedative effects. One option, for example, is the weaponization of methylamphetamine, an early medication for the treatment of attention deficit hyperactivity disorder (ADHD).

During the Cold War, Hungary, along with other Warsaw Pact states, investigated the use of methylamphetamine as a deliriant, or a drug that causes delirium rather than unconsciousness (Wheelis et al., 2006). So it might be possible to craft a neuroweapon that results in a target population losing the capacity to, say, fight, without the risks that unconsciousness can bring. In apartheid South Africa, the chemical and biological weapons program known as Project Coast allegedly involved the manufacture and weaponization of large quantities of MDMA (ecstasy), and later the chemical agent known as BZ, for use in riot-control scenarios (Cross, 2017). This kind of strategy seeks to reduce aggression and create euphoria in a population, as a means to subdue a crowd. Put simply, if the crowd is high, they're less likely to fight. Appropriately used, the thought goes, crowds could be controlled by inducing delirium or euphoria that makes individuals prone to suggestion, or at least nonthreatening to law enforcement or military personnel responding to a crisis. Of course, these methods are far from foolproof—a community-wide "bad trip" could be as much a disaster as a fentanyl attack—but they present a path forward for the development of more and more specific drugs.

At the interface between civilian and military neuroscience, there are strong incentives to research and develop biochemical compounds that activate narrow parts of the brain for particular therapeutic purposes, such as memory recall (or memory replacement, in the case of PTSD), auditory and visual hallucinations, and psychosis (Dando, 2015). Moreover, existing drug development also provides an avenue toward weaponization. Contrary to popular media narratives such as Amazon's latest reboot of the Jack Ryan franchise, the creation of chemical weapons from precursor chemicals is a nontrivial task, particularly if those compounds are (a) relatively delicate, and (b) need to be, or are optimally, inhaled. This, however, is also a concern of modern medicine: creating nasal sprays provides a way to deliver medications into the porous mucosal membranes inside the nose and into the blood stream, which means that drugs can be delivered at lower dosages to achieve the same clinical effects, and with quicker onset times. So the aerosolization of biochemical compounds for delivery to humans is also a necessary form of medical innovation with broader applications, including the delivery of pharmaceuticals (Zilinskas and Alramini, 2012).

5.4 Hackers in the Brain

The idea of a weaponized hallucinogen sounds like a science fiction plot, and in fact is one: around the time the US national security establishment was enjoining the National Academies to seriously consider new forms of riot control (NRC, 2003, 2009, 2012), Christopher Nolan's Batman Begins depicted a "fear toxin" attack that produced terrifying hallucinations in those affected. If that sounds too far-fetched, however, what comes next is bizarre.

The BCI, in the last chapter, presented a novel enhancement that formed a platform to connect human neurology directly to a wide range of devices—everything from a wheelchair; to an unmanned aerial vehicle (UAV), lethal or nonlethal, individually or as part of a swarm; to an AGI. Moreover, future BCIs aim to allow those devices to feed information back into the human brain; the connection, at present, is almost exclusively one-way in its application. If we can connect to a person's BCI in a friendly way, however, what about in adversarial ways? The BCI, ultimately, will form a two-way street between a human brain, or multiple human brains, and a range of devices. Just as connecting our existing devices has created novel cybersecurity concerns, connecting a BCI creates an incredible new vulnerability.

Attacks on BCIs have been anticipated since at least 2009 (Denning, 2009). The idea also has its roots in the emerging—though far too late—concerns about cybersecurity in medical devices. In 2000, to-be Vice President Dick Cheney underwent surgery for his pacemaker. The problem was not the function of the pacemaker, but that many pacemakers are outfitted with a Bluetooth transmitter. This connection allows clinicians to run diagnostics on the pacemaker without performing surgery. However, Bluetooth is notoriously vulnerable to intrusion, and uses fairly lax protocols to secure information. The concern, as Cheney stood to become the second-in-line to the Presidency, was that an adversary could hack and take control of the Vice President's cardiac system through his pacemaker (Peterson, 2013).

If a pacemaker is vulnerable, a BCI is in principle also vulnerable to hacking. The earliest hacks are likely to have primitive, if problematic, effects. That is, lacking the capacity to interpret data or provide novel information, a series of attack options is available to an adversary. The first is to simply disconnect the operator from their device, potentially by rendering the BCI completely inoperable. This could, in national security contexts, render a user unable to communicate intelligence information, or control their UAV, or receive data. As an attack, this is likely to be part of information warfare suites that seek to deny a force capability. Note, moreover, that it could equally be organized crime interfering with the BCIs of law enforcement as insurgents attacking warfighters.

The second kind of primitive hack would be to flood the BCI with white noise. If the BCI only reads neural signals, this would probably have the same effect as the first kind of attack, but it could potentially also destroy the device it targets. One could imagine a UAV programmed to return to base if the BCI connection is severed, but forced to crash if it simply received large amounts of nonsense information, in the same way a website might be taken down by a denial-of-service attack.

Why would a person do this? In the realm of criminal behavior, one obvious reason would be ransom.

The use of cyberattacks for ransom purposes is already widespread: locking individuals or groups out of critical systems, and charging a fee to release the system back to its owner. This kind of ransom could be applied directly to BCIs; for people with disabilities, for whom these devices could be essential to a high quality of life, this would be devastating. For individuals who require a BCI for their work in national security, moreover, this kind of ransom attempt could compromise security.

Importantly, BCIs are often depicted, and designed, as implantable. This means that an analyst who uses a BCI would need some means to guard their BCI from attacks once they leave the workplace. If a BCI is not a device one can lock in a drawer—though advances in noninvasive technologies may make this possible—it is a potential security threat not just within operations, but between operations.

What a denial-of-service attack would do to the person attached to a "write" BCI is unclear. It isn't certain that a brain would be able to interpret random information, or how it would do so. I'm not even certain how one would test this, but we can anticipate that someone will almost certainly try it as BCIs become ubiquitous. At its most devastating, a denial-of-service attack on a human brain would be lethal, as the brain is overloaded with random signals.
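Defending against this sort of flood looks, at least in outline, like ordinary input validation. The sketch below is entirely hypothetical; the packet format, limits, and API are invented. It shows only the kind of rate-and-bounds gate a "write" path might enforce before anything reaches neural tissue:

```python
# Hypothetical safety gate on a BCI "write" path: drop command floods and
# out-of-band stimulation amplitudes before they reach the stimulator.
import time
from typing import List, Optional

MAX_AMPLITUDE = 1.0          # device-safe stimulation ceiling (assumed units)
MAX_COMMANDS_PER_SEC = 50    # rate limit (assumed)

class WriteGate:
    def __init__(self) -> None:
        self.recent: List[float] = []    # timestamps of admitted commands

    def admit(self, amplitudes: List[float]) -> Optional[List[float]]:
        now = time.monotonic()
        self.recent = [t for t in self.recent if now - t < 1.0]
        if len(self.recent) >= MAX_COMMANDS_PER_SEC:
            return None                  # flood: rate limit exceeded
        if any(abs(a) > MAX_AMPLITUDE for a in amplitudes):
            return None                  # white noise / out-of-band: reject
        self.recent.append(now)
        return amplitudes                # passed basic sanity checks

gate = WriteGate()
print(gate.admit([0.2, -0.4]))   # admitted
print(gate.admit([9.9]))         # rejected: exceeds amplitude ceiling
```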

5.5 Looking Forward

As the technology matures, however, there are other possibilities. One could hijack a signal and provide new, interpretable inputs to either the device or the user. In the case of the device, we could imagine an adversary using a BCI connection to commandeer a drone, for example, and provide it with new commands without the understanding of its hapless controller who, their will notwithstanding, is unable to change the outcome. In this sense, a BCI simply opens up another point of attack on existing devices.

The case of attacking the human, however, is more sinister and more speculative. Here are a few possibilities. One would be an attack able to implant new ideas or memories in a user. Whether these would appear to the user as unusual, or be recognizable as adversarially placed ideas, is unclear. The most sophisticated kind of attack, presumably, could influence a user of a BCI without them knowing. Another would be to place a program inside a BCI that acts as a recording device, collecting sense data from a user, potentially even when disconnected from the device they accessed during an intrusion. A user could become an unwitting, human-sized version of the keyloggers currently used by hackers to record keystrokes and collect passwords or other sensitive information from particular computers. Here, however, the data could include thoughts and sense data, rather than merely inputs into a computer.

It might also be possible, depending on how pervasive a BCI is, to take control of a user. If suggestions can be provided to a user, they could theoretically be compelled to perform tasks for an intruder.

Depending on how BCIs are integrated with human neurology, and on how the mind works, these suggestions could appear to be the person's own intentions, indistinguishable from their own plans and motivations. The basic idea here would be to use a BCI to implant a desire to perform some action, coded in such a way that the individual is unable to distinguish between their authentic desires and those implanted in them. A strong enough desire could compel an individual to act in ways, including violent ways, in which they otherwise would not. The long game in this instance would be to carefully rearrange an individual's desires so they work against the purposes, say, of the national security organization they serve.

It is not certain that these types of attacks are possible. The complexity of BCIs may never reach such a state that it is possible to hijack someone's desires with the precision needed to compel them to specific acts. But if they are in principle possible, they pose challenges for the design of BCIs. Each level of intrusion, and complexity of attack, relies on the pervasiveness of a BCI in a person's neural processes. There are clear advantages to more information access from the perspective of a BCI, but just as cars can be hacked through their stereos, the more centralized control is through a BCI, the more vulnerable the entire system—in this case, the human mind—is to attack.

5.6 Synthesis

As both classes of technology mature, there are important possible convergences between them. These come in two distinct ways, and while both are speculative, I will consider them with an eye to the future.

The first is using neurochemical attacks to disrupt BCIs. A BCI will rely, at least for the foreseeable future, on distinct connections to human neural circuits in order to transmit and receive data from the brain. It is foreseeable that, with enough research, a chemical or biological construct could be designed that attacks these connections and renders a BCI inoperative. Pharmacology has, as discussed in this chapter, made considerable gains in identifying compounds that disrupt and mimic basic processes in neural circuits. If the design of BCIs follows predictable patterns, it could be strategically advantageous to design biochemical agents to degrade the connection between the user and the interface. It may even be possible to design a compound that makes a BCI's user vulnerable to hacking: introducing the neural equivalent of a hardware-based vulnerability that can be exploited through code.

The next important way in which these technologies could converge is in BCI-induced neurochemical attacks. This is an extension of creating novel mental states. Neural signals play an important role in the release and regulation of chemical markers in the brain, and a sufficiently complex attack on a BCI could invoke a targeted neurochemical response in a person by triggering the appropriate brain function.

58

BRAINs in Battle

instructions to tell its user’s brain to upregulate adrenaline, causing a stress cardiomyopathy. Or we could imagine code introduced into an implanted BCI that instructs the brain to deregulate its supply of oxytocin, leading a user to develop crippling depression and potentially disrupting their life and activity. Both of these are possible attack scenarios as these felds move forward. This is a case of convergence between biomedical engineering, chemical and biological synthesis, and artifcial intelligence to create new modalities for attack. In the near term—even the near-term advent of widespread use of BCIs in medicine—it is likely that the resource constraints on such attacks will be considerable. However, like the cost of biological attacks (Evans and Selgelid, 2014) this cost will only decrease with time, raising the possibility that these attacks will become widespread, and potentially involve multiple victims.


5.7 Conclusion

In this chapter, I described the ways in which advances in neuroscience could be weaponized against their users and against broad populations. The kinds of weapons I have in mind, moreover, are distinctively neuroscientific, rather than the broader forms of violence discussed in earlier chapters. In the case of BCIs and biochemical agents, the action on the brain at an electrical and chemical level is what makes the weapon. In some cases, these topics intersect, a case of convergence we should anticipate in the medium term as these technologies develop.

This analysis points us to the future of neuroscience and national security concerns. I will return to BCIs in Chapter 7, and again in Chapter 8, to consider their future uses and the ethics therein. Moreover, I will consider the implications for the national and global regulation of these devices in the context of national security extensively in Part III.

This chapter also marks the end of the first part of this book. In the following part, I begin to unpack the ethical issues that arise from emerging neuroscience in the context of national security. I begin with an analysis of the state of the field of ethical analysis specifically, and ask why neuroethics has largely neglected national security concerns. I then move into the four key ethical issues for neuroscience in national security: translation, dual-use, corruption, and supremacy.


PART II

Neuroethics and National Security


6 WHITHER NEUROETHICS?

6.1 Chapter Summary

This chapter begins Part II, an examination of ethical issues that may arise for neuroscience in the context of national security. I begin by asking why neuroethics has largely ignored national security concerns, and postulate a number of reasons for this lacuna. I then ask what is new about neuroscience in thinking about emerging technologies, ethics, and national security. I conclude by addressing objections that the role of neuroscience in national security may be overhyped and overstated.


6.2 Introduction

Over the first part of this book, I described a series of scientific and technological innovations occurring within neuroscience, and the cognitive sciences more broadly, that are of interest to the military. These innovations were grouped in four broad areas:

1 behavior prediction;
2 persuasion and compliance;
3 enhancement, also known as "maintaining human performance"; and
4 neuroweapons, which might also be termed "degradation."

These four areas, and the technologies and innovations described therein, while interesting in a descriptive sense, also pose a series of normative questions about the ethics of the study, development, and deployment of new neurotechnologies1 for national security purposes.


It is to this normative dimension I will now turn. These four areas raise ethical issues in the sense that their study, development, and deployment concern important human values. On the one hand, they invoke values that should be familiar to anyone who has studied applied ethics: autonomy, the balance of risks and benefits, and justice, among others (Levy, 2007; Selgelid, 2009; Evans, 2015b). On the other, the specific focus of this work means that the values at the heart of a moral justification for national security are also invoked: the value and justification of sovereignty, human rights, the limits of the use of lethal force, and the interplay between civil and human rights and the priorities of collective safety and security (Kleinig, 2014; Walzer, 2015). These two sets of values are not, in principle, distinct from each other, and share common features. Particularly in areas like privacy, which both criminal justice and bioethics examine, there is considerable overlap in analysis. In others, particularly armed conflict, there has with rare exception been little to no work completed (Gross, 2006, 2012).

As I articulated in Chapter 1, the reason I began with an account of these technologies that was as close to descriptive as possible, in addition to better developing the ways this book can be read, is that the technologies described, while technically distinct, raise a series of common ethical questions. The questions I will deal with here are translation, from basic science to military application and between civilian and military applications; the dual-use nature of neuroscience research; the potential for neuroscience to be implicated in institutional corruption; and the value of neuroscience, among other areas of science and technology, as a way to ensure the supremacy of a nation among its peers.

First, however, I will articulate why neuroethics—and indeed applied ethics—has thus far neglected or misrepresented elements of these issues, and why they are deserving of particular attention. This chapter, then, will begin by asking the question: "whither neuroethics?" Why has neuroethics until now devoted so little energy to discussions of national security? I argue that, among a host of methodological reasons, it may be that researchers do not see anything new about the intersection of neuroscience and national security. I take this claim seriously, and attempt to show where and what new issues arise that are apt for ethical analysis. I then treat a basic objection to this endeavor: namely, that ethical analysis is premature in light of the lack of capacity for these technologies to do what people imagine they will—the intersection of critical neuroscience (Marks, 2010) and "neurohype" (Caulfield et al., 2010). I argue that though we should take claims about the impact of particular technologies with a grain of salt, the presence of both (a) strategic trajectories and (b) platform technologies should give us reason to subject these classes of technology to rigorous analysis.

6.3 Whither Neuroethics?

The connection between neuroscience, its emergence, and national security has already been established. This is not a novel claim (Moreno, 2012). What is notable, however, is how little work has been done in neuroethics around the issue of national security. This is not to say this absence is surprising. Bioethics—and I take neuroethics to be continuous with bioethics—is not terribly active in areas of national security, and does not deal with national security broadly. Where bioethics does engage with national security, its inquiry is confined to a few very niche topics, namely, military medicine (e.g. Selgelid, 2008), the medicalization of torture (e.g. Miles, 2009; Lepora and Millum, 2011; Evans et al., 2019), the securitization of health (e.g. Annas, 2002; Gostin, 2002), and biosecurity (e.g. Miller and Selgelid, 2008; Evans, 2013a, 2013b).2

To understand the scope of the shortfall, consider the literature. Getting an estimate of applied ethics articles is a nontrivial task, as indexing for applied ethics is radically incomplete. A survey of bioethics journals in the Bioethics Research Library at the Kennedy Institute of Ethics lists the top 100 bioethics journals; of these journals, only 82 are indexed in the National Library of Medicine. This is, moreover, the most expansive coverage of bioethics journals by an academic database.3 If we attempt to use multiple databases—PubMed, PhilPapers, ISI Web of Science, and Sociological Abstracts—a survey of "national security," "neuroscience," and "ethics" turns up 41 articles and 2 books. Of these, 26 articles are concerned with the application of functional magnetic resonance imaging (fMRI); 28 articles are concerned with the interrogation of detainees in the criminal justice context. Put another way, the overwhelming concern is the use of brain imaging diagnostics in criminal justice contexts, typically in detecting deception by those charged or held by police, for the purpose of securing a conviction. (See Appendix A for a bibliography of these titles, as well as the methodological limits of such a search.)
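Readers who want to replicate or extend this kind of survey can script at least the indexed legs of it. The following is a minimal sketch of one such leg, querying PubMed through NCBI's public E-utilities API; the query string here is illustrative only, and this is not the exact procedure behind the counts reported above or in Appendix A.

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": '"national security" AND neuroscience AND ethics',
    "retmode": "json",
    "retmax": 0,  # request only the hit count, not the records themselves
}
resp = requests.get(ESEARCH, params=params, timeout=30)
count = resp.json()["esearchresult"]["count"]
print(f"PubMed records matching the query: {count}")

PhilPapers, ISI Web of Science, and Sociological Abstracts would each need their own interface, which is part of why such surveys remain nontrivial.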


The concern with criminal justice is, to be sure, an important one. But note that it takes up one section, of one chapter, of the first part of this book. It is true that fMRI and other diagnostics are being considered as replacements for, or adjuvants to, conventional lie detection measures. And it is true that much of this work applies to fMRI. In particular, the following chapter will be useful, given that proponents of fMRI are infamous for overinflating the capacity of the technology to provide a reliable, much less admissible, form of evidence for the purpose of securing convictions (Morse, 2018, 2019). But the scope of this issue is larger, broader, and more complicated than simply the issue of neuroscientific diagnostics for criminal justice. It is larger in the sense that activities such as intelligence collection have different thresholds for use, owing to their different aims, and thus pose different ethical concerns. It is broader in the sense that questions of deception detection dovetail with, and may be further complicated by, compliance measures. And it exists in a more complicated environment, brought about by the changes in surveillance that neuroscience purports to bring around, the use of new propaganda methods on populations that may interact heavily with criminal justice, and the enhancement of law enforcement officers. These issues are also cross-cutting, and need to be addressed in combination to understand the full implications of the science for criminal justice.

So the question is, why has so much been missed? One initial hypothesis is simply a lack of capacity. Neuroethics, among other fields, needs to broaden its scope to accommodate scholarship on the ethics of armed conflict and policing if it is to do justice to the issues raised by neuroscience and its attendant technologies for national security. These additional fields are necessary to answer questions about the ethics of using neuroscience—whether initially designed for civilian use, or purpose-built for supporting national security institutions—to violate the rights of others. This literature falls outside the historical antecedents of neuroethics, which I take to be, inter alia, biomedical ethics and philosophy of mind. Importantly, those who have considered the criminal justice possibilities of novel neuroscience applications tend to be skeptical of the novelty of diagnostics (Morse, 2018). This skepticism does not obviate ethical concerns, but rather refocuses our concerns away from the central worry of writers who assume the technology will fulfill its potential and somehow undermine free will or autonomy. Rather, the concern is that the lack of appropriate function makes these devices dangerous precisely because, if they are used to secure a conviction, they will (a) provide a false result that leads to a false conviction (and thus violate principles of natural justice by punishing the innocent); or (b) provide a conviction of someone who counterfactually should have been found guilty, but do so through false means (and thus violate procedural justice, or—worse—provide grounds for a mistrial even if the result was overdetermined by other evidence) (Marks, 2010).

A second, related hypothesis is that neuroethics has largely ignored the largest innovation in bioethics and national security in the last 20 years: the literature on dual-use research. This is surprising, given that the US BRAIN Initiative involves significant investment by defense organizations. Neuroscience is dual-use in two senses. First, the technologies developed are likely to have the capacity either to help or to harm humanity (Miller and Selgelid, 2008). Second, there is a strong historical crossover between military and civilian research and development in neuroscience that should be part of the history of the field (Evans and Moreno, 2017). There is an existing literature on dual-use in biomedical ethics,4 but so far contributions that focus on neuroscience have been sparse. Moreover, much of the push for neuroethics to notice dual-use has come not from what we might call the core scholarship of the field, but from adjacent fields such as peace and conflict studies (Bartolucci and Dando, 2013; Dando, 2013, 2015). Providing an analysis of dual-use from the perspective of neuroethics could incorporate progress from biomedical ethics, while offering a novel perspective on dual-use.


A third hypothesis is that, even once we account for novel normative concerns and methodological tools, the policy implications of national security—particularly armed conflict and intelligence collection—are thought to be (and perhaps should remain) distinct from those of medicine and science. Neuroethics at the intersection of neuroscience and national security will inevitably have to deal with international arms treaties and international humanitarian law, among others. This is a nontrivial addition to any discipline, as these topics are themselves major components of law, philosophy, and social science. Moreover, the place of neuroscience in these regimes is uncertain: components of neuroscience that deal with AI arguably fall under the Convention on Certain Conventional Weapons (CCW), while those that involve chemical and biological agents fall under the Chemical Weapons Convention (CWC) and the Biological and Toxin Weapons Convention (BTWC), respectively. This is a ripe area for development; however, the BTWC is presently a stagnant treaty, mired in a lack of consensus between states parties to the Convention, struggling to make progress on issues such as whether and how to periodically review advances in science and technology for their capacity to benefit or run contrary to the Convention (Dando, 2015). The introduction of a new breed of activist-scholars in the form of neuroethicists would be not only a new venue for neuroethicists to conduct their work, but also a welcome addition to the academic observers and non-governmental organizations that work for change around major arms control treaties.

A fourth hypothesis is that the typical framing of ethical issues in science and technology is, in part, inadequate for the task of assessing emerging technology and national security. The "ethical, legal, and social implications" (ELSI) movement has been critiqued, a number of times, for being principally reactive rather than prospective, and thus less able to involve itself in considerations that might guide the development of technologies in ethical ways (e.g. Kitcher, 2003). This, it has been suggested to me, is by design: ELSI arose in the context of the Human Genome Project, which designated 5% of its budget to ELSI. This funding, however, was primarily allocated—according to one observer, by design—to projects that concerned post-discovery ethics, rather than to projects that sought to interact with scientists before the stage of discovery (Wikler, personal communication). Neuroethics has an opportunity to branch out into the post-ELSI world ahead of allied disciplines such as bioethics. To date, there has been little normative analysis of research projects before they are initiated. Here, neuroethics enters largely under-developed territory for any applied ethical discipline. However, the literature on risk analysis in the realm of biosecurity may provide a fruitful link to analyzing neuroscience research looking forward (Haas, 2002), as would recent attempts to provide forward-looking risk assessments of potentially dangerous virology experiments (Gryphon Scientific, 2016).

The final, and most troubling, hypothesis is that neuroethicists find nothing novel or distinct about national security concerns. So the question is, what is new about neuroscience and national security? This is the question to which I will turn, as it is a question ethicists are not fond of, nor good at, answering.

6.4 What’s New?


Neuroscience and national security simpliciter are not new. Cognitive science has a long, strong relationship to national security, including the former chemical and biological weapons programs of the US, its allies, and its adversaries. This means, if anything, that scientists, warfighters, and policymakers have been dealing with the normative aspects of applying neuroscience to national security concerns for at least half a century. MKUltra and its sister programs were, in part, an exercise in medical ethics, albeit one in which lessons were learned the hardest way.

"Bioethics and national security" is not only not new; bioethics in the US has its origins in national security concerns. The Common Rule, and its history in the Belmont Report, arose from the Tuskegee syphilis study. Although the study was pursued by the US Public Health Service, the reasons for its initiation in the interwar period arose from the concern that syphilis affecting midshipmen in the US Navy would undermine war preparedness and force strength in the event of another conflict. The Tuskegee syphilis study was a piece of civilian research conducted for military reasons (Washington, 2006).

What is often neglected is that the US military had already subscribed to early medical ethics documents before the Common Rule emerged and was widely adopted by the US interagency. In his testimony before the committee that investigated the human radiation experiments conducted by the US Department of Defense, Jonathan Moreno noted that

…the now-famous…Wilson Memorandum from February of '53 in which the new Secretary of Defense, Charles Wilson, signed off on an Armed Forces Medical Policy Council recommendation for the DOD essentially to adopt the Nuremberg Code with one addition, the 11th Commandment, namely that prisoners of war should not be used in research. (Faden, 1994)

Eileen Welsome, in her Plutonium Files, noted that the human radiation experiments even featured primordial informed consent documents that required explaining the procedure to those being studied. The central issue, Welsome noted, is that these procedures were highly inconsistent in their application, in part because of the "born secret" doctrine that classified all information pertaining to fissile material, including the nature of the research being conducted on the effects of plutonium in the human body (Welsome, 2010).

So neuroscience and national security, and bioethics and national security, are intimately linked. Does that mean there is anything new to normative analysis of neuroscience and national security? Or should this be treated more as a historical and social scientific endeavor? The crucial point here is that some issues in neuroethics that relate to national security are well trodden in virtue of their development in other areas. In particular, the creation of novel "lie detectors" for use in criminal prosecutions is, on its face, not a terribly interesting development, for two reasons.

The first is that the jurisprudential, legal, and criminal justice critique of deception detection devices is an extensive literature. Insofar as neuroscience might be a source of deception detection technologies in the future, these technologies present remarkably conventional concerns around the admissibility of diagnostics, including DNA testing (Kreimer and Rudowsky, 2002), polygraphs (Ben-Shakhar et al., 2002), micro expression analysis (Porter and Brinke, 2011), and mass surveillance (Zedner, 2005), and around the rights of suspects not to incriminate themselves. This latter has been a concern about lie detectors in general for some time (e.g. Kaplan, 1963; Meijer et al., 2017). Note that this concern about polygraphs, and other deception detection methods, also encompasses the potential that these devices are simply inaccurate, as some have charged (Marks, 2010). The issue of the creation of actionable advice from poor diagnostics is a strong topic in bioethics writ large, and ties into concerns about medical reversal (Prasad and Cifu, 2015). We should therefore be skeptical that there is something new about these issues, though there may be room to build on or restate calls for action and activism in these spaces.

The second is that the concerns tied to neuroscience around authenticity and free will abut the general concerns—or lack thereof—around moral naturalism and determinism. Novel deception detection mechanisms, particularly those that make use of fMRI and electroencephalograms (EEGs), have been claimed to threaten our concepts of authenticity (see Farah, 2014). Yet this, if anything, is a feature of fMRI (or EEG) qua fMRI, not fMRI qua deception detection mechanism. So the question here is precisely: in what sense is fMRI qua deception detection mechanism a novel threat to authenticity above and beyond the threat to authenticity, if any, posed by fMRI qua fMRI? This is a question neuroethics has yet to answer.

6.5 What Is New

With all this in mind, there are new things to discuss about (some) national security applications of neuroscience. Some of these relate to neuroscience as the type of technology under discussion; others to neuroscience as a token of a larger debate yet to be had, or resolved, in neuro- and/or bioethics. These conversations extend from the gaps identified above.

First, bioethics has yet to fully account for principles of the ethics of armed conflict, or of law enforcement, in thinking about the commitments of life scientists and medical personnel regarding the regulation of novel technologies. Some work has been done on this, but it is a very narrow part of the field (Gross, 2006; Canli et al., 2007). So insofar as neuroscience and national security represent emerging biomedical technologies and national security writ large, there is substantial room for debate.

Second, the neuroscience and technology I have discussed thus far is an important kind of dual-use research and technology: one in which the (or a) dominant funder of research is the military, and the future uses of the research and technology are explicitly stated to be military. This contrasts with the debate in the life sciences—the microbiological sciences—in which the intended applications are typically assumed to be civilian. Some of this dual-use research has been funded by the military (e.g. Cello et al., 2002), but even then the military application was not clearly stated, and there has been very little discussion about the status of military funders despite their overwhelming role in guiding some fields in the life sciences thus far (Kuiken, 2015).

Third, the transition from ethical concerns, to normative guidance, to policy is almost totally absent for neuroscience research that impacts national security. The small body of work that does exist, within the lethal autonomous weapons systems (LAWS) and biological/chemical weapons spaces, is disconnected from the larger policy discussion about the neurosciences. That is, this literature is not centrally concerned with neuroscience, but it does bear on neuroscience in important ways, as described in Part I. This is an important area of growth, as policy bodies including the CCW, the BTWC, and the US Director of National Intelligence's office have all raised concerns about "convergence" between biology, nanotechnology, and neuroscience as a site of national security concern (Evans, 2019). Bringing neuroscience into the fold with other areas of concern is an important policy aim, and ethicists should be involved in those discussions.

Fourth and finally, the emerging status of many of these technologies gives us a ripe opportunity to conduct some post-ELSI analysis. In this regime, we can think critically about how and why novel technologies in neuroscience are being developed, and along what trajectories they ought to progress. Unlike genomics, which is now a close-to-mature field, neuroscience and national security presents an opportunity to engage in a critical project of how to proactively design technologies with ethics in mind.

6.6 Neurohype and Critical Neuroscience

A final concern emerges from two spaces in neuroethics that take—justifiably so—a critical approach to both the promise and peril of neuroscience. Critical neuroscience takes as its starting point the need to examine, inter alia, the economic and political drivers of neuroscience research, the limitations of the methodological approaches employed in neuroscience, and the manner in which findings are disseminated (Marks, 2010; Choudhury and Slaby, 2018). From the view of critical neuroscience, or what Marks calls the related "neuroskepticism," the gap between current neuroscience research and actual application is very large indeed. With this in mind, any analysis of neuroscience must take as its starting point a clear-eyed view of what neuroscience can actually promise, rather than the most optimistic (or pessimistic) claims of its proponents (or detractors).

A second, related critique is that neuroethics, as a field examining neuroscience, should pay close attention to concerns of "neurohype" (Caulfield et al., 2010; Morse, 2019). That is, we should worry about the implications of the marketing of neuroscience-inspired interventions that have, at best, questionable evidence bases from which to guide action. This is especially true in cases in which neuroscience informs a medical intervention, such as pharmaceutical prescription or surgery, but it also applies to something like intelligence collection.

We should take these concerns seriously. I don't subscribe to critical neuroscience per se, but the insight that claims about both the benefits and the risks of neuroscience in national security contexts demand scrutiny is one I have applied elsewhere, in other fields (Evans, 2013a, 2013b, 2014, 2018). We should be very careful about analyzing claims of purported risks and benefits when dealing with the ethics of emerging technologies.

There are two ways to address this, methodologically. The first is to look at classes of technology when thinking about ethics, and then at regulatory process. Let's say, for example, that in the near future long-term studies on modafinil show that it has serious health risks, and this results in its being discontinued from use in the USAF. This may lead us to believe modafinil is not a good candidate for routine use, and thus that some of the concerns about it (e.g. as an enhancement technology) might not apply. However, it is likely that some and perhaps many ethical concerns about modafinil apply to other fatigue countermeasures. Assuming novel pharmacological fatigue countermeasures arise post-modafinil, the kinds of ethical concern we have about modafinil qua fatigue countermeasure will apply there also.

Our uncertainty about the kinds of technology we will end up with, however, should itself be a subject of ethical analysis. Many of these technologies are indeed speculative; even if it turns out ex post that some of these technologies were not safe to develop and deploy, we typically recognize that ex post assessments of risk are not the best assessments on which to base our ex ante reasons for action (Hansson, 2003). In this, national security forums represent potentially distinct kinds of risk assessments. On the one hand, the purpose of national security institutions means that role holders within those institutions are subject to risk, sometimes greater risk than the public. So our assessment of the various risks and benefits associated with adopting a technology ought to be subject to ethical analysis.


On the other hand, those impacted by the risk that these technologies don't work, and the risk that they do, are often individuals and groups who have special kinds of claims against the user. Sometimes national security concerns justify the use of force, or some other infringement on the rights of individuals or groups, to promote another important social value (such as security). However, the bar for rights-infringing acts is typically higher than a simple risk-benefit calculus (e.g. Kleinig, 2014). Setting the bounds for approving technologies where those harmed by the technology are not simply the user is an important ethical concern. The risks and benefits, moreover, may accrue to different individuals. For example, a particular technology applied to criminal justice may improve safety and security for one population, but reduce it for another in virtue of that population receiving disproportionate scrutiny, and even the use of force, by police. In these cases, principles may be needed to decide what kinds of distributions are permissible, rather than simply inquiring whether there is a net community/society/global benefit from the application of these technologies.

The second way to think about critical neuroscience is to think about, as I have in the previous section of this book, strategic aims. Some of the ethical concerns that arise in neuroscience and national security are not issues that arise in the use of a particular technology, but in the use of some kind of technology for a specific purpose. In these cases, neuroscience might play one of a number of roles. Most obviously, neuroscience might be the type of technology that falls under a particular purpose. The case of neuroweapons is probably the most obvious of these, because by definition this requires attacking neural/mental states directly. Neuroscience could also be a token for a set of classes of technology for which a particular strategic aim is particularly problematic. The rise of surveillance and countering violent extremism, for example, has generated a large body of neuroscience and neuropsychology research on violent extremism. This is not the only way we could understand violent extremism, but it is an important (albeit contingent) aspect of the field to date. The concerns that might arise from the N2 project include those that are concerned with the larger surveillance apparatus of twenty-first-century asymmetric conflict. This debate is not solely concerned with neuroscience, but neuroscience is an important component of the debate as it progresses at this time.

6.7 Methods: Reprise

With this in mind, my strategy is as follows. In the next four chapters, I will discuss four key ethical issues that arise in neuroscience and national security. I will typically begin by laying out the issue and its constituent features. I will then lay out in what sense we should consider neuroscience a component of these debates: whether neuroscience is a new type of concern in the debate, or whether it is a token of a larger debate. I'll then dive into particular concerns for neuroscience and neurotechnologies.

The first section picks up where we left off with critical neuroscience: translation. That is, how should we think about the move from early scientific advance to fully functioning technology when we think about neuroscience and its application in national security? Moreover, how should we think about the transition between national security fora, for example, when military technologies move to the law enforcement system? This will set us up for a discussion of risk, on which the next three chapters will depend.

Notes


1 Despite my best efforts to avoid the use of "neuro-" terms, I will use them as they typically appear in the literature. I may, by the end of this book, have also introduced at least one new neuro-term.
2 I'm sensitive to the objection that I've mischaracterized precisely what bioethics is, or what counts as bioethics. This is of course a much broader issue, and one that has a long history in the field. Here, however, I mean philosophically informed normative ethical analysis of national security issues in medicine and the life sciences. There is, of course, a much broader literature on the military and biomedical issues in history, anthropology, and law, among others.
3 For a list of these, please see http://www.nicholasgevans.com/bioethicslitreviews.
4 For a fairly comprehensive list as of 2016, see Selgelid (2016).

7 TRANSLATION

7.1 Chapter Summary

In this chapter I address translational issues in neuroscience as they apply to national security. I start by describing why the role of model systems makes translation in neuroscience challenging. I then address translational issues with the Narrative Networks Program (N2), before returning to nonlethal weapons and enhancement as cases of biomedical areas where translation poses serious ethical concerns.


7.2 Introduction

The development of science and technology proceeds in phases, and neuroscience is no exception. A central question that arises in this development process is: when ought we to progress to the next level of development or implementation, and when ought we to rethink or even abandon our work? This is the problem of translation, and it affects every application of basic science to applied, practical technologies. A core question for any science is how to develop promising basic scientific insights into functional technologies or interventions, and neuroscience is no different in this respect.

In addition to the basic scientific problem of translation, the process of development is a central concern of national security. Much like medicine, national security practice features the application of (at times lethal) force in the execution of duties. National security role holders, moreover, perform acts that put them in harm's way, and rely on technology to ensure their mission is accomplished without loss of life. The kinds of risks this entails mean that decisions about which translational technologies we ought to accept as ready for deployment are critical ethical decisions.


To draw from another area of national security development—albeit one with a connection to neuroscience—consider the F-35 Joint Strike Fighter (JSF). Selected in 2001 after a five-year competition between Lockheed Martin and Boeing, the JSF was meant to replace the aging air fleets of the largest branches of the US military (the Air Force, Navy, and Marines), as well as those of a host of allied forces (Bolkcom, 2003). The brief, moreover, was ambitious: the new plane had to be designed to take off from regular airstrips and aircraft carriers, and to perform vertical take-off and landing, reflecting the different priorities of the services.

The project design was unconventional, and part of that design arguably led to the problems that would come. Militaries have conventionally favored "fly before you buy" deals, in which a working model of a technology is produced and, after rigorous testing, is approved for purchase. The F-35, however, was projected in 2002 to cost $23.2 billion in development and up to $38 million per unit, and the Department of Defense (DOD) opted for a model of "concurrency," in which the project's development was funded and units ordered with the understanding that development would continue as units were rolled out to consumers. Put another way, the F-35 was purchased the same way as an iPhone app: with the understanding that patches would be needed (GAO, 2018).

How many patches, however, was unexpected. The F-35 was projected to enter into service in 2008, but as of 2018 was still in development phases. The project was intended to cost $233 billion, but over time this ballooned to an estimated total lifetime cost of $1.5 trillion over the next half century. Moreover, the JSF program was infamous for seemingly unimaginable kinds of failure and error for a trillion-dollar fighter program. The most striking of these errors included a point at which the all-digital control system would freeze up, forcing a mid-air, and at times supersonic, reboot (Gallagher, 2016). The weapons systems were difficult if not impossible to aim, had predictable and routine failures such as the central cannon pulling up and to the right whenever fired, and were difficult to target using the new helmet-mounted display (US DOD, 2017). While services around the world do use the F-35, and the DOD announced the development phase was officially complete in 2018, the plane is undoubtedly a work in progress.

This work in progress is one with potentially lethal consequences for the operator. But most importantly, the failures of the JSF could have extreme and deadly consequences for noncombatants within the area. This is a key oversight, and one that pervades the military. The DOD defines operational utility in the development and deployment of new technologies through its Test & Evaluation Management Guide. Operational utility is a term encompassing a number of factors, but key to these is the degree to which the system can be placed in

operational field use, with specific evaluations of availability, compatibility, transportability, interoperability, reliability, wartime usage rates, maintainability, safety, human factors, manpower supportability, natural environmental effects and impacts, logistics supportability, and documentation and training requirements. (US DOD, 2012)

It is clear that the JSF has for some time failed to meet this kind of definition. Some of the technologies I will discuss will also fail to meet this standard without a better understanding of both the standards and the path the technology ought to take. What neither technology can fail to meet, however, is an official standard that ties technology to ethics. This is because the Test & Evaluation Management Guide does not mention ethics, or international humanitarian law, the customary international law that proscribes the use of disproportionate and indiscriminate force in armed conflict. To the best of my knowledge, moreover, no comparable document in intelligence or law enforcement requires that the technologies used by those organizations adhere to certain kinds of ethical standards. This does not mean such standards do not exist, only that they are fragmented and—in the case of the above definition—unclear as to precisely what it means for a technology to be ethical (including, inter alia, to be safe).

This is a central issue for the ethics of translation: how we get from a good idea to a viable technology, and what it means for those technologies to be relevantly "viable." In this chapter, I deal with the ethics of translating neuroscience into functional military technologies, and how ethical concerns arise in the context of these translations. These concerns are not specific to neuroscience: to use the last chapter's parlance, neuroscience is a token for a larger type of concern. That concern, however, is not sufficiently explored in the literature. Moreover, there are unique characteristics of the cognitive sciences that require important consideration. The primary consideration here is that while all sciences use models to explain behavior and generate productive hypotheses, cognitive science increasingly uses neural models to explain behavior. These models, and their connection to national security applications, deserve inquiry in their own right. In particular, we should ask whether the kinds of models that national security-apt neuroscience research uses are sufficient to provide an explanation that should lead us to use them in technology development.

A common intuition is that the risks incurred by national security professionals allow us to expose them to greater, or other, risks (see e.g. Savulescu, 2015). Let's call this the "base risk argument." In broad terms, the base risk argument is that, given that the risks of national security work are, in the main, greater than those of (most) other kinds of work, it is not impermissible to expose national security personnel to new risks through the application of novel technologies so long as we are not significantly increasing the overall risks that person incurs as a function of their (chosen and justified) occupation.


I address each of these concerns in this chapter. I begin with the problem of translation. I argue that the kinds of model we are talking about matter, and that basic research that leads to, or is intended for, national security application ought to consider the kinds of model and target chosen for study in light of the final application. In this, I focus on persuasion, enhancement, and nonlethal weapons (Chapters 3, 4, and 5, respectively) as connected cases that demonstrate the kinds of concerns that arise when we think about translation.

I then argue that when we think about enhancement, contrary to preceding arguments of its kind, the base risk argument fails on a number of counts. First, it fails empirically in that, in aggregate—and contrary to popular belief—the overall risks incurred by national security personnel are not actually greater than those of other occupations. Rather, the kind of risk we subject warfighters to is part of their institutional role, where being a human research subject is not perforce part of that role. Moreover, the kinds of risks warfighters accept, and the sociological fact of their reduced autonomy, do not necessitate treating warfighters any differently, in principle, from civilians in testing enhancement. As such, even if we were to treat the translation of technologies as not in principle distinct from other occupational risks, as the base risk argument assumes, we have a reason to mitigate those risks. I conclude with a view to potential governance measures, to set up the next part.

As a beginning note, I will set aside much of the discussion of whether the final aim of translation is worth pursuing. This is important in research ethics, not only in terms of the risks and benefits of research, but as the political and moral motivations of actors engaged in the research process apply to the social value created by research (Brown and Evans, 2016). I reserve that, however, for what I think is a more fitting area for discussion: that of dual-use, in the following chapter. In the cases below I assume that there is some pro tanto reason for pursuing the applications in question, and focus on the problem of getting from the science to that application.

7.3 Models and Targets

One of the things that makes translating a scientific discovery into a working application so difficult is that scientific inquiry often has as its starting point a model system. These models, moreover, are incomplete views of a particular problem. Getting from these incomplete descriptions to a working technology is often the most difficult part of biomedical science. Here, I tackle two problems with translation in neuroscience and national security: one from the perspective of models, and one from the perspective of targets.


By "model" I mean any kind of representation in scientific inquiry that replaces the direct study of an object or phenomenon (Weisberg, 2013). Models come in three broad kinds. A concrete model is a physical system that stands in for another. A paradigm example of a concrete model is the design of a very large system in miniature to observe the consequences of changes to that system. An example of a concrete model in neuroscience is the study of a particular brain (or a set of brains) as an analogue for all brains. These brains, moreover, need not be human: we could imagine the study of rat or mouse brains to observe chemical changes brought on by pharmacological agents.

A mathematical model is one that describes the relationship between properties of objects in numerical or set-theoretic terms. The paradigm example of a mathematical model is a predator-prey dynamic system using differential equations, describing the variance of animal and other populations; this example is written out below. In neuroscience, we have formulaic descriptions of the relationships between activation potentials in neural cells and physical acts.

Computational models use algorithmic (including mathematical) representations to generate results about systems. There is some disagreement about whether computational models are distinct from, or a subset of, mathematical models that cannot be analytically solved. For my purposes here, however, I'll treat them as distinct, as neuroscientific computational models need not be strictly mathematical in nature. One example would be the development of heuristics that predict social behaviors from narratives, as in Chapter 2. Another, more ambitious project would be the development of artificial brains; here computers are used to create large-scale mimics of human neural networks: the subject of the ongoing European Human Brain Project.
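To fix ideas, the predator-prey case can be written out explicitly. What follows is the standard Lotka-Volterra system, a textbook formulation offered here only as an illustration, not anything specific to the neuroscience literature:

\begin{align}
\frac{dx}{dt} &= \alpha x - \beta x y, \\
\frac{dy}{dt} &= \delta x y - \gamma y,
\end{align}

where x is the prey population, y the predator population, and \alpha, \beta, \gamma, and \delta fixed rates of prey growth, predation, predator reproduction, and predator death. Everything the model leaves out (space, age structure, chance) is an idealization of the kind discussed below; the same is true, on a far larger scale, of formal models of neural circuits.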


A central component of models is idealization. Models purport to represent some facets of the world, but not necessarily all of them. In neuroscience, for example, neural correlates represent associations between certain kinds of behaviors or reported mental states and particular arrangements of activity in the brain. What a neural correlate does not explain is (a) the causation between the structure of the activity and the behavior; or (b) the relationship between the underlying structure and the mental state (Clark, 2007; Levy, 2007). The model, further, is statistical: a person may have that structure of neural activity but no corresponding behavior without invalidating the model, because the model simply establishes a connection with some likelihood rather than determining behavior.

This means that the kind(s) of model we choose to represent a particular system matter. Here, we can distinguish between two kinds of information models provide us. A model can give us information, but without choosing an appropriate model that information won't necessarily give us actionable knowledge of the world. This, however, need not be a concern if our aim is simply to derive knowledge of specific properties of a system rather than something more ambitious. In this way, models—whatever kind they are—can be further divided into two classes. On the one hand, there are hypothesis generation models: idealized models that create more questions. On the other hand, there are pragmatic models that give us knowledge about the world at large (LaFollette and Shanks, 1995). While hypothesis generation models are not in principle worse than pragmatic models, they perform different functions. Importantly, this distinction is logically independent of the type of model: concrete, mathematical, or computational.

The role of models has long been debated in cognitive science. A good chunk of our knowledge of humans in psychology, for example, comes either from animal studies or from self-reports of the experiences of undergraduates (Peterson, 2001; Beery and Zucker, 2011). Both models have critical limitations. As to the former, we should be concerned about the connection between any biological process in a nonhuman animal and the same process in humans, particularly when we think about human neural processes. The human brain is considerably more complex than a rat's brain, and—without discounting the intelligence of rats, or overinflating that of humans—the rat brain presumably gives us only limited information about humans, if it gives us any at all. As to the latter, students enticed into psychological studies may exhibit aberrant behavior relative to those in naturalistic settings. Students are also often in transitional states between adolescence and adulthood, and may provide a different set of responses than developmentally stable adults. Like animal models, neither of these features of undergraduates is necessarily a reason to never use them as psychological research subjects. However, their use may provide limited information depending on the context—though, given the young age of enlisted warfighters (outside of SOF), they may be a better model than most.

The "target" of a pragmatic model system is important. This is a common problem in drug trials. Ideal trials that are strongly controlled may produce information that does not easily translate to clinical outcomes in real settings: the target of the model of "human clinical trial participant" is not other clinical trial participants, but rather those receiving clinical care. That care can be highly diverse, and so the knowledge generated about clinical care by randomized clinical trials might ultimately not, as Nancy Cartwright says, "clinch causation" (Cartwright, 2011), but rather only generate an idea for further pursuit (Borgerson, 2011, 2013).

This is particularly concerning in national security contexts. Much of the research we have discussed so far has as its basis civil society. The choice of model here is critical, and often substandard for the aims of the work. An important preliminary concern is that the choice of model might not be anything close to the final application of the technology. Consider, for example, current efforts by the National Institute of Standards and Technology (NIST) to create facial recognition systems for, inter alia, law enforcement. Part of this effort is the development of the Facial Recognition Verification Testing (FRVT) program, a library of images on which to train facial recognition systems.


The FRVT program is designed to allow organizations who are training deep neural nets to "train" their algorithms on a common, large, standardized set of information. It also allows NIST to compare facial recognition algorithms against historical algorithms and track their progress. However, the FRVT program was revealed in 2019 to hold a potentially fatal flaw. Keyes, Stevens, and Wernimont published an account of the FRVT program noting that the images that made up the program were drawn from


images of children who have been exploited for child pornography; U.S. visa applicants, especially those from Mexico; and people who have been arrested and are now deceased. Additional images are drawn from a Department of Homeland Security [DHS] scenario in which DHS staff simulated regular traveler-image capture for the purposes of testing. Finally, individuals booked on suspicion of criminal activity are the subject of the majority of the images used in the program. (Keyes et al., 2019)

This means that the NIST facial recognition program is based on a very particular slice of humanity, but is being used in diverse sets of projects, some 100 in total (Keyes et al., 2019). This is a potentially serious problem for national security applications. To head off a first, early objection: these images are indeed of people who fall under the rubric of national security concerns: people entering the country, and people who have been tracked by law enforcement and the DHS. But this is neither necessary nor sufficient for the creation of a model for national security purposes. There are a number of reasons for this, but they all have to do with the aims of these models, and the way neural models are determined by their input data.

It is clear from the kinds of use cases in which we already see facial recognition that a central use for these technologies is national security. The national security establishment has a long record of using facial scanning in surveillance: the paradigm use of this is at ports of entry by Customs and Border Protection to screen entrants to the US. The use of mugshots is another paradigm use. In both cases, artificial intelligence (AI) can function to streamline the process, and in principle more reliably identify people in complex situations. Yet the training set for this AI should, if it is to be efficacious, reflect the behaviors of the people over whom it is going to be deployed. This representativeness, moreover, must track important features of people's faces. While the most obvious features are physiological, they are also the easiest against which to deploy countermeasures. Small changes in apparent facial morphology through makeup, for example, can fool facial recognition. AI, instead, needs to account for these properties. The ability to detect faces through countermeasures requires a much larger set of inputs. The ability to detect facial features against hair color or style changes, or small changes in critical facial features, is desirable.


Moreover, the ability to discern what a person will likely look like over time is important. This time might be short, e.g. the time it takes to grow or lose a beard. But it might also be long, e.g. a person's aging over years or decades.

There is another, more central reason, however, why the development of facial recognition using inappropriate models is dangerous. These data are biased in a way that is bad for detecting criminals, or security threats, because they overrepresent security threats. They overrepresent these in two ways. On the one hand, the data lack a commensurate body of "true negatives," images of people who are decisively not security threats. More concerning, however, is that there is no account of the "false positives," people who have entered into the apparatus of the law enforcement or homeland security system but who are not, in fact, security threats.

To understand how this problem might have implications in the field, consider the case of Ousmane Bah, who was arrested by the New York Police Department in 2019 for allegedly shoplifting at several Apple Stores. Bah is suing Apple for $1 billion on account of its use of facial recognition technologies, which he claims falsely identified him as the offender. Bah claims that between the true shoplifter using a false ID linked to Bah, and Apple potentially using an image of Bah from an older document, he was profiled and arrested. Bah is African American; African Americans are subject both to underrepresentation in algorithmic training and to the fact that algorithms typically underperform on their facial morphology (Shaban and Flynn, 2019). NIST notes that error rates are low, but that these rates increase as the size of the image gallery searched for an identification increases. The typical size of a sample in NIST testing is 1–4 million images: at most, only slightly more than 1% of the human population of the US, and much less than that when considering the 500 million travelers in US airspace. So we should expect these models to fall off in efficacy as the size of the pool of potential faces increases (Grother et al., 2019).
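The arithmetic behind that expectation is worth making explicit. Below is a minimal back-of-the-envelope sketch in Python; the per-comparison false match rate and the gallery sizes are assumed values for illustration, not figures drawn from NIST's reports, and the independence assumption is itself an idealization.

fmr = 1e-6  # assumed per-comparison false match rate (hypothetical)

for gallery_size in (10_000, 1_000_000, 4_000_000, 100_000_000):
    # Chance that at least one non-mated face in the gallery falsely matches
    # a single probe, treating comparisons as independent.
    p_false_hit = 1 - (1 - fmr) ** gallery_size
    print(f"{gallery_size:>11,} faces -> P(at least one false match) = {p_false_hit:.3f}")

On these assumed numbers, a search against 10,000 faces produces a false match about 1% of the time, while a search against 100 million faces almost always does: the same algorithm, with the same per-comparison accuracy, behaves very differently as the pool grows.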

7.4 Narrative Models

Model choice is particularly important in the study of radicalization through N2. Recall that the detection of social behavior requires four things (a toy sketch of this pipeline follows the list):

1 a broad understanding of the narrative structures humans use to communicate information;
2 a collection of narratives common to insurgent or radical groups;
3 an understanding of the relationship between the narratives in (1) and (2) and their effect on neural states; and
4 a predictive algorithm of how (3) causes radicalization and/or terrorist behavior.
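To make the coupling between these steps explicit, here is a deliberately toy sketch in Python. Every name and scoring rule in it is hypothetical, a placeholder for a research program rather than a description of N2 or any fielded system.

# Step 1: an assumed structural model of narrative (Freytagian, here).
STRUCTURE_MODEL = {"exposition", "rising_action", "climax",
                   "falling_action", "resolution"}

def tag_structure(text: str) -> set[str]:
    # Step 1 applied: which structural elements does a text contain?
    # Toy stand-in: pretend every text shows exposition and a climax.
    return {"exposition", "climax"}

def neural_response(elements: set[str]) -> float:
    # Step 3: a hypothetical mapping from narrative elements to a scalar
    # summarizing a neural correlate of engagement.
    return len(elements & STRUCTURE_MODEL) / len(STRUCTURE_MODEL)

def radicalization_score(corpus: list[str]) -> float:
    # Step 4: predict risk from step 3 outputs over a step 2 corpus.
    return sum(neural_response(tag_structure(t)) for t in corpus) / len(corpus)

print(radicalization_score(["post one", "post two"]))  # prints 0.4

The point of the sketch is structural: swap out STRUCTURE_MODEL in step 1, replacing Freytag with the monomyth, say, and everything downstream changes. That is precisely the model-choice worry pursued below.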


Parts (1) and (2) of this scheme require computational models to depict the role of narrative structures, and the kinds of narratives that create insurgency. Part (3) requires computational models in the form of neural correlate research, which is then fed into (4) to provide a predictive model that forms (part of) the basis of action for intelligence and law enforcement services. The concern here is how we connect these different steps into a useful trajectory for research. First, it is not clear that the current work on (1) really connects to that in (2). The favored narrative structure in national security discussions invokes "Freytag's triangle," a tool of dramatic analysis that understands the narrative as featuring exposition, rising action, climax, falling action, and resolution (Casebeer, 2005, 2014; Freytag, 2018). This is taken by some authors to be the general structure of narratives—including authors funded by the Defense Advanced Research Projects Agency (DARPA). Importantly, however, Freytag's model—and we should consider this a model in the same sense that, say, predator-prey relationships are models—isn't the only model available. The model of the monomyth, for example, is another very powerful model of how narrative proceeds in the telling. Both of these, however, are limited to particular kinds of narrative, and to very specific cultural contexts. It has been further hypothesized that the creation of narratives is culturally specific (e.g. Fox, 2006). In particular, these differences may reflect differences in how we convey evaluative judgments within a narrative: what matters and why. There are also questions of perspective, and some evidence to suggest that Americans disproportionately prefer first-person points of view, while other cultures do not (Leung and Cohen, 2016). This may be particularly important in the context of modern radicalization, which frequently occurs online. These spaces are places where narrative styles may intermingle, providing first-person histories alongside exegesis of texts, news reporting, and debate. These platforms combine multiple distinct narratives, all of which depart from Freytag's triangle in the way they formulate and communicate information, and provide implicit and explicit evaluative mechanisms over that information. Two cases are useful here. The first is, again, that of Hoda Muthana, who traveled to Syria to join ISIS. In conversation with The Daily, the New York Times podcast, Muthana reported that a key source of her radicalization was "Muslim Twitter," the (relatively) distinct network of Muslims who communicate on the social media platform. What Muthana described was not simply interaction with ISIS propaganda. Rather, she engaged with ostensibly non-ISIS but right-wing Muslims in information sharing, forming relationships, and Qur'anic interpretation. Feeling isolated from her physical environment, Muthana entered into a social environment that allowed her to engage with her religious and ethnic peers, and share her social isolation among like-minded folk.1 These narratives did not come about in a Freytagian story—even if we could shoehorn Muthana's story itself into such a story—but rather included exposure to


lots of short narratives that, by themselves, lack key components of Freytag's triangle, in particular falling action and resolution. Contrast this with someone like Dylann Roof, who killed nine congregants at a church in Charleston. Roof's story itself is hardly a rare narrative, one of racism, social isolation, and postbellum white culture and alienation in America, but his own narrative seems to be less Freytagian than Campbellian. Roof, it is now understood, considered himself engaged in a Manichaean struggle between good and evil, part of a "secret battle" against a perceived replacement of "good" white American society (Ghansah, 2017). This replacement theory is a core part of white nationalist propaganda, including The Turner Diaries, in which whites are engaged in a heroic struggle to resist the end of "white culture" at the hands of invading nonwhite cultural norms (Bishop, 1988). The far-right Christian Identity terrorist group The Covenant, the Sword, and the Arm of the Lord, which attempted a cyanide attack on New York and Washington, likewise subscribed to this hero's myth, with themselves as the protagonists (Stern, 2000; Lazebnik, 2013). These narratives share certain features. In both Roof's and Muthana's cases, they describe radicalization based on social isolation. They also display some kind of heroic myth the radicalized tell about themselves. In between, however, there are strong divergences. Muthana's isolation found its way into strong communities, and the promise of nation building through ISIS. Roof's isolation only deepened, and his war, borne of racist theories of replacement, descended into a mass shooting. Muthana, for all she suffered, played a role in the ISIS regime as a propagandist, implying she retained some kind of agency in her captivity. These narratives are thus distinct. Because they are distinct, moreover, it is not clear that their neural bases will be similar. A key challenge for DARPA and its successors, then, is developing an account of narratives that is precise enough to pick out particular radicalized individuals, but broad enough that the kind of radicalized individual it picks up is not unduly constrained. Without this, any claim to a comprehensive understanding of radicalization narratives is limited. This poses a serious risk that the particular targets of such an analysis would simply deploy countermeasures and change their message. Theories of narrative that look at the neural basis for the development of radical terrorist views must have a broad enough foundation to develop an account that is sensitive to changes over time in communicative medium, the kind of narrative used, and the authorial sources of those narratives.

7.5 Nonlethal Weapons

In Chapter 5, I laid out the in-principle case for nonlethal biological and chemical agents. But principles must give way to facts when making applied ethical decisions. The hypothetical nonlethal neuroweapon is—to borrow from


Michael Davis (2012)—like a flying pig in a thought experiment: for the example to guide action, we would need to live in a universe where pigs are capable of flying. Because we don't live in that kind of universe, the example is invalid. We likewise don't live in a universe where the ideal nonlethal weapon is possible, much less plausible. This is a serious concern when considering the use case for nonlethal biochemical agents. In particular, the aerosolization of an agent targets three groups that are typically subject to some kind of protection. The first are civilians, and the second are noncivilian noncombatants (including wounded soldiers). The third are what we could call "excess combatants." Even actions that target combatants are limited in virtue of the amount of harm they cause proportionate to the goals of a war. Biological and chemical weapons are pro tanto impermissible in general because, even if there were no noncombatants, they cause excessive suffering and death in war. Nonlethal weapons are purported to be a humane alternative because, even if they target noncombatants or affect large numbers of people, they only do so in a nonlethal sense. This, the argument goes, provides a reason to deploy such a weapon. Even though there are large numbers of people in the balance, the kinds of harm they will experience are presumably limited. This last claim requires interrogation. To understand why, let's be really clear about exactly how dangerous this is for individuals. Subduing a civilian in a war zone isn't dangerous simply in virtue of the chemical reaction inside the person's body. The relationship between a person's body and their environment is crucial here in three ways. The first is that civilians abroad are necessarily a diverse population. Not all humans are alike, and we have little reason to believe that a population of, say, Afghan civilians will respond to our novel chemical agent in exactly the same way as a population of American service personnel. This is not to say that Afghans are in some way biologically distinct from Americans. But variation exists, and especially in the uncertain world of chemical agents, small changes in dose can have huge clinical effects. The second point is an often underappreciated aspect of war. Measuring the excess morbidity and mortality that result from war is difficult, but what is known is that it is quite extreme. In the Democratic Republic of Congo, for example, up to 5 million excess deaths are estimated to have occurred as a result of deprivation and disability resulting from armed conflicts over the decade between 1998 and 2008 (Moszynski, 2008). Noncombatants may be malnourished or dehydrated; may have contracted an infectious disease; or may be in an advanced state of a disease that in peacetime would be treatable. They are almost certainly in an advanced state of fatigue from the toll that being in a war zone places on a human being. Civilians in war are simply—and medically—not the same kinds of people they are outside of that war zone.


Moreover, incapacitating a person inside a war zone exposes them to serious additional risk beyond the clinical effects of a so-called nonlethal chemical or biological agent. There is falling debris; there may be glass or other hazards in the immediate area in which they are incapacitated. There are hostile third parties: looters, organized crime, or simply predators (human or animal). If the area is crowded, the person may simply be trampled, as happened to a number of people inside the theater in Moscow. Or, if use of an agent fails and the situation escalates to lethal force, the incapacitated may be unable to get out of the way. This is a central reason proponents of novel incapacitating chemical agents (ICAs) miss the danger of the technology they advocate. A person finding themselves in a war has, as their greatest assets, their wits and a good deal of luck. ICAs, even if in ideal circumstances they are less lethal than a bullet, undermine precisely the assets people have to navigate war zones.


7.6 Enhancement

We have thought about risk in translation: the kinds of model chosen in narratives, and what the appropriate target is in thinking about "nonlethal" ICAs. A final area to consider in translation is the kind of risk to which we expose groups. For that, I look at enhancement. Recall the base risks argument: the idea that because those in national security occupations are exposed to greater risk in virtue of their occupation, it is less concerning to add a small amount of risk to their occupation through testing novel enhancements than it might otherwise be. Put another way, being a soldier is already risky, and so it isn't clear that exposing soldiers to slightly more risk through experimental enhancements is of significant concern (Savulescu, 2015). One important reply is to simply deny that this account of the risk borne by warfighters is accurate. Warfighting in the US and most other developed nations is a profession, or a professional service role in the case of mandatory service. In all cases, it is a job. And like most other modern jobs, it comes with what epidemiologists refer to as a "healthy worker effect": those who are reliably employed are acknowledged to have better health and life prospects than those who are not. And those in the military are, on average, longer lived and healthier than their civilian counterparts (McLaughlin, 2008). This is not to say that warfighting does not carry risks, including lethal risks. Nor does it speak to the mental trauma of warfighting, and the large body of veterans with mental health and substance abuse disorders, as discussed in Chapter 3. What it is to say is that any argument from expected risk falls short, as the expected lifetime risk of warfighting is, at least to a first approximation, lower than the expected lifetime risk of almost any other occupation. This is, however,


not true of police officers, though there the causes of death tend to be stress- and cardiac-related rather than violent occupational hazards (Violanti et al., 2013; Han et al., 2018). (In case anyone is confused by this, consider the adage that "soldiering is 99% boredom, 1% terror." The basic idea is that the social determinants of health associated with routine, a sense of purpose, a tight-knit social circle, and a regular paycheck likely outweigh, on average, that 1% terror associated with the violence of war. There are obviously many outliers, and the distribution is not normal. But that, in itself, is a reason to think carefully about aggregation!) An argument that derives from the increased risk of warfighting fails to give us a reason why we should expose warfighters to additional risks, relative to other individuals, when testing novel medical interventions. To understand, further, why we ought to refrain from treating warfighters differently from others in bringing novel enhancements to maturity, we should think instead about why we consider warfighting to be a particularly risky occupation. That reason, I contend, is that we associate the risk of warfighting with a special role, one that allows the exposure of individuals to a special kind of risk: death in defense of a community. That risk, however, is tied to a warfighter's institutional role. It doesn't admit that just any kind of risk is permissible. Moreover, when we think of the tendency to provide subsidized or free medical care to returning warfighters, the justification for this is as part of society's obligation to support veterans in light of the role they held. Seeing combat is not, for example, a deciding factor in whether or not one receives veterans' support such as healthcare; rather, that support reflects a special kind of obligation (Buchanan, 1984; but cf. Gross, 2006). A corollary of this argument is that there is, by itself, no special reason to waive or reduce consent requirements for warfighters undergoing experimental research. Being subject to experimental research is a risk, but it is not a risk that inheres in the role warfighters occupy in service of a social institution. There are good reasons to think, moreover, that the hierarchical nature of the armed forces can undermine the autonomy of warfighters and make them more susceptible to coercion or exploitation (Parasidis, 2014, 2016), but this does not obviate the requirement to secure informed consent. This means that translation in national security contexts should largely follow the same ethical requirements and best practices as its civilian counterparts. Efforts should be made to reduce risk by (a) conducting rigorous, valid animal studies on an intervention; (b) moving to limited human trials to ensure safety; before (c) moving to large-scale effectiveness trials, including carefully constructed "field trials" of equipment in pragmatic contexts; and finally (d) collecting longitudinal data on cost effectiveness, potential adverse events, and sequelae post-deployment (a schematic of this staged pathway follows). These should be held to the same standards for consent to undertake research, and the same risk-benefit calculations, as typical trials.
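A minimal schematic of that staged pathway, with my own labels for steps (a)-(d); this is an illustration, not an official DOD or regulatory protocol.

```python
# A schematic of the staged translation pathway described above. Labels are
# mine, mirroring steps (a)-(d) in the text; this is not an official DOD or
# regulatory protocol.
from enum import Enum
from typing import Optional

class TranslationStage(Enum):
    ANIMAL_STUDIES = "(a) rigorous, valid animal studies"
    LIMITED_HUMAN_TRIALS = "(b) limited human trials to ensure safety"
    EFFECTIVENESS_TRIALS = "(c) large-scale effectiveness and field trials"
    LONGITUDINAL_FOLLOWUP = "(d) longitudinal data on costs, adverse events, sequelae"

def next_stage(current: TranslationStage,
               informed_consent: bool) -> Optional[TranslationStage]:
    """Advance to the next stage only with informed consent; on the text's
    argument, warfighter status never waives this requirement."""
    if not informed_consent:
        return None
    stages = list(TranslationStage)
    i = stages.index(current)
    return stages[i + 1] if i + 1 < len(stages) else None
```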


7.7 Conclusion

In this chapter, I have outlined some difficulties with translation from basic science into military applications. The primary issues with translation should be familiar, in broad strokes, to those versed in research ethics: the search for promising model data, definitional and measurement concerns, and the protection of human subjects. Each has its own complications, however, owing to the forum in which this research ultimately comes to fruition. The chief part of the story left out here is that I have, so far, not considered whether this kind of research and technology is itself permissible. This is the issue to which I now turn. To examine the permissibility of different kinds of neuroscience in a national security context, I will turn to the ongoing debate in the life sciences about dual-use research.

Note


1 This report comes from the New York Times’ audio coverage of Muthana, February 21.

8 DUAL-USE

8.1 Chapter Summary


The previous chapter set aside whether the ultimate uses of technology were permissible. This chapter takes up that charge through the lens of the dual-use dilemma. I first define dual-use research as one and the same piece of research that can help or harm humanity. I then argue that dual-use in national security circles is distinct from previous inquiries on the subject because here the use of force is often (though not always) the intended goal of research and development efforts. I then examine the dual-use implications of nonlethal weapons, followed by those of the Narrative Networks program (N2). I return to broader issues that arise in the context of the escalation of conflicts through dual-use, and "use ratchets" that bracket out other (good) options for action. I finish with a view of the role of institutions in managing dual-use concerns.

8.2 Introduction

In the previous chapter, I inquired into the ethics of translating neuroscience from bench to battlefield. Importantly, I only inquired about the ethics of getting to deployment. Some of these concerns extended beyond deployment in important ways, such as what constitutes nonlethality in the development of neuroweapons. However, it was still taken as given that we ought to develop these technologies, and that there was an in-principle case for using them. This chapter, and those that follow it, move beyond the previous chapter in inquiring why, and when, we ought to develop neuroscience and its attendant technologies for national security purposes. A central component of this analysis is the idea of dual-use. The reason dual-use is so central, and why


national security applications of technology are so controversial, is the implicit capacity of most, if not all, national security applications for misuse.


8.3 Definitions

"Dual-use"1 has three distinct meanings, corresponding to different historical periods, each concerned with the capacity for one piece of science or technology to be used in multiple ways. The "dual" in dual-use rarely refers to two and only two uses, but rather to two classes of uses. While I will concern myself predominantly with the latter meanings of the term, all three are relevant to the subject of this book. The original use of the term "dual-use" is in characterizing technologies that can be used for both civil and military purposes (Molas-Gallart, 1997). In this sense, dual-use typically applies to technologies that have some important military or strategic role that a nation would rather keep to itself or among its allies, and a second, civilian application. Dual-use technologies may thus be the subject of export control legislation that seeks to restrict access to key technologies that might provide a strategic advantage to a nation's adversaries. While devices such as nuclear centrifuges, which are used in both domestic power and weapons production, are the most illustrative dual-use technologies in this sense, other more ubiquitous items include things like electronics and advanced chipsets. In 1999, for example, the Apple G4 chipset, while available in the US in a civilian computer, was a restricted item for the purpose of international exports because it was classified as a "supercomputer" (Uimonen, 1999). The second important sense arose in the early twenty-first century, in response to civilian research that had the potential to help or harm humanity. In 2001, Australian scientists released experimental data that demonstrated how a poxvirus could be genetically altered to become 100% lethal in mammals (Jackson et al., 2001). The subject of the study was ectromelia virus, or mousepox, but a concern arose that the research could be applied (as it later was by the US Army Medical Research Institute of Infectious Diseases (USAMRIID)) to a human poxvirus such as smallpox (Connell, 2012). While the research was conducted for the purpose of addressing rodent plagues in Australia, and rabbits in particular, a 100% lethal version of smallpox would constitute a global threat. The Darker Bioweapons Future noted that this and other "dual-use research" arising in the rapidly developing life sciences entailed a serious security risk in the aftermath of the 2001 anthrax attacks (CIA, 2003). The final sense in which "dual-use" is sometimes applied is something of an intermediate case, or perhaps a subset of the first, depending on whom you ask. Sometimes, the same technology can be used for offensive (i.e. to cause harm) or defensive (i.e. to protect against harm) purposes. These don't cleanly track permissible or impermissible uses—harm can be permissible in some cases (Lazar, 2017)—but may track certain kinds of conventions or norms in important ways. For example, in 2001 it was revealed by New York Times writers that


the US government had been pursuing a series of biodefense projects with the intent of countering biological warfare. One of these, Project Clear Vision, was a project at the Battelle Memorial Institute to reverse engineer Cold War-era Soviet biological bomblets. The international community claimed the program was a violation of the Biological and Toxin Weapons Convention (BTWC); the US reply was that, as the project was building biological bomblets for defensive purposes, it didn't violate Article I of the BTWC, which implicitly defines biological weapons in terms of offensive use and allows states to pursue biological research for defensive purposes. I am primarily concerned with the second of these meanings of "dual-use." There are obvious connections to the first and third meanings in the descriptions provided in Part I; the main foci of my analysis are the explicitly ethical conceptions of benefit and harm in the second meaning of dual-use. The distinction between civil and military uses in the first sense of "dual-use" matters, but it is instrumental to understanding how certain civilian and/or military uses might constitute permissible and/or impermissible uses of technology. Moreover, the individual arms of the first distinction do not reliably track the individual arms of the second. There are some uses of technology that might be permissible in armed conflict in military settings, but impermissible in civilian settings, and vice versa. Finally, we should remember that law enforcement and some intelligence organizations are civilian branches of the institution of national security, and what is (im)permissible in one national security modality will not always be so in others.


8.4 Neuroscience and Dual-Use

Unlike life sciences such as virology (Evans, 2013b) or synthetic biology (Evans and Selgelid, 2014), neuroscience has received little attention in terms of its dual-use potential (Bartolucci and Dando, 2013). Yet neuroscience shares common features with other fields in which there is a consensus that dual-use research poses a serious security and safety challenge. Far from being different from these sciences, neuroscience has important commonalities with them, which means we should pay as much, if not more, attention to it as a field with strong potential for dual-use. The recent ethics literature on dual-use can be broadly divided into two groups. The first are statements, often by national groups or consortia, acknowledging the dual-use potential of neuroscience. The kinds of acknowledgment, and what this translates into, are heterogeneous. The EU Human Brain Project, for example, has a specific dual-use working group that provides recommendations to that project on issues related to both the civil-military and permissible-impermissible senses of dual-use (Aicardi et al., 2018). The neuroethics subgroup in the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) project (2019) has considered dual-use, though


it does not maintain active capacity in that area. Importantly, not all major neuroscience projects incorporate dual-use; in a special issue of Neuron comprising reports by ethics groups on major national neuroscience initiatives, only the EU group mentions dual-use or national security at all (Salles et al., 2019); groups from the US, Japan, Australia, Canada, and the Republic of Korea mention neither (Carter et al., 2019; Jeong et al., 2019; Ramos et al., 2019; Sadato et al., 2019; Wang et al., 2019; Neuroethics Subgroup Workshop, 2019). Within these groups, moreover, there are gaps: the Royal Society report series titled Brain Waves references dual-use in national security qua armed conflict, but not in its fourth volume, on criminal justice. This lack of sustained analysis is concerning in light of the strong relationship between research in neuroscience and national security. I have already established that military funding of neuroscience is, if not the majority, a key source of funding: almost one-third of neuroscience funding under the US BRAIN project is situated at the Defense Advanced Research Projects Agency (DARPA). A central aim of this funding is to advance military objectives, and through them the objectives of other national security organizations. An objection to this characterization is that this isn't strictly what people have in mind when they think of dual-use qua benefit/harm. The central paradigm in thinking about dual-use in the life sciences is an ostensibly beneficial, civilian piece of science or technology that is weaponized or otherwise appropriated by a state or non-state actor for malicious use (Tucker, 2012). Given there is a more or less universal prohibitory norm against the use of biological weapons (Lentzos, 2014), this neatly characterizes the debate around dual-use in terms of civilian-beneficial and military/terrorist-harmful. I think this objection, however, rests on a confusion about how deeply tied to national security the life sciences are. I am thinking in particular of synthetic biology, which has received considerable attention in the US and internationally for its dual-use potential (Evans and Selgelid, 2014). The standard narrative around dual-use and synthetic biology is that it comprises civilian researchers and civilian uses on the one hand, and state and non-state military organizations and potential future hostile uses on the other (e.g. Jefferson et al., 2014; Ahteensuu, 2017). But a focus solely on the affiliations of individuals who conduct research doesn't speak at all to the kind of relationship that holds between the military and synthetic biology. In particular, the US Department of Defense (DOD) represents the lion's share of funding for that research: in 2014, the US DOD provided 67% of the federal funding for US synthetic biology; DARPA provided 58% alone (Kuiken, 2015). This need not be ethically impermissible, but it highlights that synthetic biology, as a field with strong dual-use potential, is at least as entangled with the national security state as neuroscience. DARPA has an important place in the history of the dual-use dilemma in synthetic biology. In 2002, researchers at the State University of New York at Stony Brook, funded by DARPA, successfully created a virus from its base


chemical sequence for the first time (Cello et al., 2002). The research, while a milestone in the life sciences for being the first viral synthesis from base chemical components of DNA, was concerning for creating the possibility that a biological weapon could be made without a natural source of a biological agent. This study featured in The Darker Bioweapons Future, and along with the mousepox study framed the early work on dual-use (CIA, 2003). Defense funding complicates the intuitive distinction between civilian-beneficial and military/terrorist-harmful that dominates the debate about dual-use research. DARPA's mission is explicitly defense oriented: under the Mansfield Amendment of 1969, DOD-funded research projects must have military, operational applications as their central purpose. This purpose is signed off by the Secretary of Defense, and while the Secretary provides very wide latitude as to what constitutes "military and operational purposes," there is still an organizational trend toward military applications (Geiger, 2017). In intelligence applications, we can tell a similar story. The Central Intelligence Agency opened In-Q-Tel, a corporation designed to keep the intelligence community up to date with the latest information technology, in 1999. This organization is not legislatively mandated to have a national security purpose; however, its organizational structure is expressly intelligence based. Over the last decade In-Q-Tel has financed considerable amounts of biotechnology, including the creation of sequencing technologies and hardware, though no one has conducted a thoroughgoing analysis of their dual-use potential. Its information technology capacity, moreover, has financed neural net research of the kind described in Chapter 2 (In-Q-Tel, 2018). It is thus important to ask what the dynamics of dual-use research are when the source and stated intention of the research are military. We should explore, in particular, what kinds of use are permissible and what kinds are not, and in which contexts the difference between permissible and impermissible use arises. This is particularly important in the context of uses that are, in principle, rights-violating or welfare-impairing in their own right. The creation of weapons is the most obvious case of this. While there are permissible uses of weapons, there are most certainly a great number of impermissible uses. There are also categories of weapon for which there are convincing arguments that most, if not all, uses are impermissible. So we need to ask very carefully what kinds of issues arise when the central site of innovation—the institutional raison d'être—is acts that implicate rights violations.

8.5 What Constitutes an Impermissible Use?

We can now turn to ethical issues that arise in the context of dual-use research whose stated purpose is explicitly national security. A particular issue I wish to address is how the permissibility of weapons use is partly defined by the kind


of theater with which we are concerned. It is here that the kind of national security institution with which we are dealing becomes important to a discussion of dual-use. I'll deal with two kinds of problem for dual-use: dual-use within a particular national security modality, and then dual-use across modalities, in particular the transition between military and civilian uses as a source of moral concern. Dual-use, with rare exception, treats the use of force as the alternative use set against otherwise peaceful, qua nonviolent, uses. The poliovirus synthesis was thought of as remarkable but concerning because it gave rise to the possibility that a biological weapon could be synthesized from a chemical basis without a living template. The polio synthesis study made the 2001 mousepox study more concerning because a harmful application no longer required, as was the case at the time, a living copy of smallpox. In 2016, Canadian scientists successfully synthesized horsepox, a cousin of smallpox, both of which are much larger viruses than poliovirus (Kupferschmidt, 2017). The key feature of these and other cases of dual-use in the life sciences is that the weapon is the impermissible use case, where the permissible use cases are better and easier scientific inquiry, vaccines and therapeutics, and disease surveillance (Evans et al., 2015). But what if a weapon is the use case? Traditional accounts of dual-use don't often anticipate this possibility. This is particularly important in the case of neuroscience, however, as weapons are sometimes the point. The creation of biochemical compounds for attacks on the central nervous system (CNS) is a form of weaponized biology in which governments have an interest. Brain-computer interfaces (BCIs) also potentially fall into the category of weapons: while the BCI is a platform, it is difficult to separate advances in BCIs from their function as tools for piloting drones. Compliance strategies are absolutely a form of the use of force, though, as I will discuss, there are reasons to think this is not dual-use in the usual sense, since the central purpose may be in principle impermissible. These examples pose a challenge because the use case of a weapon is not as morally or intuitively straightforward as, for example, a piece of research that could lead to a cure for a disease. That's because a case of dual-use where a weapon or offensive purpose is a use case invites us to consider how the context of the use case defines its permissibility, rather than two distinct use cases with different normative valence. That is, while "traditional" dual-use cases view the use of force as one arm of the dilemma—and typically the impermissible arm—the development of a weapon is permissible or impermissible depending on how we think about the ethics of the use of force, and our beliefs about the group developing the weapon. The central tension here is that the use of force against another is pro tanto impermissible, where this impermissibility is grounded in the right to life (in the case of lethal force) and/or bodily integrity (in the case of nonlethal, disabling force) possessed by an individual, or in the harm that the use of force entails. This may be overridden in some cases, the two most common being


threat against oneself and threat against another. The justified role of national security institutions, if it is anything, is the use of force for the purpose of securing the rights and welfare of a community against external threats (in the case of militaries) or internal threats (in the case of policing). John Forge has suggested, in response to this problem, that we are obligated to refrain from making novel weapons, even though we are not forbidden from using those weapons in applying (justified) lethal force. Forge's reasoning is that if we are intentionally building weapons, we are building things with the intention to cause harm. Forge makes no distinction between building a weapon that might cause harm and intending that it be used to cause impermissible death (Forge, 2008). This is clearly a mistake on two counts. The first is that the category "weapon" is shockingly vague. If we were to define a weapon as "a technology intentionally designed with the purpose of causing harm" (Forge, 2013), we would surely ignore the vast majority of weapons in history, including hooked weapons (often farming tools), staves (i.e. pieces of wood), and most edged weapons (construction or hunting tools).2 Moreover, it would idiosyncratically render many things "weapons." A surgeon's scalpel could be a weapon, and it is certainly designed to cause harm (with people's consent—but that's arguably still harm), but we don't refer to surgeons as using weapons. Still, the focus on harm is instructive. The potential for harm might entail obligations to prevent the misuse of weapons, including refraining from the pursuit of those weapons. Aaron Fichtelberg has argued that engineers are, at least in part, bound by certain principles common to just war theory in the pursuit of military engineering (Fichtelberg, 2006). Just war theory derives from the works of both Augustine and Aquinas (Allhoff et al., 2013), and from the development of international law (May, 2007); these sources share a foundation in determining when it is permissible to declare war, and to kill during war. Of particular interest to us is jus in bello, the lego-ethical prescriptions on the use of force during war (contrasted with jus ad bellum, which concerns declarations of war). Jus in bello holds that killing in war is justified only in cases where the killing is necessary to resolve the conflict, is proportionate to the proximate and ultimate aims of the war, and discriminates between permissible and impermissible targets. Other considerations may arise, but necessity, proportionality, and discrimination are the strongest and most widely accepted principles (Walzer, 2015). Fichtelberg ties these restrictions to the construction of devices that might enable, or in principle constitute, violations of just war theory if used. If a weapon is incapable in principle, or in practice, of satisfying the in bello demands of just war theory, it follows that making that weapon is itself impermissible. The iconic in-principle impermissible weapon is arguably the nuclear weapon. While there might be a permissible use of nuclear weapons against, say, an overwhelming extraterrestrial attack, the use of nuclear weapons in existing potential wars


TABLE 8.1 Classical Dual-Use

         Use 1 (U1)   Use 2 (U2)
P(p|U)   1.0          0
P(¬p|U)  0            1.0

is impermissible because it is impossible to discriminate between combatants and noncombatants, and the kinds of threat against which it would be a permissible use of force are hard if not impossible to imagine. An example of an in-practice impermissible technology would arguably be something like biological weapons. Biological weapons, at least historically, are not in principle incapable of being proportionate or discriminate. However, the long history of biological weapons programs suggests that, in general, the kinds of biological weapons that are strategically valuable are those incapable of fulfilling the demands of just war theory. It is for this reason that biological weapons are regarded, if not in principle then in practice, as incapable of being used ethically, and thus as unethical to create or stockpile. For dual-use research, however, our calculus is more complicated. In one important way, weapons creation might seem less of a problem for dual-use research than it appears. Consider that in classical dual-use problems, we have two possible uses. Moreover, the conditional probabilities of those uses being permissible or impermissible are binary, and opposite (Table 8.1). Here, we have a technology with two uses U1 and U2. Let us suppose that U1 is the intended use, i.e. the use for which the technology is pursued. In the mousepox study, for example, the intended use was a solution to rodent plagues. In this case, U1 is uncontroversially good/permissible: it is always permissible to use the technology in U1, or, put another way, the conditional probability that U1 is permissible (p), P(p|U1), is 1.0.3 Likewise, U2 is always impermissible: P(¬p|U2) = 1.0. An example of this kind of dual-use research is the conventional "cure for disease/weaponized virus" story discussed above. Dual-use, when the sponsor or intended application is military, is different. Table 8.2 describes a version of dual-use where, for example, the ultimate intended use of the technology is as a weapon in armed conflict. In the case of Table 8.2, the probability of permissible use is not 1.0. Rather, it is some number 0 ≤ x ≤ 1, describing the probability that the use will be permissible.

TABLE 8.2 Dual-Use with Weapons

         Use 1 (U1)   Use 2 (U2)
P(p|U)   x            0
P(¬p|U)  1 − x        1.0
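A minimal sketch, assuming independent uses (a simplification the argument itself does not need), makes vivid the claim developed just below: any nonzero per-use probability of impermissible use approaches certainty as uses accumulate.

```python
# Minimal sketch (assuming independent uses, a simplification) of the claim
# below that P(¬p, t) -> 1 as t -> infinity whenever P(¬p|U) > 0: with a
# per-use misuse probability q, the chance of at least one impermissible use
# across t uses is 1 - (1 - q)^t.

def p_misuse_by(q: float, t: int) -> float:
    """Probability of at least one impermissible use within t uses."""
    return 1.0 - (1.0 - q) ** t

for t in (10, 100, 1_000, 10_000):
    print(f"q = 0.001, t = {t:>6}: {p_misuse_by(0.001, t):.3f}")
# -> 0.010, 0.095, 0.632, 1.000 (to three decimal places)
```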


Likewise, there is some probability 1 − x that the technology will be used in an impermissible way. Let us suppose this means that for any randomly selected use of the technology, there is some probability x that the selected use will be permissible. It is likely, if not certain, that a technology capable in principle of being used in an impermissible way will ultimately be used in that way eventually. That is, for any P(¬p|U) > 0, P(¬p, t) → 1 as t → ∞. What it means for something to be used impermissibly is, of course, contested. For a utilitarian, permissibility means something like "will maximize expected aggregate well-being." So a use is permissible just in case it maximizes expected well-being. But permissibility might also mean that a technology respects rights, promotes the right kind of flourishing, or satisfies the requirements of distributive justice (e.g. Allen and Wallach, 2009). We are obviously, then, concerned first with how likely it is that something will be misused. In recalling biological weapons, for example, it might be contended that for biological weapons, P(¬p|U)