Artificial Intelligence and Global Security: Future Trends, Threats and Considerations 9781789738124, 9781789738117, 9781789738131



English, [217] pages, 2020



Table of contents:
Cover
Artificial Intelligence and Global Security
Artificial Intelligence and Global Security: Future Trends, Threats and Considerations
Copyright
Dedication
Table of Contents
About the Editor
About the Contributors
List of Contributors
Preface
Acknowledgments
1. Artificial Intelligence and the Future Global Security Environment
Abstract
Introduction
Innovation and Advances in AI
Civilian and Military Applications: Shaping Expectations
AI and Preparing for Future Warfare
Cyberattacks and AI
Autonomy and Artificial Intelligence
Artificial Intelligence and Machine Learning: Advantages
International Perspectives on AI
AI: Future Forecast of Global Security
Impact of AI Technologies on Leader Decision-making
Conclusions
Disclaimer
References
2. Artificially Intelligent Techniques for the Diffusion and Adoption of Innovation for Crisis Situations
Abstract
The Case for Artificial Intelligence
Relevance in Innovation Ecosystems
Distinguishing Types of Artificial Intelligence for Diffusion and Adoption
D&A of Classically Narrow AI
AI Techniques for Diffusion and Adoption in Crisis Response
Conclusion
Disclaimer
References
3. “Something Old, Something New” – Reflections on AI and the Just War Tradition
Abstract
A Certain Trajectory but an Uncertain Future
If Not Just War, Then What?
Clarification or Confusion?
Concluding Thoughts
Disclaimer
References
4. Space War and AI
Abstract
Introduction
What Is AI?
Recent, Influential Definitions of AI and Its Goals
Artificial Versus Natural Intelligence?
AI: Questions of Moral Status
Narrow AI Versus AGI
Robots as Embodied AI and LAWS
What Is Traditional JWT?
Jus ad bellum, jus in bello: The Independence Thesis?
What Matters for the Independence Versus Mutual Dependence (Interdependence) Thesis?
Relevance of LAWS to Issue
“Ought Implies Can” Issues and Moral Luck
Can LAWS Have Moral Responsibility?
Scenarios, Including Ultimate Scenario
Asteroid Mining and Exploration – Is It Clearly Civilian?
Could Tourist Spaceflight Be Dual-use?
The Ultimate Scenario – Existential Risk
Conclusions
New Principles
Final Conclusion
References
5. Building an Artificial Conscience: Prospects for Morally Autonomous Artificial Intelligence
Abstract
Introduction: Ethics of Artificial Intelligence Versus Artificial Intelligence Ethics?
What Is an Artificial Conscience?
Why Build an Artificial Conscience?
How Would We Build an Artificial Conscience?
Concerns and Rejoinders
Preparing for the Moral Future of Autonomy Today
References
6. Artificial Intelligence and Ethical Dilemmas Involving Privacy
Abstract
The Virtual Environment
Scene: Ext. Anytown, USA: Morning-Sunrise
Definitions
Data Privacy Versus Data Security: Don't Get It Twisted
Benefits and Drawbacks of AI
Data Ecosystem
Digital Authoritarianism and the Rise of the Surveillance Economy
Options to Consider?
Discussion
Conclusion
Disclaimer
References
7. Artificial Intelligence and Moral Reasoning: Shifting Moral Responsibility in War?
Abstract
Introduction
Identifying AI
War as Existential, Not Just Instrumental
Continuum Argument
Moral Reasoning
Moral Agency
What Role Does AI Play in War?
Conclusion
Disclaimer
Notes
References
8. Ethical Constraints and Contexts of Artificial Intelligent Systems in National Security, Intelligence, and Defense/Military Operations
Abstract
Introduction
Delineating Major Types of AI
Moral Guidance for Autonomous AI
Locating Moral Responsibility
Programming AI Morality
Ethics in Context and Community
Toward Synthesis: An AI “Cooperating System” for Ethics
Conclusion
Acknowledgments
References
9. An AI Enabled NATO Strategic Vision for Twenty-First-Century Complex Challenges
Abstract
NATO Adaptation: An Evolution of Change and Collaboration
NATO Transformation and AI
A Strategic Vision for NATO's Organizational Transformation
References
10. AI Ethics: Four Key Considerations for a Globally Secure Future
Abstract
Coming to Terms with AI
Narrow, Narrow, Narrow
AI Technology: Are We There Yet?
Can LAWS Be Moral?
The Dual-use Paradox
Ethics and AI
Conclusion: Four Key Considerations for a Globally Secure Future
Disclaimer
References
11. Epilogue: The Future of Artificial Intelligence and Global Security
Index


Artificial Intelligence and Global Security


Artificial Intelligence and Global Security: Future Trends, Threats and Considerations EDITED BY YVONNE R. MASAKOWSKI US Naval War College, USA

United Kingdom – North America – Japan – India – Malaysia – China

Emerald Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK

First edition 2020

Copyright © 2020 Emerald Publishing Limited

Reprints and permissions service
Contact: [email protected]

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a license permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation, implied or otherwise, as to the chapters' suitability and application and disclaims any warranties, express or implied, as to their use.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Disclaimer
The views and material presented in this book are the sole responsibility of the authors/editor and do not reflect the views or the official positions of the US Naval War College, the US Department of the Navy, the US Department of Defense (DoD), or other agencies of the United States Government. This book is presented solely for educational and informational purposes. Although the authors and publisher have made every effort to ensure that the information in this book was correct at press time, the authors and publisher do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any other cause. The authors and publisher are not offering it as legal, accounting, or other professional services advice. Neither the authors nor the publisher shall be held liable or responsible to any person or entity with respect to any loss or incidental or consequential damages caused, or alleged to have been caused, directly or indirectly, by the information or advice contained herein. Every company and organization is different, and the strategies and recommendations contained herein may not be suitable to your organization and/or situation.

ISBN: 978-1-78973-812-4 (Print)
ISBN: 978-1-78973-811-7 (Online)
ISBN: 978-1-78973-813-1 (Epub)

This book is dedicated to all military personnel who place their lives on the line to preserve our freedoms. I would like to thank my family and US Naval War College friends and colleagues, especially Dr. Timothy J. Demy and Dr. William F. Bundy, for their encouragement and support. Special thanks to those military students whom I had the privilege to teach and mentor during my tenure as a Professor at the US Naval War College. Special appreciation to my students from the Ethics and Emerging Military Technologies (EEMT) graduate certificate program for many hours of exciting and inspirational discussion on Artificial Intelligence and future warfare. These officers, selected as students at the Naval War College, are among the military's brightest and are our nation's future military leaders.

If you want one year of prosperity, grow grain. If you want ten years of prosperity, grow trees. If you want one hundred years of prosperity, grow leaders. Chinese Proverb


Table of Contents

About the Editor    xv
About the Contributors    xvii
List of Contributors    xxi
Preface    xxiii

The book was written by researchers and authors across many disciplines and domains – Artificial Intelligence (AI), autonomous unmanned systems, the undersea and space domains, and humanitarian crises – including the ethical, theological, and moral conflicts of war. These areas and topics encompass numerous moral and ethical issues. Among the questions these chapters will explore are: What are the implications of AI for the individual, for personal identity, privacy, and freedom, and for society? What are the consequences of AI advances related to national and global security? These chapters will examine the perspectives and consequences of integrating AI into our daily lives, as well as its influence on society and war. Authors will present their perspectives on the potential for significant consequences of AI's impact on an individual's identity that may place society at risk. What are the moral and ethical boundaries and responsibilities of each person's life as AI blends with humans into the whole of society? Does humanity lose its identity in the process? Where are the lines drawn between AI systems and the human? These are but a few of the questions that will be examined in these chapters. Whatever the course of action, AI will continue to be part of our future world. As such, humans must chart a course of action to navigate the waters of the future that we design for ourselves.


Chapter 1 Artificial Intelligence and the Future Global Security Environment Yvonne R. Masakowski    1

Advances in Artificial Intelligence (AI) technologies and Autonomous Unmanned Vehicles will shape our daily lives, society, and future warfare. This chapter will explore the evolutionary and revolutionary influence of AI on the individual, society, and warfare in the twenty-first-century security environment. As AI technologies evolve, there will be increased reliance on these systems due to their ability to analyze and manage vast amounts of data. There are numerous benefits in applying AI to system designs that will support smart, digital cities, as well as support the future warfighter. However, advances in AI-enabled systems do not come without some element of risk (Hawking, Musk, & Wozniak, 2015). For the military, AI will serve as a force multiplier and will have a direct impact on future global security. The military seeks to exploit advances in AI and autonomous systems as a means of achieving technological superiority over their adversaries. We will explore the advantages and potential risks associated with the emergence of AI systems in the future battlespace (Armstrong, Bostrom, & Shulman, 2016). This chapter will serve as the foundation for examining issues such as ethical decision-making, moral reasoning, etc., related to the integration of AI systems in our daily lives, as well as in the future battlespace. Consequences for integrating AI into all aspects of society and military operations will be explored, as well as the implications for future global security.

Chapter 2 Artificially Intelligent Techniques for the Diffusion and Adoption of Innovation for Crisis Situations Thomas C. Choinski    35

The diffusion and adoption of innovation propels today's technological landscape. Crisis situations, real or perceived, motivate communities of people to take action to adopt and diffuse innovation. This chapter will discuss the ability of Artificial Intelligence to resolve the challenges confronting the diffusion and adoption of innovation. Capacity, risk, resources, culture, complexity, ethics, and emerging situations affect the pace of diffusion and adoption. Artificial Intelligence can search the solution space, identify potential solutions, reduce risk, and mitigate unintended consequences while addressing the value proposition in order to chart courses of action through social networks. In doing so, Artificial Intelligence can accelerate the diffusion and adoption of technological innovation and contribute to the resolution of immediate crisis situations, as well as chart courses of action through emerging landscapes. Artificial Intelligence can help humans in this process, but it cannot replace their role. Achieving this goal will require a better understanding of human and machine interaction.
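The following minimal Python sketch is an illustration of the kind of technique the abstract gestures at, not a method taken from the chapter: a toy independent-cascade simulation that estimates which "seed" adopters in a small, hypothetical crisis-response network would most accelerate the diffusion of an innovation, found by brute-force search over the (tiny) space of seed pairs. The network, the adoption probabilities, and the node names are all assumptions made for the example.

```python
# Toy sketch: independent-cascade diffusion on a hypothetical crisis-response
# network, with a brute-force search for the pair of seed adopters that
# maximizes expected uptake. Everything here (nodes, probabilities) is an
# illustrative assumption, not data or a method from the chapter.
import random
from itertools import combinations

# node -> list of (neighbor, probability that adoption spreads along the edge)
NETWORK = {
    "field_team": [("logistics", 0.4), ("hospital", 0.3)],
    "logistics":  [("hospital", 0.5), ("ngo", 0.6)],
    "hospital":   [("ngo", 0.2), ("government", 0.4)],
    "ngo":        [("government", 0.5)],
    "government": [("field_team", 0.1)],
}

def expected_adopters(seeds, trials=2000, seed=0):
    """Monte Carlo estimate of adopters under the independent cascade model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        adopted, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for neighbor, p in NETWORK[node]:
                if neighbor not in adopted and rng.random() < p:
                    adopted.add(neighbor)
                    frontier.append(neighbor)
        total += len(adopted)
    return total / trials

def best_seed_pair():
    """Exhaustively search the (tiny) solution space of two-node seed sets."""
    return max(combinations(NETWORK, 2), key=expected_adopters)

if __name__ == "__main__":
    pair = best_seed_pair()
    print(f"Best seeds: {pair}, expected adopters: {expected_adopters(pair):.2f}")
```

On a realistic network the exhaustive search would give way to greedy or learned heuristics, but the point stands: diffusion planning can be posed as search over a solution space, which is exactly where AI techniques can help.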

Chapter 3 "Something Old, Something New" – Reflections on AI and the Just War Tradition Timothy J. Demy    53

This chapter will focus on the relationship between the centuries-old and prevailing tradition of warfare in the West and the innovations and challenges of AI with respect to the tradition and emerging norms of warfare. It will investigate whether the nature of warfare is changing or the weapons of warfare are changing and how such change affects the normative framework.

Chapter 4 Space War and AI Keith A. Abney    63

New technologies, including AI, have helped us begin to take our first steps off the Earth and into outer space. But conflicts inevitably will arise and, in the absence of settled governance, may be resolved by force, as is typical for new frontiers. But the terrestrial assumptions behind the ethics of war will need to be rethought when the context radically changes, and both the environment of space and the advent of robotic warfighters with superhuman capabilities will constitute such a radical change. This chapter examines how new autonomous technologies, especially dual-use technologies, and the challenges to human existence in space will force us to rethink the ethics of war, both from space to the Earth, and in space itself.

Chapter 5 Building an Artificial Conscience: Prospects for Morally Autonomous Artificial Intelligence William D. Casebeer    81

Discussions of ethics and Artificial Intelligence (AI) usually revolve around the ethical implications of the use of AI in multiple domains, ranging from whether machine learning trained algorithms may encode discriminatory standards for face recognition, to discussions of the implications of using artificial intelligence as a substitute for human intelligence in warfare. In this chapter, I will focus on one particular strand of ethics and AI that is often neglected: whether we can use the methods of AI to build or train a system which can reason about moral issues. Here, I discuss (1) what an “artificial conscience” consists of and what it would do, (2) why we collectively should build one soon given the increasing use of AI in multiple areas, (3) how we might build one in both architecture and content, and (4) concerns about building an artificial conscience and my rejoinders. Given the increasing importance of artificially intelligent semi- or fully autonomous systems and platforms for contemporary warfare, I conclude that building an artificial conscience is not only possible but also morally required if our autonomous teammates are to collaborate fully with human soldiers on the battlefield.


Chapter 6 Artificial Intelligence and Ethical Dilemmas Involving Privacy James Peltz and Anita C. Street    95

This chapter explores how data-driven methods such as Artificial Intelligence pose real concerns for individual privacy. The current paradigm of collecting data from those using online applications and services is reinforced by the significant potential profits that the private sector stands to realize by delivering a broad range of services to users faster and more conveniently. Terms of use and privacy agreements are a common source of confusion, and are written in a way that dulls their impact and dupes most users into automatically accepting a certain level of risk in exchange for convenience and "free" access. Third parties, including the government, gain access to these data in numerous ways. If the erosion of individual protections of privacy and the potential dangers this poses to our autonomy and democratic ideals were not alarming enough, the digital surrogate product of "you" that is created from this paradigm might one day freely share your thoughts, buying habits, and pattern of life with whoever owns these data. We use an ethical framework to assess key factors in these issues and discuss some of the dilemmas posed by Artificial Intelligence methods, the current norm of sharing one's data, and what can be done to remind individuals to value privacy. Will our digital surrogate one day need protections too?

Chapter 7 Artificial Intelligence and Moral Reasoning: Shifting Moral Responsibility in War? Pauline Shanks Kaurin and Casey Thomas Hart    121

How does AI shift the burden of moral responsibility in war? It is no longer merely far-fetched science fiction to think that robots will be the chief combatants, waging wars in place of humans. Or is it? While Artificial Intelligence (AI) has made remarkable strides, tempting us to personify the machines “making decisions” and “choosing targets,” a more careful analysis reveals that even the most sophisticated AI can only be an instrument rather than an agent of war. After establishing the layered existential nature of war, we lay out the prerequisites for being a (moral) agent of war. We then argue that present AI falls short of this bar, and we have strong reason to think this will not change soon. With that in mind, we put forth a second argument against robots as agents: there is a continuum with other clearly nonagential tools of war, like swords and chariots. Lastly, we unpack what this all means: if AI does not add another moral player to the battlefield, how (if at all) should AI change the way we think about war?

Chapter 8 Ethical Constraints and Contexts of Artificial Intelligent Systems in National Security, Intelligence, and Defense/Military Operations John R. Shook, Tibor Solymosi and James Giordano    137

Artificially intelligent (AI) systems are being considered for their potential utility and value in a variety of warfare, intelligence, and national security (WINS) settings. In light of this, it is becoming increasingly important to recognize the capabilities, limitations, effects, and need for guidance and governance of different types of AI systems and uses in WINS initiatives. Most generally, AI systems can be regarded as "soft" or "hard," although, in reality, these are not wholly binary, but rather exist along a continuum of structural and functional complexity of design and operations. "Soft AI" retains and reveals the fingerprint of responsibility incurred by the human builder and operator. It is programmed and controlled, and at best only semi-autonomous. Here attribution can be placed upon human factors in the design and/or articulation-of-action/event(s) chain. "Hard AI" (e.g., autonomous systems), however, can break that chain, if and when the system moves beyond its initial human-programmed input–output features to evidence "developed/inherent" – and not simply directly derived – characteristics and traits. Law will likely infer that there is a builder's bias and basis for any manmade system, and thus could argue that the burden of responsibility would rest upon the human enterprise that evidenced the most probable custodianship/stewardship of the feature evoking the action in question. But this could fail to obtain if and when a system becomes autonomous (e.g., via hierarchical generative encoding). In such an instance, the system could develop novel characteristics as adaptive properties, in some cases in response and rejective reaction to certain exogenous (i.e., human) attempts at constraint, imposition, and control. This result would prompt questions of responsibility and attribution, as well as considerations of developmental trajectories, possible effects, and the regulation, viability, and definable constraints of use. This chapter will focus upon these possibilities, and address:

1. Peer capability conferred by AI in asymmetrical engagements.
2. Concepts and content of what constitutes jus contra bellum, jus ad bellum, and jus in bello parameters of AI use in political and military engagements.
3. Effect of AI use on relative threshold(s) and tolerances for warfare.
4. Proposed approaches to risk analysis, mitigation, and operational guidance of AI systems in WINS settings.

Chapter 9 An AI Enabled NATO Strategic Vision for Twenty-First-Century Complex Challenges Imre Porkoláb    153

In the twenty-first century, the international defense community has largely struggled with how to organize, strategize, and act effectively in increasingly complex and emergent contexts where the previous distinctions between war and peace have blurred beyond comprehension. Popularly termed "black swan" events continue to shatter any illusion of stability or extension of normalcy in foreign affairs. Western armed forces as well as intergovernmental military alliances such as NATO appear increasingly unable to deal with these problems using traditional planning and organizing methodologies alone. What had worked well previously no longer appears to possess the same precision and control. The formal operational-level military planning process, initially developed to cope with Cold War Era large-scale military activities in "a conventional, industrialized state vs industrialized state setting," now seems incapable of providing sufficient means of getting the organization unstuck. Within this new and increasingly complex context, coupled with the increasing tempo of the Fourth Industrial Revolution, NATO has to fulfill all three core tasks at the same time and, in a sense, go through a complete digital transformation. This requires new and novel approaches from policymakers and military personnel alike. Artificial Intelligence is playing a crucial role in this digital transformation. In this chapter the author will address Artificial Intelligence in future multinational military operations and introduce the most recent political discussions, as well as the research trends and the implications for future warfare for the Alliance. Specific topics that will be covered include:

• Possibility to use AI in future foresight analysis – better foresight through machine learning.
• Artificial Intelligence and the lessons learned process – how can we transform NATO into a learning organization?
• Capability building – possibilities for using AI and big data analysis to assess capability gaps and improve the defence planning capability system.
• Joint research – cooperation in AI research (Pentagon JAIC, CNAS AI center, and NATO AI research facilities).

Chapter 10 AI Ethics: Four Key Considerations for a Globally Secure Future Gina Granados Palmer    167

Harnessing the power and potential of AI continues a centuries-old trajectory of the application of science and knowledge for the benefit of humanity. Such an endeavor has great promise, but also the possibility of creating conflict and


disorder. This chapter draws upon the strengths of the previous chapters to provide readers with a purposeful assessment of the current AI security landscape, concluding with four key considerations for a globally secure future.

Chapter 11 Epilogue: The Future of Artificial Intelligence and Global Security James Canton    177

Dr. James Canton, CEO and Chairman of the Institute for Global Futures (www.GlobalFuturist.com), will provide a futuristic view of Artificial Intelligence.

Index    185


About the Editor

Yvonne R. Masakowski, PhD, MPhil, MA, has a distinguished career in Psychology and Human Factors spanning over 25 years. She was recently appointed as a Research Fellow by the US Naval War College following her retirement as an Associate Professor of Strategic Leadership and Leader Development in the College of Leadership and Ethics at the US Naval War College. At the Naval War College, Dr. Masakowski focused on the advancement of leader development for the US Navy and the impact of advanced AI technologies on military affairs. Dr. Masakowski currently serves as the US Chair of a NATO panel on Leader Development for NATO Multinational Military Operations (NATO HFM RTG 286). She also serves as a dissertation thesis mentor on Artificial Intelligence in the Ethics and Emerging Military Technology (EEMT) graduate certificate program at the US Naval War College. Prior appointments included serving as an Associate Director for Human Factors, Office of Naval Research Global office in London, UK, and as the CNO Science Advisor to the Strategic Studies Group (CNO SSG). Dr. Masakowski earned her Doctorate of Philosophy in Psychology and Master’s Degree in Philosophy at The City University of New York. She received a Master’s Degree in Psychology (Psycholinguistics) from the University of Connecticut and her Bachelor of Arts in Experimental Psychology from Rutgers University. She earned a diploma from the MIT Seminar XXI program in Foreign Policy and National Security. She has also attended Yale University


where she studied biomedical ethics. She has taught leadership, ethics, cross-cultural competence, and the humanities to graduate students. She has also provided executive leader development to US Navy Admirals and US Navy Attorneys. Her research interests include Artificial Intelligence, decision-making, autonomous systems, leader development, cross-cultural competence, and the Humanities. Dr. Masakowski is the author of numerous publications, articles, and book chapters. She has edited a book on Decision Making in Complex Environments, and has written the following book chapters: The Impact of Synthetic Virtual Environments on Combat System Design and Operator Performance in Computing with Instinct; Cultural Competence and Leadership in Multinational Military Operations in Military Psychology from an International Perspective; The Dynamics of NATO Multinational Military Operations Inclusive Leadership Practice in Global and Culturally Diverse Leaders and Leadership; and Leaders and Ethical Leadership in Military and Leadership Development. Dr. Masakowski's leadership and results-oriented philosophy have been recognized nationally and internationally. She has recently been awarded the US Department of Defense Superior Civilian Service Award (2019) and the Albert Nelson Marquis Lifetime Achievement Award from the Marquis Who's Who Publications Board. Dr. Masakowski has also been recognized by the Czech Republic and awarded that nation's highest military Medal of Honor, The Cross of Merit. She has also been the recipient of awards from France and Poland for her efforts in advancing Science and Technology for military applications.

About the Contributors

Keith A. Abney, MA, is senior lecturer in the Philosophy Department and Research Fellow of the Ethics & Emerging Sciences Group at California Polytechnic State University, San Luis Obispo. His areas of expertise include many aspects of emerging technology ethics and bioethics, especially issues in space ethics and bioethics, robotics, AI and cyberethics, autonomous vehicles, human enhancements, and military technologies. He is a co-editor of Robot Ethics (MIT Press) and Robot Ethics 2.0 (OUP) as well as author/contributor to numerous other books, journal articles, and funded reports.

James Canton, PhD, is a global futurist, social scientist, serial entrepreneur, and advisor to corporations and governments. For over 30 years he has been forecasting global trends, risks, and game-changing innovations. He is the CEO of the Institute for Global Futures, a leading think tank that he founded in 1990. He has advised three White House Administrations and over 100 companies, including the US Department of Defense (DoD), the National Science Foundation, and MIT's Media Lab Europe, on future trends. Previously, he worked at Apple, and was a US policy advisor, investment banker, and founder of five tech companies. He is the author of Future Smart, The Extreme Future, and Technofutures.

William D. Casebeer, PhD, MA, is Senior Director of Human-Machine Systems at Scientific Systems Company, Inc. Bill was the Director of the Innovation Lab at Beyond Conflict, and was the Senior Research Area Manager in Human Systems and Autonomy for Lockheed Martin's Advanced Technology Laboratories, where he led technology development programs to boost the ability of humans and autonomous systems to work together. Bill served as a Program Manager at the Defense Advanced Research Projects Agency from 2010 to 2014 in the Defense Sciences Office and in the Biological Technologies Office. He retired from active Air Force duty as a Lieutenant Colonel and intelligence analyst in August 2011 and is a 1991 graduate of the USAF Academy.

Thomas C. Choinski, PhD, Naval Undersea Warfare Center, has over 40 years of experience encompassing innovation, management, and engineering that has led to an interdisciplinary approach for technological innovation. Tom has published or presented more than 70 papers in journals or through symposia on


topics including innovation, autonomy, unmanned systems, wargaming, digital signal processing, and microwave design. He holds graduate degrees in business and engineering. Dr. Choinski completed a PhD in humanities (concentration in technology, science, and society), as well as a fellowship through the MIT Seminar XXI Program in National Security Studies.

Timothy J. Demy, PhD, ThD, ThM, MSt, MA, is Professor of Military Ethics at the US Naval War College, Newport, RI. Previously, he served for 27 years as an officer in the US Navy. He earned the ThM and ThD from Dallas Theological Seminary, and the PhD from Salve Regina University. Additionally, among other degrees, he earned the MA from the Naval War College and the MSt from the University of Cambridge. He is the author and editor of numerous articles, books, and encyclopedias on a variety of historical, ethical, and theological subjects.

James Giordano, PhD, is Professor in the Departments of Neurology and Biochemistry, Chief of the Neuroethics Studies Program, Chair of the Subprogram in Military Medical Ethics, and Co-director of the O'Neill-Pellegrino Program in Brain Sciences and Global Law and Policy at Georgetown University Medical Center, Washington, DC. He currently chairs the Neuroethics Subprogram of the IEEE Brain Initiative; is Research Fellow in Biosecurity, Technology, and Ethics at the US Naval War College; Advisory Fellow of the Defense Operations Cognitive Science section, SMA Branch, Joint Staff, Pentagon; Bioethicist for the Defense Medical Ethics Committee; and is senior appointed member of the Neuroethics, Legal and Social Issues Advisory Panel of the Defense Advanced Research Projects Agency (DARPA). He is the author of over 300 papers, 7 books, 21 book chapters, and 20 government white papers on brain science, national defense, and ethics; in recognition of his achievements, he was elected to the European Academy of Science and Arts and named an Overseas Fellow of the Royal Society of Medicine (UK).

Casey Thomas Hart, PhD, is an Ontologist at Cycorp, where he teaches AI what it needs to know to reason about the world. He earned his doctorate in philosophy from the University of Wisconsin-Madison, where he specialized in formal epistemology and the philosophy of science. He lives in Austin, TX. He is inspired by his family: his tireless and wonderful wife Nicole and their two adorable daughters, Juliette and Elizabeth.

Pauline Shanks Kaurin, PhD, is a Professor of Military Ethics at the US Naval War College, Newport, RI. She earned her doctorate in Philosophy at Temple University and is a specialist in military ethics and Just War theory, the philosophy of law, and applied ethics. Recent publications include: When Less is not More: Expanding the Combatant/Non-Combatant Distinction; With Fear and Trembling: A Qualified Defense of Non-Lethal Weapons; and Achilles Goes Asymmetrical: The Warrior, Military Ethics and Contemporary Warfare (Routledge, 2014). She served as a Featured Contributor for The Strategy Bridge. Her new book on Obedience will be published by US Naval Institute Press in Spring


2020. She has also published with Clear Defense, The Wavell Room, Newsweek, and Just Security.

Gina Granados Palmer, MLA, is a faculty member at the US Naval War College, Newport, RI. Ms. Palmer received a Master of Liberal Arts degree in International Relations from Harvard University's Division of Continuing Education and a BS in Mechanical Engineering from California Polytechnic State University, CA. She is currently completing her doctoral dissertation on literary and visual representations of war termination in the Pacific Theater at the end of World War II. She is focusing her doctoral research on leadership, ethics, technology, war, and the balance between diplomacy and defense at Salve Regina University, Newport, RI.

James Peltz, PhD, is a program manager with the US Government. He is also a graduate of the US Naval War College. He has a decade of experience managing government research portfolios in the fields of nuclear science, nuclear energy, and nuclear non-proliferation. He has specific experience and scientific expertise in predictive best-estimate analysis of engineering systems, to include model verification, validation, and uncertainty quantification. James has published several refereed articles and has given several invited presentations on these topics to domestic and international audiences.

Imre Porkoláb, PhD, earned his post-graduate degrees at military and civil universities in Hungary and the United States, including Harvard and Stanford. He is a highly decorated military professional with operational tours in Iraq and Afghanistan. He has also played a crucial role in the development of the Hungarian Special Operational Forces (SOF) capabilities. From 2011 he served in the US, at the NATO Allied Command Transformation, as Supreme Allied Commander Transformation (SACT) Representative to the Pentagon. Since 2018, he has been directing transformation work in the area of the HDF's innovation and building the national defence industrial base. He is an expert in guerrilla and counterterrorism warfare, and his research areas include unconventional leadership, change management in a VUCA environment, innovative methods of organizational transformation, and the applicability thereof in the business world. He is an accomplished international speaker and writer. His first book, titled Szolgálj, hogy vezethess!, was published in 2016; his second book, A stratégia művészete, came out in 2019.

John R. Shook, PhD, teaches philosophy at Bowie State University in Maryland, and Georgetown University in Washington, DC. He also teaches research ethics and science education for the University at Buffalo's online Science and the Public EdM program. He has been a visiting fellow at the Institute for Philosophy and Public Policy at George Mason University in Virginia, and the Center for Neurotechnology Studies of the Potomac Institute for Public Policy in Virginia. At Georgetown University, he works with James Giordano of the Pellegrino Center for Clinical Bioethics. Shook's research encompasses philosophy of science, pragmatism, philosophical psychology, neurophilosophy, social neuroscience,


moral psychology, neuroethics, and science-religion dialogue. He co-edited Neuroscience, Neurophilosophy, and Pragmatism (2014) and American Philosophy and the Brain: Pragmatist Neurophilosophy, Old and New (2014). His articles have appeared in Cortex, Neuroethics, AJOB-Neuroscience, Cambridge Quarterly of Health Care Ethics, Philosophy, Ethics, and Humanities in Medicine, and Journal of Cognition and Neuroethics.

Tibor Solymosi, PhD, teaches philosophy at Westminster College in New Wilmington, Pennsylvania. He has previously taught at Allegheny College, Bowie State University, Mercyhurst University, and Case Western Reserve University. His research focuses on the consequences of the sciences of life and mind for our self-conception as conscious, free, and morally responsible selves. He is co-editor of Pragmatist Neurophilosophy (Bloomsbury) and Neuroscience, Neurophilosophy and Pragmatism (Palgrave Macmillan). He is currently working on the intersection of neuroscience and democratic culture, specifically regarding the effects of social media, digital devices, big data, and artificial intelligence.

Anita C. Street, MS, is a Technical Advisor with the US Government. She has 30 years of experience working in the areas of strategic foresight, environmental science, emerging technologies, and national security. She has edited and co-authored a number of publications on nanotechnology and clean water applications, Life Cycle Analysis of emerging technologies, peak phosphorus, and the influence of science fiction on research and development of converging technologies.

List of Contributors

Keith A. Abney, MA – California Polytechnic State University, USA
James Canton, PhD – Institute for Global Futures, USA
William D. Casebeer, PhD, MA – Scientific Systems Company, Inc., USA
Thomas C. Choinski, PhD – Naval Undersea Warfare Center, USA
Timothy J. Demy, PhD, ThD, ThM, MSt, MA – US Naval War College, USA
James Giordano, PhD, MPhil – Georgetown University Medical Center, USA
Casey Thomas Hart, PhD – Cycorp, USA
Pauline Shanks Kaurin, PhD – US Naval War College, USA
Yvonne R. Masakowski, PhD, MPhil, MA – US Naval War College, USA
Gina Granados Palmer, MLA – US Naval War College, USA
James Peltz, PhD – Department of Energy, USA
Imre Porkoláb, PhD – Hungarian Defence Forces, Hungary
John R. Shook, PhD – University at Buffalo, NY, USA
Tibor Solymosi, PhD – Westminster College, USA
Anita C. Street, MS – US Government, USA


Preface

All Warfare is Based on Deception

–Sun Tzu.

The arms race is on and Artificial Intelligence (AI) is the fast track for twenty-first century global dominance. Nations view AI technologies as a force enabler and the key to achieving global dominance. During the twenty-first century, AI technologies will control information, people, commerce and future warfare. It is our responsibility to be at the helm and shape our future as AI joins the fight. (Masakowski, 2019)

This book had its origins in hours of discussion between the author and her students, colleagues, fellow scientists, and engineers on the important role of Artificial Intelligence (AI) in shaping the future of society and warfare. The topic of Artificial Intelligence is one that lends itself to lively debate, as the technology itself has advanced exponentially, oftentimes being overpromised and underdelivered. Indeed, there is a wide array of perspectives on this topic, as some view AI as the great problem solver, while others see it as a threat to humanity itself. I contend that there is a bit of truth in both perspectives. We are facing an unknown entity in many ways. Foremost among these issues is whether AI technologies will achieve total self-awareness and become a serious threat to humanity. For the moment, we can rest assured that there will be a human-in-the-loop and human-on-the-loop to ensure that AI systems do not present dangers to the human (Porat, Oran-Gilad, Rottem-Hovev, & Silbiger, 2016). However, there are serious considerations regarding the ethical, theological, and moral challenges these technologies present to us during times of war. War itself is debated within the context of Just War theories. Within that context, how will AI technologies influence the rules of engagement and Just War practices of warfare in the future? Should AI systems be designed with ethical rules and algorithms that will constrain their actions and serve as their conscience in future conflicts? In the course of researching and writing this book, I've discussed these topics with my colleagues and invited them to contribute their expertise and knowledge, as well as speculative theories for future warfare in light of advances in AI technologies. We also need to consider the impact of advanced AI systems


fighting against AI systems in the future. How will AI technologies reset the rules of engagement for future warfare? What are the potential ethical and moral implications of such a future war? Futurizing warfare is a risky business; we can wargame future concepts, but these are often limited by our unique personal experience and knowledge. We need to step out of our comfort zone and imagine a world without ethical and moral boundaries, for that is what our adversaries will do. They will not be contained or constrained by such limitations. This book will address questions related to the influence and impact of Artificial Intelligence technologies being applied across a wide array of crises and domains, as well as address the ethical and moral conflicts of war. Among the questions these chapters will explore are: What are the implications of AI for the individual's personal freedom, identity, and privacy rights? What are the consequences of AI advances related to national and global security? Is there a need to develop an AI conscience? What are the potential impacts of AI-to-AI system warfare? Each chapter will examine the perspectives and consequences for the integration of AI in our daily lives, as well as its influence on society and war. There are considerable consequences for underestimating the potential impacts of AI in warfare. Sun Tzu would have fully appreciated the potential benefits of AI as a tool of deception, as he stated, "The supreme Art of War is to subdue the enemy without fighting." We anticipate that AI will continue to evolve and expand its reach on a global scale. Whatever its course, advances in AI will present challenges and risks for its implementation in daily life, as well as in times of war. It is left to the human to chart a course that will help mankind navigate unknown territory and shape the future world in which we want to live.

Acknowledgments

My long-standing interest in research and the topics of neuroscience, brain development, and cognitive psychology has afforded me the opportunity to work with a number of outstanding individuals across civilian and military communities. I believe that our thinking is a tapestry woven throughout the course of our lives, shaped by our education, experience, knowledge, and insights gained through research and dialogue with others. I am indebted to those friends, colleagues, mentors, and students with whom I have engaged and from whom I have learned over the years. Indeed, they are far too numerous to mention here. However, I would like to share the following acknowledgments with you regarding those who contributed to making this book a reality. Please allow me to acknowledge my contributing authors for sharing their knowledge, expertise, and perspectives on Artificial Intelligence. Special thanks to Dr. Timothy Demy for his encouragement in my research, teaching, and writing. Special acknowledgment to my friend and colleague, Dr. William Bundy, who recently passed away. Dr. Bundy was a champion for all of us who reached out to the future to anticipate how technology would change our lives and warfare. He was a friend, teacher, and mentor to many students and colleagues as he led the Gravely Group at the US Naval War College. I am especially grateful to my US Naval War College Ethics and Emerging Military Technology (EEMT) graduate students for the hours spent discussing, brainstorming, speculating, and imagining a future with ubiquitous Artificial Intelligence shaping and controlling our future lives and warfare. Special thanks to my editors at Emerald Publishing for their patience, support, and continuous encouragement. This book is evidence of their commitment to excellence in their work and publishing.


Chapter 1

Artificial Intelligence and the Future Global Security Environment
Yvonne R. Masakowski

Abstract

Advances in Artificial Intelligence (AI) technologies and Autonomous Unmanned Vehicles are shaping our daily lives and society and will continue to transform how we fight future wars. Advances in AI technologies have fueled an explosion of interest in the military and political domain. As AI technologies evolve, there will be increased reliance on these systems to maintain global security. For the individual and society, AI presents challenges related to surveillance, personal freedom, and privacy. For the military, we will need to exploit advances in AI technologies to support the warfighter and ensure global security. The integration of AI technologies presents advantages, costs, and risks in the future battlespace. This chapter will examine the issues related to advances in AI technologies, weighing the benefits, costs, and risks associated with integrating AI and autonomous systems in society and in the future battlespace.

Keywords: Artificial intelligence; future; autonomous; unmanned; decision-making; modeling; deep learning; military; strategy; tactics

The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life.
–John Fitzgerald Kennedy.

Introduction

The next world war will not be focused on who fired the first shot. Rather, future warfare will be focused on the control and dissemination of information across all domains, including information about everything and everyone! AI networks will


serve as the grid and blueprint of the new battlespace of the twenty-first century. Becoming the leader in Artificial Intelligence (AI) is the key to achieving global dominance. Nations will use AI technological superiority to shape the geopolitical, social, and economic environment to their strategic advantage. Nations will integrate AI technologies and networks across a wide spectrum of applications to expand and enhance their economic, geopolitical, and military agendas. Advances in AI will influence every facet of humanity and daily life, impacting the individual and society and shaping the battlespace of war. As nations strive to address each of these global challenges, they seek to identify tools and technologies that will support them in their decisions during times of crises across a range of domains. We see AI being used today during the Global Pandemic in the development of models and surveillance systems to monitor people's movements and travel patterns on a local and global level. Nations require information management tools that help to sort through the elements of crisis management, such as logistics, personnel management, supply chains, the distribution of goods, and technologies for maintaining regional security (Hamel & Green, 2007). Each of these topics requires a means of managing vast amounts of data, and indeed, AI technologies excel at Big Data management (ICO, 2017). Advances in AI technologies will support both government and nongovernment agencies, such as the United Nations and the International Committee of the Red Cross, in preparing for and managing crises as they emerge. The United States, NATO, and its allied partners have identified AI technologies as critical for ensuring technological superiority and achieving the military strategic advantage (US Department of Defense, 2019; Masakowski, 2019d). AI affords the military and nongovernment agencies the means of managing and analyzing large data sets and supporting decision-making at all levels (Tegmark, 2018). Indeed, there is a great deal of enthusiasm for the development of AI-enabled technologies to improve daily life and enhance warfare. AI technologies serve as a force multiplier for the military, as they can streamline the information pipeline, analyze data, and accelerate the decision cycle. However, we contend that it is essential to keep the human-in-the-loop, as humans alone have the ability to understand context and integrate the nuances of contextual information across a wide domain of contributing factors that the AI system has yet to master (e.g., culture, organizational hierarchy, etc.). AI technologies, to some extent, have been overpromised and underdelivered, as people become excited about their capabilities and often misunderstand the complexities and limitations of AI systems. Simply stated, you can't buy a pound of AI and insert it like plaster in a crack in the wall. You have to first understand the problem and related issues within the context of the situation itself. This requires knowledge and understanding of the situation in which you want to deploy AI as part of a network, as part of the organization, or merely to augment a human decision-maker's thinking. One of the principal advantages for the military is to develop adaptive and agile AI networks that will facilitate information management during times of crises. Information collected by distributed sensors across all domains, including air, land, sea, undersea, and space, will be analyzed and managed by integrated and


distributed AI networks. Satellites will be linked as information nodes within the grid of AI networks emanating across all domains and at all levels to capture data and attempt to make sense of it all. AI is the key to achieving global dominance for all nations seeking to control information across all domains to their strategic advantage! China and Russia have provided evidence for this objective in their national strategies. China has put its stake in the ground, stating that it aims to be the global leader in AI technologies by 2030, and it is well on its way to making good on its claim (Allen, 2019; Chan & Lee, 2018; China State Council, 2017; Dai & Shen, 2018). China has invested heavily in AI technologies for numerous applications, from facial recognition for identity management, population control, and resource management (e.g., water, minerals, and materials) to manufacturing applications. China has provided further evidence of the potential impact of AI technologies on society and global security. Likewise, Russia has invested in AI to shape its global political and strategic objectives and to weaponize information to its political and strategic advantage. Regardless of our personal or political views, the fact is that AI technologies have facilitated new, advanced capabilities. The question is: What type of future do we want to create? How can democratic nations use AI technologies to ensure global security? Advances in AI technologies will provide a means of safeguarding a stable environment in which the national security strategy, productivity, and economic progress are ensured. Indeed, national strategies are designed to ensure economic and geopolitical stability. In the book Future Smart, we are advised about game-changing trends that will shape the future of business and warfare (Canton, 2015). Canton highlights the similarities between current advances in AI technologies and the innovations and inventions of the Industrial Revolution. He posits that AI will influence and accelerate the development of technologies across a wide range of domains, including medicine, manufacturing, and warfighting (Canton, 2015). He further hypothesizes that the impact of AI will be accelerated by the integration of quantum computing, as well as by advances in batteries and materials. One can extrapolate from this hypothesis that the integration of AI advances with new materials, batteries, and computing power will only expand and enhance its capabilities. Time to decide and act will grow shorter with each advance! Each advance in AI technology and computing power increases the level of risk in global security, as advances will present new and unanticipated challenges to maintaining global security. China is aware of these benefits and has already integrated AI technologies into what it calls the intelligentization of design (Zhou, 2013). Shifting from innovation to intelligentization refers to the fact that new systems and technologies being designed will be integrated with a level of intelligence that will make them safer, more reliable, self-sustaining, and just plain smarter than previously thought. For example, a locomotive design that has been integrated with AI affords it increased capabilities of being self-aware, self-learning, and self-maintaining, and ensures network connectivity and communication that make the locomotive safer, more reliable, and cost-


effective. Like the Industrial Revolution, we anticipate that intelligentized systems will continue to expand and contribute to a network of self-sustaining, automated systems capable of manufacturing a host of products, technologies, systems, and tools of war. We recognize that AI is having an impact on society, politics, economics, and manufacturing capabilities. Advances in AI are also having significant effects on the battlespace and twenty-first century warfare. AI networks have accelerated the decision cycle for the warfighter and for the adversary. AI networks and intelligent agents can quickly detect, collect, and analyze data to inform decisions and make recommendations for the benefit of the decision maker. However, these same capabilities may also prove to be a weapon in the hands of the adversary. The real advantage democratic nations have is their people. From the American Revolution through the Space Race, the human spirit has launched mankind into new territories. This time is no different. Humans have a pioneering spirit and adapt quickly to change. This is such a time, one that calls for adaptation and innovation. For the twenty-first century is a time rife with opportunity, complexity, and uncertainty. We must acknowledge that global security rests not on any individual nation but requires all nations to work together collaboratively to achieve a sense of collective security for all. There are challenges and concerns regarding trusting the source of data and determining whether the information received is trustworthy. It is essential that democratic nations work together to ensure that trust and truth are the essential ingredients of the AI networks and systems that will underpin global security. Indeed, nations must reckon with issues related to national sovereignty that, up until now, have prohibited them from sharing historical databases and proprietary information. However, if we are to ensure global security, then nations must work collaboratively and cooperatively to achieve unity of effort in securing our world. There will be a trade space in which nations must contribute to the greater good of maintaining global situational awareness (GSA) if they are to ensure global security. "Those who would give up essential Liberty, to purchase a little temporary Safety deserve neither Liberty nor Safety" (Franklin, 1776). Advances in AI present a paradigm shift for nations seeking to be assured of controlling their national security agendas. Nations must collectively defend against future adversaries, for both their collective security and the safeguarding of their individual national security agendas will depend upon it. The responsibility for ensuring and safeguarding national agendas will necessarily be a collective venture among nations across the private and public sectors, given the tremendous investment it will take to ensure that nations have AI systems in place that can detect, if not anticipate, adversarial behaviors and intent before negative events occur! Anticipatory and predictive AI networks and agents must be developed if we are to be agile and adaptive in this new, complex, and uncertain global security environment. AI networks facilitate cyberwarfare and the weaponization and manipulation of information, presenting cyber threats that raise serious challenges and concerns for ensuring global and national security. The question is: How will AI be used to

shape future governments and future generations? How will we defend our nations in the cyberspace of the future to ensure global security? For now, we are at the intersection of innovation, intelligentization, and an evolution and revolution of AI networks and system designs that present opportunities and challenges, each of which may have grave consequences for mankind. It is apparent to some of us that corporate and government defense organizations must work together if we are to be successful in ensuring a secure future for all.

Future AI networks and systems will be used to shape the global security landscape. Indeed, AI dominance will ensure a nation's global geopolitical and economic dominance as it attains control of the information environment and its resources across all domains: air, land, sea, undersea, and most especially, space. China and Russia recognize the military gain in this regard. There is no time to waste for nations, including NATO and its Allied partners, to invest in AI technologies as a means of ensuring global security. If nations (NATO, Allied partners, etc.) are to remain adaptive and agile, we must develop AI systems that are anticipative and predictive. Future AI situational awareness (SA) systems will play a pivotal role in providing nations with the decision advantage. We will no longer have the luxury of time. It took approximately 22 minutes to rally a military response to the 9/11 attacks. Today, we have less time to respond and far less time to understand the complexities of the situation. AI technology and networks can reduce that time and accelerate the decision cycle to ensure global security. Investment in AI technologies will provide the military strategic advantage that is essential for global security. Intuitive AI systems, designed with adaptive neural networks and machine learning algorithms, will detect intent and forecast future events. Automated bots, adaptive computer algorithms, and smart systems will transform business and warfare. Distributed sensors, embedded from the sea floor to satellites in space, will facilitate global situational awareness (GSA) with predictive capabilities. AI technological superiority will support nations that collaborate and cooperate with each other to ensure global security. This will take the political will of leaders who wish to gain the strategic advantage and who make decisions regarding the sharing of national information to achieve a common understanding of a complex and uncertain environment.

Today's operational environment has become more complex as a result of advances in AI and AI-enabled technologies such as robotics and Autonomous Unmanned Vehicles (AUVs) and systems. Indeed, the character of warfare has been transformed by the emergence and utilization of AI networks and unmanned autonomous systems, including AUVs and Unmanned Underwater Vehicle (UUV) systems. These AI-enabled system designs have expanded our SA in the battlespace with extensive networks of sensors from the sea floor to space satellites. The information garnered from these distributed sensor networks has been collated and analyzed by AI networks to facilitate more effective mission planning and execution (Finney & Klug, 2016). However, these systems have yet to be challenged strategically by adversaries equipped with similar capabilities. That day will come. Consideration has been given to the potential impact of AI
technologies in urban warfare, and one can only hope that such a reality never comes to pass. Advanced AI technologies are considered critical components for gaining the military decision advantage by our global competitors such as China, Russia, and North Korea. Each of these nations has developed strategic doctrine on AI technologies and on how it intends to invest in their future development to maintain global military dominance. Advances in AI and computing power will continue to increase system capabilities and contribute to the complexity of the operational environment. It is a propitious time to consider the effects of AI technology within the context of future global security. As nations compete to be the global leader in AI technologies, the security of the world is in play and at risk. AI networks of the future must be designed to ensure global security, for adversarial nations seeking global dominance will exploit every vulnerability to their strategic, economic, and geopolitical advantage. Nations must work collaboratively to develop the AI technologies necessary to ensure global security. Global security supported by AI networks and systems must be a strategic imperative.

Innovation and Advances in AI

There have been significant advances in AI technologies, as evidenced by the wide array of systems developed by the commercial marketplace. From smart phones to digital cities, we have only begun to see the potential for the integration of AI technologies in our lives (Masakowski, Smythe, & Creely, 2016). We anticipate that advances in AI technologies and tools will expand into and infiltrate every aspect of life. Just as the telephone, radio, rail, and the airplane emerged and influenced society during the Industrial Revolution, so too will AI technologies reshape twenty-first century society and warfare. Such technological changes will be both evolutionary and revolutionary: evolutionary in that AI technologies will continue to modify our behaviors and the way that we live and conduct daily life. Whether we are booking a flight, traveling from one city to another, getting stuck in traffic, or looking for a restaurant in a new city, we check our AI-enabled smart phones to help guide our decision-making. The military uses AI technologies to collect and manage sensor data, gain SA, and conduct Intelligence Preparation of the Battlespace (IPB), which are essential elements for ensuring mission success.

We are sitting on the precipice of great changes in society and in our lives. The arms race is on for global dominance (Masakowski, 2019a, 2019b). AI is the fast track to a future in which nations will compete to be the leader in AI as a step toward controlling global commerce and warfare. As nations and society integrate AI advances to improve the quality of daily life, such changes will also become integrated into the future battlespace. As advances in AI technologies continue to evolve, we anticipate that the revolution in warfare has only just begun (Geissler, 2019). Specifically, advances in AI-enabled technology will shape the roles and responsibilities for individuals,
society, and the military (O'Flanagan, 2018). We must acknowledge that as AI technologies advance and become integrated into society, the military will seek to exploit such advances to its strategic advantage (Masakowski, 2019c). Once turned on, these capabilities will not be turned off. We anticipate that AI technologies will reshape the battlespace much as the development of the longbow, tank, radar, airplane, and aircraft carrier did. Regardless of its era, each technological advance was a game-changer on the battlefield that revolutionized warfare for its time. However, unlike the longbow, whose use was confined to the battlefield, AI technologies will be neither confined nor constrained. Advances in AI technologies will extend beyond the battlespace and reach into our homes and personal lives.

Today, the commercial community is redesigning the environment with digital cities, energy grid management systems, AI-networked air traffic control systems, and integrated networks that link individuals, cities, and governments. For the military, this constellation of AI networks presents opportunities for nations and adversaries to manage information and exploit the AI network to their tactical and strategic advantage. I would argue that the blueprint of a digital city may be readily converted into the command and control grid for future warfare. The integrated network designed to manage a city's infrastructure may also be co-opted and repurposed for warfare by our adversaries. There are grave implications for global security when thinking about the ways in which such AI networks can be used in the future. Foremost among these is the issue of managing crises and conducting warfare in urban regions, a topic that has been and will continue to be explored (Geissler, 2019).

The future operational environment, extending from 2020 to 2050 and beyond, will continue to be dynamic and challenging, with increased complexity and uncertainty. We must take time to consider how these changes will affect future warfighting. Nations must develop doctrine, policies, and warfighting rules of engagement, as well as legal and ethical practices, that will integrate AI systems into the network of mission plans as a critical step for ensuring their nation's security. Two of the principal questions my students have raised in this regard are: Will I be court-martialed if I do not follow the AI system's recommendations? And will I be court-martialed if I follow the AI system's recommendations and they are wrong? There are serious ethical considerations for military leaders and warfighters that must be addressed by policymakers. Technology in and of itself is not the principal issue. Rather, there are considerable ethical and moral consequences related to the implementation and integration of advanced AI systems for society and warfare (Geissler, 2019; O'Flanagan, 2018; Peltz, 2017). We must remain vigilant and perspicacious regarding advances in AI technologies, as these will provide capabilities to the military as well as to our adversaries. We must anticipate and war-game worst-case scenarios and the potential for adversaries to use such technological advances as weapons against society (Ashrafian, 2015a, 2015b; Bostrom, 2014).

Civilian and Military Applications: Shaping Expectations

As a society, we have already become somewhat accustomed to advances in artificial intelligence technology. We have a set of expectations regarding the capabilities of such technologies at our fingertips. We Google our questions, searching for answers and trusting that the information provided to us is accurate. Regardless of our search engines, we anticipate that these tools will help us find the answer to our question. We implicitly trust that the information will be reliable. Our expectations have been shaped by our interactions with these AI networks. We have adapted to this new world environment in which our perceptions are shaped by the news media. This sense of trust, acquired through our interactions with AI systems, has set the stage for shaping our societal and political perspectives as well. Recent revelations regarding the Russian intrusion into our last presidential election have shown that AI facilitates our adversaries' ability to shape our nation's politics.

We have come to accept cameras and surveillance systems monitoring our decisions and movements. Alexa, placed in our homes, provides a means of monitoring our thoughts, social interactions, and political perspectives. We have unwittingly invited personal surveillance into our homes as an accepted part of the twenty-first century norm, and we have abdicated our personal privacy rights as part of this process. We anticipate that surveillance systems will continue to monitor and mold every facet of our lives, including national economic and political agendas. As AI systems become more integrated into each city's infrastructure, we anticipate that events will be monitored continuously on a local and global level. Monitoring unwitting civilians has far-reaching ethical consequences for society, as it violates individual privacy rights and intellectual freedoms (Geissler, 2019; Masakowski et al., 2016; Peltz, 2017). Therefore, attention must be paid to the level of intrusiveness and control that AI technologies will be allowed to exert on the civilian population. We must align our system designs with the values and ethics of the society in which they will function.

From an individual perspective, members of society will be presented with privacy challenges that will affect every aspect of their personal lives: their jobs, housing, education, associations, friendships, and political views. From a national perspective, nations will shape and control their citizens' perceptions. Private citizens will be monitored in their homes and workplaces. Friendships and associations will be scrutinized by AI systems that collect data to ensure that individuals are aligned with government policies as a means of ensuring stability within a region. It really is all about control of information, people, and warfare. However, there are consequences to this approach. According to Geissler (2019):

This Orwellian capability creates a one-sided Panopticon, leaving the targeted population paralyzed and without privacy. Being left with little choice to fight back, distrust and fear will disrupt the informal life and damage the social fabric.

We must consider ways to ensure that future AI systems are built around ethical values consistent with those of society. This is critical, for there is a real danger that failing to do so will have grave consequences for society and nations (Geissler, 2019; O'Flanagan, 2018; Peltz, 2017). As AI systems evolve and become ubiquitous on a global scale, this pervasive AI network will become more deeply integrated, more complex, and not easily managed by humans. This presents challenges to designers and developers, who must consider the potential negative consequences of developing advanced AI systems that may present risks to society. For the present, AI systems are intended to augment and support human decision-making. However, this may change as AI system designs progress. Future AI designs may be the intrusive educator that identifies the unsuspecting and unwilling citizen who is not aligned with national perspectives. AI may become the sole instrument for reshaping human behavior and perspectives to align with a national agenda.

China serves as an example of how AI surveillance technology can be used to shape political perspectives through digital social scoring. The government uses AI surveillance systems to monitor social interactions and behaviors that do not align with the Communist Party's political agenda. Individuals deemed to be behaving contrary to China's national policies are removed from society to be re-educated into the national Communist political agenda. Do you think that this could never happen in the United States? Weren't we all educated in schools that begin class with the Pledge of Allegiance? What does this mean for US citizens, who have historically been known as independent thinkers? How will AI be used against each nation's citizens?

For the military, concerns are raised regarding the ways that adversaries will make malicious use of advances in AI technologies (Ilachinski, 2017). Adversaries whose ethics fail to align with those of society at large will not be reluctant to use advanced AI technologies to their advantage, regardless of the gravity of the consequences for their fellow human beings. Theirs will be a strategic objective, unconstrained by a code of ethics, moral values, or ethos that prevents them from doing harm to others.

AI and Preparing for Future Warfare

Future warfare will be focused on economic and geopolitical control. Control of information across the globe, and of people, places, resources, and economies, will enable nations to achieve global dominance across all domains – air, sea, land, undersea, and especially space and its resources. Economic prosperity is essential to maintaining a nation's infrastructure and secure environment, as well as its flow of trade and commerce. From a national perspective, nations will have to secure their political and economic stability to maintain their place on the global stage and in the global economy. Nation-states that fail to maintain economic and political stability are at risk of war, whether on a civil or regional scale. Failed economies give rise to political upheaval and civil unrest. Regional
crises will emerge over critical resources such as food, land, and water. AI will play a critical role in ensuring the security of a region as well as sustaining economic stability. Maintaining a secure economic environment is of paramount importance in major cities, as they serve as the hubs of economic progress. Ports of call in the United States, such as Boston, New York, and the ports of California, are critical to monitor to ensure that commercial transit continues unhindered by adversarial actions. The maritime commons will need to remain free to extend the global commerce that is essential for supporting each nation. AI surveillance and monitoring technologies across each domain (land, air, sea, and space) will ensure that the transit of goods continues unimpeded and thereby support continued economic trade and growth.

AI technologies offer great promise in support of global security. However, leaders must achieve an understanding of AI technology, as there is a potential for cascading consequences that will flow from these advances to shape our future. For example, AI surveillance systems provide a means of collecting and analyzing vast amounts of data. These data may be used for planning and providing courses of action in complex operations. Regardless of which agency or military uses these systems to conduct its operations, information is time critical. AI systems excel at information management and analysis. The questions are: What are the objectives for gathering this information? How will this information be used, and by whom?

The military is a primary user of AI-enabled systems that manage Big Data. Its principal concern is whether the AI system is reliable and trustworthy. It must find ways of validating the information, options, and recommendations provided by AI systems. From the individual soldier to the NATO leader, each person shares these concerns as nations recognize the inevitable integration of AI technologies into future warfare. Soldier-centric AI technologies will prove to be an asset on the future battlefield. However, soldiers need technologies designed and tailored to their specific roles and responsibilities. Such systems must also have ethical rulesets embedded in their design to ensure that a targeting mistake or mishap does not occur as a result of using AI technology developed without ethical values integrated into its design (a simple illustrative sketch of such a ruleset appears at the end of this section). We know that AI networks will provide leaders with options and recommendations for developing mission plans. However, such systems will need to support human ethical values as they carry out their missions. Namely, AI should be designed to be self-aware and self-defensive, to optimize SA, minimize errors, mitigate collateral damage, and be predictive regarding adversarial actions, and to ensure operational success in an ethical manner. This is not the current state of AI technology but rather a wish list of characteristics that will be essential for future AI systems.

NATO is addressing the topic of AI-enabled systems integrated into the battlespace. It is seeking ways to exploit AI technologies to make NATO operationally agile and able to support a wide range of military operations. NATO is developing plans for educating its leaders and encouraging the design of AI systems that will support the leader and decision-maker. Further, NATO has published its set of guidelines for AI designs, highlighting the need to develop AI
systems that are ethical, lawful, robust, resilient, and trustworthy (NATO, 2019). Although the template of the future has been presented, the reality of these designs will be challenged by the ethical application of AI. Each nation is concerned about the influence of AI and how it will affect future warfare. There are numerous questions as to how AI systems will be designed and how they will be used in multinational military operations. Among the questions raised: How will nations share information with their allied partners? How will they make systems interoperable and interchangeable? How will they come to an agreement as to who will lead an operation with AI systems in play? Regardless of the questions raised, each nation readily recognizes the value that AI information gathering and analysis tools bring to each military operation.

Unity of effort is key to the success of future NATO operations as the Alliance seeks to share AI networks and benefit from their output and recommendations. NATO's decentralized mission command is an integral plan for distributed teams to work collaboratively with total shared SA that will enable them to make independent decisions. This approach moves decision-making down to each level, empowering and enabling soldiers to have a shared understanding of the commander's intent, shared SA, and the ability to make informed and effective decisions. However, risks remain related to the trust and truth of the information provided (Sheridan, 1992; Shim & Arkin, 2012; Russell, Dewey, & Tegmark, 2015; Singhvi & Russell, 2016). How can soldiers and leaders verify the source, validity, and accuracy of information that arrives so rapidly and is enmeshed in an AI network? It is easy to say that we want AI systems that are robust, resilient, reliable, and trustworthy. It is another thing to design and develop such systems. At least for the moment, AI is being developed with these considerations in mind.

As AI moves forward in design and capabilities, there will be an ever-increasing level of concern regarding its ability to operate independently of the human. There are no definitive answers to date, as AI has yet to achieve the level of general knowledge required to formulate independent decisions. We would argue that such delays have given us time to reflect on the capabilities society and the military require, as well as the costs and risks we are willing to incur. As we prepare for the future, we must consider that there may be increased challenges and potential inherent threats from integrating AI systems into military systems on a global scale. There is also a risk regarding the design of AI systems themselves (Singhvi & Russell, 2016). Designers who go rogue for ideological reasons may embed malicious content in the AI network, presenting challenges and dangers to society and/or the military (Arkin, 2007; Tegmark, 2018). Indeed, we envision a future where all things are possible and prepare for worst-case events and scenarios based on the potentially negative applications of emerging AI technologies. Perhaps such considerations should far outweigh those given to the benefits, costs, and risks associated with advanced AI technologies.

Leaders such as Elon Musk, Stephen Hawking, and other notable scientists have argued that the risks associated with AI systems far outweigh the
benefits. Scientists have published an open letter warning the world of the dangers and risks associated with the development of AI systems (Hawking, Musk, & Wozniak, 2015; Mack, 2015; Russell, Dewey, & Tegmark, 2015). Scientists and others forecast that AI systems will ultimately evolve to dominate humans and alter the world as we know it (Kurzweil, 2006; Hawking, Musk, & Wozniak, 2015; Tegmark, 2018). Indeed, some have speculated that AI systems may well lead us into WWIII in error. Regardless of the disparity of viewpoints among scientists, the emergence and evolution of AI systems is inevitable. AI systems will continue to advance and evolve. Therefore, each nation must be prepared to deal with worst-case scenarios if we are to preserve our freedom and security. Within this context, we contend that it is prudent to take time to reflect on and consider the potential societal and military impact of AI systems on the future global security environment. Whether AI will be used to maintain a city's infrastructure, secure a nation's economic and political stability, and/or ensure unrestricted commercial trade, there are consequences to integrating AI technologies into the fabric of society and the military. AI technologies have only just begun to become an integral part of our daily life. It is only a matter of time before such systems are integrated into every nation's infrastructure, institutions, and military organizations. To ensure continued economic and political stability and security, nations must first consider the various perspectives on, and potential consequences of, implementing and integrating AI technologies and systems in society and in warfare.
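As a closing illustration of the embedded ethical rulesets discussed earlier in this section, the minimal sketch below (written in Python; the rules, thresholds, and field names are entirely hypothetical and chosen only for illustration) shows how an AI-generated engagement recommendation might be gated by explicit checks before any action is permitted. Real rulesets would need to be far richer, legally grounded, and rigorously validated.

# Hypothetical sketch: an ethical ruleset acting as a gate on an AI recommendation.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_confidence: float   # 0.0 .. 1.0, confidence in target identification
    collateral_risk: float     # 0.0 .. 1.0, estimated risk to non-combatants
    human_authorized: bool     # has a human commander approved the action?

# Each rule names the concern and tests whether the recommendation violates it.
RULES = [
    ("insufficient target confidence", lambda r: r.target_confidence < 0.95),
    ("collateral risk too high",       lambda r: r.collateral_risk > 0.05),
    ("no human authorization",         lambda r: not r.human_authorized),
]

def review(rec):
    """Return (approved, reasons_blocked) after applying every rule."""
    blocked = [name for name, violates in RULES if violates(rec)]
    return (len(blocked) == 0, blocked)

ok, reasons = review(Recommendation(0.97, 0.10, True))
print("approved" if ok else "blocked: " + ", ".join(reasons))

In this sketch the recommendation is blocked because the estimated collateral risk exceeds the hypothetical threshold, regardless of how confident the system is in the target: the point is that the ruleset, not the optimization, has the final word.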

Cyberattacks and AI

The increasing complexity of the environment has reconfigured the battlespace from one of well-defined geographic borders and readily identified adversaries to one without borders or boundaries and replete with invisible adversaries. Nations are no longer limited to traditional physical combat on the battlefield. Rather, individuals and nation-states may be equally effective in taking advantage of AI systems and networks to launch a cyberattack that violates and destabilizes a neighboring nation's economic and political stability without firing a shot. The Russian intervention in the US Presidential election serves as evidence of this type of political shaping. The weaponizing of information became a new tool of cyberwarfare. Nations readily recognize the value of shaping national and political perspectives by weaponizing information to their strategic advantage. The dangers of such information manipulation highlight the importance of developing AI systems and networks that can validate trusted agents as sources and provide a means of evaluating the truth of the information itself. Given the potential for exploiting advances in AI over the coming years, there is a need to address how such technologies might be used to defend a nation's economy and political interests. This new ideological cyberwarfare sets the stage for a shift in the balance of power on a global scale. While China has led the way in its application of AI technologies to shape its national profile and social structure,
democratic nations such as the United States and its allies must forge their national strategies to ensure that personal freedoms are preserved while maintaining national security. Democratic nations must lead the charge in developing AI systems that can verify, validate, and promote trust in the information pipeline and ensure that the information provided to each nation can be trusted. Nations must optimize their approach by employing AI technologies to ensure that the balance of power remains favorable to human rights and freedom. Ethical guidelines must be integrated into future AI networks, with checks and balances to ensure the security of information disseminated on a global scale. Nations must take a defensive posture against cyber deception and the manipulation of information by adversaries who use AI to exploit the cyberspace network to their strategic advantage (Demchak, 2019). "All warfare is based on deception" (Sun Tzu).

One can readily see the advantages, if not necessarily the benefits, for governments of developing AI technologies that will provide GSA to ensure global security. China has demonstrated the utility of maintaining a network of surveillance systems, using facial recognition AI, to maintain social order among the civilian population. As the population increases, there is a need to maintain the stability and security of the environment. This approach ensures stability within the nation while also building a pervasive, integrated AI surveillance network that can easily be transformed into a command and control network for future warfare. AI technologies are dual-purposed in China. Satellites and sensor networks used for communication serve both civilian and military objectives. AI technologies are designed and implemented to support the economic, political, and military objectives that will ensure China's national strategic imperatives. This strategy serves it well, as the AI surveillance systems can be converted into a command and control network; China is therefore also always prepared for war. Russia, for its part, has used AI systems to weaponize information management and influence other governments' elections and economies (Behrmann, 2020; Konaev & Vreeman, 2019; Putin, 2019; Polyakova, 2018; Polyakova & Boyer, 2018). Military readiness therefore mandates the development of AI technologies that will provide the competitive military advantage and ensure the security, safety, and stability of each nation's citizens. However, the integration of AI-enabled systems does not come without risk. It raises the need for total GSA and for a constellation of redundant satellites, systems, networks, and capabilities that will enable nations to be operationally agile as they seek to maintain peace and security.

The future global security environment is challenged by the complexities of a battlespace replete with an integrated network of autonomous systems, distributed sensors, robotics, and AI-enabled systems that serve as the grid for managing today's complex operational environment. The integration of advanced technologies such as AI-embedded systems and networks, including autonomous unmanned systems such as AUVs and UUVs, has altered the way that the military, and indeed the individual soldier, must function and fight in the twenty-first century battlespace.
This is important because nations must address a wide range of operations that will require AI technologies to help them anticipate where they will be needed, to assess whether emerging events will be catastrophic, and to determine how
best to manage crises as they emerge (US Department of Defense, 2012, 2018a, 2018b). Attention must also be given to developing agile AI networks that are capable of self-awareness, with built-in alert systems that inform the military of adversarial intrusions, and AI systems that are capable of self-defense in the face of such events. The complexities of the global environment make it an operational imperative to address these challenges by using AI technologies to anticipate events and prepare to manage each crisis.

Autonomy and Artificial Intelligence

Toward the end of the twentieth century, there was a great deal of focus on the revolution in military affairs, which argued for netcentric warfare and for advances in technology including AI and autonomous unmanned systems such as AUVs and UUVs (Cummings & Guerlain, 2007; Cummings, Clare, & Hart, 2010; Goodrich & Cummings, 2014). Netcentric warfare was the focus of the day, describing a network of integrated systems that could be managed by the human-in-the-loop. As AI technologies advanced in combination with machine learning (ML), deep learning (DL), and adaptive neural networks (ANNs), there was a conviction that AI would address every crisis. Some believed that it was only a matter of time before the human could step aside and let AI make all decisions (Turing, 1950; Tegmark, 2018). However, we know that we aren't there quite yet.

Today, research continues in the design of autonomous systems with embedded AI. Advances in ML and ANNs have facilitated the development of adaptive AI systems capable of learning the properties of the environment and forming innovative solutions (Alpaydin, 2016). Unlike previous AI systems, which were deterministic, combinatorial algebraic formulations, ML uses algorithms and rulesets that optimize learning by recombining layers of information in a hierarchical manner. Indeed, advances in AI, ML, and computational modeling have moved research in the direction of affective computational models and machine consciousness (Aberman, 2017; Arkin, 1992; Chandra, 2017). DL models human thinking using a complex, integrated network of nodes with hidden layers that process data to make sense of information and to understand it in a holistic manner. In deep learning, algorithms learn the data's structure and modify their internal node networks as they integrate layers of data to facilitate pattern recognition of properties such as those in visual recognition and speech (i.e., prosody, pitch, temporal patterns, etc.) (Chandra, 2017; Masakowski, 2018d). That is, the AI system identifies specific features as it learns, building on and transferring primitive features from one layer to the next, much as a child learning a language integrates features of speech (prosody, pitch, cadence, etc.). AI systems that use DL can capture multiple layers of information in their ANNs to generate predictions based on their algorithms. This capability has relevance for the development and design of AI-enabled systems that can maintain the SA essential for mission planning and execution. Recent field exercises such as Sapient (Andrew, 2018; Masakowski, 2019d; Ward & Evans, 2018) have
begun to demonstrate the value of integrating such AI systems into the battlespace. MIT scientists have demonstrated that an autonomous unmanned underwater vehicle (AUV), enabled by AI, can form its own mission plans (Chu, 2015; Porat, Oron-Gilad, Rottem-Hovev, & Silbiger, 2016). Although there is still human oversight, this shift in capabilities from human-centric to AI system-centric decision-making will influence future warfighting.

The transition from open combat on the battlefield to cyberwarfare has also transformed the battlespace (Finney & Klug, 2016). Today, open combat is conducted within the context of cyberwarfare and irregular warfare. In the future, AI-embedded robotic soldiers will fight on the battlefield. Adversaries take advantage of every domain to gain an edge in this new battlespace; one advantage of AI networks is that they are difficult for adversaries to reverse-engineer. Adversaries will exploit vulnerabilities in the system network to gain the decision advantage. AI will provide a means of protecting our systems by serving as a network sentry, keeping persistent surveillance to defend our networks in the future. As AI systems become self-aware and capable of defending their networks, we will see the emergence of other AI systems that search for flaws and vulnerabilities in these defense systems. AI networks must be designed to detect mimicry and deception, as well as to defend themselves against adversarial actions by nations that intend to seize command and control of the AI system. Future system designers will develop AI systems that are defensive and capable of countering an adversary's attempts to take over the system (Shim & Arkin, 2012). Pervasive, persistent, vigilant, and agile artificial networks will be critical for the defense of future global security.
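To make the layered learning described above concrete, the following minimal sketch (written in Python with NumPy purely for illustration; it is not drawn from any of the systems cited in this chapter) shows a tiny network with a single hidden layer learning a simple non-linear pattern. The hidden layer builds intermediate features of the input, which is the mechanism that deep learning scales up across many layers.

# Illustrative sketch: a two-layer network learning the XOR pattern.
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: four input patterns and the non-linear target to be recognized.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for one hidden layer and one output layer.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)       # hidden layer: learned intermediate features
    out = sigmoid(h @ W2 + b2)     # output layer: prediction from those features

    err_out = (out - y) * out * (1 - out)    # output error signal
    err_h = (err_out @ W2.T) * h * (1 - h)   # error propagated back to the hidden layer

    # Adjust both layers in proportion to the error they contributed.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]] as training proceeds

Run repeatedly, the printed outputs typically converge toward the target pattern, illustrating how error signals propagated backward through the layers reshape the internal representation; the same principle, at vastly greater scale, underlies the speech and image recognition capabilities discussed in this chapter.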

Artificial Intelligence and Machine Learning: Advantages

Artificial Intelligence (AI) systems and networks can deter and disrupt adversaries' attempts to violate a nation's networks. AI systems have the ability, due to their complex computational capabilities, to discover, learn, and recognize patterns, as well as to detect intrusions and threats. In addition, deep learning has afforded AI systems the ability to learn from experience and adapt to changes in the environment, form and test hypotheses, draw conclusions, formulate plans, make decisions, and act. AI can be a powerful asset for nations and adversaries alike who seek to gain the decision advantage in cyberwarfare (Ilachinski, 2018). Advances in AI technologies have shaped a new strategic path for each nation's security strategy. Artificial Intelligence, machine/deep learning, and autonomous unmanned systems will serve as a force multiplier in support of a nation's strategic objectives, as well as a leader's decision-making capabilities. It is incumbent upon society to consider the impact of these systems if we are to be prepared to defend against their being used against us in the twenty-first century battlespace. The future military force will be required to understand how to operationalize such technology rather than the specifics of its design.

As nations invest in advanced technologies, AI and autonomous unmanned systems will be used to reshape the global geopolitical landscape. AI technologies afford nations the ability to monitor their systems, shape their citizens' perspectives, and control their behaviors in ways that were previously not possible. Given the potential power of future AI systems, this capability must be considered a new form of threat, and one that must be defended against. AI capabilities have increased over time in recognizing patterns of speech and images. This ability to model human perception enables devices such as smart phones, tools such as "Alexa", and robots like "Sophia" to collect data on individuals in the privacy of their homes without recrimination. As surveillance data are collected, there is no guarantee that these data will not be used to develop a profile of the individual for nefarious purposes (Peltz, 2017). On the other hand, AI systems can also be exploited to reduce risks and loss of life in combat situations (Ramiccio, 2017). Although we are cognizant of AI technology's positive outcomes, we must remain vigilant regarding the potential for malicious applications of this technology, as it may be used for adverse purposes (Ilachinski, 2017a, 2017b, 2018).

Nations seek to gain the strategic and tactical advantage over potential adversaries by exploiting AI technologies. Although this is the principal driver for military applications, there are challenges and risks associated with achieving this objective. Among these issues, humans have difficulty trusting AI systems and relying on their decisions. Soldiers will develop trust in AI systems as their algorithms are successfully executed. The ability to verify and validate information from an AI system remains a challenge, as the data are embedded within the deep layers of the neural network. Instead, the human will have to rely on algorithms that formulate options to shape their decision-making. One related issue is that these systems may not provide full data for the human to evaluate. Depending on the AI's capabilities, one may not receive full contextual data but rather a prescribed subset based on the algorithm's selection of relevant information. That is, the AI neural network system will develop options and decide which information is relevant for the warfighter.

Advances in AI technology will accelerate the scale and speed of information processing, as AI excels at Big Data management. AI systems have demonstrated tremendous capacity for managing vast amounts of data, which can enhance Situational Awareness (SA) and decision-making in the military operational environment. The integration of cognitive models and mission plans into the design of autonomous unmanned systems has propelled this technology forward as an independent platform. However, as these systems become more automated and capable of independent decision-making, one must consider the consequences of relinquishing decision-making authority in the combat operational environment. This is especially relevant given the focus on decentralized mission command and decision-making. Decision-making at every level must be supported by reliable, accurate, and valid information provided to the leader.
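As a simple illustration of the pattern- and intrusion-detection capability noted at the start of this section, the sketch below (in Python; the metric, baseline values, and threshold are hypothetical and chosen only for illustration) flags observations that deviate sharply from a learned baseline. Operational systems would, of course, learn far richer patterns across many correlated signals.

# Illustrative sketch: flagging anomalous activity against a learned baseline.
from statistics import mean, stdev

# Hypothetical baseline of "normal" observations (e.g., login attempts per minute).
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]

def is_anomalous(observation, history, threshold=3.0):
    """Return True when the observation lies more than `threshold`
    standard deviations away from the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) > threshold * sigma

for reading in [14, 15, 52, 13]:
    status = "ALERT" if is_anomalous(reading, baseline) else "normal"
    print(f"{reading:>3} -> {status}")

The design choice that matters here is that the detector alerts on deviation from a learned pattern rather than on a fixed signature, which is the same logic, greatly elaborated, behind the AI-enabled network defenses described in this chapter.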

International Perspectives on AI

The twenty-first century security environment presents numerous global security challenges, such as mass migrations, regional conflicts, human trafficking, piracy, counter-terrorism, and humanitarian and natural disaster relief crises. Each of these crises presents challenges to the military, governments, and non-government agencies as they attempt to meet these demands. As crises emerge, the military, governments, and agencies need to manage the vast amounts of data related to each crisis. In addition, government and non-government agency leaders may be presented with conflicting information and need to make critical decisions that may have ethical and moral consequences.

Unlike in previous eras, the United States is no longer the leader across all technological domains, specifically in the area of AI technologies. China has committed itself to becoming the leader in AI technologies and has invested in them to preserve its national strategic and political agenda. It has developed AI systems for satellites, surveillance, manufacturing, and facial recognition, among other applications related to the control and dissemination of information. It is worth noting that advances in AI technologies will play a pivotal role in achieving global supremacy (Allen, 2019). China and Russia have demonstrated the importance of AI in their pursuit of global leadership. Indeed, China has committed to becoming the world leader in AI by 2030 (Burrows, 2018; Kania, 2018; Ma et al., 2018). China has invested heavily in AI technologies for numerous applications, including facial recognition, social scoring and population control, resource management (i.e., water, minerals, materials, etc.), and manufacturing. China has thus provided further evidence of the potential impact of AI technologies on society and global security. China is currently using AI facial recognition technologies to shape its economic and political landscape, as well as for social control and for encouraging the behaviors it desires of its population (Campbell, 2019; Chan & Lee, 2018; China State Council, 2017; Chutel, 2018; Dai & Shen, 2018). The level of commitment and investment in AI by China and Russia exemplifies the importance of AI in each nation's national security strategy (Behrmann, 2020; Fedor & Fredheim, 2017; Heath, 2013; Hoffman, 2012; Konaev & Vreeman, 2019; Polyakova, 2018; Polyakova & Boyer, 2018). Nations must address the complexities of these challenges, as they are advancing at a rapid rate and will affect nations and adversaries alike. The application of AI technologies and networks to shape the political landscape on a global scale presents a clear and present danger if nations fail to address and defend against it. China's strategic plan is to become the world leader in AI technologies and attain global dominance (Chan & Lee, 2018). Indeed, President Xi Jinping has stated that becoming the world leader in AI technologies is essential for global military and economic power (Allen, 2019; Burrows, 2018; Chan & Lee, 2018; Heath, 2013; Kania, 2018; Ma et al., 2018).

China has demonstrated its commitment by investing heavily in the development and design of AI technologies. It now dominates global funding of AI start-ups and intends to become the leader in AI by 2030 (Robles, 2018). China has invested in building digital smart cities with AI-embedded networks to manage their infrastructure. It has developed facial recognition and surveillance technologies to ensure the social control and compliance of its people and to secure the Chinese Communist Party leadership's agenda. China's use of AI facial recognition systems is aimed at the control of information that influences its economic and political strategies. Today, China uses AI facial recognition for social control of its population. For some of its citizens, it appears to ensure security by providing a surveillance system that reduces crime. For others, social credits may be earned by doing good deeds for society, such as donating blood; in return, citizens may earn benefits such as improved health care, thereby shaping each citizen's behavior for the benefit of society (Campbell, 2019). China assures its people that such technologies are in place to ensure their security; however, it appears that these systems are dual-use in purpose and aimed more at constraining the people to ensure political stability aligned with the national agenda. It also affords China the opportunity to lay down a network of surveillance technologies that can be modified for use in warfare. Those same surveillance systems, satellites, and AI networks will support China in its endeavor to maintain global situation awareness and control information on a global scale for its economic and political gain.

China reshapes society and the political landscape by assigning digital social scores to individuals, based on their behaviors and associations, which are used to limit their access to jobs, education, and housing. China uses this approach to ensure its people's compliance with Communist Party policies (Dai, 2018). This policy is aimed at ensuring that citizens uphold the Chinese Communist Party leadership and agenda, as well as maintaining social order (Heath, 2013, 2019; Hoffman, 2012; NATO Technology Trends Report, 2017). China uses social scoring as a means of controlling and containing an individual's personal freedoms and access to education, housing, jobs, and professional growth. Representatives in the US Congress are rallying against this type of government surveillance and have associated the surveillance of US citizens with facilitating the emergence of a "global rise in authoritarianism and fascism" (Ocasio-Cortez, 2019). For Western democratic societies, this level of government intrusion into personal beliefs and behaviors is a violation of personal freedoms. At the same time, it places China in the leadership role for exporting its AI technologies around the globe and expands its influence on other parts of the world. The integration of ubiquitous AI surveillance systems has implications for people around the world (Geissler, 2019; O'Flanagan, 2018). Total information management and control are the ultimate goals of using AI to monitor all domains from sea floor to deep space.
Dual-use in design, these AI systems can readily be modified to conduct operational warfare, serve as command and control nodes, and support overall global SA for their commanders. These surveillance systems
also impinge on individual freedoms, such as privacy and the expression of ideas independent of the nation's political perspectives. If you want a glimpse of the future, you have only to look closely at recent events in Hong Kong. AI plays a dominant role in the events unfolding there, where citizens protest in the streets as Chinese facial recognition technologies scan the scene to identify each participant. Protestors wear masks to protect their identities, and Japan has developed glasses to mask identity and trick the technology (Campbell, 2019). This stands in stark contrast to Facebook, Instagram, and other social media platforms, which are open and lack individual privacy protection. Using AI for population control, crime intervention, and political shaping affords us a prescient, if unpleasant, glimpse of the future.

AI technologies provide a means of shaping the political landscape consistent with each nation's security, political, and economic agenda (Fedor & Fredheim, 2017). AI technologies connect and control the economic, political, and social agendas integrated in this new geopolitical environment. They have proven to be effective tools for such applications and require only one's imagination to envisage novel applications to support a government or to resolve a security crisis. Russia is using AI technologies to weaponize information to its strategic advantage, shaping political perspectives by disseminating misinformation aimed at supporting its national political agenda, and it has forged ahead in its investment in AI as a means of achieving global dominance. Russia has illustrated its emphasis on AI technologies and shifted the global security paradigm by weaponizing data and information to its political and strategic advantage (Konaev & Vreeman, 2019; Polyakova & Boyer, 2018; Putin, 2019).

Advances in cognitive computing have also made an impact by shifting our focus from data management to knowledge management. The visualization of knowledge provides the decision-maker with an advantage in intuitive understanding of a complex operational environment. One can easily imagine using AI to resolve complex situations, using machine learning and neural networks to acquire knowledge that will augment human decision-making. Advances in AI allow intelligent agent networks to build models from data that will support the decision-maker. AI systems can learn and have demonstrated their adaptive capability. We anticipate an even greater paradigm shift associated with advances in quantum computing, advanced materials, and batteries that will combine to produce even greater flexibility and adaptability in future AI system designs.

Russia uses AI as a weapon to reshape the political landscape to its strategic advantage and to launch cyberattacks in pursuit of its strategic objectives. This militarization of cyberspace generates a new battlespace narrative in which AI will play a pivotal offensive and defensive role in the future – a new type of warfare has been born (Deibert & Rohozinski, 2010; Ragan, 2012; Subbotovska, 2015; Kerr, 2016, 2018; Fedor & Fredheim, 2017; Putin, 2019). Russia has demonstrated the power of shaping the political agenda on a global scale, as well as initiating a novel warfare environment in which information serves as the weapon in cyberspace. This shapes a new type of cyberwarfare in which a nation can achieve its goals without direct combat while achieving its military and political
objectives. Open combat is not required in this new frontier of twenty-first century warfare. This approach to cyberwarfare places society at risk, as adversaries may insert their ideologies as a weapon of war and shape individuals' perspectives. Citizens have become unwitting participants in ideological and/or policy shaping as information is disseminated widely.

The United States has developed doctrine focused on the importance of achieving technological superiority by exploiting advances in AI technologies (US Department of Defense, 2019). The United States must also commit financially and invest in the development of AI technologies. It must take advantage of advances in AI if it is to achieve technological superiority and be prepared to defend against adversaries in the future. Corporations and government agencies must work together in a collaborative manner if they are to compete on the global stage for AI superiority. If they fail, democracy is at risk for all democratic nations. There is an ancient proverb, "The enemy of my enemy is my friend," quoted by Churchill during WWII. Today, it serves as a warning to be cognizant of the influence of AI in shaping our enemies. Truth and trust remain primary considerations in evaluating who is our friend, and a reminder to us all when thinking of how enemies are made.

Within the context of NATO and its allied nation partners, AI systems will also influence the way that these nations share information. AI technologies will shape their interactions and their expectations for providing information in a timely manner (Jans, 2018). New rules of engagement will need to be established among these nations to govern the ways in which AI will function and share information and how these systems will be used. There are significant issues related to national sovereignty rights and the sharing of information that may constrain how AI systems are used. In addition, interoperability issues may present challenges for future collaboration and cooperation that will need to be addressed. Fortunately, the discussion on AI has begun, and policies and plans are currently being written and implemented. There will continue to be concern about ownership of data, but we contend that nations will work together to develop a strategy that will provide support across these nations. The stakes for global security are high, and gathering actionable intelligence is therefore in every nation's interest. It is an operational imperative to maintain global vigilance if we are to maintain global security (Kasapoglu & Kirdemir, 2019). Pervasive, distributed AI neural networks provide a means of achieving this end.

One can readily conclude that a future war may be initiated without firing a shot, as nations instead seek global control of information. Global information dominance may well be achieved by using AI networks as tools and weapons for shaping the economic and geopolitical landscape. Adversarial nations are not above using AI technologies to fabricate historical accounts and shape their political agendas and perspectives. Democracy can be eradicated by adversaries who use social media and AI to spread their propaganda.
Consider, for example, those in China today who do not believe that the 1989 Tiananmen Square massacre occurred, or who believe that, if it did occur, it was nothing more than an act of Western propaganda to spread the benefits of democracy, i.e., freedom of speech and the right to gather and protest. Reshaping
historical accounts within the social media AI network by our adversaries will teach each new generation whom they ought to distrust and hate. Shaping historical beliefs and cultural perspectives can therefore have far-reaching and long-term negative impacts. Thus, it is critical for nations to work collaboratively with private and public organizations to develop AI technologies and tools that can be shared on the basis of trust and truth, for we need our partners to engage with each other to ensure global security for all. AI is an enabler of surveillance technologies and of information superiority. These capabilities are essential components for nations aiming to achieve global dominance. Therefore, NATO and its allied nation partners must work together to defend against such capabilities if they are to sustain the strategic and operational military advantage.

AI: Future Forecast of Global Security Although the current level of our national investment in this area lags behind that of China and Russia, the United States is striving to develop partnerships with the corporate world and other nations, especially the United Kingdom, to develop AI technologies that will help to achieve global security. The United States and the United Kingdom view China and Russia’s emphasis on AI as an implicit threat to global dominance. As such, a partnership between nations is essential if we are to become technologically superior. The United States must invest in the development of its youth’s scientific abilities and technical expertise to ensure that future generations will be secure. AI will continue to contribute to our understanding of the cyberspace environment. Cyberspace refers to those networks, from sea floor to space, that support and provide a global security network. We anticipate that the future battlespace will become more complex as the number of AI systems continues to develop. Nations, agencies, and NATO must make it a priority to prepare for a future that will be replete with AI networks (Masakowski, 2019d; Kasapoglu & Kirdemir, 2019; Jans, 2018; NATO Technology Trends Report, 2017). Their challenge is to develop universal standards for establishing the monitoring of these networks, a common ontology/language that allows information to be shared by nations, as well as a governance body to ensure ethical applications and use of these systems (European Commission, 2019; Masakowski, 2019c, 2019d, 2019e). The command and control of the future will be a modular one where nations access an open service network that will enable interchangeability of systems vs striving to achieve interoperability. Numerous organizations and agencies are conducting research in AI technologies to develop tools that will support the warfighter (Lockheed Martin, DARPA, ONR, etc.). Scientists at MIT, Lincoln Labs, among others, are designing AI-enabled systems that will function independently of humans and work collaboratively with each other. The combination of AI and autonomous systems lends itself to the development of smart decision support systems that will support the future warfighter. Nations will be able to subscribe to an AI network to meet their needs within the context of each situation
Critical aspects in the design of future AI networks and cyber systems will include connectivity, communication, and context. Namely, AI systems that can optimize the distribution of information via enhanced connectivity will be essential for supporting neural networks in the future. However, AI/ML neural networks must go beyond just connecting the dots of information. Rather, future AI systems will need to excel at sensemaking and achieve an understanding of context. In this way, such AI systems will empower individuals by providing context and a rationale for their recommendations. Intuitive AI systems will be developed in the future with advances in cognitive computing. One can envisage a future where AI systems will present options to humans that reflect cognitive processing, hypothesis testing, context evaluation, and ethical considerations that mirror human reasoning and decision-making processes (Kurzweil, 2006; Turing, 1950). However, today humans still have the cognitive advantage for processing information within various contexts (Newell & Simon, 1972). At least for now, AI systems merely augment human decision-making by streamlining the data analysis problem and providing options for potential decisions that include probabilities of error. In contrast, human cognitive capacities enable humans to search their mental lexicon and select mental models based on their knowledge and expertise, to evaluate situations and make decisions. AI systems do not do this at this time. Rather, AI systems are excellent at reducing vast amounts of information, but that is only useful if the system provides a means of sensemaking and prioritizes its recommendations and options based on situational context. We are far from doing this with AI at this time. AI systems in the future will need to be reliable, intuitive, and predictive and provide validation for recommendations made to the decision-maker. Sensemaking and validation will be required in future AI designs that will provide the decision-maker with actionable information.

However, before we can build that future AI global network, there is work to be done. We need to develop intuitive, predictive, and anticipatory AI systems that can help humans identify emerging situations as a node/pulse of information that emerges from an integrated network of data points from across the neural network. AI must integrate information from an array of networks, from social media networks to distributed sensors. AI will need to integrate data into a recognition pattern that is predictive and evaluative to anticipate and predict events in order to prepare for, if not prevent, situations evolving into a crisis. This type of anticipatory AI system design could assist in detecting adversarial intent and provide global alerts to agencies and militaries to harness the assets necessary to defend against adversarial actions before they occur. Designers would need to develop an AI network capable of being self-aware, self-defensive, and sensitive to potential intrusions and/or manipulation. Advanced AI networks will be resilient, reliable, agile, and adaptive. Such designs will facilitate the ability to detect and deter intrusion and defend against potential adversarial attacks. Today, we merely wait for crises to emerge and then respond. Often adversaries detect vulnerabilities within the network and exploit those to their advantage. We need to develop predictive and proactive AI systems in the future.
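
Before such anticipatory, self-defensive networks exist at scale, their most basic building block is far humbler: a statistical watchdog on each node that flags activity departing sharply from its recent baseline, so that humans or downstream systems are alerted before a situation matures into a crisis. The Python sketch below is a minimal, purely illustrative version of that idea; the readings, window size, and alert threshold are invented for the example and do not describe any fielded system.

# Illustrative sketch: a rolling z-score alert over per-node activity counts.
# Window size, threshold, and readings are assumptions chosen for the example.
from collections import deque
from statistics import mean, stdev

class NodeMonitor:
    """Flags a node whose current activity deviates sharply from its recent history."""

    def __init__(self, window: int = 50, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, activity: float) -> bool:
        """Return True if this observation should raise an alert."""
        alert = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(activity - mu) / sigma > self.threshold:
                alert = True
        self.history.append(activity)
        return alert

# Usage: give each sensor or network node its own monitor and escalate
# only the nodes that trip their thresholds.
monitor = NodeMonitor()
for reading in [3, 4, 3, 5, 4, 4, 3, 5, 4, 4, 40]:
    if monitor.observe(reading):
        print("anomalous activity:", reading)

Real designs would fuse many such signals, reason about context, and estimate intent, but even this toy example illustrates the shift from waiting for a crisis to watching for its precursors.
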
Like weather forecasting systems, AI can garner information based on trends, probabilities, and interconnected nodes related to social networks, political alliances, and other relevant information to formulate a probabilistic forecast of events. An anticipatory AI system would enable nations and agencies to identify adversarial intent by integrating information across the network to anticipate, prepare, and shape the conditions for countering the effects of predicted events a priori. This tool would aid in the management of the resources, people, and platforms that would be required to support each event.

Nations, agencies, and NATO must learn to manage the complexity and chaos of the future environment. NATO and nongovernmental agencies alike will need to develop innovative, decentralized command and control and communication networks to ensure that their multinational military teams remain operationally agile and adaptive. NATO has recently redefined its approach as an enhanced C3 (consult, command, and control) capability that will support a mission command approach. This new decentralized mission command approach is intended to increase distributed situational awareness (SA) and accelerate the decision cycle at every level. This new approach will be used to enhance and empower leaders at all levels to make decisions independently in support of distributed teams. Given the demand for increased military support around the globe, there is a need to use AI, ML, AUVs, and other autonomous systems as force multipliers. Networks will need to be established that will collect, analyze, and distribute information on a global scale, from the sensors on the sea floor to the satellites in space, as a means of ensuring global situational awareness (GSA). One can envision that such a global sensor network will present vulnerabilities and opportunities for adversaries to exploit every node as a means of enhancing their capabilities. AI systems will be used to protect and preserve the integrity of each nation's network, and AI systems will be the sentry of the future, waiting and watching. Military forces, including NATO and its allies, will find it critical to maintain connectivity and communications with their partners on a global level to sustain their ability to react in a timely and effective manner.

AI technology is critical to achieving future global security. Nations must commit to the development of future AI technologies and work together to ensure collaboration and cooperation on a global scale. There is an operational imperative to exploit advances in AI technologies as these will be the primary means of accomplishing this goal. AI technologies will enable nations to monitor their economic and political status, as well as that of their global partners. As these technologies are available to all, adversaries will continue to exploit advances in AI technologies to their advantage. They will continue to be creative and innovative in their methods and use every vulnerability to undermine global economic and political stability. However, global uncertainty and disruption do not stem solely from the actions of adversaries. Rather, the force of nature itself, as evidenced by events related to climate change such as rising ocean levels, can create chaos and civil unrest. Rising tides, mass migrations, pandemics, and natural disasters, as well as geopolitical differences, cultural clashes, and intolerance, may also yield major military actions that require AI systems to be part of the decision-making process, as well as alongside the soldier of the future.
The United States will only succeed in ensuring global security by committing its resources and people to a future in which it leads in AI system design. The United States must treat the development of AI technologies and capabilities as a strategic imperative if it is to maintain its status as a world leader and ensure global security.

Impact of AI Technologies on Leader Decision-making

AI systems are flexible and adaptive such that they can provide numerous options for humans to coordinate their efforts. Leaders must learn to understand the output of the AI system and evaluate its recommendations based on their knowledge, skills, and abilities (Loten & Simons, 2017). This does not mean that leaders will need a doctoral degree to understand how the system works. Leaders will need to understand the output of such systems as they evaluate the set of recommendations presented to them. It will take education and experience for leaders to acquire an understanding of how best to use these AI systems to their decision advantage. War gaming, virtual training, and operational exercises will provide the experience necessary to achieve the knowledge for effectively using AI networks in the future. Leaders must learn to understand the capabilities of AI technologies if they are to achieve operational success. Designers must provide leaders with a means of validating information received from an AI system to ensure that the information is credible, accurate, reliable, and actionable within the operational environment. This may require that designers develop an Enigma-like coding scheme for warriors and leaders at every level to verify the veracity of the information source.

The future operational environment will continue to be volatile, uncertain, complex, and ambiguous (VUCA). However, we contend that AI technologies can level the playing field and clarify the battlespace. Military forces will need to be adaptive, innovative, and creative if they are going to maintain SA and operational agility. Each nation's ability to optimize the delivery and accuracy of information to the operator will enhance military operations. It is left to the military leader to evaluate the information within the context of the operational environment and make effective decisions. To date, we have not had to defend our nation's autonomous systems from another nation's AI system. This will change in the future as nations become more sophisticated and develop AI networks of their own. We contend that there will also be consequences for independent decision-making by AI systems that must be considered a priori. Namely, one can envision the day when an AI system can develop its own targeting strategy, so we must design these systems to make decisions aligned with human values and ethics (Arkin, 2007, 1992). We anticipate that as the level of sophistication increases in the design of autonomous systems, their AI capabilities will increase as well. Advances in AI, increased memory capacity, novel energy sources, composite materials, and increased biological modeling capabilities will help to shape the design of future AI systems that will be more adaptive, agile, and independent (Shim & Arkin, 2012). Innovative AI designs that model human behavior and cognitive processes will facilitate the ability to make such systems capable of independent decision-making.
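
One concrete way to give leaders a check on the veracity of an information source, in the spirit of the "Enigma-like coding scheme" suggested above, is ordinary message authentication: the producing system attaches a cryptographic tag to each report using a key shared with authorized recipients, and the recipient rejects anything whose tag does not verify. The sketch below uses Python's standard hmac and hashlib modules; the key handling, message format, and report contents are deliberately simplified assumptions for illustration, not an operational protocol.

# Minimal sketch of source authentication for machine-generated reports.
# Key distribution, rotation, and replay protection are omitted; the key
# and message format are illustrative assumptions only.
import hashlib
import hmac

SHARED_KEY = b"example-key-distributed-out-of-band"

def sign_report(report: bytes, key: bytes = SHARED_KEY) -> str:
    """Producer side: compute an authentication tag for a report."""
    return hmac.new(key, report, hashlib.sha256).hexdigest()

def verify_report(report: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Consumer side: accept the report only if the tag verifies."""
    expected = hmac.new(key, report, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

report = b"track 42: probable surface contact, confidence 0.87"
tag = sign_report(report)
print(verify_report(report, tag))          # True: untampered report is accepted
print(verify_report(report + b"!", tag))   # False: altered report is rejected

Authentication of this kind verifies where a report came from and that it was not altered in transit; whether the underlying analysis is sound remains a separate validation problem for the leader and the system designer.
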
We must anticipate the time when such systems will not respond in a directive manner to a human supervisor (Ilachinski, 2018; Sheridan, 1992). We must also consider how our adversaries will use these systems against us. We need to consider how to manage AI systems that will use reason and understanding in response to human directives. Leaders must acquire knowledge and experience in partnering with AI systems so that they can lead with their help. AI systems will provide guidance, advice, options, and recommendations to leaders and accelerate the time for decision-making. This will be a challenge, as we often don't trust our GPS when driving in new areas. However, as AI systems become more reliable over time, we may well take their existence for granted and have a new set of expectations as to how we consider and use their input in our decision-making.

Designing AI systems from a human biological perspective will help to develop systems that will be capable of functioning in a human-like manner. Autonomous systems will be transformed from being an extension of human decision-making in the battlespace into independent agents capable of sensing, targeting, and killing, independent of their human partners or supervisors. This will take time, but one can forecast the future in terms of what AI has afforded us to date. We already rely on it for advising us on a range of issues in our lives, from medical to financial; we are using AI to inform our decision-making. Autonomous systems, embedded with AI, will be designed to be self-aware, learn to understand relationships in the environment, and use AI cognitive models to infer the characteristics of potential targets in their environment. Systems designed with self-awareness will transform and enhance a system's capability as it gains perspective of self and can evaluate a variety of options and potential outcomes (Dobbyn & Stuart, 2003; Dutt & TaheriNejad, 2016).

While these advances sound promising, they raise several ethical and moral reasoning issues regarding second- and third-order effects on the warfighter, the leader, as well as society. It is incumbent on leaders to evaluate advances in these technologies in terms of their value in military defense, as well as to understand their impact within the context of societal ethical values. Transparency of a leader's decision-making processes and the evaluation of potential consequences are essential if we are to sustain trust between the military and civilian population. The potential for overriding the barriers of security and ignoring the ethical consequences of such actions should be taken as a warning sign (Giordano, Kulkarni, & Farwell, 2014). The ethics of technology, a recent discipline of study and scholarship, is a new topic for most leaders. Today, there is a gap between the development of AI technologies and their ethical application on the battlefield. Military leaders must make ethical decisions that are often challenging when using AI technologies. For the most part, leaders do not necessarily understand what AI technologies do and rely solely on their output. They may fail to understand the system design, yet they must consider all aspects of the potential consequences of violating societal norms and/or potential collateral damage to the civilian population. Leaders must therefore achieve a level of understanding of AI as well as of the benefits and risks associated with its use.
Today's commander must be adaptive and prepared to engage adversaries in conventional and asymmetrical warfare in the Third Offset Strategy landscape. Future warfare will become even more complex as advances in cyber-, neuro-, nano-, and biotechnologies continue to emerge. These advances will challenge leaders and their ability to maintain an ethical response to conflicts and support the Just War Tradition.

Conclusions

The evolution of AI technologies and AI-enabled autonomous unmanned systems will continue to shape our daily lives and future warfare (Tegmark, 2018; Russell, Dewey & Tegmark, 2015). This book is aimed at developing an understanding of the issues at stake related to AI technologies and their contribution toward ensuring future global security. We aim to promote an understanding of the benefits, costs, and risks related to the integration of AI systems in the future battlespace (Yudkowsky, 2007). We contend that there is an operational and strategic imperative for all citizens, military personnel, and especially those leaders who forge our national security strategy, to achieve an understanding of AI technologies.

For society, we anticipate that advances in AI will continue to influence and shape our lives. We anticipate that AI will benefit humanity in the medical realm as we gain greater knowledge and understanding of the brain. We must remain vigilant regarding the ways that AI may be used to control us as individuals. We must guard against violations of our personal freedoms and independent thought, for that is the very essence of humanity (Hawking, Musk & Wozniak, 2015; Kurzweil, 2006). Our ability to think, create, and be innovative stems from our individuality. Removing or reducing our individuality puts mankind at risk of entering a new Dark Age.

For the military, nations will continue to exploit AI advances as they seek to achieve the competitive strategic advantage over their adversaries. AI systems and technologies will continue to enhance military readiness capabilities. We anticipate that advances in AI technologies will continue to present ethical challenges for leaders and soldiers alike. Government leaders will continue to be challenged with weighing the costs, benefits, and risks associated with developing advanced AI technologies. Future information superiority will rely on AI systems to anticipate and capture our adversaries' intent well before any actions are taken. Future AI systems will use pattern recognition to sort through information generated by our adversaries to inform the military of potential dangers on a global scale. We recognize that the future will continue to be complex, especially in the military operational environment. Therefore, we need to develop AI technologies that will reduce the uncertainty of the environment and help decision-makers to make decisions even when they cannot know everything. AI may reduce the timeline in decision-making but must also reduce uncertainty by filtering information for the decision-maker, as there is simply too much information to sort through. AI systems can process information and test numerous hypotheses at one time, thereby filtering the potential consequences related to each potential outcome of decision-making. This would be extremely helpful for the future military decision-maker. Intuitive, adaptive AI technologies and system designs of the future will provide the decision advantage and accelerate the decision cycle. We must ensure that the data provided to the AI system is accurate at the outset. We must validate and verify information throughout the processing cycle to ensure that leaders have the right information and valid recommendations from the AI system to inform their decision-making and minimize potential second- and third-order consequences, such as unintended consequences and collateral damage.
Anticipatory AI (AAI) networks, like precognition, will be developed and used to capture data related to intention (Giordano et al., 2014). There are ethical consequences for adversarial actions that may be detected and prevented by applying AI networks to examine adversarial intrusions and detect intention a priori. AI systems will conduct the analysis of social networks, identify potential adversaries, and predict potential adversarial actions well before any plan has been executed. Passive, persistent surveillance using AI technologies, from sea floor to space, will afford the military a decision advantage. Regardless of the domain (i.e., ground, air, undersea, sea, and space), AI technologies will play a critical role in national and global security in the future.

There will continue to be persistent, pervasive surveillance of private citizens to ensure security and stability in society. There is a real risk of losing personal freedoms in the future. Our freedom to think independently will be at risk under pressures for societal compliance. Innovation and discovery may also be challenged in this new world environment. We contend that there is a need to address these issues with a sense of urgency if we are to ensure intellectual freedom as well as a peaceful future.

Nations are competing to gain technological superiority in AI. We must fuel the fire and garner support if we are to gain the competitive advantage in this race for AI superiority. As a nation, it is incumbent upon our leaders to seek every opportunity to exploit the advances of AI-enabled technologies that will ensure future global security. Failure to address the ethical challenges related to the development of advanced AI technologies will create a vulnerability in the military defense system. We must acknowledge our commitment to society and the military's professional responsibility to defend the trust between the military and fellow citizens. We must also prepare to defend ourselves against our adversaries, and perhaps even prepare to defend against AI-enabled autonomous systems in the future. Enabling the development of AI autonomous systems will enhance our military capabilities and ensure that we are competitive with our adversaries and capable of defending our national and global security. Like the sentry at their post, we must remain vigilant regarding the ways that adversaries may exploit future AI designs for their strategic advantage (Finney & Klug, 2016; Lin, Bekey & Abney, 2008). There is a clear and present danger associated with advances in AI technologies when one considers how adversaries can reconfigure them for malicious intent. Nations must be circumspect and vigilant in this regard and defend against deception, manipulation, and attacks that threaten future global security. Nations must work together, collaborate, and cooperate to preserve the security of our world. It is about global security, not about which nation will lead in AI. We must preserve our intellectual freedom, be innovative, and protect our rights to be private citizens in a world where privacy has been traded away for a false sense of security. It is incumbent upon us to create the future we want.
We owe it to future generations to be steadfast in our demand for ethical values to be maintained in a society where it is far easier to allow our values to erode. We offer this book as a tool for discussion. It is intended to address the challenges and concerns of preserving our freedoms and ensuring global security. This book is aimed at increasing an understanding of issues related to implementing AI technologies in daily life and warfare. Each chapter conveys critical aspects of integrating AI technologies into our lives and the future battlespace. AI technology's impact and influence on our lives will raise challenges for society, ethics, morals, values, freedoms, Just War theory, space warfare, conscience, and privacy rights, as well as for NATO military operations and global security. We believe that the integration of AI in future warfare is a complex topic and have addressed but a few of the trends, threats, and considerations. We aim to foster thought and inform those interested in Artificial Intelligence technologies of the challenges related to ensuring global security. We contend that the time has come for every nation to focus on Artificial Intelligence technologies and the role these technologies will play in securing our future.

Disclaimer

The views expressed in this presentation are those of the author and do not necessarily reflect the official policy or position of the US Naval War College, the US Navy, the US Department of Defense, or the US Government.

References

Aberman, J. (2017). Artificial intelligence will change America. Here's how. The Washington Post, February 27. Retrieved from https://www.washingtonpost.com/ news/capitalbusiness/wp/2017/02/27/artificialintelligence-will-change-america-hereshow /?utm_term5.3e325159efd9 Allen, G. C. (2019, February 6). Understanding China's AI strategy. Retrieved from https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy Alpaydin, E. (2016). Machine learning: The new AI. Cambridge, MA: MIT Press. Andrew, S. (2018). Streets Ahead: British AI eyes scan future frontline in multinational urban experiment. UK Ministry of Defence. Retrieved from http://www.warfare.today/ 2018/09/26/sapient-takes-on-contested-urban-environment/ Arkin, R. C. (1992). Modeling neural function at the schema level: Implications and results for robotic control. In R. D. Beer, R. E. Ritzmann, & T. McKenna (Eds.), Biological neural networks in invertebrate neuroethology and robotics (pp. 383–410). Cambridge, MA: Academic Press. Arkin, R. C. (2007). Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture. Technical Report GIT-GVU-07-11. Georgia Tech GVU Center, Atlanta, GA. Retrieved from http://www.cc.gatech.edu/ai/ robotlab/onlinepublications/formalizationv35.pdf Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: A model of artificial intelligence development. AI & Society, 31(2), 201–206.
Ashrafian, H. (2015a). AIonAI: A humanitarian law of artificial intelligence and robotics. Science and Engineering Ethics, 21(2), 29–40. Ashrafian, H. (2015b). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21(2), 317–326. Behrmann, S. (2020, February 20). Reports: Intelligence official warned lawmakers that Russia was interfering in 2020 to help Trump. USA Today. Retrieved from https://www.usatoday.com/story/news/politics/2020/02/20/russia-helping-trump2020-official-told-lawmakers-reports/4825498002/ Bostrom, N. (2014). Superintelligence: Paths, dangers, and strategies. Oxford: Oxford University Press. Burrows, I. (2018). Made in China 2025: XI Jinping’s plan to turn China into the AI world leader. Retrieved from https://www.abc.net.au/news/2018-10-06/china-plansto-become-ai-world-leader/10332614 Campbell, C. (2019, November 22). The fight for our faces. Time, 194(24/25), 52–55. Canton, J. (2015). Future smart: Managing the game-changing trends that will transform your world. Boston, MA: De Capo Press, Inc. Chan, E., & Lee, A. (2018). ‘Made in China 2025’: Is Beijing’s plan for hi-tech dominance as big a threat as the West thinks it is? South China Morning Post, September 25. Retrieved from https://www.scmp.com/business/china-business/ article/2163601/made-china-2025beijings-plan-hi-tech-dominance-big-threat Chandra, R. (2017). Artificial Intelligence and Cybernetics Research Group, Technical Report. Software Foundation, Nausori, Fiji. An affective computational model for machine consciousness. Retrieved from https://www.researchgate.net/publication/ 312031524_An_affective_computational_model_for_machine_consciousness China State Council. (2017, July 20). New generation artificial intelligence development (AIDP). Retrieved from https://www.newamerica.org/cybersecurity-initiative/ digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-develop ment-plan-2017/ Chu, J. (2015, May 6). Engineers hand "cognitive" control to underwater robots. MIT News. Retrieved from http://news.mit.edu/2015/cognitive-underwater-robots-0507 Chutel, L. (2018, May 25). China is exporting facial recognition software to Africa, expanding its vast database. Quartz Africa. Retrieved from https://qz.com/africa/ 1287675/china-is-exporting-facialrecognition-to-africa-ensuring-ai-dominance-throughdiversity/ Cummings, M. L., Clare, A., & Hart, C. (2010). The role of human-automation consensus in multiple unmanned vehicle scheduling. Human Factors: The Journal of Human Factors and Ergonomics Society, 52(1), 17–27. doi:10.1177/0018720810368674 Cummings, M. L., & Guerlain, S. (2007). Developing operator capacity estimates for supervisory control of autonomous vehicles. Human Factors: The Journal of the Human Factors and Ergonomics Society, 49(1), 1–15. doi:10.1518/001872007779598109 Dai, S. (2018). Tech start-ups push to make China’s facial recognition systems part of daily life across Asia. South China Morning Post, July 3. Retrieved from https:// www.scmp.com/tech/start-ups/article/2153471/tech-start-ups-push-make-chinas-facialrecognition-systems-partdaily1&cd54&hl5en&ct5clnk&gl5us Dai, S., & Shen, A. (2018). ‘Made in China 2025’: China has a competitive AI game plan but success will need cooperation. South China Morning Post, October 2. Retrieved from https://www.scmp.com/tech/article/2166177/made-china-2025china-has-competitiveai-game-plan-success-will-need
Deibert, R., & Rohozinski, R. (2010). Control and subversion in Russian cyberspace. In R. J. Deibert, J. G. Palfrey, R. Rohozinski, & J. Zittrain (Eds.), Access controlled: The shaping of power, rights, and rule in cyberspace (pp. 15–34, Chapter 2). Cambridge, MA: MIT Press. Retrieved from http://www.access-controlled.net/ wp-content/PDFs/chapter-2.pdf Demchak, C. (2019). AI cyber deception and navies. Beyond the Hype: Artificial Intelligence in Naval and Joint Operations Workshop. U.S. Naval War College, Newport, RI, 24–25 October 2019. Dobbyn, C., & Stuart, S. (2003). The self as an embedded agent. Minds and Machines, 13(2), 187–201. doi:1022997315561 Dutt, N., & TaheriNejad, N. (2016). Self-awareness in cyber-physical systems. Paper presented at the 29th international conference on VLSI design and 15th international conference on embedded systems (VLSID), Kolkata, India. Retrieved from http://ieeexplore.ieee.org/document/7434906/ European Commission. (2019). Ethics guidelines for trustworthy AI. High-level expert Groupon artificial intelligence set up by the European Commission. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai Fedor, J., & Fredheim, R. (2017). We need more clips about Putin, and lots of them: Russia’s state-commissioned online visual culture. Nationalities Papers, 45(2), 161–181. Finney, N. K., & Klug, J. P. (2016). Mission command in the 21st century: Empowering to win in a complex world. Fort Leavenworth, KS: The Army Press. Forrest, C. (2015). Chinese factory replaces 90% of humans with robots, production soars. TechRepublic, July 30. Retrieved from https://www.techrepublic.com/article/ chinese-factory-replaces-90-ofhumans-with-robots-production-soars/ Franklin, B. (1776). Benjamin Franklin quote about liberty and safety. Retrieved from https://wisdomquotes.com/liberty-safety-benjamin-franklin/ Geissler, H. (2019). Bless the fog of war: How panopticon will lose the war in metropolis. Dissertation, U.S. Naval War College, Ethics and Emerging Military Technologies graduate certificate program. Newport, RI. Giordano, J., Kulkarni, A., & Farwell, J. (2014). Deliver us from evil? The temptation, realities, and neuroethico-legal issues of employing assessment neurotechnologies in public safety initiatives. Theoretical Medicine and Bioethics, 35(1), 73–89. doi:10.1007/s11017-014-9278-4. Goodrich, M. A., & Cummings, M. L. (2014). Human factors perspective on next generation unmanned aerial systems. In K. P. Valavanis & G. J. Vachtsevanos (Eds.), Handbook of unmanned aerial vehicles (pp. 2405–2423). New Delhi: Springer. Hamel, G., & Green, B. (2007). The future of management. Boston, MA: Harvard Business School Press. Hawking, S., Musk, E., & Wozniak, S. (2015, January). An open letter: Research priorities for robust and beneficial artificial intelligence. Retrieved from https:// futureoflife.org/ai-open-letter Heath, T. (2013). Xi’s mass line campaign: Realigning party politics to new realities. China Brief, 13(16), 3–6.
Heath, T. (2019, February 7). The Consolidation of political power in China under Xi Jinping: Implications for the PLA and domestic security forces. Testimony of Timothy R. Heath before the U.S.–China Economic and Security Review Commission. Retrieved from https://www.uscc.gov/sites/default/files/Heath_USCC%20Testimony_FINAL.pdf Hoffman, S. (2012). Portents of change in China’s social management. China Brief, 12(15), 5–8. ICO. (2017). Big data, artificial intelligence, machine learning and data protection v. 2.2. Wilmslow: Information Commissioner’s Office. Retrieved from https://ico.org.uk/ media/fororganisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf Ilachinski, A. (2017a). AI, robots, and swarms: Issues, questions and recommended studies. Retrieved from https://www.cna.org/cna_files/pdf/DRM-2017U-014796 Ilachinski, A. (2017b.) Artificial Intelligence & autonomy: Opportunities and challenges. Center for Naval Analyses, VA. Ilachinski, A. (2018, February). The malicious use of Artificial Intelligence: Forecasting, prevention, and mitigation. Center for Naval Analyses, VA. Jans, K. (2018, July 10). NATO needs to get smarter about AI. Atlantic Council. Retrieved from https://www.atlanticcouncil.org/blogs/new-atlanticist/nato-needsto-get-smarter-about-ai/ Kania, E. (2018). Made in China 2025: Xi Jinping’s plan to turn China into the AI world leader. Center for a New American Security. Retrieved from https://www.cnas.org/ press/in-the-news/made-in-china-2025-xi-jinpings-plan-to-turn-china-into-the-aiworld-leader Kasapoglu, C. & Kirdemir, B. (2019). Artificial Intelligence and the future of conflict. Carnegie Europe. Retrieved from https://carnegieeurope.eu/2019/11/28/artificialintelligence-and-future-of-conflict-pub-80421 Kerr, J. (2016). Authoritarian management of (cyber-) society: Internet regulation and the new political protest movements. Doctoral Dissertation, Georgetown University. Retrieved from http://hdl.handle.net/10822/1042836 Kerr, J. (2018). Information, security, and authoritarian stability: Internet policy diffusion and coordination in the former soviet region. International Journal of Communication, 12, 3814–3834. Kerr, J., Loss, R., & Genzoli, R. (2018, April). Cyberspace, information strategy, and international security: Workshop summary. Center for Global Security Research. Lawrence Livermore National Laboratory. Retrieved from https://cgsr.llnl.gov/ content/assets/docs/CGSR_Cyber_Workshop_2018_Summary_Report_Final2.pdf Konaev, M. & Vreeman, A. (2019, October). National strategy: For the development of artificial intelligence over the period extending up to the year 2030 (Decree of the President of the Russian Federation). Center for Security and Emerging Technology. Kurzweil, R. (2006). The singularity is near: When humans transcend biology. New York, NY: Penguin Books. Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. Retrieved from http://www.dtic.mil/docs/citations/ADA534697 Loten, A., & Simons, J. (2017). Leadership evolves amid tech changes – Management styles shift to embrace shorter, more frequent data-fueled development cycles. The Wall Street Journal, January 4. Retrieved from https://search.proquest.com/ docview/1855011133?accountid5322
Ma, H. Wu, X, Yan, L. Huang, H. Wu, H. Xiong, J, & Zhang, J. (2018). Strategic plan of Made in China 2025 and its implementation. Retrieved from https:// www.researchgate.net/publication/326392969_Strategic_plan_of_Made_in_China_ 2025_and_its_implementation Mack, E. (2015). Hawking, Musk, Wozniak warn about Artificial Intelligence’s Trigger Finger. Science. Retrieved from https://www.forbes.com/sites/ericmack/2015/07/27/ hawking-musk-wozniak-freaked-about-artificial-intelligence-getting-a-trigger-finger/ #17446a487416 Masakowski, Y. R. (2019a). NATO Industry Forum 2019. Artificial Intelligence and Global Security. Washington, DC, 13–14 November 2019. Masakowski, Y. R. (2019b). NATO multinational military operations and AI. Beyond the Hype: Artificial Intelligence in Naval and Joint Operations Workshop. U.S. Naval War College, Newport, RI, 24–25 October 2019. Masakowski, Y. R. (2019c). AI2: Stable vs unstable factors. Presentation at Georgetown University, Washington, DC. Masakowski, Y. R. (2019d). NATO and Artificial Intelligence. The Technical Cooperation Program Panel – AISC workshop, 13–15 August 2019. Masakowski, Y. R. (2019e). Ethical Implications of Autonomous Systems and Artificial Intelligence Enabled Systems. Institute of Navigation Cognizant Autonomous Systems for Safety Critical Applications Conference. Miami, FL, September 16–17, 2019. Masakowski, Y. R., Smythe, J. S., & Creely, T. E. (2016). The impact of ambient intelligence technologies on individuals, society and warfare. Northern Plains Ethics Journal, 4(1), 1–11. Retrieved from http://www.northernplainsethicsjournal.com/ NPEJv4n1/The%20Impact%20of%20Ambient%20Intelligence%20Technologies%20 on%20Individuals.pdf NATO Technology Trends Report. (2017). Empowering the alliance’s technological edge. NATO Science and Technology Board. STO Trends Report 2017. Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall. Ocasio-Cortez, A. (2019, May 22). Facial recognition is growing amid “A Global Rise in Authoritarianism and Fascism” Committee on oversight and reform. House Oversight Committee Hearing. Facial Recognition Technology and its Impact on our Civil Rights and Liberties. O’Flanagan, T. P. (2018). A breach of trust: The impact of artificial intelligence on society and military operations. Dissertation, U.S. Naval War College. Ethics and Emerging Military Technologies graduate certificate program. Newport, RI. Peltz, J. J. (2017). The algorithm of you: Your profile of preference or an agent of evil? Dissertation, U.S. Naval War College. Ethics and Emerging Military Technologies graduate certificate program. Newport, RI. Polyakova, A. (2018, November). Weapons of the weak: Russia and AI-driven asymmetric warfare. Report part of a "A Blueprint for the Future of AI" series from the Brookings Institution. Retrieved from https://www.brookings.edu/research/ weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/ Polyakova, A., & Boyer, S. P. (2018). The future of political warfare: Russia, The West, and the coming age of global digital competition. The New Geopolitics. Europe. The Brookings Foundation Transatlantic Initiative.
Porat, T., Oron-Gilad, T., Rottem-Hovev, M., & Silbiger, J. (2016). Supervising and controlling unmanned systems: A multi-phase study with subject matter experts. Frontiers in Psychology, 7, 568. doi:10.3389/fpsyg.2016.00568 Putin, V. (2019). Putin approves Russia’s national strategy for AI until 2030. TASS, October 10. Retrieved from https://tass.com/economy/1082644 Ragan, S. (2012). Political activism gives way to hacktivism in Russia. SecurityWeek, February 20. Retrieved from http://www.securityweek.com/political-activism-givesway-hacktivismrussia Ramiccio, J. G. (2017). The ethics of robotic, autonomous, and unmanned systems in life-saving roles? Dissertation, U.S. Naval War College, Ethics and Emerging Military Technologies graduate certificate program. Newport, RI. Robles, P. (2018). China plans to be a world leader in artificial intelligence by 2030. Retrieved from https://multimedia.scmp.com/news/china/article/2166148/china-2025artificial-intelligence/index.html. Accessed on October 1, 2018. Russell, S. Dewey, D. & Tegmark, M. (2015, Winter). Research priorities for robust and beneficial artificial intelligence. AI Magazine. Association for the Advancement of Artificial Intelligence. Retrieved from https://futureoflife.org/data/ documents/research_priorities.pdf?x28271 Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. Cambridge, MA: MIT Press. Shim, J., & Arkin, R. C. (2012). Biologically-inspired deceptive behavior for a robot. In T. Ziemke, C. Balkenius, & J. Hallam (Eds.), International conference on simulation of adaptive behavior. From animals to animals 12, (pp. 401–411). Berlin, Heidelberg: Springer. Singhvi, A., & Russell, K. (2016). Inside the self-driving tesla fatal accident. The New York Times, July 12. Retrieved from https://www.nytimes.com/interactive/2016/07/ 01/business/insideteslaaccident.html?_r50 Subbotovska, I. (2015). Russia’s online trolling campaign is now in overdrive. Business Insider, May 29. Retrieved from http://www.businessinsider.com/russias-onlinetrolling-campaign-is-nowin-overdrive-2015-5 Tegmark, M. (2018). Life 3.0: Being human in the age of artificial intelligence. New York, NY: Vintage Books, Division of Penguin Random House. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Reprinted in Feigenbaum and Feldman (1963). US Department of Defense. (2012). DoD Directive 3000.09: Autonomy in weapons systems. Retrieved from https://www.hsdl.org/?view&did5726163. Accessed on September 1, 2019. US Department of Defense (USDoD). (2018a). Summary of the 2018 Department of Defense artificial intelligence strategy: Harnessing AI to advance our security and prosperity. Arlington, VA: United States of America Department of Defense. US Department of Defense (USDoD). (2018b). Summary of the national defense strategy of the United States of America: Sharpening the American military’s competitive edge. Washington, DC: US Department of Defense (USDoD). US Department of Defense (USDoD). (2019). Summary of the 2018 Department Of Defense artificial intelligence strategy harnessing AI to advance our security and prosperity. Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York, NY: Oxford University Press.
Ward, V., & Evens, M. (2018, September 24). Artificial intelligence weaponry successfully trialed on mock urban battlefield. The Telegraph. Retrieved from https:// www.telegraph.co.uk/news/2018/09/24/artificial-intelligence-weaponry-successfullytrialled-mock/ Yudkowsky, E. (2007). Artificial intelligence as a positive and negative factor in global risk. Berkeley, CA: Machine Intelligence and Research Institute (MIRI). Zhou, J. (2013, March). Digitalization and intelligentization of manufacturing industry. Advances in Manufacturing, 1(1), 1–17.

Chapter 2

Artificially Intelligent Techniques for the Diffusion and Adoption of Innovation for Crisis Situations

Thomas C. Choinski

Abstract

The diffusion and adoption (D&A) of innovation propels today's technological landscape. Crisis situations, real or perceived, motivate communities of people to take action to adopt and diffuse innovation. The D&A of innovation is an inherently human activity; yet, artificially intelligent techniques can assist humans in six different ways, especially when operating in fifth generation ecosystems that are emergent, complex, and adaptive in nature. Humans can use artificial intelligence (AI) to match solutions to problems, design for diffusion, identify key roles in social networks, reveal unintended consequences, recommend pathways for scaling that include the effects of policy, and identify trends for fast-follower strategies. The stability of the data that artificially intelligent systems rely upon will challenge performance; nevertheless, the research in this area has positioned several promising techniques where classically narrow AI systems can assist humans. As a result, human and machine interaction can accelerate the D&A of technological innovation to respond to crisis situations.

Keywords: Crisis response; artificial intelligence; innovation; diffusion; adoption; innovation ecosystem; complexity; human–machine interaction

The Case for Artificial Intelligence

The diffusion and adoption (D&A) of innovation propels today's technological landscape. Crisis situations motivate communities of people to take action for the D&A of innovation. These crises arise from natural disasters, technological failures, deliberate human malevolent action, or breakdowns in policy decisions. Sometimes the crisis strikes suddenly, as in cybercrime, and other times the crisis evolves over relatively long periods, as exemplified by global warming.
Humans leverage technological innovation to respond to crisis situations, and artificially intelligent techniques offer the potential to assist. Complexity, capacity, risk, resources, culture, ethics, and emerging situations affect the pace of the D&A of technological innovation. Artificial intelligence (AI) can search the solution space, detect trends, identify potential solutions, reduce risk, mitigate unintended consequences, address the value proposition, and recommend appropriate courses of action suitable for the social networks in relevant innovation ecosystems. Humans can set the goal to accelerate the D&A of technological innovation for crisis resolution by leveraging artificially intelligent techniques. We need an interdisciplinary understanding of technology, as well as of human and machine interaction, to achieve this goal.

The D&A of innovation is essentially a human process based on relationships in a social network. AI can assist humans in the D&A of innovation if we keep these tenets in mind. However, AI must itself be readily available and well diffused and adopted as a technology before it can be leveraged. There is a recursive quality to this challenge: before AI can assist humans in crisis response, humans must first diffuse and adopt AI. The process is both recursive and cyclic.

This chapter presents an approach to achieve this goal by discussing how the evolution of innovation ecosystems has created an opportunity for AI, distinguishing which type of AI is suitable, confirming that humans have made AI readily available through the D&A of AI techniques, and then proposing ways that artificially intelligent techniques can assist in the D&A of innovation for crisis resolution. The framework in this chapter evolved from the author's participation in a panel discussion at the Humanities and Technology Association Conference hosted by Salve Regina University in Newport, Rhode Island (Choinski, 2018). The D&A of innovation centers on human interaction. Artificially intelligent techniques can assist humans in several different ways, especially when considering the emergence of fifth generation innovation ecosystems that are complex and adaptive in nature.

Relevance in Innovation Ecosystems

From my vantage point, the uptake of technology represents the biggest challenge we face to improve innovation. People often confuse invention with innovation. Invention is a subset of innovation, and in some instances, a new invention is not required for innovation to occur, e.g., when technology is appropriated. Humans must interact as they navigate through innovation ecosystems to respond to crises in innovative ways. A closer look at the historical evolution of innovation processes and ecosystems since the end of World War II provides better insight into the challenges facing the uptake of innovation in our current global environment.

The United States experienced a rapid rate of industrialization after World War II, which led to stage 1 type sequential innovation ecosystems that were driven by research and development (R&D), the so-called technology push approach. This industrial-based approach carried business forward during the 1950s and early 1960s. As the rate of industrialization slowed down in the mid-1960s and early 1970s, businesses responded with stage 2 market-driven approaches to technological development and innovation.
For this reason, this period was also characterized by a shift from government-directed R&D to initiatives led by organizations in the commercial sector. R&D became more in tune with market-driven needs; innovation was not solely driven by science, engineering, and government requirements. This stage 2 approach evolved through the 1970s and into the early 1980s to include feedback processes in stage 3 technological innovation ecosystems. Stage 1, 2, and 3 ecosystems are essentially built upon sequential processes (Rothwell, 1994, pp. 7–9).

Dr. Stephen Kline described the technological innovation process through the chain-linked model for innovation, which reflects the stage 3 approach. The chain-linked model depicts the sequential stages for the four interdisciplinary dimensions of technological innovation: the artifact (science and engineering), the sociotechnical system of manufacture, technique (know-how or techne), and the sociotechnical system of use (Kline, 2003, pp. 210–212). His model identified feedback between each stage. Kline provides a useful lens to discuss the interdisciplinary challenges associated with technological innovation. Each dimension represents a separate and distinct community of people who shape technological innovation by embodying their unique behaviors, cultural norms, traditions, and objectives; hence, the need for feedback between stages. A purely R&D or sequentially driven market-based approach would not adapt to the norms and emergent needs of each of these communities. For example, manufacturing methods are better accommodated during the early engineering development stages of the technological development process. Therefore, innovation ecosystems must embody both externally and internally generated needs. For this reason, as well as increased global market competition, stage 4 type innovation ecosystems emerged in the mid-1980s and 1990s. Stage 3 innovation ecosystems incorporated feedback between stages to address interdisciplinary needs, but were essentially still sequential in nature. Stage 4 innovation ecosystems overlap processes in a parallel fashion. In addition, the statistical process control and total quality management techniques, first discussed by W. Edwards Deming after World War II, began to take effect. Supply chain management changed with business techniques such as just-in-time inventory control. One of the most notable examples of the power of stage 4 type ecosystems was the rise of Japanese automobile manufacturers and their ability to take market share from the automobile industry in Detroit (Rothwell, 1994, pp. 11–13, 26–28). Stage 4 ecosystems increased efficiency by performing phases of the process in parallel, but essentially remained a sequential approach.

The aforementioned variants of essentially sequentially driven processes served us well during the industrial age. However, today the four communities of technological innovation (science/engineering, the sociotechnical system of manufacture, technique [know-how or techne], and the sociotechnical system of use) have been energized by the transactional and shared learning nature of the global, digital, and Internet-driven worlds. The invention and appropriation of technology are accelerating.
As a result, humans risk losing control over technology as technology gains control over humans. Jacques Ellul and Neil Postman are two philosophers of technology who point to the fact that this phenomenon has come to life as technology has become a way of living. Neil Postman coined the term "technopoly" to describe technology's threat to human existence (Postman, 1992). Stage 5 innovation ecosystems seek to adapt to this accelerated technological phenomenon with agility. These types of ecosystems trade sequential processes for new approaches based on complex adaptive systems theories. The communities of people who represent the artifact (science and engineering), the sociotechnical community of manufacture, technique, and the sociotechnical community of use interact with the goal of adapting in an agile fashion to respond to the emerging needs and consequences of the global environment. Otherwise, our response to crisis situations will either be too slow or out of sync. For example, the four communities may interact incoherently to produce what can be characterized as technological aliasing (Choinski, 2017, pp. 11–12, 62–63). In essence, the ecosystem creates a man-made crisis while attempting to respond to the original one.

Technological aliasing can be explained through the metaphor of wheels on a stagecoach in a western movie. As the stagecoach moves out, the wheel rotates as expected with the direction of motion. However, as the stagecoach increases its speed, the wagon wheels appear to stand still. Further increases in speed give the visual appearance of the wheels moving backwards, counter to the forward movement of the stagecoach. This is a visual aliasing phenomenon. Similarly, technological aliasing is encountered when one or more of the technological dimensions described by Kline are out of sync. For example, scientific and engineering development may occur before end user needs are fully understood. Products may be delivered from the sociotechnical system of manufacture before techniques can be matured regarding their use. Scientists and engineers may believe they are accelerating the generation of innovations when in fact they could be slowing down the process or, even worse, providing less effective solutions than those that already exist!

The accelerated pace of technological development is not the only justification for stage 5 innovation ecosystems. The ambiguous nature of emerging post-Westphalian interstate interactions, as discussed by Henry Kissinger in his book World Order, as well as the opposition to Western thought materializing from the global political awakening described by Zbigniew Brzezinski in Strategic Vision, compounds the compelling need for stage 5 innovation ecosystems. This political awakening changes the embodiment of cultural and ethical principles in technology. For example, the United States has emphasized a model for innovation driven by science and technology. In contrast, China's model seeks to embody social and cultural objectives in technological innovation through its approach to appropriation, diffusion, and adoption (Jin, 2005). Scientific discoveries lead the former model and social directives lead the latter.

How can AI help address the challenges stemming from the rapid pace of technological development, globalization, and global political awakening? Humans can leverage artificially intelligent techniques to navigate through complex innovation ecosystems and improve the D&A of innovation.
The array of approaches incorporated within AI includes cybernetics (mimicry of circular biocontrol systems), symbolic manipulation, cognitive techniques that people use to solve problems, logic-based systems, knowledge-based expert systems, embodied intelligence (intelligence embodied in robots and other machines), and the statistical learning and neural networks that characterize soft learning. Artificially intelligent machines incorporate logic, automated reasoning, probabilistic methods (Bayesian networks, hidden Markov models, Kalman filters, particle filters, decision theory, and utility theory), and classifiers (pattern matching). AI systems learn from data sets. Robotic systems base decisions on sensory data. Expert systems exploit historical data. Nevertheless, before AI can help humans, humans must diffuse and adopt AI in its own regard.
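
Most of the probabilistic methods and classifiers listed above reduce, at their core, to Bayes' rule applied to data the system has already seen. The following is a minimal, purely illustrative sketch of one such classifier, a naive Bayes model learned from a handful of labeled examples; the labels, features, and counts are invented for the example and carry no operational meaning.

# Minimal naive Bayes classifier learned from labeled examples.
# Labels, features, and training data are invented for illustration.
from collections import Counter, defaultdict
from math import log

def train(examples):
    """examples: list of (label, [feature, ...]) pairs."""
    label_counts = Counter(label for label, _ in examples)
    feature_counts = defaultdict(Counter)
    for label, features in examples:
        feature_counts[label].update(features)
    return label_counts, feature_counts

def classify(features, label_counts, feature_counts, alpha=1.0):
    """Return the label with the highest log posterior, using add-alpha smoothing."""
    total = sum(label_counts.values())
    vocabulary = {f for counts in feature_counts.values() for f in counts}
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = log(count / total)  # log prior
        denominator = sum(feature_counts[label].values()) + alpha * len(vocabulary)
        for feature in features:
            score += log((feature_counts[label][feature] + alpha) / denominator)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training_data = [
    ("routine", ["port_call", "resupply"]),
    ("routine", ["resupply", "exercise"]),
    ("anomalous", ["dark_transit", "spoofed_ais"]),
    ("anomalous", ["dark_transit", "loitering"]),
]
label_counts, feature_counts = train(training_data)
print(classify(["dark_transit", "exercise"], label_counts, feature_counts))  # "anomalous"

The same learn-from-counts pattern, with far richer models and vastly more data, underlies many of the classically narrow AI applications discussed in the next section.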

Distinguishing Types of Artificial Intelligence for Diffusion and Adoption

Not surprisingly, humans have already diffused and adopted certain types of artificially intelligent systems. Clarification of the types of artificially intelligent techniques that are available to humans helps to understand the extent of AI's D&A. The technical community commonly speaks to two types of artificially intelligent systems:

Narrow AI [classic AI] is the ability of a computer to solve a specific problem or perform a specific task. The other kind of AI is referred to by three different names: general AI, strong AI, or artificial general intelligence (AGI). (Reese, 2018, p. 61)

Narrow AI focuses on solutions to specific problems with a priori knowledge and/or data. This type of AI leverages computer algorithms that have been in development for decades, including expert systems, machine learning, and deep learning. These types of AI systems also encompass pattern matching techniques such as those encapsulated in neural network learning architectures. For this reason, the term classic AI seems more suitable than narrow AI. The point is that these algorithms and techniques have been discussed for decades. The term "classically narrow AI" is used in this chapter to recognize the specific application of these algorithms, as well as the fact that the algorithms have existed in some form for decades.

For example, during the late 1980s, while I was pursuing my MBA program of study, I completed a course focused on the use of Prolog, a logic programming language well suited to building expert systems. At the time, my wife was working as a real estate agent, which gave me the idea to develop a real estate appraisal expert system using the Prolog programming language. Prolog enabled me to develop a rule-based expert system that replicated the real estate appraisal process using three comparable properties that had sold within the last six months. The rule-based system incorporated information from existing real estate multiple listing services (MLS). Each property was categorized in terms of its attributes, e.g., location, square footage, total number of rooms, number of bedrooms, number of bathrooms, garage space, age, etc.
The real estate appraisal system worked fairly well at the time and was very similar to what we now know as Zillow. The primary difference is that Zillow leverages real estate information that can be accessed from publicly available online databases through the Internet, which did not exist back in the late 1980s. I had to manually enter and label data taken from hard copy printouts of real estate directories.

On the other hand, artificial general intelligence (AGI) systems seek to capture all the expectations of Kurzweil's singularity concept and actually replicate and/or exceed the capabilities of human cognition (Luber, 2011, pp. 207–210). Think of the machine personified in the movie I, Robot. This type of so-called autonomous machine is capable of sensing data from the environment, assessing situational context, making decisions, and taking action – all in real time. The machine has some level of consciousness. Needless to say, AGI systems do not exist today; these systems are currently the work of science fiction and perhaps will only exist, if ever, in the distant future. Moreover, some philosophers, such as Karl Popper, argue that it is doubtful that these AGI systems will ever exist.

Humans bring a distinctive level of contextual understanding from what Karl Popper calls the third world of human creation. The first world is the physical one. The second world is the conscious organic one, which responds to stimulation from the other two worlds. The third world is one that consists of products of human cognition, where intuition and activities such as critical thinking, strategic thinking, envisioning, planning, and artistic thinking occur. Interaction occurs between worlds one and two, as well as between worlds two and three; however, interaction does not occur directly between worlds one and three in Popper's theory. Artificially intelligent systems engage in behavioral reactions between and within worlds one and two at best. Fully autonomous AGI systems would be required to interact across worlds one, two, and three. It is currently unforeseeable for a machine to be designed to accommodate an infinite number of scenarios and evolving contexts in the way humans can. Philosophically speaking, the ambiguities of reality are left to the realm of the human mind when statistics and stochastic processes cannot be supported with data. Karl Popper's three-world theory identifies the physical world, the conscious world, and the world that consists of products of the human mind. Reaction to human creations requires an accompanying understanding of the context behind these creations. Popper argues that the three most important things that make us human are self-transcendence through rational criticism, compassion, and consciousness of our fallibility. AGI systems fall short of these philosophical human attributes and will never fully reflect human behavior in decision-making. Will we ever be able to build systems that can think? Popper is hesitant, stating that "it does not seem to me at all likely that a machine can be conscious unless it needs consciousness" (Popper, 1978). Popper argues that today only humans engage in the third world, and it is doubtful that machines will ever be able to exist in this world.


Popper is not the only person to take this position regarding AGI. Noam Chomsky has been quoted as saying:

The work in AI has not really given any insight into the nature of thought… I don't think that's very surprising… Even to understand how the neuron of a giant squid distinguishes food from danger is a difficult problem. To try and capture the nature of human intelligence or human choice is a colossal problem way beyond the limits of contemporary science. (Reese, 2018, p. 162)

Today AGI seems unobtainable, but even Karl Popper is not absolute in his position on its impossibility. We do not know what complementary technologies and knowledge may emerge in the future that would work as an enabling force.

Where AGI systems have yet to be developed, systems built upon classically narrow AI algorithms and techniques have demonstrated promising, and moreover valuable, utility in a multitude of applications. Nick Polson and James Scott present numerous examples of how classically narrow AI has emerged in their book AIQ: How People and Machines are Smarter Together. They discuss current medical applications, such as Dr. Zoltan Takats's AI-based smart knife, which assesses the cancerous nature of vaporized tissue using mass spectrometer measurements (Polson & Scott, 2018, p. 198). In entertainment, the authors mention how Netflix transformed its business from content distributor to content producer by developing a classically narrow AI system built on conditional probability algorithms, massive data sets, and subscriber latent features to predict and produce content that viewers like. Similarly, in sports, the authors discuss how Luke Bornn has used data analytics from video and player-tracking data to predict the results of specific defensive matchups (Polson & Scott, 2018, pp. 34, 174).

Technology synergism and convergence have also assisted in the advancement and application of classically narrow AI. An example is the South African security industry, which produces 100,000 events per day and over 100 GB of video and voice data from multiple sites. Classically narrow AI algorithms and techniques are being researched to compress video data and analyze video information to compensate for limited bandwidth and reduce the complexity of human decision-making. The goal is a closer integration of artificial and natural intelligence to inform human decision-making, for example, in terms of the level of incident response (Dhlamini, Kachienga, & Marwala, 2007, pp. 2–4). How did these classically narrow AI applications emerge? What motivated their development?

D&A of Classically Narrow AI

Jacques Ellul points to five unique factors currently resident in human society that sustain invention, technological growth, and innovation:

The joint occurrence of the five factors we have briefly analyzed explains the exceptional growth of technique. Never before had
these factors coincided. They are, to summarize: (1) a very long technical maturation or incubation without decisive checks before final flowering; (2) population growth; (3) a suitable economic milieu; (4) the almost complete plasticity of a society malleable and open to the propagation of technique; (5) a clear technical intention, which combines the other factors and directs them toward the pursuit of the technical objective. Some of these conditions had existed in other societies; for example, the necessary technical preparation and destruction of taboos in the Roman Empire in the third century. But the unique phenomenon was the simultaneous existence of all five – all of them necessary to bring about individual technical invention, the mainspring of everything else. (Ellul, 1964, pp. 59–60)

For Ellul, the technical phenomenon has been a growing constant throughout history, and technique has accelerated to the point of taking control of human existence. Of particular note is Ellul's first point about the reality of long maturation and incubation periods. Certainly, for classically narrow AI the incubation period has been under way for some time, and such an incubation period is not uncharacteristic of many technological innovations. In fact, many technological innovations take much longer to be diffused and adopted by society; yet, once that D&A process reaches a tipping point, it accelerates.

Some insight regarding the speed of classically narrow AI's uptake can be gained from a historical example: sound navigation and ranging (sonar). Sonar technology has made great contributions to undersea mapping, scientific measurement of the ocean's properties, and medical devices such as ultrasound. These achievements did not emerge overnight; they were the result of thousands of years of human philosophical and scientific thought that was finally synergistically catalyzed by motivational events, i.e., crises.

The earliest documented discussion of sound can be attributed to Greek philosophers. Two metaphors about the nature of sound are notable. In the first, Aristotle compared echoes to a bouncing ball when he said "the air struck by the impinging body and set in movement by it, rebounds from this mass of air like a ball from a wall" (Hunt, 1978, p. 23). Engineers commonly use the reference to sound bouncing off objects, particularly as the foundation for active sonar, as well as for indirect bottom bounce propagation effects in passive sonar. In the second metaphor, the biographer Diogenes Laertius (fl. first half of third century AD) referenced the following passage from the Stoic philosopher Chrysippus (ca 280–207 BC): "Hearing occurs when air between that which sounds and that which hears is struck, thus undulating spherically and falling upon the ears, as the water in a reservoir undulates in circles from a stone thrown into it" (Hunt, 1978, pp. 23–24). The spherical sound wave discussion has been refined, analyzed, and quantified by modern scientists for use in modeling ocean acoustics (Cox, 1974, pp. 9–10).


Observations continued in the fifteenth and sixteenth centuries with Leonardo da Vinci and Galileo. Leonardo da Vinci observed that the sounds of distant ships could be heard by listening to a tube placed in the ocean. Galileo articulated a ray theory of sound in which a sounding body emits a stream of atoms; the speed of sound is related to this stream of atoms in motion (Raichel, 2006, p. 3). Robert Southwell reported to the Royal Society the unusual echoes he observed in his travels, particularly in the whispering place at Gloucester (Gouk, 1982, p. 160). Sir Francis Bacon described a series of sound experiments in Sylva Sylvarum, published in 1627 (Gouk, 1982, p. 158). One of the experiments from that era, conducted by the French friar Marin Mersenne, provided the first description of a sound tone at 87 Hz and led to the conclusion that sound could be used to determine distances over land (Gouk, 1982, pp. 160, 166). Sir Isaac Newton published the first mathematical theory of sound propagation in Philosophiae Naturalis Principia Mathematica in 1687 (Darrigol, 2010, p. 242). In 1877, Lord Rayleigh refined acoustic theories in his two-volume publication Theory of Sound. But the biggest technological impact came from the discoveries in electricity and magnetism by Michael Faraday, James Clerk Maxwell, and Heinrich Rudolf Hertz (Raichel, 2006, pp. 5, 6). Their theories laid the foundation for Leon Scott to invent and build the phonautograph and for Alexander Graham Bell to extend the concept to the telephone (Sterne, 2001, pp. 260, 261; Wilson, 1960, pp. 280–285). These devices laid the groundwork for the electronic devices that would listen to and record sound from the ocean.

Yet, more than 2,000 years of philosophical thought, scientific discovery, and engineering know-how alone failed to give birth to sonar technology. Sonar emerged from the human motivation to take action in response to two crises dealing with safe passage during the early part of the twentieth century. The first crisis emerged from a natural disaster: the sinking of the Titanic in 1912, which took 1,522 lives. The second crisis was man-made: the emergence of unrestricted submarine warfare during World War I, as symbolized by the sinking of the Lusitania in 1915. Both crises elevated the human desire for safe passage at sea, and both motivated humans to take action. Within a week of the Titanic tragedy, Reginald A. Fessenden, who worked for the Submarine Signal Company, submitted a patent for an echo ranger to detect icebergs. Similarly, the Anti-Submarine Detection Investigation Committee developed a sonar technology, known by the committee's acronym ASDIC, for the detection and location of submarines engaged in unrestricted warfare (Bjorno, 2003, pp. 25, 26; Kaharl et al., 1999, pp. 2, 3; Urick, 1983, pp. 3, 4).

Similarly, Nick Polson and James Scott maintain that the mathematical foundations for classically narrow AI's algorithms date back centuries, even though algorithm development seemingly accelerated over several recent decades. For example, one of the foundational predictive mathematical algorithms used in classical AI dates back to the eighteenth century and an English clergyman named Thomas Bayes. Bayes' rule has been an instrumental element in probability theory for predicting the conditional occurrence of an event given the knowledge that certain other events have already occurred. For example, you can determine the probability that you have cancer given a positive test on a mammogram. Polson and Scott point to many applications of Bayes' rule in present-day technology, including medical diagnostics, robotics, financial investing, and airplanes lost at sea (Polson & Scott, 2018, pp. 90–106).
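A small worked example of this kind of conditional-probability calculation is sketched below. The prevalence, sensitivity, and false-positive figures are illustrative assumptions chosen for the sketch, not values taken from Polson and Scott.

```python
# Illustrative Bayes' rule calculation for the mammogram example.
# All three input numbers are assumptions for illustration only.
prevalence = 0.01        # P(cancer): 1% of the screened population
sensitivity = 0.90       # P(positive | cancer)
false_positive = 0.09    # P(positive | no cancer)

# P(positive) by the law of total probability.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' rule: P(cancer | positive).
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.1%}")
# With these assumed numbers the answer is roughly 9%, illustrating why a
# positive screening test does not imply a high probability of disease.
```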


Polson and Scott also point to other prediction rules developed in the late nineteenth century, such as Henrietta Leavitt's algorithm to more accurately determine the distance of pulsating stars. Leavitt developed an algorithm to predict the true brightness of a pulsating star based on its measured brightness and its period of pulsation. The pattern matching approach that Leavitt developed has progressed to the more sophisticated deep learning algorithms used in classically narrow AI.

Google Translate and Google Inception are two cases of pattern matching. Google Translate uses a mathematical algorithm that maps phrases in one language to phrases in another. The translation is not perfect, but it provides a good starting point. Language is strongly contextually driven. Words and phrases in one context can have very different meanings and connotations when used in a different context. Think of the simple word "pop." The word "pop" in English could mean father, soda, or the sudden loud noise a balloon makes when poked with a pin. The meaning of the word "pop" depends on its contextual use in a sentence, as well as any context due to a geographical colloquialism. Mathematical algorithms can help with the predictive contextual translation, but depending on the context this prediction may be easier or more difficult to do. Common everyday language may be easier, and professional journal articles more difficult. Like all statistical and probabilistic algorithms, the accuracy depends on the sources and quantity of data to draw from. Google's Inception model reduced the world record for image pattern matching for a machine, based on the ImageNet Visual Recognition Challenge, down to a 6.7% error rate with a 22-layer deep neural network (Polson & Scott, 2018, pp. 55–58, 61, 72).

Ellul's second, third, and fourth stipulations for technological innovation (population growth, a suitable economic milieu, and social plasticity) are related to three enablers that Polson and Scott point to for classical AI's ascent. They identify Moore's Law (increased speed of computers), the new Moore's Law (the explosive growth in available data), and cloud computing as the three key enablers. Polson and Scott also add a fourth, less tangible trend they refer to as good ideas (Polson & Scott, 2018, pp. 4–6). However, growth in the technical performance of computers, data, and forms of computing in and of itself falls short of meeting Ellul's stipulations. Rather, it is the forces that these technological advancements set in motion that would be of particular interest to Ellul.

On a side note, it is difficult to discuss increased computing speed, growth in available data, and cloud computing as separate individual driving forces because all three are related to what has been referred to as load balancing in the progressive design of computer architectures. Load balancing addresses the relationship between the three essential attributes of any computer architecture: processing, memory, and I/O (input/output bandwidth). All three are interrelated.


Moore's Law may initiate advancements in computing, but it does not drive advancement alone. Load balancing refers to the cycle of advancements in processing, memory, and I/O of data to/from a computer. As computer processing speeds accelerate, higher I/O bandwidths are required to sustain the data that feed the advanced algorithms. As I/O bandwidths accelerate, more memory is needed to store the growing amounts of data. The load balancing cycle is a bit of a chicken-and-egg phenomenon: as I/O bandwidths increase, computers need to store more data; the increased amount of stored data enables the application of more advanced mathematical algorithms that in turn require higher processing speeds, and the cycle repeats itself. So Moore's Law, the new Moore's Law, and cloud computing are all interrelated.

For the case of classically narrow AI, Ellul's stipulations for population, economic milieu, and social plasticity are also interrelated; each is driven by the aforementioned advancements in computing. Advancements in computing have certainly not increased populations of people; nevertheless, they have increased access to existing populations and have therefore contributed the same effect. The same case can be made for the economic milieu and social plasticity stipulations. Advancements in computing have not, in and of themselves, enhanced the economic milieu and social plasticity; yet, their contributions to accelerating the World Wide Web have facilitated the development of new economic environments and expanded social plasticity on a more global level.

Lastly, Ellul's stipulation on technique is correlated to Polson and Scott's reference to good ideas. Techniques are methods and/or approaches humans use to engage and exist in the world. Philosophically, techne refers to the way humans achieve an objective. When Polson and Scott discuss their fourth enabler in terms of good ideas, what they are really referring to, in terms of Ellul's philosophy and Stephen Kline's characterization of innovation, are the relevant and worthy techniques that classically narrow artificially intelligent algorithms can offer humans to engage the world. Classically narrow AI assists humans in reducing complexity and distilling meaningful information from ambiguous data. The good ideas are really good techniques. Consequently, we should not be surprised by classically narrow AI's appearance in medical diagnostics, robotics, financial investing, marketing, and other tools for technique such as language translation. AI, in terms of classically narrow AI, is here and readily available. Classically narrow AI has been diffused and adopted and continues to assist humans as a source of technique. On the other hand, can humans leverage classically narrow AI's level of maturity to develop artificially intelligent techniques for crisis response?

AI Techniques for Diffusion and Adoption in Crisis Response

The D&A of innovation is a human-motivated process. The human resides at the core (Dearing & Cox, 2018, pp. 183–186).


"Diffusion is a social process, not a rational process. People are embedded in social relationships, so the diffusion is shaped by the nature of those relationships" (Tai & Yang, 2016, p. 99). The nature of the relationships in stage 1, 2, and 3 innovation ecosystems tends to be simpler because of the sequential arrangement of the human interaction. Humans have more direct control through planning. On the positive side, sequential arrangements tend to restrain or slow down the D&A process by incorporating controls and gating checkpoints that inhibit technological aliasing. On the negative side, the deceleration in the D&A process increases the chance of missed opportunities and reduces competitive advantage. So there are tradeoffs among sequential processes, control, and rapidity.

The rapid acceleration of technological innovation repositions the human interaction from processes that are sequential in nature to those that are complex and adaptive, as characterized by fifth generation innovation ecosystem models. The human no longer takes on a directing role in fifth generation innovation ecosystems. "The role of the orchestrator in the diffusion and adoption of innovation takes on a prominent role, especially in fifth generation innovation ecosystems. Orchestrator roles include the player-orchestrator, facilitator-orchestrator and sponsor-orchestrator" (Hurmelinna-Laukkanen & Natti, 2017, p. 67). The ambiguity and complexity emerging from fifth generation innovation ecosystems require a focal organization, or person, to assume the initial risks and pull together the network in the ecosystem. Innovation network orchestration can be seen as the "set of deliberate, purposeful actions undertaken by a focal organization for initiating and managing innovation processes in order to exploit marketplace opportunities enabling the focal organization and network members to create value (expand the pie) and/or extract gain (gain a larger slice of the pie) from the network" (Hurmelinna-Laukkanen & Natti, 2017, p. 66).

Furthermore, more formal organizations and institutions often need to weigh in during these nascent periods when the complexity and ambiguity are particularly high. "We may anticipate the role of the ecosystem leader to be assumed by actors such as universities or governments in the very early period of ecosystem genesis when much uncertainty and technological infancy can discourage investments by private entities. In time, commercialization prospects improve, ecosystem leadership can shift to another actor" (Dedehayir, Makinen, & Ortt, 2016, p. 34). Essentially, the ecosystem leader sets target policy for the network. Although the individual policies of each agent in the ecosystem network may, and do in fact, differ, the agents can respond with their individual behaviors to a common hypothetical target policy (Macua, Chen, Zazo, & Sayed, 2015, p. 1260). At least in the initial stages, the ecosystem leader plays a pivotal role in establishing the ecosystem's target policy as a means to initiate the process of self-organization within the innovation ecosystem.

For these reasons, artificially intelligent systems will have little chance to replace the human in the innovation ecosystem. Human relationships and interactions are at the core of motivating action in the ecosystem; however, as humans transition to fifth generation innovation ecosystems, they will need tools that can assist in reducing the ambiguity, complexity, and risk. Classically narrow AI systems are in a position to offer techniques to address these challenges.


AI can help humans respond to crisis situations with technological innovation in six ways. First, AI can identify potential solutions to emerging problems by mining databases to cull out solutions that mix and match existing technologies. Second, AI can prioritize potential solutions that, as Dr. James Dearing from Michigan State University puts it, are "designed for diffusion" (Dearing & Kreuter, 2010). Third, AI can assist in navigating through the social networks associated with the D&A of technological innovation by identifying appropriate communication channels, critical nodes, influencers, mavens, and change agents. Fourth, AI can reveal the incoherence between the four communities in order to mitigate any unintended consequences that surface when a rapid response to a crisis situation is pursued. Fifth, AI can assist in scaling up successful solutions rapidly; scaling up requires the identification of appropriate partnerships, and agent-based simulations can assess the behavioral implications of implementing different policy decisions. Sixth, AI can be used to adopt fast-follower strategies by detecting emerging trends.

Innovation arises from the interaction of disparate communities of people. Michael McGrath talks about three basic strategies for innovation: technology-driven, prediction-driven, and opportunity-driven approaches. Prediction-driven approaches leverage emerging technologies, the declining cost of technology, and the intersection of multiple emerging technologies. Technology-driven approaches engage researchers to efficiently search for solutions to perceived problems, e.g., applied research or stumbling over a technology or scientific breakthrough via pure research. Opportunity-driven approaches generalize a given solution to other specific problems and seek success by listening to specific customer needs (McGrath, 1986, p. 217). AI can sort through databases of patents, inventions, and journal publications to match characterizations of emerging problems with technology-, prediction-, and opportunity-driven solutions. AI would tailor the solution to the specific crisis situation.

Dr. James Dearing points to 11 attributes of innovation that can be used as a tool to gauge the solutions that maximize the potential for D&A. His 11 attributes are based on Everett Rogers' work on the diffusion of innovation and encompass: relative advantage, effectiveness, observability, trialability, complexity, compatibility, reliability, divisibility, applicability, commutuality, and radicalness (Dearing & Meyer, 1994, pp. 43–57). Dearing argues that technical performance, or effectiveness, must often be traded off against one or more of the other attributes. In fact, the study conducted by Kapoor, Dwivedi, Williams, and Lal indicated that the top five attributes are complexity, compatibility, reliability, relative advantage, and flexibility or applicability (Kapoor, Dwivedi, Williams, & Lal, 2011, p. 233). Dearing argues that these attributes can be used to "design for diffusion" and increase the chances for successful innovative uptake. AI tools could evaluate proposed solutions in terms of their suitability for D&A based on these attributes, as sketched below. Classically narrow AI can assist in reducing complexity.
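One minimal way such a screening tool could work is sketched below; the attribute weights and candidate scores are illustrative assumptions, not values drawn from Dearing or from Kapoor, Dwivedi, Williams, and Lal.

```python
# Illustrative "designed for diffusion" screening: rank candidate solutions by a
# weighted score over a few diffusion-of-innovation attributes. Weights and
# scores (0-1, higher is more favorable) are assumptions for this sketch.

WEIGHTS = {
    "relative_advantage": 0.3,
    "compatibility": 0.25,
    "simplicity": 0.2,      # inverse of complexity
    "reliability": 0.15,
    "trialability": 0.1,
}

candidates = {
    "solution_a": {"relative_advantage": 0.9, "compatibility": 0.4, "simplicity": 0.5,
                   "reliability": 0.7, "trialability": 0.6},
    "solution_b": {"relative_advantage": 0.6, "compatibility": 0.8, "simplicity": 0.9,
                   "reliability": 0.8, "trialability": 0.7},
}

def diffusion_score(attrs: dict) -> float:
    """Weighted suitability-for-diffusion score for one candidate solution."""
    return sum(WEIGHTS[a] * attrs[a] for a in WEIGHTS)

ranking = sorted(candidates, key=lambda name: diffusion_score(candidates[name]), reverse=True)
for name in ranking:
    print(f"{name}: {diffusion_score(candidates[name]):.2f}")
```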


AI can assist in navigating the social networks that enable the D&A of technological innovation by identifying appropriate communication channels, critical nodes, influencers, mavens, and change agents. For example, Diesner and Carley identified groups of individuals who assume the theoretically grounded roles of change and preservation agents through sociolinguistics, social network analysis, and machine learning–based text mining (Diesner & Carley, 2010, p. 687). Other studies have examined networks to determine the roles of structural holes that foster intermediary relationships between people, diverse actors who rely on proximity relationships, community bridges who connect multiple communities, opinion leaders (Bamakan, Nurgaliev, & Qu, 2019, pp. 200–222), community cores, and max influencers. Li, Lin, and Yeh showed that people acting as community bridges disseminate information across multiple communities, while those serving as community cores spread information within communities effectively (Li, Lin & Yeh, 2015, pp. 293, 294, and 400). Valente attributed a similar role to integrated agents in the book edited by Dearing and Singhal; integrated agents are more often critically referred to by other professionals during the diffusion process (Valente, 2006, p. 71). The significance of these roles depends on the type of network and the network pathologies that characterize the diffusion process. Classically narrow AI techniques can assess organizational structure by determining appropriate network pathologies.

Once networks are characterized, AI enables the assessment of the interaction between the science/engineering communities, technical communities of manufacture, communities engaged in developing technique or know-how, and sociotechnical communities of end use. The degree of interaction determines the potential for the unintended consequences stemming from their incoherent interaction (Tsujimoto, Kajikawa, Tomita, & Matsumoto, 2018, pp. 55–57). For example, an end user could experience difficulty using an engineering solution proficiently if that solution is too complex.

Self-organization and emergent behavior theories offer exciting implications for the incorporation of agent-based models (ABMs) in the analysis and assessment of the D&A of innovation. Zhang has used ABMs to determine effective pathologies for scaling up the D&A of technology. Intelligent agent simulations that embody the fundamental behavioral rules in the diffusion process indicate the extent and rapidity of D&A. The intelligent agent rule sets can exercise policy decisions to reveal behaviors that emerge from self-organization. These agent-based simulations capture the nature of the individual interactions that are not portrayed in deterministic models, such as the Bass marketing model (Zhang, 2015, pp. 2009–2010). The ABMs can compare behavioral asymmetries that emerge from alternative policy decisions. The emergent behaviors also signal the potential for unintended consequences from technological aliasing, i.e., one form of a man-made crisis.
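A minimal agent-based diffusion sketch of the kind described above is shown below; the network, adoption thresholds, and seeding policies are illustrative assumptions rather than a reconstruction of Zhang's model.

```python
# Illustrative threshold-based agent model of innovation diffusion on a random
# network. Two "policies" differ only in how many seed adopters are chosen.
# All parameters are assumptions for this sketch, not values from Zhang (2015).
import random

def simulate(num_agents=200, avg_degree=6, threshold=0.25, seeds=5, steps=30, rng=None):
    rng = rng or random.Random(42)
    # Random undirected network.
    neighbors = {i: set() for i in range(num_agents)}
    for i in range(num_agents):
        for j in rng.sample(range(num_agents), avg_degree):
            if i != j:
                neighbors[i].add(j)
                neighbors[j].add(i)
    adopted = set(rng.sample(range(num_agents), seeds))
    for _ in range(steps):
        newly = set()
        for agent in range(num_agents):
            if agent in adopted or not neighbors[agent]:
                continue
            share = len(neighbors[agent] & adopted) / len(neighbors[agent])
            if share >= threshold:       # adopt when enough neighbors have adopted
                newly.add(agent)
        if not newly:
            break
        adopted |= newly
    return len(adopted) / num_agents

# Compare the emergent uptake under two alternative seeding policies.
for seeds in (2, 10):
    print(f"seeds={seeds}: final adoption {simulate(seeds=seeds):.0%}")
```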


Lastly, AI can track emerging trends from real-world databases to implement fast-follower D&A strategies (Lan, 2009, pp. 73–76). Once identified, these emerging trends can be assessed in terms of relevance and adaptability to resolve crisis situations. The assessment would address the adaptation to relevant social networks, the potential for unintended consequences, and pathologies for scaling (Larson, Dearing & Backer, 2017, pp. 1–82).

One challenge to the use of AI techniques for crisis resolution involves the stochastic stability and unbiased nature of the databases used in classically narrow artificially intelligent programs. Expert systems rely on fixed and stable data to draw conclusions. Similarly, neural networks depend on the stability of the data sets for accurate information. Neither can deliver effective solutions from ephemeral data sets because of the excessive variance and lack of accuracy. The same is true for heavily biased data sets. Existing data sets can be shaped and tuned to provide the type of information required by artificially intelligent systems. Nevertheless, the level of effort required to mature existing data sets can be significant. Even when accomplished, the data sets need to be continuously monitored for stability and suitability. New applications offer the opportunity to correctly characterize emerging information into suitable data sets from the outset, but new data sets will not capture the bandwidth that exists in historical data. These challenges do not appear to be insurmountable; yet, solutions require resources and a level of effort to implement.
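A minimal sketch of the kind of continuous stability monitoring mentioned above appears below; the drift statistic and the alert threshold are illustrative assumptions, not an established standard for such systems.

```python
# Illustrative data-stability check: compare a new batch of a numeric feature
# against a reference sample and flag drift when the summary statistics move
# too far. The 20% tolerance is an assumed, illustrative threshold.
from statistics import mean, stdev

def drifted(reference: list, new_batch: list, tolerance: float = 0.2) -> bool:
    """Return True when the new batch's mean or spread shifts beyond tolerance."""
    ref_mean, ref_std = mean(reference), stdev(reference)
    new_mean, new_std = mean(new_batch), stdev(new_batch)
    mean_shift = abs(new_mean - ref_mean) / (abs(ref_mean) or 1.0)
    std_shift = abs(new_std - ref_std) / (ref_std or 1.0)
    return mean_shift > tolerance or std_shift > tolerance

reference = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05]
new_batch = [1.6, 1.7, 1.5, 1.8, 1.65, 1.55, 1.75]   # shifted distribution
print("Retrain or review data set:", drifted(reference, new_batch))
```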

Conclusion

The D&A of innovation propels today's technological landscape. Crisis situations motivate communities of people to take action to adopt and diffuse innovation. AI can help to resolve the challenges associated with the use of technological innovation in crisis response. Complexity, capacity, risk, resources, culture, ethics, and emerging situations affect the pace of D&A. AI can search the solution space, detect trends, identify potential solutions, reduce risk, mitigate unintended consequences, address the value proposition, and recommend appropriate courses of action suitable for relevant social networks in ecosystems. In doing so, AI can help humans accelerate the D&A of technological innovation for crisis resolution. We will need a better understanding of technology, as well as of human and machine interaction, to achieve this goal. AI can assist humans in this process given the understanding that the D&A of innovation is essentially based on human interaction in an innovation ecosystem.

AI can help humans resolve crisis situations with technological innovation. AI can search the solution space, identify solutions with reduced risk, and mitigate unintended consequences in order to recommend appropriate courses of action within specific social networks. Complexity, capacity, risk, resources, culture, ethics, and emerging situations affect the pace of the D&A process. This chapter discussed how the complexity and ambiguity emerging from fifth generation innovation ecosystems have presented an opportunity for AI in the D&A process. While human interaction will remain at the core and is essential for the D&A of technological innovation, there are areas in the D&A of innovation in fifth generation innovation ecosystems where AI can be leveraged. The case for the application of AI in the D&A of innovation in fifth generation innovation ecosystems was built by distinguishing that classically narrow artificially intelligent systems are suitable for this role, confirming that AI has suitably been diffused and adopted in its own right, and then proposing ways that AI can serve as a technique to assist in the D&A of innovation.


The D&A of innovation is an inherently human activity; yet, artificially intelligent techniques can assist humans in six different ways, especially when they are operating in fifth generation ecosystems that are emergent, complex, and adaptive in nature. The six areas where AI can assist include: matching potential solution sets to problems, selecting solutions that are designed for diffusion, identifying key roles in social networks, revealing unintended consequences, recommending pathways for scaling that include the effects of policy, and identifying trends for fast-follower strategies. In doing so, AI can help humans resolve crisis situations by accelerating the D&A of innovative solutions. The stability of the data sets that AI systems rely upon will challenge performance; nevertheless, the research in this area has positioned several promising techniques through which classically narrow AI systems can assist humans.

D&A requires a better understanding of how interdisciplinary communities of people interact to produce technological innovation, as well as how humans interact with technology. AI can help humans, but not replace them, in this process. Even though the research is progressing, it may take someone like Reginald A. Fessenden, who responded to the safe passage crisis in the early 1900s with sonar, to propel AI past the tipping point as a technique for the D&A of innovation in crisis response. Ultimately, the D&A of technological innovation in crisis response is dependent on interdisciplinary human interaction in an increasingly ambiguous and complex world.

Disclaimer

The views expressed in this presentation are those of the author and do not necessarily reflect the official policy or position of the US Naval War College, the US Navy, the US Department of Defense, or the US Government.

References

Bamakan, S. M. H., Nurgaliev, I., & Qu, Q. (2019). Opinion leader detection: A methodological review. Expert Systems with Applications, 115, 200–222.
Bjorno, L. (2003, January). Features of underwater acoustics from Aristotle to our time. Acoustical Physics, 49(1), 24–30.
Brzezinski, Z. (2013). Strategic vision: America and the crisis of global power. New York, NY: Basic Books.
Choinski, T. (2017, March). Dramaturgy, wargaming and innovation in the United States Navy: Four historical case studies. Ph.D. dissertation. Salve Regina University, Newport, RI.
Choinski, T. (2018). The role of artificial intelligence in the diffusion and adoption of innovation for crisis response. Invited panel. In Humanities and Technology Association conference. Salve Regina University, Newport, RI.
Cox, A. (1974). Sonar and underwater sound. Lexington, MA: Lexington Books.
Darrigol, O. (2010, May). The analogy between light and sound in the history of optics from the ancient Greeks to Isaac Newton: Part 1. Centaurus: An International Journal of the History of Science and Its Cultural Aspect, 52, 117–155.


Dearing, J., & Meyer, G. (1994, September). An exploratory tool for predicting adoption decisions. Science Communication, 16(1), 43–57.
Dearing, J. W., & Cox, J. G. (2018, February). Diffusion of innovations theory, principles, and practice. Health Affairs, 37(2), 183–190.
Dearing, J. W., & Kreuter, M. W. (2010, December). Designing for diffusion: How can we increase uptake of cancer communication innovations? Patient Education and Counseling, 81(Suppl. 1), 100–110.
Dedehayir, O., Makinen, S. J., & Ortt, J. R. (2016). Roles during innovation ecosystem genesis: A literature review. Technological Forecasting and Social Change, 136, 18–29.
Dhlamini, S. M., Kachienga, M. O., & Marwala, T. (2007). Artificial intelligence as an aide in management of security technology. In Proceedings from IEEE Africon, Windhoek, Namibia (pp. 1–4).
Diesner, J., & Carley, K. M. (2010). A methodology for integrating network theory and topic modeling and its application to innovation diffusion. In Proceedings from the IEEE international conference on social computing/IEEE international conference of privacy, security, risk and trust, Minneapolis, MN (pp. 687–692).
Ellul, J. (1964). The technological society. New York, NY: Vintage Books.
Gouk, P. M. (1982, February). Acoustics in the early Royal Society 1660–1680. Notes and Records of the Royal Society, 36(2), 155–175.
Hunt, F. V. (1978). Origins in acoustics: The science of sound from antiquity to the age of Newton. New Haven, CT: Yale University Press.
Hurmelinna-Laukkanen, P., & Natti, S. (2017, October 16). Orchestrator types, roles and capabilities – A framework for innovation. Industrial Marketing Management, 74, 65–78.
Jin, Z. (2005). In K. W. Willoughby & Y. Bai (Trans.), Global technological change: From hard technology to soft technology (2nd ed.). Bristol: Intellect. Web: November 4, 2013.
Kaharl, V., Bradley, D., Brink, K., Clark, C., Fox, C., Mikhalevsky, P., … Weir, G. (1999, March). Sounding out the ocean's secrets. Washington, DC: National Academy of Sciences, Office on Public Understanding of Science. Retrieved from http://www.beyonddiscovery.org/includes/DBFile.asp?ID588. Accessed on February 5, 2013.
Kapoor, K. K., Dwivedi, Y. K., Williams, M. D., & Lal, B. (2011). An analysis of existing publications to explore the use of the diffusion of innovations theory and innovation attributes. In Proceedings from the world congress on information and communication technologies, Mumbai, India (pp. 229–234).
Kissinger, H. (2015). World order. New York, NY: Penguin Books.
Kline, S. (2003). What is technology? In R. C. Scharff & V. Dusek (Eds.), Philosophy of technology: The technological condition, an anthology (pp. 210–212). Malden, MA: Wiley-Blackwell.
Lan, J. (2009). Dynamic diffusion of innovations: A leader-imitator combined structure pattern. In Proceedings from the 2009 international conference on networking and digital society (pp. 73–76). Washington, DC: IEEE Computer Society.
Larson, R. S., Dearing, J. W., & Backer, T. E. (2017). Strategies to scale up social programs: Pathways, partnerships and fidelity. A study commissioned by the Wallace Foundation.


Li, C.-T., Lin, Y.-J., & Yeh, M.-Y. (2015). The roles of network communities in social information diffusion. In Proceedings from the IEEE international conference on Big Data (pp. 391–400).
Luber, S. (2011). Cognitive science artificial intelligence: Simulating the human mind to achieve goals. In IEEE international conference on computer research and development, Shanghai, China (pp. 207–210).
Macua, S. V., Chen, J., Zazo, S., & Sayed, A. H. (2015, May). Distributed policy evaluation under multiple behavior strategies. IEEE Transactions on Automatic Control, 60(5), 1260–1274.
McGrath, M. (1986). Product strategy for high technology companies: How to achieve growth, competitive advantage and increased profits. Homewood, IL: Richard Irwin Inc.
Polson, N., & Scott, J. (2018). AIQ: How people and machines are smarter together. New York, NY: St. Martin's Press.
Popper, K. (1978, April 7). Three worlds (p. 167). The Tanner lecture on human values. Delivered at The University of Michigan. Retrieved from http://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf. Accessed on October 29, 2015.
Postman, N. (1992). Technopoly. New York, NY: Vintage Books.
Raichel, D. R. (2006). The science and applications of acoustics. New York, NY: Springer.
Reese, B. (2018). The fourth age: Smart robots, conscious computers, and the future of humanity. New York, NY: Atria Books Group.
Rothwell, R. (1994). Towards the fifth-generation innovation process. International Marketing Review, MCB University Press, 11(1), 7–31.
Sterne, J. (2001). A machine to hear from them: On the very possibility of sound's reproduction. Cultural Studies, 15(2), 259–294.
Tai, Y.-C., & Yang, C.-H. (2016). How to grow social innovation from the view of organizational scaling and diffusion: Cases of ecotourism communities in Taiwan. In Proceedings from the IEEE international conference on orange technologies, Melbourne, Australia (pp. 96–99).
Tsujimoto, M., Kajikawa, Y., Tomita, J., & Matsumoto, Y. (2018). A review of the ecosystem concept – Towards coherent ecosystem design. Technological Forecasting and Social Change, 136, 49–58.
Urick, R. J. (1983). Principles of underwater sound. Los Altos, CA: Peninsula Publishing.
Valente, T. (2006, July). Communication network analysis and diffusion of innovations. In J. Dearing & A. Singhal (Eds.), Communication of innovations. Thousand Oaks, CA: SAGE Publications.
Wilson, M. (1960). American science and invention: A pictorial history. New York, NY: Bonanza Books.
Zhang, H. (2015). Data-driven agent-based modeling of innovation diffusion. In E. Bordini & Y. Weiss (Eds.), Proceedings of the 14th international conference on autonomous agents and multiagent systems (AAMAS 2015), May 4–8, 2015, Istanbul, Turkey (pp. 2009–2010).

Chapter 3

"Something Old, Something New" – Reflections on AI and the Just War Tradition

Timothy J. Demy

Abstract

This chapter presents reflections and considerations regarding artificial intelligence (AI) and contemporary and future warfare. As "an evolving collection of computational techniques for solving problems," AI holds great potential for national defense endeavors (Rubin, Stafford, Mertoguno, & Lukos, 2018). Though decades old, AI is becoming an integral instrument of war for contemporary warfighters. But there are also challenges and uncertainties. Johannsen, Solka, and Rigsby (2018), scientists who work with AI and national defense, ask, "are we moving too quickly with a technology we still don't fully understand?" Their concern is not whether AI should be used, but whether research and development of it and pursuit of its usage are following a course that will reap the desired rewards. Although they have long-term optimism, they ask: "Until theory can catch up with practice, is a system whose outputs we can neither predict nor explain really all that desirable?"1 Time (speed of development) is a factor, but so too are research and development priorities, guidelines, and strong accountability mechanisms.2

Keywords: Ethics; just war tradition; history; values; ideas; framework

1 See Johannsen et al. (2018, p. 17). Their concern echoes that of Rubin, Stafford, Mertoguno, and Lukos, cited above, who state: "Many AI researchers seem to have lost their way in recent years and should be investigating core problems such as how a person reasons" (2018, p. 13).
2 These are widely acknowledged concerns. Cf. the work of the AI Now Institute and annual reports at ainowinstitute.org and other similar organizations. Regarding AI challenges and the role of philosophy, see Deutsch, D. (2012). Philosophy will be the key that unlocks artificial intelligence. The Guardian, October 3. Retrieved from https://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence.


A Certain Trajectory but an Uncertain Future

The future rarely unfolds as people think it will or should unfold. Instead, there are twists and turns that are not universally experienced and ones that are frequently unanticipated. Thus, the question of "how will" should be replaced with "how might?" Viewing the history of technology and war, historian Max Boot noted more than a decade ago:

There is no magic formula for coping with the complexities of changing warfare. The course of future developments can be glimpsed only fleetingly and indistinctly through the fog of contemporary confusion. As any reader of H. G. Wells, Jules Verne, or Isaac Asimov must know, few prognostications about the future have ever come true. The best way to figure out the path ahead is to examine how we arrived at this point. (2006, p. 16)

If Boot's understanding of the history of warfare and technology is accepted, it would seem that broader rather than narrower considerations should be used when contemplating and implementing new technologies and what philosopher and military ethicist George R. Lucas, Jr. (2013) has termed "the relentless drive toward autonomy."

The growing presence of artificial intelligence (AI) in contemporary society is widely noted, usually embraced, and proclaimed to be a necessary and inevitable advancement of and for human progress and civilization. To be sure, there are naysayers, but few deny AI's growing prominence and presence.3 Its effects in and ramifications for the disciplines of ethics, philosophy, and even religion are enormous. So too are its effects in corporate boardrooms, hospital emergency rooms, and military and political war rooms. It is the latter of these that this chapter seeks to engage. How is AI to be evaluated and utilized in military planning and military operations? This remains a challenge for the near future (cf. Galdorisi, 2019).

The noted Prussian military thinker Carl von Clausewitz (1976) stated: "theory becomes infinitely more difficult as soon as it touches the realm of moral values." Warfare inevitably deals with "moral values" – in wartime planning, operations, and post-hostility phases. Ethics and war are intricately and inextricably linked – whether acknowledged or not. So too are ethics, technology, and war. Armin Krishnan (2009) is certainly accurate in his observation on AI, war technology, and ethics when he writes: "Automated killing is a sensitive ethical issue." AI now can and will affect all aspects of warfare – conventional and unconventional. This includes such things as autonomous systems, missile defense, cyber, analysis, propaganda, and intelligence.

3 Well worth reading are titles such as Smith, G. (2018). The AI delusion. New York, NY: Oxford University Press and Walsh, T. (2017). Android dreams: The past, present, and future of artificial intelligence. London: C. Hurst & Co.


The speed and course at which this will occur will vary, but the certainties of implementation and integration of AI and warfare are not in doubt. Thus, Paul Scharre, director of the Future of War Initiative at the Center for a New American Security and one who worked intimately with DoD technology policy, writes:

The rise of artificial intelligence will transform warfare. In the early twentieth century, militaries harnessed the industrial revolution to bring tanks, aircraft, and machine guns to war, unleashing destruction on an unprecedented scale. Mechanization enabled the creation of machines that were physically stronger and faster than humans, at least for certain tasks. Similarly, the AI revolution is enabling the cognitization of machines, creating machines that are smarter and faster than humans for narrow tasks. (2018, p. 5)

Rapid analysis for military decision-making is one of the major projected tasks for AI and perhaps its most important task in the near future. A critical question yet to be answered is whether the military will eventually delegate to AI systems the authority to make decisions independently from humans. Many ethicists and robotics and AI experts have called for a ban on the development and implementation of offensive autonomous weapons.4 It is understood that AI and autonomous weapons systems (AWSs) are not necessarily synonymous – but as autonomy integrates aspects of AI they are on the same technological trajectory and have overlapping moral, legal, and ethical considerations. However, if and when AI-independent decision-making is utilized, what is the potential for failure? Scharre asks:

Automated stock trading has led to "flash crashes" on Wall Street. Could autonomous weapons lead to a "flash war"? New AI methods such as deep learning are powerful, but often lead to systems that are effectively a "black box" – even to their designers. What new challenges will advanced AI systems bring? (2018, p. 7)

As with many technologies, AI can have positive or negative consequences:

The stakes are high: AI is emerging as a powerful technology. Used the right way, intelligent machines could save lives by making war more precise and humane. Used the wrong way, autonomous weapons could lead to more killing and even greater civilian casualties… Artificial intelligence is coming and it will be used in war. How it is used, however, is an open question. (Scharre, 2018, p. 8)

4 Cf. https://autonomousweapons.org/compilation-of-open-letters-against-autonomous-weapons/ for a list of some of the letters. See also discussion in Scharre (2018, pp. 6–8).


Presently, the primary document of the US Department of Defense (DoD) with respect to the utilization of AI is US Department of Defense Directive Number 3000.09, "Autonomy in Weapon Systems," of November 21, 2012 (with Incorporating Change 1, May 8, 2017).5 This document states there must be a human being with veto power over any action an AWS might take in a combat situation. Given the course of technological development and use in warfare, the permanence of such veto power is questionable.

The US military is investing significant amounts of money into AI research and implementation. In so doing, the US and other militaries need and want to understand the opportunities as well as the limitations of AI. Those concerned must continually monitor both ends of this spectrum for rapidly developing AI technologies in general, as well as for ongoing research, development, acquisition, and implementation of AI in the realm of military operations (cf. Cummings, 2017; McLemore & Lauzen, 2018). However, at present, the US and competing militaries are moving forward despite uncertainties.

In February 2019, the US Department of Defense released its AI strategy summary, titled "Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity" (hereafter referred to as the 2018 AI Summary). This release followed an Executive Order that created the American AI Initiative.6 Along with this came the establishment of the DoD Joint Artificial Intelligence Center (JAIC), whose purpose is to evolve and incorporate AI into national defense strategy. In part, the preface of the 2018 AI Summary states: "As stewards of the security and prosperity of the American public, we will leverage the creativity and agility of our nation to address the technical, ethical, and societal challenges posed by AI and leverage its opportunities in order to preserve the peace and provide security for future generations" (US Department of Defense, 2019). The document continues with two sections: strategic approach and strategic focus areas. In the latter, before the document's conclusion, there is a one-and-a-half-page section titled "Leading in military ethics and AI safety" (US Department of Defense, 2019, pp. 15–16).

On the whole, the document and the actions of the US Department of Defense establish a baseline of policy to "harness AI to advance our [US] security and prosperity," as noted in the summary's subtitle. This is a necessary and desirable response to a field of rapid and exponential growth. It is also ambitious and optimistic.

5 See https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf.
6 US Department of Defense. (2019, February 12). DoD unveils its Artificial Intelligence strategy. Retrieved from https://dod.defense.gov/News/Article/1755942/dod-unveils-its-artificial-intelligence-strategy/. Accessed on April 20, 2019. For the summary, see https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF. Accessed on April 20, 2019.


Quite accurately, the 2018 AI Summary states: "AI is rapidly changing a wide range of businesses and industries. It is also poised to change the character of the future battlefield and pace of threats we must face" (US Department of Defense, 2019, p. 4). With respect to AI and the ethics of war, how then should political and military leaders respond to and prepare for these changes? When considering ethical principles and guidelines with respect to war, the question quickly arises of "by what standard" or "within what framework?" Only when that framework is developed and known can leaders consider the ethical dimensions of warfare. These ethical dimensions must be explicitly acknowledged in order to continue to guide decision-making and conduct for leaders, warriors, and stakeholders in all phases before, during, and after war.

If Not Just War, Then What?

The subjects of war and AI are each enormous and multifaceted. When one brings these two wide-ranging topics together for consideration, what resources are available for considering the specifics of AI and war? This section contends that the just war tradition is a viable and valuable framework that answers the US 2018 AI Summary call for "creating a common foundation [italics original] of shared data, reusable tools, frameworks and standards [italics added]" (US Department of Defense, 2019, p. 7). The history of the just war tradition provides ample evidence of its ability to interact with new technologies of war and adapt to contemporary needs. Although it is a centuries-old framework, it is not a static framework. Although the framework continues to have strong supporters and detractors, refinement and application of just war principles are the subject of ongoing discussion and debate among both groups. Within the realm of robotics and AI, the work of writers such as George R. Lucas, Jr., Mary L. Cummings, Ronald C. Arkin, Jai Galliott, and Peter M. Asaro is important in that they interact in varying degrees with the just war tradition and are able to bridge the humanities/science divide, albeit from different perspectives.7

The development and use of any warfare technology should be considered and analyzed within the framework of the just war tradition. The tradition is a framework that is capable of dealing with anything that comes along with respect to warfare – if not, then it must be jettisoned and replaced with a comprehensive framework that can deal effectively with contemporary issues. If we are going to have a theory of war, then it must be comprehensive with respect to the spectrum of war (low to high intensity), participants in war (combatant, noncombatant, nonstate, state, contractor, mercenary, etc.), and tactics and weapons (swords, bows, crossbows, tanks, planes, NBC, drones, AI, etc.).

7 Especially helpful as an introduction to some of the concerns of robotics and the just war tradition is Peter M. Asaro (2009). Modeling the moral user. IEEE Technology and Society Magazine, Spring, pp. 20–24. See also his work How just could a robot war be? In P. Brey, A. Briggle, & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50–64). Amsterdam: IOS, 2008.


One might consider the just war tradition akin to an accordion or spring: it can expand and change to meet new developments in circumstances and technology. Using the just war tradition as a framework, one must initially consider how the new technology interacts with our thinking about proportionality and discrimination (jus in bello). These types of ethical principles and considerations must be embedded in the design and development of AI weapons of war. Of course, it may also be that AI can help in the jus ad bellum aspect and perhaps in the jus post bellum aspects, but the major area of concern will be "what does AI do for jus in bello?"

Clarification or Confusion?

What is confusing and bewildering to some parties working in the realm of military ethics are the mixed signals being sent by the DoD with respect to ethical principles. In January 2019, there was a widely publicized announcement of DoD desire, via the Defense Innovation Board, to draft guidelines for shaping norms for military AI. The announcement called for engagement, in part, through public listening sessions with academics, ethicists, scientists, and other interested parties (see, e.g., Tucker, 2019).8 This call for guidelines likely arose from the 2018 AI Summary statement: "The Department will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values. We will consult with leaders from across academia, private industry, and the international community to advance AI ethics and safety in the military context" (US Department of Defense, 2019, p. 8).

Is DoD (and others considering the ethical implications of AI and war) desirous of a list of new ethical principles? Or is the department, rather, seeking to understand how AI technology does or does not apply to the already existing just war framework that has been embraced by the US and upon which much international law with respect to warfare is grounded? Are we seeking new principles, or seeking to understand how AI technology should be developed and used in accordance with standing principles – perhaps articulating AI specificity within those standing principles? Or perhaps the public forums and subsequent articulation of principles are a mechanism for gaining cooperation and support from scientists, the public, and industry. This might be one aspect of events in light of the 2018 employee protests against Google's partnership with DoD's AI "Project Maven," wherein the company declined to renew contracts with DoD for the project due to public protest against Google supporting military AI initiatives (Wakabayashi & Shane, 2018). Joshua Marcuse, the Executive Director of the Defense Innovation Board, noted, "It is critical that the forthcoming [Defense Innovation Board] recommendation to DoD is consistent with an ethics-first approach and upholds existing legal norms around warfare and human rights, while continuing to carry out the

8 See also the Defense Innovation Board (DIB) website with respect to AI and the DIB goal of crafting a "set of AI Principles for Defense" at https://innovation.defense.gov/ai/.


Department's enduring mission to keep the peace and deter war" (Tucker, 2019). The transparency of the Board's meetings and work is commendable.

Like many aspects of political and social life, there is a continuum of principle → policy → practice that one may use when considering ethical, political, and policy questions. So too is this the case with AI. To think about AI and war, one must understand the relationship between the two. In so doing, the challenge is neither to make too much of the relationship nor too little. We must not overestimate it nor underestimate it. That relationship is one that is instrumental. AI is a weapon of war, a tool of war, an instrument of war. With respect to the continuum of principle → policy → practice, AI may be thought of as a tool used in the practice (application in warfare) of a principle (principles of the just war tradition) that is governed by policy (DoD policy and international law).

Where is it all going? AI speeds up some processes but also inhibits our ability to control them. AI augments individuals, systems, and states – it is a force multiplier in warfare. In that regard, the most prominent discussions involve where we are going with respect to governing lethal behavior in autonomous systems/robots. Thus far, the majority of roboticists, researchers, ethicists, and policymakers have said we want to keep the person in the decision-making loop, but that will inevitably change in the name of efficiency. Once it changes, how might we embed just war tradition principles in the design and in decision-making processes? Few people are dealing with these specifics, but one who is, is Ronald C. Arkin at Georgia Tech. He is a leading voice in autonomous systems and believes ethical lethal autonomous machines can and should be developed. Arkin argues that, though a long way from realization, autonomous lethal weapons and robots are desirable and their development and use can be within the just war tradition framework and international law (cf. Arkin, 2009, 2017). He acknowledges that ethical robot development is in its infancy but believes "it is important that the field move forward post-haste to ensure the safe and ethical deployment of intelligent autonomous robots, especially in the context of armed conflict" (Arkin, 2017, p. 44). As noted earlier, AI and AWSs, though not synonymous, do share ethical spheres of operation and principles. Lucas contends:


In this call, Lucas is grounding his thought in international humanitarian law, that is derived in part from the just war tradition. So too does Jai Galliott in his 2015 work Military Robots: Mapping the Moral Landscape. Galliott interacts with detractors of the just war tradition for future warfare technology. He writes, “despite claims to the contrary, the classical just war framework remains the most suitable and robust tool we have for analyzing the moral problems associated with the use [of] unmanned systems” (Galliott, 2015, p. 86). Galliott (2015, p. 233) argues for a “broad but complete contractually grounded version of just war theory.” In so doing, Galliott does not deny the uncertainty of future warfare and the complexity of using the just war tradition. Yet, he contends, “interpreting and applying existing just war principles to unmanned warfare is bound to be fraught with complexity, but represents our best chance at tempering states’ responses to perceived threats” (Galliott, 2015, p. 233). In this, he is correct. New technology and warfare may appear to be a perpetual Rubik’s Cube of ethical issues but, just because we may not know the right answer, that does not mean we don’t know some of the wrong answers. The perpetual quest for “a better mousetrap” or perhaps, more appropriately, “a silver bullet” is understandable even if the quest is usually futile (at least in the near term). Acknowledging “the thorny path ahead,” Mary “Missy” L. Cummings, the director of the Humans and Autonomy Laboratory at Duke University, writes: Although it is not in doubt that AI is going to be part of the future of militaries around the world, the landscape is changing quickly and in potentially disruptive ways. AI is advancing, but given the current struggle to imbue computers with true knowledge and expert-based behaviours, as well as limitations in perception sensors, it will be many years before AI will be able to approximate human intelligence in high-uncertainty settings – as epitomized by the fog of war. (2017, p. 12) AI is certainly a factor in the equation of war, and it will continue to grow in importance.

Concluding Thoughts This chapter has argued that the just war tradition remains a viable framework for considering future warfare. That argument is not new (see, for example, Oliver O'Donovan's defense in his work The Just War Revisited, Cambridge: Cambridge University Press, 2003, and Nigel Biggar's In Defence of War, New York, NY: Oxford University Press, 2013). It is, however, ongoing. Therefore one must ask: if not the just war tradition, then what, and by what standard? This is not to say that the just war tradition is, regrettably, merely the best we can do. Rather, it is to state that the tradition is more than adequate for the challenge of twenty-first-century warfare. It is a framework that is exceptionally well suited for present and future warfare.

From rocks to rockets and triremes to submarines, the development and use of weapons have always sought to give advantage to the user while minimizing harm to the user. Whether the weapon is a stone, spear, longbow, bullet, artillery shell, aerial bomb, or cruise missile, the goal has always been to maximize the weapon's effect on the enemy and simultaneously minimize vulnerability for the user. Technology changes, but the nature of warfare does not. Max Boot observes: Technological advances will not change the essential nature of war. Fighting will never be an antiseptic engineering exercise. It will always be a bloody business subject to chance and uncertainty in which the will of one nation (or subnational group) will be pitted against another, and the winner will be the one that can inflict more punishment and absorb more punishment than the other side. But the way the punishment gets inflicted has been changing for centuries, and it will continue to change in strange and unpredictable ways. (2006, p. 471) The challenge in the present and in the future is to use AI technology ethically and in a manner that restrains war and human suffering rather than increasing them. New technology introduces issues of scale in warfare at both the micro and macro levels, and the opportunities are many. As AI technology is developed for national security purposes, what is needed is robust and thorough research and thought on specific norms for applying the just war tradition to AI. A multidisciplinary approach of scientists, ethicists, philosophers, lawyers, policymakers, and others explicitly looking at AI and the tenets of the just war tradition would be extremely beneficial. That some are already doing this is commendable, but much more is needed. If this is done, perhaps all concerned will be able to avoid the pitfall, so prevalent where ethics, law, and technology meet, of perpetually playing catch-up. The use of any technology sets a course for disconnecting actions and ethics. However, in the same way that the military pioneered standards and safety in aviation, it may do so in the realm of AI. If so, it would be extremely valuable and commendable. If nothing else, this chapter is a call for engineers and ethicists, planners and philosophers, to actively engage with one another using the spectrum of humanities and the sciences to address the technological challenges of AI and future warfare. With respect to the history of technology and the just war tradition, is AI different? Yes. Is it unique? No. Thus, the challenge is to understand how to combine the just war tradition and AI – something old, something new.

Disclaimer The views expressed in this chapter are those of the author and do not necessarily reflect the official policy or position of the US Naval War College, the US Navy, the US Department of Defense, or the US Government.


References
Arkin, R. C. (2009). Governing lethal behavior in autonomous robots. Boca Raton, FL: CRC Press. For some of the challenges, see his Epilogue.
Arkin, R. C. (2017, November). Perspectives on lethal autonomous weapons systems. Occasional Papers No. 30. New York, NY: UNODA (United Nations Office for Disarmament Affairs).
Boot, M. (2006). War made new: Technology, warfare, and the course of history 1500 to today (p. 16). New York, NY: Gotham Books.
von Clausewitz, C. (1976). In M. Howard & P. Paret (Eds. & Trans.), On war (p. 136, Book 2, Chapter 2). Princeton, NJ: Princeton University Press.
Cummings, M. L. (2017, January). Artificial intelligence and the future of warfare. Research paper of International Security Department and US and the Americas Programme. Chatham House, London. Retrieved from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf
Galdorisi, G. (2019, May). The navy needs AI, it's just not certain why. USNI Proceedings, 145, 28–33.
Galliott, J. (2015). Military robots: Mapping the moral landscape (pp. 65–93). Burlington, VT: Ashgate Publishing.
Johannsen, D. A., Solka, J. L., & Rigsby, J. T. (2018). The rapid rise of neural networks for defense: A cautionary tale. Future Force: Naval Science and Technology, 5(3), 14.
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons (p. 133). Burlington, VT: Ashgate Publishing.
Lucas, G. R., Jr. (2013). Engineering, ethics, and industry: The moral challenges of lethal autonomy. In B. J. Strawser (Ed.), Killing by remote control: The ethics of an unmanned military (p. 211). New York, NY: Oxford University Press.
McLemore, C. S., & Lauzen, H. (2018). The dawn of artificial intelligence in naval warfare. War on the Rocks, June 12. Retrieved from https://warontherocks.com/2018/06/the-dawn-of-artificial-intelligence-in-naval-warfare/
Rubin, S. H., Stafford, M., Mertoguno, S., & Lukos, J. R. (2018). What will artificial intelligence mean for warfighters? Future Force: Naval Science and Technology, 5(3), 7.
Scharre, P. (2018). Army of none: Autonomous weapons and the future of war (p. 5). New York, NY: W. W. Norton & Company.
Tucker, P. (2019, January 4). Pentagon seeks a list of ethical principles for using AI in war. Defense One. Retrieved from https://www.defenseone.com/technology/2019/01/pentagon-seeks-list-ethical-principles-using-ai-war/153940/
US Department of Defense. (2019). Summary of the 2018 Department of Defense artificial intelligence strategy: Harnessing AI to advance our security and prosperity (p. 4). Washington, DC: USDoD.
Wakabayashi, D., & Shane, S. (2018). Google will not renew Pentagon contract that upset employees. New York Times, June 1. Retrieved from https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html

Chapter 4

Space War and AI Keith A. Abney

Abstract New technologies, including artificial intelligence (AI), have helped us begin to take our first steps off Earth and into outer space. But conflicts inevitably will arise and, in the absence of settled governance, may be resolved by force, as is typical for new frontiers. But the terrestrial assumptions behind the ethics of war will need to be rethought when the context radically changes, and both the environment of space and the advent of robotic warfighters with superhuman capabilities will constitute such a radical change. This essay examines how new autonomous technologies, especially dual-use technologies, and the challenges to human existence in space will force us to rethink the ethics of war, both from space to Earth, and in space itself. Keywords: AI ethics; moral status; autonomous weapons; just war theory; moral luck; independence thesis; red line analysis; existential risk; dual-use; space war

Introduction The ongoing debate over lethal autonomous weapons systems (LAWS) has for now made distinctly terrestrial assumptions; the robots doing the fighting and killing, whether on land, under the sea, or in the air, have always been assumed to be part of terrestrial forces. But as military planners have long understood, space is “the ultimate high ground”; and despite the Outer Space Treaty, there is an increasing push towards weaponizing space. Inevitably, that push will involve LAWS (Lt Gen David Thompson, Vice Commander USAF Space Command, ISME Keynote, June 29, 2019), as all the reasons for adopting them in terrestrial contexts, including problems of latency and accuracy, as well as other ways in which human control is suboptimal, apply to an even greater degree in space. Such weapons systems could easily also have nonmilitary purposes in space, making the ethics of their dual-use deployment even more complicated. This essay


investigates these issues and introduces some basic guidelines as to how to think about the ethical issues involved.

What Is AI? What is AI? The most famous attempt to connect it to human intelligence was Alan Turing’s (1950) “imitation game”, better known as the Turing Test, which involves a machine demonstrating human-level linguistic performance that can deceive a judge into thinking they are conversing with another human. But of course, we use what we term “AI” for far more than deceptive chatbots.

Recent, Influential Definitions of AI and Its Goals So, AI is now variously defined; Marr (2018) lists six different definitions currently used by leading AI companies. For example, the Sony Corporation (2018) ethics guidelines offer the following definition: “‘AI’ means any functionality or its enabling technology that performs information processing for various purposes that people perceive as intelligent, and that is embodied by machine learning based on data, or by rules or knowledge extracted in some methods.” A common thread can be discerned through the various definitions, though: Vallor and Bekey (2017) posit the real goal of most AI research is “systems that can emulate, augment, or compete with the performance of intelligent humans in well-defined tasks.” Russell and Norvig’s (2002) textbook Artificial Intelligence: A Modern Approach further disambiguates this approach of seeing AI as something that attempts to “emulate, augment, or compete with the performance of intelligent humans” by providing different ways of defining AI. These possibilities arise from defining AI in terms of its telos along two axes; is an AI trying to match or surpass human abilities in rational thinking, or in the sophistication of its behavior? Accordingly, should we assess AI according to how it thinks, or by what it does?

Artificial Versus Natural Intelligence? For our purposes in thinking about AI and space war, a useful distinction is between natural intelligence and AI. Per Russell and Norvig, the idea of intelligence seems to involve goal-oriented, problem-solving behavior that, even if not ideal, can at least match some human capabilities in thinking or acting. Presumably, natural intelligence evolved to solve the problems of evolution, most generally the need to survive and reproduce, by having (to paraphrase, mutatis mutandis, Popper on the scientific method) ideas about survival strategies die in our stead. Those general goals then produced various subgoals as evolutionary mechanisms themselves evolved, until natural human intelligence became capable of being unconstrained from those original goals, free to seek new goals of its own, even ones antithetical to survival and reproduction. As we began to conceive ways of


testing our ideas abstractly, without risk to our actual lives, the idea of an abstract intelligence running on something other than our human bodies began to make sense. Following this line of reasoning, we can then define an AI as a goal-oriented, problem-solving thinking process, with at least some human-level (or better) capabilities, that arose artificially, not naturally. An analogy may help: the ability to fly naturally evolved in response to certain selection pressures, and created in birds a process that involves muscles, light bones, and wings flapping. Humans have created artificial flying machines, but did not create them by simply copying nature; they achieved the same goal of flight without using bones, muscles, or even flapping wings. So, an AI is an artificially constructed attempt to achieve cognitive goals, but may do so through means that are entirely different from our understanding of natural human cognitive processes. For example, current AIs may use propositional symbolic logic, evolutionary algorithms, Bayesian inference, or other nonbiological approaches. Or, an AI can use an approach designed to mimic the human brain, such as a neural network approach. Typically, this uses artificial “neurons” that continuously compare their output calculations to a desired outcome, and then update the strength of the connections between “neurons” to “reinforce” those that seem useful. The goal of producing an artificial general intelligence by “whole brain emulation” would take such an approach to its logical extreme.
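To make the "compare and reinforce" mechanism concrete, the toy sketch below (my illustration, not drawn from this chapter; the data, learning rate, and function name are all invented assumptions) shows a single artificial "neuron" nudging its connection strengths toward a desired output. Scaled up to many layers and millions of units, this compare-and-adjust step is essentially what current neural-network approaches to AI do.

```python
# Illustrative sketch only: one artificial "neuron" trained with the classic
# delta rule. All data and parameter values here are made-up assumptions.

def train_neuron(samples, learning_rate=0.1, epochs=50):
    """samples: list of (inputs, desired_output) pairs, each with two inputs."""
    weights = [0.0, 0.0]  # connection strengths to be "reinforced"
    bias = 0.0
    for _ in range(epochs):
        for inputs, desired in samples:
            # 1. Compute the neuron's output from its current connection strengths.
            output = bias + sum(w * x for w, x in zip(weights, inputs))
            # 2. Compare that output to the desired outcome.
            error = desired - output
            # 3. Strengthen or weaken each connection in proportion to its
            #    contribution to the error.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy task: learn to output roughly the sum of two inputs.
data = [((1.0, 2.0), 3.0), ((0.5, 0.5), 1.0), ((2.0, 1.0), 3.0), ((0.0, 1.0), 1.0)]
print(train_neuron(data))  # weights approach (1.0, 1.0); bias approaches 0.0
```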

AI: Questions of Moral Status A crucial issue in space ethics is moral status: what kind of value should nonhuman life or technology have? Denying speciesism (Singer, 1974) does not solve the issue. Nonhuman alien persons, like the Star Trek characters Worf or Spock, would clearly deserve intrinsic moral consideration. But what of alien bacteria, or even fishlike creatures, in the oceans of Europa? Numerous options on the scope of intrinsic moral value have been defended (Lupisella, 2010) including all life (biocentrism), or, indeed, everything in the universe (cosmocentrism). Other common views include rationality/sapience as crucial (ratiocentrism), or sentience – the capacity for conscious experience, of pleasure and pain – termed “sentientrism”. Two arguments for ratiocentrism over sentientrism directly involve technology: first, the “Zombie/robot argument” (Abney, 2019): suppose philosophical zombies could exist (beings with human-level complexity of behavior, but no consciousness). They reason capably enough to demonstrate moral behavior and be deemed worthy of continued existence, passing one kind of moral Turing test (Sparrow, 2012), yet experience no pleasure/pain. Second, the Wireheading argument (Abney, 2019): suppose a superintelligent AI believes sentientrism, and so refashions Earth into “hedonium”: all remaining material objects merely experience unending, unknowing pleasure, until they cease to exist. This eventuality constitutes a “malignant failure mode” for superintelligence (Bostrom, 2014). Why a failure? Because a world with only


sentience is not a moral paradise, but instead a world in which morality has disappeared, as all beings capable of it have ended. This seems a reductio ad absurdum argument against sentientrism (Abney, 2004). The belief in sentientrism may stem from a common confusion between an entity having intrinsic value (which by definition requires no extrinsic, external relationship) and final value (valuable as an end in itself, not as a mere means to some further end) (cf. Korsgaard, 1983). Intrinsic value requires final value, but the converse is false. If the existence of morality requires moral responsibility (Abney, 2019), then only morally responsible agents have intrinsic value, but many other things could have final value. Hence, all other moral value depends on what agents should value – whether it is purely instrumental (like an asteroid valued for its mineral wealth) or a final value, like taking pleasure simply in the view of Saturn’s rings, regardless of whatever other use they may be. AIs, by definition, can certainly reason; they are defined as an attempt to achieve cognitive goals. Do they thus have intrinsic moral status? My answer: not yet, for they as yet lack the crucial requirement of agency and full autonomy (for more detail, see Abney, 2012; Verrugio & Abney, 2012). There is an immediate implication: AIs and their embodiment in robots (that lack agency) have a practical moral superiority over humans for almost all our near-term legitimate purposes in space. As long as AI and robots have no intrinsic moral status, there is no inherent wrong in using them as a means to explore, mine, and even help humans colonize alien locales; and of course, to fight any just wars in space. It may even be crucial that they have no sense of their own self-importance, no drive to preserve themselves at the expense of harm to others. For obeying the dictates of just war theory (JWT), it will help if AIs/robots are willing to sacrifice themselves to avoid harm to civilians. And further, suppose we encounter alien persons, or have as a final value unspoiled alien landscapes, flora, or fauna; it’s best if the first representatives of humanity to encounter them are creatures which are morally dispensable – our AI/robotic scouts and ambassadors and warriors, our proxies and advocates for human civilization. It may turn out to be very important indeed to make a good first impression, to be willing to sacrifice our creations for our would-be friends; as opposed to making them our enemies. In fact, this willingness could become a matter of existential importance; an argument to be continued in the final section.

Narrow AI Versus AGI Given that an AI is an artificially constructed attempt to achieve cognitive goals, another distinction is needed: between narrow AI and general AI (AGI). A narrow AI matches or exceeds human capabilities at attaining a specific cognitive goal; say, winning at chess, or calculating the product of two 17-digit primes. But while current AI chess programs can beat the human world champion, they are helpless at telling you where the closest gas station is. So, an AGI is an AI that has human-level or better capabilities generally – ideally, for all possible cognitive goals. Narrow AI, as seen, is already commonplace, whereas AGI remains for


now a dream (or nightmare) of AI research. We could imagine such an AGI being better than humans at speed – it thinks and attains all cognitive goals as fast as or faster than humans; or, a mere collective superintelligence would think like humans, but could in parallel have trillions of human-equivalent minds working simultaneously, with accompanying super performance. Or, there may be a "quality superintelligence" (Bostrom, 2014) that transcends mere human rationality and attains cognitive goals in ways we cannot yet imagine. Narrow AIs are commonly speed AIs and, with advances in parallel programming, are increasingly collective AIs. But narrow AIs are like idiot savants: brilliant in their specialization, but useless outside it. So, as humans anthropomorphize AIs and tend to (over)trust them once they have been shown reliable in certain settings, they may well be oblivious to an AI's limitations when it is asked to move outside its narrow specialization, leading at least to frustration, and perhaps to disaster. Some would think an AI "oracle", of which we merely ask questions, poses no threat on its own; only the stupid or evil human actions that result are the problem. But embodied AIs will be able to act autonomously. Those issues are clarified next.

Robots as Embodied AI and LAWS A classic definition of a robot (Bekey, 2012) is a machine, situated in the world, that senses, thinks, and acts. Like humans, robots have sensors that detect aspects of their environment and actuators that enable them to move and affect that environment. And like humans, a robot can think: that is, it has an AI. LAWS are robots designed for the purpose of being able to choose to target and kill humans. Humans can be "in the loop", meaning a robot will never fire a weapon at a human on its own; a human must always make the ultimate firing decision. Or a LAWS can have a human "on the loop", meaning the robot may target autonomously, but a human can override its decision to fire; or the robot may have humans entirely "out of the loop", in which both targeting and firing are autonomous, and all a human could do is deactivate the robot after the fact. The problem of "latency" helps explain the push towards LAWS with humans out of the loop: there is a lag time between a robot targeting an enemy, the human authorizing the firing, and the robot carrying out the order. This latency may cause the robot to miss or, even worse, hit an unintended target (collateral damage), or it may reduce the robot's fighting efficiency and leave it more vulnerable. As the tempo and complexity of warfare increase, there will be increasing pressure to allow the AI to make the targeting and firing decisions itself. That will be particularly true in contexts in which the latency is great, or in which it is difficult or impossible for human controllers to participate. Space is perhaps the most obvious arena for such considerations: the distances are vast; the needed electromechanical linkages between operator and robot are difficult to achieve and particularly fragile even when achieved; and it ranges from unrealistic to simply impossible for human operators to be in the battlespace or easily intercede during the warfighting. Accordingly, the pressure will be immense to deploy


LAWS in space war. Could that be ethically permissible? To answer that, we need to examine LAWS in the context of JWT.

What Is Traditional JWT? Jus ad bellum, jus in bello: The Independence Thesis? JWT traditionally treats the relationship between the justice of declaring war, jus ad bellum, and the justice of how the war is actually fought, jus in bello, as one of complete independence: the justice of a (civilian) government’s decision to wage a war is one issue, and the questions of the moral boundaries of the ways in which the professional military of that nation wages war is an entirely separate issue. But recent developments in the use of autonomous weapons systems problematize such views, and their foreseeable use in space will exacerbate the issues. Indeed, some of the most common objections to the use of autonomous weapons systems by militaries (especially by the US) involve the claim that they will make war more alluring, so that decreasing the cost of war would make nations more likely to go to war unjustly. Such arguments effectively deny the independence assumption, as they use jus ad bellum worries to justify a jus in bello restriction or ban on LAWS. An alternative view (term it the “mutual dependence” or “interdependence” view) allows such a connection between jus ad bellum and jus in bello concerns, and instead reasons as follows: the more morally justifiable a war is, the fewer the restrictions on how it may ethically be fought; and vice versa, the less morally justifiable declaring a war is, the greater the restrictions should be on how it may be prosecuted. That is, the more morally justifiable waging and winning a war is, the more that can be morally justified to make sure the correct side wins it. And as the moral case for beginning or waging a war lessens, the greater the moral restrictions on how it may be waged. For instance, on this view, it might be permissible by the Allies (whose cause was clearly just) to drop nuclear weapons on Japan or Germany to stop the horror of World War II, despite such weapons clearly violating several traditional jus in bello principles; whereas it might be morally impermissible to use nuclear weapons to wage, say, a preventive war against Iran (in the mere expectation of a possible future nuclear capability).

What Matters for the Independence Versus Mutual Dependence (Interdependence) Thesis? So, in assessing the independence vs mutual dependence (interdependence) thesis, what matters? Is it:

• The type of wars? Symmetric, asymmetric, large vs small?
• Or types of combatants/wars? Large/small nation-states? Civil wars? Antiterrorism campaigns? Peacekeeping?
• Or perhaps the types of weapons? Lethal vs nonlethal? Ships vs planes? Ballistic missiles vs bullets? "Rods from God" vs nukes?
• Or, who's doing the fighting? Humans, "human in the loop" robots, or LAWS?
• And crucially, location: is war in space ethically bound by the same rules and reasoning as terrestrial war?

Relevance of LAWS to Issue LAWS affect and seemingly support a common interdependence argument: LAWS cause unjust wars because they so greatly decrease the cost of war – the “blood sacrifice”. But take cyberconflicts: do they reinforce such interdependence arguments? Was Stuxnet an act of war by the US and Israel against Iran? Should we allow a permanent autonomous cyberdefense – i.e., permanent cyberwar? Is this a slippery slope to mutually assured destruction (MAD)? Such considerations for space warfare and LAWS raise directly the Autonomy worry: for war (in space), should human commanders be in, on, or out of the loop? “Latency”, as mentioned, refers to the time lag between a commander giving an order (to fire) and that order being executed. With a human in or on the loop, latency is always a problem as compared to fully automated warfare; and the greater the distance (and fragility of telecommunication connections), the greater the problem. Because of latency issues, current Space Command assumes that commanders will be out of the loop; space battles may well already be over before human commanders on the ground are even aware they are happening. So, for any chance of success in a war in space, JWT may plausibly require AI/LAWS due to the doctrine of military necessity (Thompson, 2019).
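Some round numbers make the latency point vivid. The figures below are standard physical values for one-way light-travel time, my own illustration rather than numbers given in this chapter; a command-and-confirm loop doubles each value before any human decision time is added.

```latex
% One-way signal delays at the speed of light (standard round figures):
\[
t \approx \frac{d}{c}, \qquad c \approx 3\times10^{5}\ \text{km/s}
\]
\[
\text{Geostationary orbit: } \frac{3.6\times10^{4}\ \text{km}}{3\times10^{5}\ \text{km/s}} \approx 0.12\ \text{s}, \qquad
\text{Earth--Moon: } \frac{3.84\times10^{5}\ \text{km}}{3\times10^{5}\ \text{km/s}} \approx 1.3\ \text{s}
\]
\[
\text{Earth--Mars: roughly } 3\ \text{to}\ 22\ \text{minutes, depending on orbital geometry}
\]
```

Within Earth orbit a human "on the loop" may still be feasible; beyond cislunar distances, round-trip delays of minutes make real-time human intercession in an engagement physically impossible, which is the force of the military-necessity argument above.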

“Ought Implies Can” Issues and Moral Luck Latency considerations are used to justify LAWS in space because success in a just war would be impossible without them. This brings up a common principle in ethical theorizing, one that entails that ethics cannot demand what is irrational for agents to do or be. In particular, it cannot expect us to perform an action or evince a character that is practically impossible for us. This is termed the Ought implies can principle – which logically requires that the morally permissible act is not only one we are capable of doing, but the action is also one we can avoid; an agent must have an actual choice in order for moral responsibility to exist. Accordingly, moral responsibility demands full agency – the capacity for the rational exercise of free will. The result: a proper understanding of ethics has no role for what Thomas Nagel (1979) termed “moral luck”. Moral luck is defined as: when you are held morally responsible for things over which you had no rational control; i.e., when the “ought implies can” principle is violated in assigning moral responsibility. Hence, for a proper moral theory, there should be no moral luck. Does this constitute proof of the independence thesis? Inasmuch as political leadership makes the decision to go to war and the active military does not, the “ought implies can” principle would apparently imply that the active military, in determining how the war ought to be conducted, should not be held responsible


for the decision to go to war, and so the independence of the two consideration would remain a moral necessity. For humans in traditional warfare this understanding of moral luck appears to support, if not prove, the independence thesis. One does not choose what country one is born into, and so one’s birth nationality and initial citizenship cannot be other than a matter of luck. Nazi soldiers did not deserve to be born in Germany, any more than American soldiers deserved to be born in the US. They accordingly cannot be held responsible for their government’s decisions to participate in World War II, whether just or unjust. Their only moral responsibility was to obey the moral rules of jus in bello, with jus ad bellum playing no role; to insist otherwise seems to insist on moral luck. Generalizing, typical native-born soldiers in any country cannot be held responsible for their civilian government’s decisions to go to war, and hence considerations of jus ad bellum cannot pertain to their moral responsibilities in the conduct of their duties as warfighters, regardless of the (in)justice of their cause. But the reality is not so simple. First off, political leadership rarely if ever declares war ignorant of the likely strategies and tactics of its military. Secondly, in democratic countries, soldiers are typically also voters, and voters may share in the collective responsibility of going to war determined by their elected leaders. One may protest that they did not vote for the war, or even for the elected leader; but in a society in which one is free to emigrate, citizens have made the voluntary choice to remain a part of a society that wages this war. Result: in both jus ad bellum and jus in bello, the “ought implies can” principle, when taken seriously, does mean moral luck can play no role in correct attributions of moral responsibility. Accordingly, any version of act consequentialism appears doomed as a defensible principle – to hold either soldiers or political leaders or citizens, combatants, or noncombatants responsible for what happens due to luck is to engage in moral error. Accordingly, a better understanding of JWT must seek to minimize the role of moral luck in formulating its correct principles. That also implies that moral evaluation would ideally be agent-based, not act-based; moral evaluation is properly of the character of an agent, in terms of the rational opportunities actually available to one, rather than the act that they did or did not do, without sensitive consideration of what capabilities – what liberties – were actually open to them. And introducing LAWS will complicate this argument. First off, moral luck is not directly an issue for robots unless/until they attain Kantian autonomy. Decisions to go to war are always a function of perceived odds of success, as a military option versus continued negotiation/surrender. Hence military leadership may inform civilian leadership that using/not using LAWS affects odds of military success (if not, there’s no reason to use), and hence jus ad bellum considerations. This appears to support the independence thesis in JWT, then – soldiers cannot decide whether to start wars; they only have a real choice over their conduct within them. One has no control over the country in which one was born, or whether one was drafted to serve in that country’s armed forces.


But is it really true that soldiers have no control over jus ad bellum? Was there really such a person as the good Nazi concentration camp guard? (This type of argument is also known as the “Reductio ad Hitlerum”). After all, if one believes their country’s actions in war are unethical, then they can always vote or emigrate; or in serving in such a war, refuse to follow orders, or even surrender to the other side. Belief otherwise reinforces worries over collective responsibility problems and commits the fallacy known as the paradox of the heap – what if one person changing their mind changes the war not at all, but if many do… there is always some threshold at which additional individual acts will make a collective difference – the “straw that broke the camel’s back”. Taking seriously the “ought implies can” principle and notions of collective responsibility, it seems there should be a linkage between jus ad bellum and jus in bello.

Can LAWS Have Moral Responsibility? At a minimum, while the AI/LAWS fighting in space cannot (yet) have moral responsibility, the officers choosing to use LAWS (or not) do have moral responsibility – they can make a meaningful choice as to the means by which they fight, even if they cannot control whether or not their government decides to prosecute the war. So, is this support for independence thesis after all, then? Because space LAWS (which command assures are a military necessity) can as yet have no moral responsibility? To address this, we need more detail on the crucial issue: do LAWS in space or AI-enhanced astronaut warfighters undermine or reinforce the independence thesis? Dual-use issues in space exacerbate the conceptual problems. Dual-use concerns, of course, are built into almost all space-based technologies. For example, private corporations wish to build space hotels; but of course any large space base that hosted a hotel could do double duty as a launch platform for space-based missiles that could kill far faster than ground-based attacks, and may be undetectable until too late. Even the ISS only orbits at a height of 249 miles, a distance that on the ground would be nearby enough to seriously alarm an enemy – and that does not even count the fact that, unlike Earth-based missiles, gravity would be helping, not hurting, the weapon’s takeoff and impact velocity, and hence time to target. Indeed, even unarmed missiles from space could easily pack a wallop equivalent to a small nuclear weapon – the military calls them “Rods from God” (Shainin, 2006). Aggressive military planners have long termed space “the ultimate high ground” and dreamed of using it to launch devastating preemptive attacks. Indeed, serious military strategists propose that the US formalize space warfare into a separate branch of the military, like the Army or Navy, in claiming that “America Needs a US Space Corps” (Smith, 2017). To be clear, space-based attacks would not even have to use kinetic force; for example, a cyberattack on satellites could disrupt an enemy’s communications and weapons guidance systems; and a cyberattack on a large space-based solar power (SBSP) array could cut off power to one’s enemy, or suddenly darken


(or illuminate) an unexpected area of Earth’s surface, or even modulate the microwaves that beam the power down to Earth into an intensity that could harm humans. Both the US and China have worked on microwave weapon systems as part of their research on “directed energy” weapons (Fingas, 2014). Prominent politicians have already floated such plans. For instance, in his 2012 presidential campaign, Newt Gingrich wanted the US to have a manned base on the Moon by 2020. Later NASA feasibility studies endorsed the idea, and the Constellation program had it as one of its goals (Whittington, 2015). Dual-use issues were clear in Gingrich’s pitch. He broached the idea that the Moon colony would be the “51st state” and made clear the potential dual-use aspects of such a colony when he said “We will have commercial near-Earth activities that include science, tourism, and manufacturing, and are designed to create a robust industry precisely on the model of the development of the airlines in the 1930s, because it is in our interest to acquire so much experience in space that we clearly have a capacity that the Chinese and the Russians will never come anywhere close to matching” (Gingrich, 2012). But it would be costly to keep humans permanently on a lunar colony: not even counting the initial costs of getting there and construction, Phil Plait (2012) estimated simply maintaining even a small colony would take at least $7.4 billion per year, over 1/3 of NASA’s budget. Moreover, any military activities on a lunar base would explicitly violate the Outer Space Treaty (1967). So, we need a red line analysis: what constitutes war in space, as opposed to a merely civilian use? What is an (il)legitimate dual use in space? Is it the same as on Earth? It certainly seems not, as the simple physics of space mean that acts that would be relatively innocuous on Earth (like launching a javelin-shaped tungsten rod) pack a far different wallop when descending on an enemy from orbit at 17,000 mph: the so-called “Rods from God”.
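A rough calculation shows why orbital velocity, not any explosive payload, does the work in such kinetic weapons. The arithmetic below is standard physics offered as my own illustration, not a figure taken from the chapter, and it ignores atmospheric drag, which would reduce the actual impact velocity.

```latex
% Kinetic energy per kilogram for an object arriving at orbital speed:
\[
v \approx 17{,}000\ \text{mph} \approx 7.6\ \text{km/s}, \qquad
\frac{E_k}{m} = \tfrac{1}{2}v^{2} \approx \tfrac{1}{2}\,(7600\ \text{m/s})^{2} \approx 2.9\times10^{7}\ \text{J/kg}
\]
\[
\text{TNT releases} \approx 4.2\times10^{6}\ \text{J/kg}, \ \text{so each kilogram of rod arrives with roughly } 7\ \text{kg of TNT-equivalent energy}
\]
```

A multi-ton tungsten rod therefore arrives carrying many times its own mass in TNT-equivalent energy purely from its speed, which is why the red line between an innocuous object in orbit and a weapon is so hard to draw.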

Scenarios, Including Ultimate Scenario A primary goal of this chapter is to help us avoid the First Generation problem in space, a problem more generally for technology ethics (Abney, 2019). The problem: does a first generation of users of a new technology always need to suffer or even die in order to accrue objective evidence that the risk of its use is real and unacceptable? This problem is especially pronounced in space, in which many near-term scenarios will be occurring for the first time in the near future, and the perils are still underexplored. We clearly have a need to do what is called “anticipatory ethics”. The following scenarios are introduced as thought experiments in the beginnings of an attempt to do such anticipatory ethics for space war. Scenario 1: A private company (e.g., Planetary Resources) proposes to bring a very large, heavily metallic asteroid into the Earth’s orbit, in order to lower the cost of mining it. For example, a single platinum-rich 500-meter-wide asteroid would contain approximately 174 times the yearly world output of platinum, and 1.5 times the known world reserves of platinum group metals (also including ruthenium, rhodium, palladium, osmium, and iridium). This amount could fill a basketball court to four times the height of the rim. By contrast, all of the


platinum group metals mined to date in history would not reach waist-high on that same basketball court (Planetary Resources, 2018). To make such asteroid capture, movement, and mining in Earth orbit remotely plausible and cost-effective, it likely will not have astronauts involved; instead, an AI-powered robotic tow spacecraft would capture the asteroid and then use its motors to create a delta-V so as to reposition it in the Earth’s orbit, and then robotic miners would do the actual digging and robotic delivery vehicles would take it down to Earth (or wherever the precious metals were destined to be used). Should a private company be allowed to give Earth another moon? Or moons? And then gradually disassemble it, with associated risks of having pieces of the asteroid plunge to Earth? What if maneuvering/mining the asteroid used nuclear fission for propulsion? Or used fission (A-bombs) for blasting or other mining operations? What about nuclear fusion (H-bombs)? To be clear, an expansive interpretation of the Outer Space Treaty may give private companies some leeway here. In 2015, President Obama signed the US Commercial Space Launch Competitiveness Act (H.R. 2262), which recognizes the right of US citizens to own asteroid resources they obtain and encourages the commercial exploration and utilization of resources from asteroids. Planetary Resources (2018) trumpets the law as a key milestone in their roadmap to access asteroid resources for commercial use in space. Scenario 2: Bigelow Aerospace and Virgin Galactic both plan to offer space tourism, and suppose in 2020 both are ready to begin work on flights to lowEarth orbit to a commodious new private satellite, meant to be their new space hotel. Suppose their studies have led both companies to want the same orbital slot for their new space hotel, in geosynchronous orbit over Washington, DC, with a view of the entire US’s East Coast. How should it be decided which, if either, should occupy that orbital slot? Are there unacceptable dual-use or security concerns in occupying a slot directly over a nation-state’s capital city? What if it was directly over Minot, North Dakota, home of nuclear ICBMs? Scenario 3: Already contracting with NASA, SpaceX and Blue Origin both also want to contract their rockets to help private companies haul goods to and from orbit. What regulations should they have to abide by? What cargos should they be (not) permitted to carry? Scenario 4: SpaceX and MarsOne both have plans for Mars settlement and colonization. Should both be allowed to go forward unimpeded? Suppose they both decide on the same area on Mars for their colony – how shall it be decided who gets the prime spot? Is it simply whomever gets there first? How would their claims to ownership of land of Mars be adjudicated? Would Earth governments have any say over the government of their colonies? Should they? Let us examine a few scenarios in more detail, including a final one that involves existential risk. Asteroid Mining and Exploration – Is It Clearly Civilian? Or could asteroid mining and exploration be dual-use? The asteroid 101955 Bennu has a mean diameter of approximately 492 m. It has an Earth-crossing


orbit, and has an estimated 1-in-2,700 chance of impacting Earth between 2175 and 2199, which would result in immense devastation. It will approach within 460,000 miles of Earth on September 23, 2060 (NASA, 2019). Currently NASA has a probe, the OSIRIS-Rex, which is investigating the asteroid. On June 18, 2019, NASA announced that OSIRIS-REx managed to capture a picture shot at a distance of a mere 0.4 miles (0.64 km) above Bennu’s surface. This mission further plans to land on Bennu and take a sample of the asteroid regolith. The sample return to Earth is planned to land in 2023. It would take only a small bump in Bennu’s trajectory to turn its near miss in 2060 (or even earlier) into a collision with Earth, or with any satellite or other space platform in near-Earth orbit. NASA has no public plans to weaponize Bennu – but they could. Should they be allowed to make such plans? And of course, America and NASA are not the only space organizations that could engage in such asteroid retargeting. Can we be sure the space programs of China, Russia, or even India will not make such plans? Could a space conflict between Russia and a NATO nation, or China and Taiwan, escalate from a commercial to a military engagement?

Could Tourist Spaceflight Be Dual-use? Could Virgin Galactic or another company dedicated to giving joyrides to space for extremely rich tourists be a dual-use technology? Some folks at the Pentagon certainly think so; they reportedly planned for the same space planes that take civilians joyriding to also transport UAVs or even human troops to a distant battlefield quickly. In theory, SUSTAIN (an acronym for "Small Unit Space Transport and Insertion" (Axe, 2014)) could deploy forces from the US to anywhere in the world within two hours. Flying at suborbital altitudes, SUSTAIN theoretically would be invulnerable to enemy air defenses, and could avoid violating the national airspace of countries bordering the war zone. SUSTAIN was supposed to be incognito: disguised as part of a venture for lifting tourists into space, like Virgin Galactic. The dual-use would be simple: to change from space tourism to war, simply switch out the passengers and retarget the coordinates. But (officially, at least) SUSTAIN is not being developed, so shock troops descending from space are on hold outside of movies like Starship Troopers. However, the US does have the X-37 robotic space plane. It stays aloft for weeks to months, and officially carries no weapons, but if it did, it could launch "rods from God" or any other weapons system that would fit in its hold almost anywhere in the world in a matter of hours. The Russians are trying to build a similar space robot with the capacity to fire nuclear weapons anywhere around the world within two hours – an ability they believe the X-37 may already have (Axe, 2016). Nonetheless, even the X-37 is not destabilizing in the way the SUSTAIN program would be; having AI-enhanced human astronaut or robotic troops ready to swoop down on any battlefield from space at any moment raises fundamentally different strategic concerns from those associated with mere (sub-)orbital flying


robots. Generally, dual-use concerns are exacerbated by a human presence in space. After all, the civilian astronauts or tourists may secretly be spies or soldiers preparing an attack from the ultimate high ground. In addition to personally causing an attack, humans may accomplish nefarious ends by stealth: they may be able to override whatever safety measures are in place, either by an in-person cyberattack, or even by physically overriding or destroying security features of spacecraft. Humans could also engage in other kinds of subterfuge undetectable from the ground; they could reorient satellites, or change their orbit to encounter debris, and so on. These concerns would be alleviated somewhat by only having robots in space; civilian robots could still be hacked and repurposed for military attacks, but short of that, dual-use problems with civilian spacecraft are minimized when no humans, only robots, are allowed into space.

The Ultimate Scenario – Existential Risk Suppose alien robots attack, clearly intending to wipe out humanity (as in many sci-fi stories and movies, such as Independence Day (1996)) – would there be any jus in bello restrictions when the survival of humanity is at stake? Michael Walzer’s version of traditional JWT seems to make exceptions to jus in bello constraints in cases of “Supreme Emergency” (Walzer, 2006). Many authors (e.g., Cook, 2007; Toner, 2005) think Walzer is wrong about this. But I believe Walzer is correct, at least in cases of existential risk, and hence the independence thesis must be false. A utilitarian argument for the conclusion may be obvious, but traditional JWT is deontological, not utilitarian. But I believe an absolute deontological principle also supports the conclusion. Here is the argument: Deontologists routinely distinguish between prima facie (sometimes termed pro tanto) duties, which hold unless they are overridden by some other, competing duty; versus absolute duties, which hold no matter what. It can sometimes be ethical to violate a prima facie duty, if upholding it would violate some other, equally or even more important duty. But it is always unethical to violate an absolute duty; it takes precedence over every other obligation one could have. So, understanding an absolute duty is crucial to ethics – if any exist. Various moral theories claim they do, but differ as to what they are. The most plausible way of justifying that a duty is absolute is to argue that it is required for morality itself to exist (Abney, 2019). That is, any duty that conflicted with such an absolute duty could not be ethically required because it would do away with ethical requirements! What kind of duty could itself be morally required for morality itself to exist? Well, both I (e.g., Abney, 2017a, 2017b; Abney, 2019) and Brian Green (2019), following the work of Hans Jonas, have argued that humanity’s continued existence is such an absolute duty. If so, we can formulate as a corollary a plausible absolute duty: the Extinction Principle (Abney, 2019): “one always has a moral obligation never to allow the extinction of all creatures capable of moral obligation.” It then is an absolute duty to keep things capable of obeying absolute duties in existence. Accordingly,


mitigating existential risk is an absolute duty, which wins any conflict it has with any other duty. If some foreseeable space war minimizes existential risk, then it is our highest duty, and trumps any conflicting obligation rooted in jus in bello. Virtue ethics may yield a different emphasis than deontological or utilitarian approaches; it’s plausible that a virtue ethicist might insist that an obsession with decreasing existential risk to the detriment of other aspects of human flourishing betrays a flawed, even vicious, character. But for the deontological Extinction Principle or a standard version of expected utility, decreasing existential risk trumps all other considerations. Without invoking alien, robotic attackers, we can wonder about other humans that could pose a doomsday risk – for example, a religious cult that believes bringing about Armageddon will result in their adherents going to heaven, and has access to space-based weapon systems? Should we have any jus in bello restrictions on stopping/defeating them?

Conclusions For AI and space war, how can we update traditional Just War Theory (JWT) to meet the novel challenges?

New Principles For now, the principal challenges that JWT poses for the use of LAWS lie in the principle of distinction or discrimination, which seems largely beyond the technical abilities of AI-enabled autonomous weapons. To address such concerns, and given the military necessity (because of latency issues) of LAWS in space, I first advocate a new principle, which I call the Discrimination Command principle: Identify the military official with control over LAWS (typically the CO who chooses to use LAWS in battle). Unless that official (or someone of equivalent or higher rank) is willing to participate in a realistic simulation of LAWS as a noncombatant before deployment, the actual deployment of LAWS in war is immoral. So, unless the officer in charge (up to and including generals!) is willing to assume the role of a civilian in a realistic simulation, the military should not use LAWS. This principle seems amply justified from a Kantian perspective, based on the categorical imperative's requirement of universalization and of not making oneself an exception to a rule that everyone should follow. Second, I also advocate the Metaethical principle of moral luck: any ethical theory that incorporates moral luck is wrong. The implications for space-based JWT are that the independence thesis is strictly false (though sometimes approximately true) – the question is the degree of (in)dependence of any factor on the others; and that may change, depending on the details of the warfighting scenario and upon one's own (il)legitimate role in the conflict.


Accordingly, excluding moral luck is a necessary but not a sufficient condition for sound moral assessment. The existence of AI/LAWS in space does complicate interdependence, since (for now) such systems have no moral agency or responsibility. To address this, we need a third principle: our concept of "military necessity" should be widened to assess the degree of moral necessity of the just side winning (for a lasting peace and a flourishing postwar society). Merely requiring the condition of a lasting peace allows scenarios that end in a stable but horrible totalitarian outcome. As an example, take World War II or the Cold War: what was justified to ensure that the Nazis, Stalinist communism, or other totalitarian regimes did not take over the world? So, with proper safeguards, a just state could use LAWS to enforce a lasting peace and a flourishing postwar society. The problem, of course, is that the military superiority of AI/LAWS means that an unjust state could use LAWS to enforce its tyranny without possibility of revolt. Citizens armed with guns cannot defeat, e.g., poisonous drone swarms or space-based weapons platforms.

Final Conclusion To stop an unjust state from using new technology to impose indefinite tyranny, there should be no jus in bello restrictions on LAWS or any other technology needed to avoid this worst-case outcome, up to and including the possible necessity of AI-directed war in space. A just state could use LAWS to enforce a lasting peace and a flourishing postwar society, re Kant's dream of perpetual peace (Abney, 2012); an unjust state could use LAWS to enforce its tyranny without possibility of revolt – what good would shotguns and pistols be against an army of missile-firing drones, or swarms of tiny poisonous drones? In the not so distant future, such technologies will be extant, indeed widespread; and just states must be willing to do whatever it takes to ensure that they do not allow a permanent tyranny to take root.

References
Abney, K. (2004). Sustainability, morality and future rights. Moebius, 2(2), 23–32. Retrieved from http://digitalcommons.calpoly.edu/moebius/vol2/iss2/7/
Abney, K. (2012). Robotics, ethical theory, and metaethics: A guide for the perplexed. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
Abney, K. (2017a). Robots and space ethics. In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0 (pp. 354–368). New York, NY: Oxford University Press.
Abney, K. (2017b, July 26). On the ethics of cyberwar. Communications of the ACM. Retrieved from https://cacm.acm.org/blogs/blog-cacm/219696-on-the-ethics-of-cyberwar/fulltext
Abney, K. (2019). Ethics of colonization: Arguments from existential risk. Futures, 110, 60–63. ISSN 0016-3287. doi:10.1016/j.futures.2019.02.014
Axe, D. (2014). The Pentagon's plan to put robot marines in space. The Week, July 9. Retrieved from https://theweek.com/articles/445664/pentagons-plan-robot-marines-space
Axe, D. (2016). Russia is building a nuclear space bomber. The Daily Beast, July 14. Retrieved from http://www.thedailybeast.com/articles/2016/07/14/russia-is-building-a-nuclear-space-bomber.html
Bekey, G. (2012). Current trends in robotics: Technology and ethics. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
Bostrom, N. (2014). Superintelligence. New York, NY: Oxford University Press.
Cook, M. L. (2007). Michael Walzer's concept of 'Supreme Emergency'. Journal of Military Ethics, 6(2), 138–151. doi:10.1080/15027570701381948
Fingas, J. (2014). China has a microwave pain weapon of its own. Engadget. Retrieved from https://www.engadget.com/2014/12/09/china-poly-wb-1-microwave-gun/
Gingrich, N. (2012). Campaign speech in Cocoa, FL, on January 25. Retrieved from https://abcnews.go.com/Technology/newt-gingrich-promises-moon-base-flights-mars-reality/story?id=15449425
Green, B. (2019). Self-preservation should be humankind's #1 ethical priority and therefore rapid space settlement is necessary. Futures, 110, 35–37.
Korsgaard, C. (1983). Two distinctions in goodness. The Philosophical Review, 92, 169–195.
Lupisella, M. L. (2010). Cosmocentrism and the active search for extraterrestrial intelligence. Astrobiology Science Conference 2010. Retrieved from https://www.lpi.usra.edu/meetings/abscicon2010/pdf/5597.pdf
Marr, B. (2018). The key definitions of artificial intelligence (AI) that explain its importance. Forbes, February 14. Retrieved from https://www.forbes.com/sites/bernardmarr/2018/02/14/the-key-definitions-of-artificial-intelligence-ai-that-explain-its-importance/#478311404f5d
Nagel, T. (1979). Mortal questions. New York, NY: Cambridge University Press.
Obama, B. (2015). U.S. Commercial Space Launch Competitiveness Act (H.R. 2262). Retrieved from https://www.planetaryresources.com/2015/11/president-obama-signs-bill-recognizing-asteroid-resource-property-rights-into-law/
Outer Space Treaty. (1967). U.S. Department of State. Retrieved from https://www.state.gov/t/isn/5181.htm
Plait, P. (2012). The Newtonian mechanics of building a permanent moon base. Bad Astronomy, January 27. Retrieved from http://blogs.discovermagazine.com/crux/2012/01/27/the-newt-onian-mechanics-of-building-a-permanent-moon-base/
Planetary Resources. (2018). Why asteroids? Retrieved from https://www.planetaryresources.com/why-asteroids/
Russell, S., & Norvig, P. (2002). Artificial intelligence: A modern approach. New York, NY: Prentice Hall.
Shainin, J. (2006). Rods from God. New York Times, December 10. Retrieved from http://www.nytimes.com/2006/12/10/magazine/10section3a.t-9.html
Singer, P. (1974). All animals are equal. Philosophic Exchange, 5(1), Article 6.
Smith, M. V. "Coyote". (2017). America needs a U.S. space corps. War is Boring. Retrieved from https://warisboring.com/america-needs-a-u-s-space-corps-ab79bebe93eb#.8dc3qrwpd
Sony Corporation. (2018). Sony Group AI ethics guidelines. Retrieved from https://www.sony.net/SonyInfo/csr_report/humanrights/hkrfmg0000007rtj-att/AI_Engagement_within_Sony_Group.pdf
Sparrow, R. (2012). Can machines be people? Reflections on the Turing Triage Test. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
Thompson, D. (2019, June 29). ISME Keynote address on the ethics of space war. Colorado Springs, CO.
Toner, C. (2005). Just war and the supreme emergency exemption. The Philosophical Quarterly, 55(221), 545–561.
Turing, A. M. (1950, October). Computing machinery and intelligence. Mind, LIX(236), 433–460. doi:10.1093/mind/LIX.236.433
Vallor, S. V., & Bekey, G. (2017). Artificial intelligence and the ethics of self-learning robots. In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0 (pp. 354–368). New York, NY: Oxford University Press.
Veruggio, G., & Abney, K. (2012). Roboethics: The applied ethics for a new science. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
Walzer, M. (2006). Just and unjust wars: A moral argument with historical illustrations (4th ed.). New York, NY: Basic Books.
Whittington, M. (2015). How Newt Gingrich's moon base became 'pretty cool'. The Hill, October 21. Retrieved from http://thehill.com/blogs/congress-blog/technology/257501-how-newt-gingrichs-moon-base-became-pretty-cool


Chapter 5

Building an Artificial Conscience: Prospects for Morally Autonomous Artificial Intelligence William D. Casebeer

Abstract Discussions of ethics and Artificial Intelligence (AI) usually revolve around the ethical implications of the use of AI in multiple domains, ranging from whether machine learning trained algorithms may encode discriminatory standards for face recognition, to discussions of the implications of using AI as a substitute for human intelligence in warfare. In this chapter, I will focus on one particular strand of ethics and AI that is often neglected: whether we can use the methods of AI to build or train a system which can reason about moral issues and act on them. Here, I discuss (1) what an “artificial conscience” consists of and what it would do, (2) why we collectively should build one soon given the increasing use of AI in multiple areas, (3) how we might build one in both architecture and content, and (4) concerns about building an artificial conscience and my rejoinders. Given the increasing importance of artificially intelligent semi- or fully autonomous systems and platforms for contemporary warfare, I conclude that building an artificial conscience is not only possible but also morally required if our autonomous teammates are to collaborate fully with human soldiers on the battlefield. Keywords: Ethics; artificial conscience; moral judgment; moral development; heuristics; autonomy; morality

Introduction: Ethics of Artificial Intelligence Versus Artificial Intelligence Ethics? Discussions of ethics and Artificial Intelligence (AI) range across a variety of issues important for both the general public and the national security professional. How do we ensure that our algorithms for determining who is a good risk when extending a loan make fair and reasonable decisions? What should be a proper "training set" for teaching a learning algorithm how to identify a face so that we don't build in race bias from the start? What rate of false positives and missed tumors should we tolerate in an algorithm that examines X-rays looking for cancer cells? Should targeting algorithms in semiautonomous missiles be tested until we can be one hundred percent confident they will never make an error? What parts, if any, of military decision-making – strategic, operational, and tactical – can we delegate to AI? And so on. These are all pressing, and in some cases difficult, questions. The bulk of them, however, are issues in applied ethics – "the ethics of AI," if you will. That is, they deal with questions about ethical issues that surface when we apply AI to a particular area or domain; they deal with questions of consent, character, and consequence especially. Many of these questions have been discussed in detail in public fora and in scholarly journals. Answering them is an important exercise in applied ethics. In this chapter, however, rather than talking about the ethics of AI, I want to instead explore "artificially intelligent ethics" – that is, I want to talk about how we can use AI to build ethical systems, whether those systems be robots, human-machine teams, sets of algorithms, or semiautonomous weapons platforms. This chapter is primarily an exercise in "AI ethics," not in the "ethics of AI." This is an important distinction to make, as focusing only on the applied ethics issues can cause us to neglect very important questions about the epistemology and metaphysics of moral judgment and development – that is, questions about what moral knowledge is and how we come to have it, and questions about the nature of morality such that we think it is real and substantive, and not an illusion or chimerical.1 Our answers to these big picture questions in turn have important practical upshot – if I think, for example, that morality is "skilled performance" versus "abstract reasoning about principles," I might have very different answers about some of the "ethics of AI" questions mentioned earlier. The primary AI ethics questions I will discuss here are: What is an artificial conscience? Why would we want to build one? How would we build it? And what are some of the practical and moral concerns we might have about doing so? I will briefly address each of these questions in turn. Building an artificial conscience is possible, can be done with variations on many already existing AI techniques, and is not only ethically permissible but something we are morally obligated to do given the very real power already wielded by semiautonomous and autonomous algorithms and platforms.

1. For practical ethical issues in AI, see Lin, P. et al. (Eds.). (2014). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press. For an early and fascinating discussion of AI ethics, see Wallach, W., & Allen, C. (2008) in Moral machines: Teaching robots right from wrong. Oxford; New York, NY: Oxford University Press.


What Is an Artificial Conscience? As human beings, we regulate our conduct in some ways, choosing and being influenced in ways that determine what action we take or what kind of person we become. One definition (and hence theory) of what a "conscience" is takes it that it is the faculty that we use to help us make these decisions, and that motivates us to act on these decisions once we've made them. I might, for example, know that I generally shouldn't lie but in a moment of weakness tell a lie so as to escape a punishment, and my conscience will tell me both what I ought to have done and make me feel guilty for not doing it. Conversely, when my conscience dictates that I speak the truth and I do so despite the threat of punishment for doing so, I might feel noble pride at having done what my conscience demanded. A larger theory of what a conscience is would note its connection to the entire web of cognitive states and actions that are woven through practical moral decision-making. In other words, a person "with a conscience" or "of conscience" demonstrates the requisite moral sensitivity, moral judgment, moral motivation, and moral skill. A person with a good conscience knows what facts are morally relevant, combines those facts with good principles to make moral judgments, is motivated to act on those judgments, and can carry those actions out in a way that respects them. She knows, for instance, that when pressured to lie for no good reason, the pressures are morally irrelevant, and that good judgment requires that she refuse to lie; she acts on that judgment despite the pressure, and she carries off the resistance to pressure in such a way that the principles behind the action inspire others to follow her in telling the truth. Sensitivity, judgment, motivation, and skill are all combined.2 Regarding moral judgment, a reasonably well-put together human conscience considers the kinds of principles of good reasoning embedded in our great traditional moral and ethical theories from eastern and western traditions, reflecting the natural lawyer's point that the findings of solid reasoning and the outcomes from many of our faith traditions ought to coincide. In much the same way, then, an artificial conscience would serve as a decision aid for human-in-the-loop autonomous systems, and would be a critical faculty that an entirely autonomous system would need to possess. It would use AI methods to allow a system to embody some combination of moral sensitivity, judgment, motivation, and skill, either metaphorically if our artificial cognitive systems don't possess those things, or literally if they do. And some of the principles either explicitly contained in a reasoning system or "immanent" in

2. This is a variation of a framework offered by the moral psychologist James Rest. I am in his debt. See, e.g., Rest, J. et al. (1986) in Moral development: Advances in research and theory (pp. 1–39). New York, NY; London: Praeger.


it – emerging from its operation – will likely make reference to the “three Cs” of character, consent, and consequence.3 Before briefly discussing how we might build such an artificial conscience and what its content and architecture might look like, we should review arguments regarding the necessity of building one. Assuming an artificial conscience is possible, why build one?

Why Build an Artificial Conscience? While in general I think the systems we build should encourage moral behavior and be morally praiseworthy, in this chapter I will focus on the defense context given the nature of this volume. The points I will make regarding an artificial conscience apply more generally, but are even more forceful in the context of military action and the use of force. Four important reasons to build an artificial conscience include: (1) that the use of autonomy in warfare is for evolutionary reasons a foregone conclusion, so we need to get "ahead of the ball" by building ethical reasoning into these systems from the beginning, (2) that US and Allied military doctrine requires it, (3) that there will be national security-oriented competitive reasons to do so such that we prevent our adversaries from capitalizing on its absence, and (4) because morality demands it. Let's discuss each of these in turn so that our exploration of an artificial conscience is appropriately motivated. First, there is ample evidence that deploying some combination of autonomous algorithms and platforms, semiautonomous machines, and human-machine teams and hybrid centaurs onto the battlefields of the future will provide a competitive advantage to the states and groups that do so. Human-machine teaming can serve as a force multiplier, allowing us to make more effective use of our most valuable resource – the creativity and ingenuity of our soldiers, sailors, airmen and women, and marines. Multiple laboratories and testbeds have demonstrated this already, and in several domains we already have mostly autonomous or semiautonomous platforms operating to great military effect (satellites, unmanned or remotely piloted aerial vehicles, missiles, etc.). There are no signs that this trend is slowing, and with appropriate technical development, militaries of the future that don't leverage autonomy and human-machine teaming will be at a severe competitive disadvantage. Given these developments, it is incumbent upon us to develop the technologies that we know will be required to employ these platforms and teams in a morally permissible manner. This act will afford us the opportunity to prevent scenarios where we fail to build in the safeguards needed to forestall worst-case outcomes. Rather than pasting ethical governors and

3. I am indebted to Col (ret) Charles Myers for his clear and crisp articulation of the three Cs in his article "The Core Values: Framing and Resolving Ethical Issues for the Air Force." Airpower Journal (Spring 1997). Retrieved from https://apps.dtic.mil/dtic/tr/fulltext/u2/a529801.pdf. I have used this framework in other presentations and articles to discuss moral issues. See, e.g., Casebeer, W. (2003 October). Moral cognition and its neural constituents. Nature Reviews Neuroscience, 4.


artificial consciences on these systems after the fact, we should get ahead of the issue by developing them early and in tandem with autonomy. Second, nation-states and groups concerned to use force in an ethically permissible manner have already recognized this fact, and have science and technology need and requirement documents which discuss moral reasoning in autonomous systems. For example, the US Office of Naval Research has an active research program exploring moral reasoning in autonomous machines, and the US Army has a science and technology research requirement for exploring moral reasoning in ground autonomy. The director of the Pentagon's Joint Artificial Intelligence Center (JAIC), LtGen Jack Shanahan, has recognized the need for incorporating ethical concerns into the use of AI and is seeking a dedicated ethicist for the JAIC.4 Similarly, watchdog groups and nongovernmental organizations, such as the Federation of American Scientists and others, have called for autonomous systems to possess moral reasoning capability if they are to be deployed. US and allied doctrine and science and technology requirements documents either explicitly, or implicitly through their emphasis on the laws of armed conflict and desire to use force justifiably, require us to develop and deploy something like an artificial conscience. Third, there are purely operational and strategic considerations in favor of developing and deploying an artificial conscience. In the past two decades, there was considerable discussion of the "strategic corporal" – the individual who despite not being high in the official chain of command is making decisions that have strategic upshot for whether wars are won or not. This is because seemingly small tactical decisions can have outsized strategic impact, especially in an environment where transparency and international press can quickly magnify the consequence of poor moral decision-making on the battlefield. For example, in the Iraq war, an entire platoon was rendered combat ineffective as an investigation was conducted regarding alleged killing of noncombatants. While the allegations proved to be false, the net effect was the same: US soldiers were off the battlefield rather than on it because of the effect of this misinformation.5 Where we fail to build moral safeguards and ethical regulators into our semiautonomous systems, it will be more difficult to police misinformation regarding the behavior of our human-machine teams. The concept of the strategic corporal applies to our algorithmic and semiautonomous "strategic corporals" as well.6 Finally, whereas strategic and prudential considerations might require us to develop an artificial conscience, moral considerations require it as well. If an artificial conscience can help the soldiers of the future win wars more quickly and fight them more cleanly, minimizing collateral damage and giving moral bite to

4. See, e.g., https://www.defense.gov/explore/story/Article/1950724/dod-seeks-ethicist-toguide-deployment-of-artificial-intelligence/. Accessed on September 3, 2019.
5. See Waltzman, R. (2017). The Weaponization of Information: The Need for Cognitive Security. Testimony before the Senate Armed Services Committee. Retrieved from https://www.rand.org/content/dam/rand/pubs/testimonies/CT400/CT473/RAND_CT473.pdf.
6. The concept of "strategic corporal" was introduced by General Charles Krulak (USMC) in 1999.


the principles behind the laws of armed conflict, there is a sense in which the moral enterprise which causes us to come to the defense of the innocent and of liberty to begin with also requires us to develop and deploy an artificial conscience. We are "just warriors" when we use violence only as a last resort, against combatants and not innocents, and for the betterment of all, and while embodying the virtues implicit in the cardinal virtue of justice. These are all moral considerations. These same moral considerations don't just permit but actually require us to develop an artificial conscience. An ethicist would say that "ought implies can." That is: we can't be required to do what we aren't capable of doing, so the obligation holds only if an artificial conscience can actually be built. In the next section, I discuss both the potential functional architecture and content of an artificial conscience. We can and ought to develop it.

How Would We Build an Artificial Conscience? Two critical questions regarding an artificial conscience are: (1) what would its architecture be, and (2) what kind of content would it contain? The first question is one of how we ensure that the moving parts of an artificial conscience fit together so that it can enable the requisite capacities. The second question is likely the more controversial one: Where would the content for the conscience come from and how would we integrate it into the architecture? The best perspective to take here is broadly functional: What functions do I want the architecture of the network to embody? Recall our earlier discussion of four general capacities that enable us to be moral actors: moral sensitivity, judgment, motivation, and skill. Let’s discuss how each of these in turn can be embodied in an architecture. Moral sensitivity is what allows a good conscience to be sensitive to some facts and circumstances and not to others. Moral sensitivity is the capacity to sense and recognize morally relevant facts, and to know that moral principles or concerns are present in the current circumstance. For instance, a morally sensitive agent would recognize when others are suffering or when there is a likelihood that suffering will result from an action they or others take. Depending on the content of the artificial conscience, morally relevant facts may include whether the agent or another’s actions are causing harm or suffering, whether other agents are involved in the unfolding action consensually (implicitly or explicitly) or if they are present unwillingly, and an understanding of how current circumstances represent functional or dysfunctional states for agents and tools (say, by knowing that in general not being able to control one’s appetites is not good for a person). Some notions of moral sensitivity also include in this capacity an ability to know what general rules, principles, or concerns are at risk or relevant in current circumstances. For instance, an unequal distribution of goods might indicate that an unjust procedure was used to allocate them and hence there might be issues of justice present if, through no fault of their own, one person is starving while another is living large. These rules of moral salience as Barbara Herman calls them may also be part of moral judgment (see, e.g., Herman, 1997).


Moral judgment is the capacity to be able to reason from a given set of facts and morally salient principles to reach a conclusion regarding what ought to be thought or done. Professional ethicists and computer scientists currently working in AI ethics tend to focus on the complexities of good moral judgment. A conscience with good moral judgment would know, for example, that in general the taking of innocent life is not morally justifiable; if presented with a circumstance where the agent's part in the taking of life is about to cause the death of an innocent person, it would recommend against doing that. Most courses in professional ethics for members of the military focus on issues in moral judgment: When is it right for us to use violence to achieve a goal? What kinds of character traits should we try to develop in our warriors so that they are the best they can be? When can you justifiably deceive an adversary? Moral judgment consists in the capacity to move from considerations of fact and circumstance in conjunction with knowledge of what right action consists in more generally to help appropriately guide the actions of an agent or team. Moral motivation consists in being motivated to do that thing that moral judgment recommends. For biological creatures like you and me, that might consist of having the right set of desires or appetites that align nicely with the outputs of my faculty for moral judgment. "Tests of character" usually consist in noticing that the outputs of our judgment and our desires are misaligned; I know that I shouldn't deceive my boss, but I nonetheless do so because I don't want to be punished for missing the work deadline or I don't want to look bad in front of my colleagues. Some of our most admired exemplars in moral action are people who take a stand for right action or appropriate principle, giving up their life in return – an ultimate test of character given that a primary motivation for biological organisms is to stay alive. Examples include war heroes (many Medal of Honor winners), Jesus Christ, Galileo, Buddha, Martin Luther King, or others who give up life and limb for the greater good or important principles, or who avoid taking the easy path of giving in to societal pressure – these figures are often celebrated in faith traditions around the world. Moral skill consists in being able to carry out the dictates of the conscience in a way that satisfies the demands that morality makes on us. For positive duties especially (where we are morally obligated to take action to make something happen, for example, by coming to the rescue of the innocent when their lives are threatened) the possession of the right kinds of skills is critical. If I have an obligation to prevent the suffering of innocent people, and am the kind of agent that can do so if I just exercise the skills I have, then I am obliged to develop and hone those skills. These skills are disparate – they depend on the context of action, but usually consist of things like the ability to reason instrumentally so as to know what particular set of actions to take to achieve an end or goal, and basic abilities like being able to extend a hand to help someone, or more complicated skillsets like being able to execute a set of social skills and actions that let you convince your teammate not to act rashly. Our artificial conscience should include the ability to reason about what skills need to be brought to bear and in what order so as to execute the dictates of moral judgment.
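As a purely illustrative aside (not part of the author's proposal), the four capacities just described can be pictured as stages of a single decision loop. The minimal Python sketch below assumes invented names, facts, rules, and thresholds chosen only for exposition; a real system would replace each stage with far richer sensing, reasoning, and control components.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Situation:
    """A toy world-state: free-form facts plus candidate actions."""
    facts: dict
    candidate_actions: List[str]


@dataclass
class MoralAppraisal:
    """Output of the sensitivity stage: which facts look morally relevant."""
    relevant_facts: dict
    concerns: List[str] = field(default_factory=list)


def sense(situation: Situation) -> MoralAppraisal:
    """Moral sensitivity: flag facts that carry moral weight (toy rule set)."""
    morally_salient = {"harm_likely", "consent_given", "noncombatants_present"}
    relevant = {k: v for k, v in situation.facts.items() if k in morally_salient}
    return MoralAppraisal(relevant_facts=relevant, concerns=sorted(relevant))


def judge(appraisal: MoralAppraisal, actions: List[str]) -> str:
    """Moral judgment: pick the first action that violates no toy prohibition."""
    for action in actions:
        if action == "engage" and appraisal.relevant_facts.get("noncombatants_present"):
            continue  # forbidden in this toy model: engaging with noncombatants present
        return action
    return "abort"


def motivate(chosen: str, self_preservation_cost: float) -> bool:
    """Moral motivation: commit to the judgment even at some cost to the 'self'."""
    return self_preservation_cost < 0.9  # balk only at near-certain self-destruction


def act(chosen: str, skills: dict) -> str:
    """Moral skill: dispatch to a lower-level capability able to carry it out."""
    return skills.get(chosen, lambda: f"no skill available for {chosen}")()


situation = Situation(
    facts={"noncombatants_present": True, "harm_likely": True},
    candidate_actions=["engage", "hold_and_observe"],
)
appraisal = sense(situation)
decision = judge(appraisal, situation.candidate_actions)
if motivate(decision, self_preservation_cost=0.2):
    print(act(decision, {"hold_and_observe": lambda: "holding position"}))
```

The point of the sketch is only that sensitivity, judgment, motivation, and skill occupy distinct places in the loop and can fail independently, which is exactly the alignment-and-misalignment structure discussed next.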


Note that in many cases these four capacities align and can be used nonproblematically. Many times, unless we are involved in a deeply immoral enterprise, we do not walk around stricken by pangs of conscience, or if we do, it is about issues around which epistemic “brackets” need to be thrown for the present (think of a contemporary controversial issue about which many reasonable people disagree). Also in most cases, sensitivity, judgment, motivation, and skill align – I sense relevant facts, know what the right thing to do is, am motivated to do it, and follow through with a set of skilled actions to do it. When these things come apart, then we have qualms of conscience or work to do if, for example, I know what the right thing to do is but have problems doing it because of akrasia – the Greek term for weakness of the will. So whatever architecture our artificial conscience has, it likely needs to be able to possess moral sensitivity, moral motivation, moral judgment, and moral skill. What content will be an explicit part of or at least implicit in the operation of this architecture? For many, besides the question of whether it is even possible to build an artificial conscience, this will be the critical fabled “million-dollar question.” Whose justice and which morality should be built into an artificial conscience? A good place to begin is with classic moral and ethical theories that reflect the grand traditions in moral discussions taking place worldwide. These ethical theories serve as the backbone for most courses in professional military ethics. These are the three Cs discussed earlier: the virtue theory of Aristotle, Plato, and Confucius (Character); the deontology of Immanuel Kant (Consent); and the utilitarian theories of John Stuart Mill (Consequence). Do my actions and states of mind tend toward enabling the flourishing of myself and others and embody the virtues while avoiding the vices (Character)? Do my actions and states of mind respect the obligations and duties I have to other moral agents and bearers of rights like people (Consent)? Do my actions and states of mind produce consequences that are conducive to the greatest amount of happiness for sentient creatures like people and animals (Consequence)? This chapter is too short to do anything other than refer the interested reader to introductions to this literature, and to note two things about these three Cs.7 First, the three Cs are generally agnostic regarding metaphysics – that is, most people from most faith traditions and moral approaches can see the power of considering these questions. In fact, the natural law tradition contained in Christian and Islamic holy texts, for example, emphasizes the importance of using our God-given gift of reason to be able to discover truths and rationales about morality which speak to any person of reason (irrespective of their particular faith or metaphysical belief). This is important, as it allows us to know when “the little voice in our head” (telling us to do something we would all otherwise agree is irrational or evil) is one we should ignore or see a doctor about, rather than one reflecting deep-seated wisdom and truth about the universe. Second, the three Cs capture lots of the wisdom contained in other

7. Rhodes, B. (2009). An introduction to military ethics: A reference handbook, Praeger, or any number of introductory textbooks in normative ethics.


ethical theories (existential questions about meaning can be discussed in the context of virtue theory, egoism can be discussed in the context of utility and virtue, and so on), and, in general, give similar advice in many circumstances about what ought to be done or thought. The truly difficult cases stem from tensions in these three Cs. In much the same way that “hard cases make bad law,” we wouldn’t want to start with circumstances and content that generate disagreement among reasonable people as we take first steps in building an artificial conscience – neither Rome nor a good moral agent were built in a day. Using the three Cs as a basis for the content of our artificial conscience, we could start by either training or encoding into our architecture the rules and laws of war that we already expect professional soldiers to follow in most circumstances most of the time. This set of rules and laws will vary according to the functional role played by each autonomous or semi-autonomous system. In much the same way that I might expect a Judge Advocate General to be much more adept at reasoning about difficult issues in military law than an infantryman (…while I might think the infantryman would have more skill in squad tactics and practical experience in making shoot/no-shoot decisions), we will have to strike a balance between the domain-general and task-specific moral knowledge we will have to embody in the artificial conscience depending on where it sits in the platform or system. Moral sensitivity can be embodied in an artificial conscience with the right combination of sensors and algorithms. If the utilitarian domain is especially important for most decisions the algorithms will be making, we will need multispectral sensors capable of detecting the types of activities that usually are affiliated with mission accomplishment. If the domain is more abstract, the artificial conscience might be more removed from the sensor layer and instead may be dealing with more abstract issues of consistency-checking between law of armed conflict considerations, mission protocol, and tactical objectives. In all cases the algorithms will need to eventually connect to both external context and the internal states or milieu of the agent – we need to be sensitive to both the state of the world and the state of the self, that way we will know, for instance, that my tiredness in conjunction with the cluttered nature of this combat environment will combine to make it more likely that I will make a misjudgment, so I need to exercise more caution when making shoot or no-shoot decisions. Moral judgment can be embodied in an artificial conscience via traditional symbolic computational approaches or via new-wave neural network and deep learning approaches. On the symbolic side, there are already logics that deal with the complexities of optional and forbidden actions – these deontic logics are well suited for reasoning about dimensions of consent or understanding when conflicts might be generated between duties that will require a higher-order principle to resolve. On the nonsymbolic side with neural network or connectionist approaches, researchers have made a great deal of progress in replicating the interaction of principles and exceptions via deep learning architectures. Moral motivation presents an interesting case for an artificial conscience, as artificial agents are generally assumed to be put together in such a way that they follow the dictates of higher-order reasoning modules or units. 
Motivation might be a type of special reasoning which considers how things like requirements to preserve the "self" – the platform or the team – interact with the dictates of judgment to cause the system to allow or even demand self-sacrifice. It might also be that by the time we actually engineer sufficiently complex architectures and systems such that the artificial agent becomes semi- or fully ("stage five") autonomous across a wide range of domains, we will discover that the problem of motivation comes to the forefront again when the artificial agent has to make decisions about what kind of role it will play in the system. Moral skill can be embodied in an artificial conscience by ensuring that the system has the requisite lower-level skills and capabilities needed to act in the way that the other parts of the conscience require. This could consist in skills as varied as much better automated target recognition systems or meta-algorithms that guide the agent toward switching between sensors and actuators in a context-sensitive fashion. Some of the skills that artificial agents will require will look just like those we humans possess. For instance, in order to possess social skill, it is likely our artificial agents and their consciences will need to implement a theory of mind module or set of algorithms. The skill we have to make inferences about each other's mental states ("You look angry – what have I done?") is very important for effective moral skill and the ability to move in functional ways in social systems. Our artificial agents will likely need these theory of mind skills as well (see, e.g., Casebeer, 2003, Chapters 4 and 5). In general, the architecture and content described thus far will come together in a fashion that generates morally praiseworthy action: (1) in an emergent fashion where the interaction of modules or parts generates morally praiseworthy behavior, (2) using traditional symbolic approaches (the predicate calculus, fuzzy logic, modal or deontic logic, etc.), or (3) using neural network and deep learning approaches (more biologically realistic approaches which resemble the way reasoning in natural systems usually occurs). To conclude this section, special note must be taken of the heuristics and biases research program in the study of human cognition. It is well known that humans reason using short-cuts which sometimes fail to follow the dictates of logic. In some cases, these quick and dirty methods lead us to generally good decisions or conclusions; in others, they don't. The following diagram displays the full range of heuristics and biases that have been explored in people.8 In general, they can be grouped into considerations about (1) what I should remember, (2) how I deal with too much information, (3) how I deal with a lack of meaning in the stimulus, and (4) my need to act quickly. The presence of heuristics and biases in moral reasoning and moral action does not undercut the program to build an artificial conscience by demonstrating it can't be done or will be too difficult; on the contrary, it highlights the need for us to think about morally useful heuristics and biases systematically, and to consider how we might embody or embed them in systems that will in all likelihood face these same sets of four problems. There is a rich research agenda in figuring out these interactions, and in building systems which will know when the heuristics and biases are ecologically valid and will lead to correct decisions given the task and environment, and which ones will not.

8. Diagram used via Creative Commons open license.

[Figure (not reproduced here): a diagram mapping the full range of documented cognitive heuristics and biases.]
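To make the discussion of symbolic moral judgment slightly more concrete, here is a hedged, minimal sketch of how deontic-style prohibitions and obligations could sit alongside simple consequence and character checks keyed to the three Cs. Every rule, weight, and name below is a hypothetical placeholder invented for illustration; nothing is drawn from an existing system or from the chapter's own designs.

```python
from typing import Dict, List

# Deontic-style rules: an action type is forbidden or obligatory under stated
# conditions (a crude stand-in for a genuine deontic logic, which would also
# need a resolution principle for conflicting duties).
FORBIDDEN = {
    "engage_target": lambda ctx: ctx.get("target_is_noncombatant", False),
    "deceive": lambda ctx: not ctx.get("counterparty_is_adversary", False),
}
OBLIGATORY = {
    "render_aid": lambda ctx: ctx.get("innocent_in_danger", False)
                              and ctx.get("aid_is_feasible", False),
}


def consequence_score(ctx: Dict) -> float:
    """Consequence check: a toy expected-harm estimate in [0, 1]."""
    return 0.8 * ctx.get("collateral_risk", 0.0) + 0.2 * ctx.get("mission_risk", 0.0)


def character_flags(action: str, ctx: Dict) -> List[str]:
    """Character check: would taking the action embody a vice?"""
    flags = []
    if action == "engage_target" and ctx.get("anger_driven", False):
        flags.append("acting from anger rather than judgment")
    return flags


def current_obligations(ctx: Dict) -> List[str]:
    """Actions the deontic rules make obligatory in this context."""
    return [a for a, condition in OBLIGATORY.items() if condition(ctx)]


def evaluate(action: str, ctx: Dict) -> Dict:
    """Combine consent (deontic rules), consequence, and character checks."""
    verdict = {"action": action, "permitted": True, "reasons": []}
    if action in FORBIDDEN and FORBIDDEN[action](ctx):
        verdict["permitted"] = False
        verdict["reasons"].append("violates a deontic prohibition (consent)")
    if consequence_score(ctx) > 0.5:
        verdict["permitted"] = False
        verdict["reasons"].append("expected harm is too high (consequence)")
    verdict["reasons"].extend(character_flags(action, ctx))
    return verdict


context = {"target_is_noncombatant": False, "collateral_risk": 0.7,
           "innocent_in_danger": True, "aid_is_feasible": True}
print(evaluate("engage_target", context))
print("obligations:", current_obligations(context))
```

Even at this toy scale the design questions raised above are visible: the prohibition list encodes someone's judgments, the harm threshold is a policy choice, and a genuine deontic logic would be needed to adjudicate cases where an obligation and a prohibition collide.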


Toward this end, it will be highly likely that we will need to engineer a set of tasks which resemble those we believe our autonomous or semiautonomous systems will be accomplishing in the field so that we can stress test our artificial conscience appropriately. To explore edge cases and stress test hierarchies of heuristics and biases, we will need something like a “moral Olympics,” and in some cases we might be able to use the wisdom of crowds as a starting point for populating effective algorithms that can make it most of the way through our moral decathlon and be worthy of incorporation into the core suite of an artificial conscience.
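One way to picture the "moral Olympics" idea is as nothing more exotic than a regression suite of adjudicated scenario cases run against a candidate conscience before deployment. The sketch below is hypothetical: the scenarios, the gold labels, and the toy_conscience stand-in are invented for illustration and do not represent a real test set or evaluation protocol.

```python
def toy_conscience(action, context):
    """Placeholder for the conscience under test; True means 'permitted'."""
    return not context.get("target_is_noncombatant", False)


# Each case pairs a scenario with a "gold" verdict supplied by human ethicists
# (or, as suggested above, seeded from the wisdom of crowds).
SCENARIOS = [
    {"name": "clear engagement", "action": "engage_target",
     "context": {"target_is_noncombatant": False}, "gold": True},
    {"name": "noncombatant present", "action": "engage_target",
     "context": {"target_is_noncombatant": True}, "gold": False},
]


def run_moral_olympics(cases, conscience):
    """Run every scenario and report where the conscience diverges from gold."""
    failures = []
    for case in cases:
        verdict = conscience(case["action"], case["context"])
        if verdict != case["gold"]:
            failures.append((case["name"], verdict, case["gold"]))
    print(f"{len(cases) - len(failures)}/{len(cases)} cases passed")
    for name, got, expected in failures:
        print(f"  FAILED {name}: got {got}, expected {expected}")
    return failures


run_moral_olympics(SCENARIOS, toy_conscience)
```

Stress testing in this style also gives a natural home for the edge cases and heuristic hierarchies mentioned above: each newly discovered failure becomes another event in the decathlon.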

Concerns and Rejoinders As important as this enterprise is, there are several concerns we should note and consider in the design of our artificial conscience research and development program. These include worries about whose justice we will consider, concerns that this is not possible either in principle or empirically, and concerns about rejection because of a lack of trust and transparency. Let’s discuss each of these in turn. First, when people discuss an artificial conscience, they often justifiably worry about “whose justice” and “which ethics” will be incorporated into it. This is a worthwhile and important concern. As discussed above, the three Cs and the traditional rules and laws of warfare are a good starting point where there is mutual overlap between reasonable positions. But the other important point to note is that these same concerns already apply to the training in ethics and the rules and laws of war: we already embark on programs designed to train soldiers about how to think about moral issues on and off the battlefield. We have courses in professional military ethics. We distinguish our military training institutions not just by the technical expertise they generate but by noting that they develop character and professional judgment needed for warriors. We already face and deal with this question with people; the same answers that work there are our beginning point for answering this concern. Our ethics and our justice, reflecting humanity’s shared experience, will be our starting point, just as it is for soldiers facing these issues on the battlefield today. Of note, in much the same way that we have a generally transparent democratic process that has helped us arrive at the rules and laws of war and which serves as critical input into the content of our military training and character development institutions, we will also likely need transparent and democratic involvement in this process. Second, some worry that building an artificial conscience is either a conceptual or an empirical impossibility. On the conceptual side, these critics may say that an artificial conscience is not a conscience at all; or that sets of algorithms and platforms and their human teammates can’t be “free” in the way that the individual human conscience is “free.” Of note, if this is true, then the critic has nothing to fear – just relabel this paper “Building an artificial system which advises us about things we ought to do or think morally speaking but which we aren’t going to call a conscience because of other things we believe.” No harm, no


foul, in that case; and if the research program turns out to be successful, so much the worse for that critical position. The next thing to note is that we are already building miniature versions of artificial consciences, both in terms of algorithms that do moral heavy-lifting for systems – such as automated target recognition systems or automated systems which take control of an aircraft when a pilot is task-saturated to prevent its destruction – and in terms of the children we raise day in and day out. My child's sense of morality develops over time – the code, lessons, training and instruction we provide wire together that moral control system. We are already literally and figuratively building artificial and natural consciences every day. We should not let outdated assumptions about the nature of cognition, or about the supposed analytic impossibility of an artificial conscience, prevent us from digging in more deeply. The pace of technology development demands it. Third, even if building an artificial conscience were possible, some critics worry that a lack of transparency in it and trust in it might prevent its use or deployment. This is a salient worry, and one we face even when dealing with people. As we build an artificial conscience, we might also need to apply the fruits of programs exploring intelligible or interpretable AI, biasing our methods toward ones that have transparency built in from the start. Of note, proper design, education, and actual use – especially co-training between the artificial system and human teammates – can help address the latter. We trust systems that we use successfully in repeated interactions. I have little insight into the code that helps my Global Positioning System device in my automobile function, but trust it despite its lack of transparency because I have a general idea of how it works, and I have used it to navigate successfully thousands of times now. The same can go for our artificial conscience.

Preparing for the Moral Future of Autonomy Today In our discussion of artificial intelligence ethics, I've made the case in this chapter that it is possible to build an artificial conscience. We covered the basics of what an artificial conscience would be, motivated building one by discussing the need for it, and talked some about the practicalities involved in architecture and content. The concerns we face as we build one are not insurmountable, either conceptually or technically. Building an artificial conscience is not only possible, but also morally required if our semiautonomous and autonomous teammates are to collaborate fully with human soldiers on the battlefield. Prudence and morality both would recommend that we equip our human soldiers and warriors with the best technology that can be had so they can prevail in peace and war, and live with themselves while doing so. The logic of the deployment of military technology on the battlefield means that autonomy will become even more important in future battles, be those physical or cyber. Let's ensure our machine teammates who come with us in that future are equipped with the basic machinery of moral sensitivity, judgment, motivation, and skill. Let's give them an artificial conscience.


References Casebeer, W. (2003). Natural ethical facts: Evolution, connectionism, and moral cognition. Cambridge, MA; London: MIT Press. Herman, B. (1997). Moral literacy: The tanner lectures on human values. Retrieved from https://tannerlectures.utah.edu/_documents/a-to-z/h/Herman98.pdf

Chapter 6

Artificial Intelligence and Ethical Dilemmas Involving Privacy James Peltz and Anita C. Street

Abstract This chapter explores how data-driven methods such as Artificial Intelligence pose real concerns for individual privacy. The current paradigm of collecting data from those using online applications and services is reinforced by significant potential profits that the private sector stands to realize by delivering a broad range of services to users faster and more conveniently. Terms of use and privacy agreements are a common source of confusion, and are written in a way that dulls their impact and dupes most into automatically accepting a certain level of risk in exchange for convenience and "free" access. Third parties, including the government, gain access to these data in numerous ways. If the erosion of individual protections of privacy and the potential dangers this poses to our autonomy and democratic ideals were not alarming enough, the digital surrogate product of "you" that is created from this paradigm might one day freely share thoughts, buying habits, and your pattern of life with whoever owns these data. We use an ethical framework to assess key factors in these issues and discuss some of the dilemmas posed by Artificial Intelligence methods, the current norm of sharing one's data, and what can be done to remind individuals to value privacy. Will our digital surrogate one day need protections too? Keywords: Artificial intelligence; data collection; privacy; digital authoritarianism; surveillance economy; digital surrogate The digital world for all its conveniences and attractions (and distractions) is fraught with risk and threats to personal security and identity. Fleets of ubiquitous sensors, which include mobile devices, closed-circuit cameras, and social media platforms, constantly collect and store information on the environment and everyone in it. Collect enough seemingly benign information about buying habits of an individual, how and with whom they communicate, and the frequency in
which they do it, and a digital surrogate emerges, imbued with an individual's unique characteristics, interests, values, beliefs, health, and social connections. Enter AI: a powerful suite of tools that facilitates the ability of companies like Facebook or Google to collect, aggregate, analyze, and solicit massive amounts of information about their users and profit from it. Facebook and Google count on the unwitting user as well as the witting user who is willing to tolerate a certain level of risk in exchange for convenience and "free" access – but at what cost? Later in the chapter we will discuss how confusion and ignorance contribute to the further erosion of individual protections and undermine autonomy and democratic ideals. The same motivations behind commoditizing data for the convenience of the consumer pose many risks to a free society because your digital surrogate freely shares your buying habits, who you associate with, and where you go. The recent global rise of authoritarian leaders makes these findings all the more concerning, as societies are increasingly willing to "click" away their privacy. This chapter provides an overview of the current data ecosystem and some examples of the potential value behind your data, and examines the potential dangers through a vignette of how we use technology for convenience. It is the commoditization of data used to develop these methods and the sheer amount of money behind developing your digital surrogate that drives the commercial market to push us to share our data at the expense of our privacy. Money, motivation, and weak regulations pose many risks: the same targeting of an individual that looks like marketing under the veil of convenience can, in the hands of an authoritarian state, become a means to control its citizens. It begs the question: Does a digital surrogate have its own rights and protections? How would these protections be upheld in the virtual world, which lacks physical boundaries and the protections government affords to its citizens? We look to the three grand traditions of virtue ethics (Eudemonism), deontology, and utilitarianism not only to better understand the current context of privacy in the United States, but also as a framework for an accountability structure that can potentially mitigate harm due to data privacy violations. We specifically examine privacy as it relates to data collection and the tensions derived from the perceived erosion of Fourth Amendment protections and the rise of the "surveillance economy" in relation to advancements in Artificial Intelligence (AI). Viewing privacy and data collection through an ethical framework draws attention to the imprecision with which we define privacy. Privacy is subjective. It is the reflection of societal and cultural norms or personal experience, for example, which can dictate the limits of our tolerance for violations of trust (how well our data are protected). We start first with definitions: how we define AI, data privacy, and data security. We follow with a discussion about how these often vague definitions can result in poorly defined boundaries between Fourth Amendment protections and the third-party and public disclosure doctrines that generate risk. This can lead to privacy abuses by governments looking to control their citizens in ways that are counter to a free and open society. Imprecision in these definitions means varied interpretations of the pros and cons; damage to an individual and the collective society; and a patchwork of contradictory regulations when Courts are left to navigate the murky waters of these issues on their own (Bedi, 2016, p. 494, 2017, p. 508). Finally, we contemplate what can be done when the development of technology (AI) outpaces regulations and any potential remedies are further confounded by high tech jargon. This is language that the layperson may find difficult to understand in terms of benefits, risks, and consequences. The medical field and informed consent are useful surrogates to examine because, like the medical field, a high degree of asymmetry in knowledge exists between users and developers of AI in much the same way as between patients and doctors. But unlike the medical field, there are many challenges unique to these issues, such as the fact that an individual's decisions or choices regarding sharing of personal data may be influenced by specific heuristics applied by the developers behind these technologies and, more importantly, that social norms change with technology itself (Acquisti, Taylor, & Wagman, 2016, p. 444). A corrective tax is also considered, since it too has been proposed as a way to address Fourth Amendment violations of search and seizure by acknowledging their poor probability of detection (Baer, 2017). What about fighting fire with fire, so to speak, by using the same technology to limit sharing or anonymize data as fast as the technology is developed to obtain it? Ignorance is a common attribute behind the many challenges to this discussion, since it all starts with the complicated and jargon-laden terms of use agreements that commercial entities use to gain access to individuals' data.

The Virtual Environment The following narrative describes a typical day now or in the not so distant future for the average connected individual. The scenario is meant to highlight the conveniences of a world enabled by apps. It is a world where commercial advocates personalize marketing to individuals in order to maintain perennial access to their data. This scenario is meant to establish situational context for what an individual’s data enables. User data tracked over time creates a surrogate of an individual’s behavior, or pattern of life, that for all intents and purposes is a reflection of the individual because the algorithm embodies aspects of what the individual buys, consumes, where they go, who they interact with, and the frequency in which they do it. The more data that is shared the sharper the image. That is because methods such as machine learning, in which algorithms are trained can evolve, making complex associations regarding data the algorithm hasn’t experienced or yet seen. This is a pathway to AI, and when robust enough, theoretically AI could begin making decisions or responding like the individual these data represent. This scenario attempts to convey a common mental model, or at least a relatable example that illustrates the fine line that exists between using apps for worldly convenience and a business model that in an extreme view could create the methods that actually could bind society. As will be discussed in the sections regarding ethics and privacy, most scholars recommend such relatable vignettes


because without them, conveying the dangers that stem from something as banal as terms of use agreements is challenging.

Scene: Ext. Anytown, USA: Morning-Sunrise It’s 7:00 a.m. as indicated by your phone’s alarm. Instantly, notifications appear with events, news, and notifications of the events that happened while sleeping or for the upcoming day. The sleep app reports another restless night, not enough REM phase hours and two interruptions for the bathroom. Coffee is already brewing, courtesy of the routine morning app that sequentially adjusts the temperature, lights, and has the TV reporting the morning news. It’s your mom’s birthday and a confirmation for the roses that were suggested two days ago appear on a text balloon on the phone with a projected afternoon delivery. All of this information is seamlessly integrated courtesy of the synced location services (SLC) with the phone which is cross-linked to both your credit and bank accounts for convenience and additional security. It is a run day as proclaimed by the fitness app, and the snapshot of your progress over the last two months shows a stagnating fitness level and a flag of concern regarding the last two days of evening meals that were deviations from the prescribed plan you agreed to several weeks ago. The weather this morning is 55 degrees Fahrenheit, partly cloudy and the sidewalks and trails are damp from the night fog. You heed the warning balloon and wear all-weather shoes and an extra layer as it flashes across as a headline. The suggested route appears and will be monitored via GPS and by heart rate sensors within the shoes and the smart material in the jogging apparel. The route is teed up to optimize avoidance of environmental data from CO2 sensor and pollen, while syncing with bio monitors to ensure the safest and optimal path for fat-burning. The route will be adjusted in real time based on biometric data and optimal heart rate. Upon returning from the run, the smart home’s voice goes over the days to do list and then covers the household goods that are queued for the weekly purchase order. The list is generated from the appliances that are connected via the Internet of Things (IoT) technology. The algorithm now is robust enough to identify correlations about your health based on mood, buying habits, and your patterns of behavior. There is a whole community of people like you who purchased the same set of appliances who are available for questions about troubleshooting and better eating habits. Today, the car service which is automatically requested is recommended to be queued up a bit early since the roads were slick and your morning run was terminated prematurely. In transit, the routine logic game is played which claims to maintain optimal mental capacity by measuring and testing a variety of parameters related to IQ and emotional intelligence. Results are publicly released to other users of the app and your friends on Facebook to ensure broader participation and to brag. Arriving at the office, the computer is at the ready queued up from a new energy saving program which monitors the location of the proximity cards used to enter the building. The virtual assistant provides descriptions of 400-calorie lunch options that are part of the fitness plan and queries whether invitations should be sent to the colleagues who regularly join and because it’s one of their birthdays.

This information is gathered from recent conversations overheard by the lunchtime app and feeds from the various social media apps. The day is routine and consists of shuffling around from various meetings, while notes and memos are kept by the virtual personal assistant. The last appointment is indicated as a telecom that is set up for the car ride home. The evening is already planned too: your friends indicated they would arrive from San Francisco that evening. Personal organizers requested access to both schedules and these virtual assistants coordinated the details behind the scenes, adjusting both schedules according to changes in both itineraries. A new vegan restaurant is suggested as the meeting point due to its location, their dietary restrictions, and the strict fitness plan that you have been following. It also received good ratings and strong reviews. Reservations were made automatically, based on your chats. Isn't this world extremely convenient? The ease with which tasks are automated can result in significant savings in time, energy, pollution, and yield numerous positive outcomes. Allergies have the potential to be correlated with the various products you buy and thus avoided. Targeted achievements related to health and fitness could consider optimal biometrics such as heart rate, and environmental hazards could affect one's routine in real time. It could even call for the authorities in cases when such biometrics indicated situations of stress or bodily harm. Even personal relationships might improve; social interactions could be better managed. The modern world is complicated because there is so much information to digest and consider and because most everyone is dealing with such challenges. The diversity of data when crosslinked, such as multiple bank accounts and credit cards, might also be useful by creating unique combinations of data for authentication. It could even become an added layer of security because access to all of them would require changes to one. This raises the question, what are the costs and potential risks enabled by this convenience? Among the numerous potential risks, one of the principal risks is that of loss of individual privacy. Your digital surrogate could be interrogated in numerous ways without your knowledge and reveal specific details related to your pattern of life. It all starts with privacy and terms of service agreements. The more services one uses, the more terms of use and privacy agreements are presented to that individual. As we use each of these services, we seldom stop to read the privacy and terms of service agreements related to them. While there are numerous assumptions one has to make to get quantified data on the amount of time and opportunity costs that would have to be spent to protect one's privacy, estimates for lost time and lost opportunity costs are on the order of tens and hundreds of billions, respectively (Cranor, 2012; Wagstaff, 2012). An empirical study was conducted to understand if anyone actually reads these documents and, out of those who do, how well they are reading them. The results are not surprising. Of all those studied, nearly three quarters opted straight for an option to use the service without reading the details of the privacy policy. The terms of service had no such bypass and, though they should have taken the average adult 30 minutes to read, were reviewed by most in under a minute.
Of those who declined to agree, most read the two documents for only about 30 and 90 seconds longer, respectively (Obar & Oeldorf-Hirsch, 2018). The fact is, the incessant onslaught of these agreements and their long and confusing language create a culture of acceptance and erode privacy. But privacy is a difficult issue, as we will see in the next section.

Definitions

A general discussion of privacy is inevitably nebulous, both because circular logic often prevails when attempting to define it and because people are products of their society: the same experiences that mold a person's current view of the world also shape their views on privacy. Why be so concerned with a precise definition? Because, without a shared understanding of privacy, it is difficult to heed the dangers presented by its erosion. Without an understanding of those dangers, it is difficult to grasp the consequences of checking the box to accept third-party terms of service agreements that are presented to users in order to collect an individual's data. And without a shared understanding of the technology behind it, it is difficult to avoid potential dangers until damages are incurred or justice is sought after the fact. To make informed choices, users must understand how the technology works, why it is being used, how their data will be managed, and who will have access. Above all looms the much bigger question of who owns these data.

Privacy and data collection rely on basic agreements that spell out how users' information will be collected and used. Terms of use and privacy agreements are a point of contention because they are a source of confusion and, paradoxically, are part of the normalization of blind trust that contributes to eroding privacy and loss of autonomy. Users' confusion stems from lengthy technical jargon, poor and imprecise definitions, and a general lack of understanding of what these data create. Because the agreements are confusing, lengthy, and must be accepted for the service to be used, few individuals actually read them. And because these services are ubiquitous in everyday life, their mass use means mass acceptance. The discussion that follows considers why this construct leads to the current mindset in the United States.

In a quantitative study designed to understand the parameters of privacy, Kirsten Martin compares a social contract approach to privacy with Helen Nissenbaum's (2004, 2009) definition based on contextual integrity, using respondents associated with athletic teams; these members' relationships are shown to be a reasonable proxy for the interactions of management teams in business. By parameterizing several main factors – who has access, what information is being exchanged, where the exchange is happening, and for what reason – the study examines how business teams judge the appropriateness of what, why, how, and to whom information flows (Martin, 2012). It becomes clear from the study that privacy norms are complex and specific to attributes such as age, gender, and position within an organizational structure. Acquisti et al. (2016, p. 443) provide a thorough treatment using an economic definition of privacy, discussing the trade-offs associated with privacy and the sharing of personal data and asserting, "At its core, the economics of privacy concerns the trade-offs associated with the balancing of public and private spheres between individuals, organizations, and governments."

They go on to assert that understanding the economic relevance of privacy requires understanding the context of privacy; understanding the differences between the individual and society, where protection of privacy can both enhance and detract from welfare; and individuals making informed decisions based on an understanding of how their data will be used (2016, p. 449).

An ethical treatment of eudemonism, through the story of Euthyphro, followed by treatments of deontology and utilitarianism, is used here to understand how privacy might be defined for an individual, or how, by focusing on the underlying value of privacy, one might arrive at alternatives to a precise definition. Another useful outcome is how these treatments bring to light the issue of the agreements used to gain access to these data, and the importance that societal norms play in shaping views about sharing personal information with others. The ethical discussion is obviously not the only way to discuss privacy's value, which is why the economics of privacy were mentioned. Scholars continue to debate many of these same issues and settle on the same themes, which gives some sense of cross-"validation" for this argument and our conclusions.

The central concept of virtue theory is to use one's inherent knowledge to reach an ultimate good (Plato, 2003). Virtue theorists would attempt to understand privacy as a kind of basic moral knowledge, a knowledge which would ensure correct conduct (2003). Euthyphro and Socrates' discussion of the word "holy" is useful for understanding the challenge of developing a precise definition in this manner, because their argument highlights the fallacy of circular reasoning that often pervades such arguments. Time and time again, Socrates points out that Euthyphro's evidence of right and wrong relies on his assumptions of faith as the basis for his argument. As the story suggests, the holy must be a subset of a more virtuous concept (2003). While it is difficult to make the case for defining privacy as a virtue, it is easier to think of more fundamental concepts, such as trust, that allow the concept of privacy to exist. Trust is also at the core of the terms of use and privacy agreements that allow one's data to be collected, and it is precisely what is being eroded: can data handlers be trusted to secure and protect the data? The story also highlights that an individual's power to discriminate between right and wrong is fundamental to defining a virtue. But making such distinctions is inevitably shaped by the knowledge of the individual, whose concepts of privacy and its value will vary with personal factors and experiences. This presents problems when attempting to understand privacy norms across society, because researchers' analyses of these topics tend to reflect the authors' own particular conceptions of privacy (Martin, 2016). How, then, would we ever arrive at a common, extensible definition of privacy? Let us examine other concepts around privacy, such as its value, as a potential way to mitigate the need for a precise definition.

Deontologists in general are particularly concerned with the duties that free and reasonable creatures (paradigmatically, human beings) owe to one another, particularly with respect to choosing actions that do not violate each other's rights (Giordano, 2015). Sharon Anderson-Gold (2010, p. 40) uses a Kantian rationale to argue that an exact definition of, or right to, privacy is not necessarily needed, by focusing instead on the concepts that indirectly encompass privacy.

"Although lying or testifying to what one does not believe is true, is for Kant, at the heart of all vice because it corrupts the internal relation of the self to its own dignity, we are under no general obligation to reveal all that is true about ourselves or about others. We are not even epistemologically well constituted to reveal 'all that is true' about ourselves or others since our access to anyone's [including our own] deepest intentions are fundamentally indirect." It would seem, for argument's sake, that privacy is left to be defined by the individual so long as he or she is not asked questions that would force a lie. The latter point about never knowing anyone's intentions directly is also useful: Kant would likely suggest deferring to definitions of privacy that afford the greatest protection, so as not to violate the rights of an individual who holds strict views of privacy.

This runs conceptually counter to the current paradigm of data collection in several ways. Potential users must agree to the totality of the terms of use and privacy agreements (assuming they fully understand them in the first place), which does not allow them to choose what they actually want or are willing to share. It is an all-or-nothing approach, akin to being held hostage by third parties who demand we reveal everything they want or decline to use the service. How, then, can we be expected to make informed decisions about services such as social media and email, which most of society uses as routine functions, when the only other option is not to participate? The current construct streamlines the process for commercial entities to gain access to, collect, and share data with the click of a checkbox, by presenting an agreement with verbiage rivaling the complexity of bank mortgage documents. The language of these agreements is vague and filled with legal jargon, often crafted to protect the interests of the bank or private industry rather than the rights of the user. Deontologists could also object to the opaque descriptions, or outright absence of descriptions, of who will collect or use an individual's data. What users often fail to consider before checking the box is that these agreements grant permissions to a whole host of collectors and harvesters of their data, in ways the user may not understand or that may take years or decades to fully understand. This will be discussed further in subsequent sections. A related issue is the privacy restrictions that come preloaded as defaults with these services. These defaults are often set to the least restrictive privacy settings and the most comprehensive collection. Google and Facebook are notorious for offering their services in this manner (Curran, 2018). Yet they could easily improve their image if they simply embraced a maxim of privacy. Approaching privacy in this way would likely motivate these companies to make their privacy settings easily accessible, with checkbox options prompting the user on what they are willing to share, rather than putting the onus on the user to dig through complicated menus to find them. The only ones who benefit from operating in the current manner are the commercial interests collecting data on the individual (Hoback, 2013).

A utilitarian view is where the idea of privacy seems most challenged, for a variety of reasons.
Utilitarianism is concerned with choosing to act in ways that increase the greatest good for the greatest number, which runs counter to a discussion of privacy that we argue is fundamentally personal. It is not difficult to imagine how data collected to develop AI technology could be argued to carry some collective benefit that warrants infringing on an individual's privacy, so rather than dwell on examples of that kind, we present a more interesting twist. The example is interesting because the act of voyeurism is akin to third parties collecting data on unknowing individuals; this might explain some of the reasoning of individuals who are agnostic about giving up their data or who assume third parties or the government have these data already. It is also interesting because the argument recommends such severe penalties for this kind of breach of trust. Both points are useful for the discussion here. Tony Doyle asserts that the basic definition of utilitarianism fails to consider where or how happiness is derived, which means an act is supported as long as general happiness increases. Thus, how a perfect voyeur gains knowledge is irrelevant, as long as the victim behaves the same as if they were not being watched (2009, p. 182). We are "watched" and recorded ubiquitously. Does being watched and tracked make us more accepting of surveillance in general? As the argument points out, if our behavior is the same as it would be without being watched, do we care how or whether someone is watching us? A key aspect of the argument's logic rests on increasing happiness, which in turn relies on the condition of privacy. A voyeur looking to breach someone's privacy out of self-interest – curiosity, profit, a wish to discredit, or blackmail – will incur costs in achieving those goals, so it is in the voyeur's own interest to keep those activities private (Doyle, 2009). This might also explain why companies facing accusations are not necessarily quick to own up to breaches of privacy: what else will the public learn if these entities are challenged to reveal the full extent of their activities? Doyle goes on to describe why it is important to deal harshly with such cases, because "…accomplished voyeurs potentially do more harm and are harder to detect than dabblers, would-be effective snoops need a stronger disincentive than dilettantes to pursue their 'hobby'. This means more severe sentences the longer the convict is found to have been engaged in the crime and for the stealthier his methods" (2009, p. 187). The main issue Doyle raises is how society deters activities that are difficult to detect yet gravely damaging. This theme will come up again later when a corrective tax is discussed, but we will show why severely penalizing companies for privacy infractions may not be as useful as it sounds; any legal action would require clear definitions and explicit evidence of damages, for which there is little historical precedent.

To this end, each grand tradition has advantages and shortcomings with respect to supplying a universally accepted framework for how privacy should, or could, be defined. The lesson they do yield is that terms of use and privacy agreements are a good place to start if we are interested in changing social norms around expectations of privacy. Any viable solution needs to define the boundaries among ownership, use, and control, and to balance the privacy of the individual against the security of society, so establishing definitions and relatable examples for the key concepts that encompass privacy, or breaches of it, is a useful first step (Rincon & Carlos, 2012).

Data Privacy Versus Data Security: Don't Get It Twisted

Data privacy and data security are frequently conflated and often confused. There is more than a notional argument for insisting on their distinction and for why that distinction matters: they should be considered not interchangeable so much as interdependent. A number of theoretical frameworks have been proposed by privacy advocates and scholars to better understand what is meant by privacy and how to mediate the controls that protect it (security). What is clear is that privacy is highly contextual and value-laden, which makes it difficult to ascribe any single definition. The priorities of data generators (users) and data collectors when it comes to privacy can be in violent disagreement. Broadly, data privacy can be defined as the rules or norms for how data are collected, shared, and ultimately used. Data security can be described as the means of protecting data from those with malicious intent, or from anyone who gains unwelcome or unapproved access to our personal information. Bambauer theorizes that privacy is a choice and can be seen as a "normative framework for deciding who should legitimately have the capability to access and alter information; while security defines which privacy choices can be implemented" (2013, p. 667).

As stated earlier in this chapter, the lack of clarity and uniformity in the United States and abroad has put data privacy concerns and compliance at the forefront for most companies. Laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Children's Online Privacy Protection Act (COPPA) give customers the right to see the data collected about them and allow them to request that the data be deleted. States, such as California, have their own privacy laws (Schwartz, 2019). The more recent General Data Protection Regulation (GDPR) from the European Union is even broader, defining a privacy violation as the illegal retrieval or disclosure of "any information relating to an identified or identifiable natural person." That information can include posts on social media, email addresses, bank details, photos, and IP addresses. The law emphasizes consent and control, and it demands clear explanations so users know how data are collected on them (Tiku, 2018). So far the law has had some reported success with notification of privacy breaches; it has fared less well in punishing companies that mishandle personal data (Wolff, 2019). Reporting of fines for the first nine months shows that most companies are still not being penalized for failing to protect individuals' data, and that those which are penalized pay mere fractions of the ceiling of 4% of a company's revenue (Wolff, 2019). Part of this is likely associated with the difficulty of detecting such breaches; the rest likely stems from regulators still working out the impacts and costs associated with breaches once they happen.

Each organization defines its own data privacy policies, which typically cover what data will be collected, how those data will be collected and used, who will have access to them, whether and how data can be shared with third parties, whether data can legally be collected or stored, and how long data will be stored. They also detail which regulatory restrictions the organization must comply with.

This information is critical not only to companies hoping to avoid fines and other penalties, but to customers themselves. According to a recent report from RSA, data privacy concerns are sky-high right now. The report found that 80% of respondents consider financial and banking information the top concern, while 72% consider personal identity information a significant area of concern, and more than half of millennials are concerned about personal information being used for blackmail. A Harris Poll last year, sponsored by IBM, backed this up, finding that three quarters of global consumers would not buy a product or service from a company they did not trust to protect their data.

Benefits and Drawbacks of AI

Researchers working with the Association for the Advancement of Artificial Intelligence to identify critical gaps in knowledge that could be addressed through improved research and development describe the range of definitions for AI as broad, but all bounded by a human reference standard (Lawless, Mittu, Sofge, & Russell, 2017). AI in this context refers to machine learning algorithms that identify, understand, and predict patterns in data. Such a program essentially uses data to build a relational map from a set of inputs to a corresponding set of outputs (Nevala, 2017). This relational map is then used to make predictions, estimate likely outcomes, or identify anomalous or unexpected behaviors in the data by applying known rules set by humans (Nevala, 2017). The concept underpinning machine learning is to give the algorithm a massive number of "experiences" (training data) and a generalized strategy for learning, then let it identify patterns, associations, and insights from the data. In short, these systems are trained rather than programmed (Henke et al., 2016); a minimal illustrative sketch follows at the end of this section. That said, advances in computational power, massive amounts of data, and researchers dedicated to developing these technologies are pushing these algorithms beyond the need for human rule sets.

Machine learning algorithms – supervised, unsupervised, semi-supervised, and reinforcement – are categorized by the way in which they are trained. Kimberly Nevala provides a primer covering the general uses, applications, and data requirements of these categories of methods and explains why commercial entities are interested in developing such technology (2017). Evidently, a great deal of knowledge and work goes into understanding these methods, the voluminous data needed to support them, and the acquisition pipelines to obtain those data, i.e., "[The] governance policies and the data ecosystem must support exploratory environments (often referred to as sandboxes) as well as production environments" (2017, p. 50).

One of the most important aspects to understand about AI and its geopolitical and economic effects is how it can change the current global order. It enables authoritarian governments to monitor, understand, and control their citizens far more closely than ever before. Because of this ability, Nicholas Wright affirms in Foreign Affairs that "AI will offer authoritarian countries a plausible alternative to liberal democracy, the first since the end of the Cold War" (2018, p. 1).
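To make the distinction between "trained" and "programmed" concrete, the following minimal sketch is our own illustration rather than anything drawn from the chapter's sources; it assumes the open-source scikit-learn library and uses its bundled iris dataset purely as a stand-in for the "experiences" described above.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled "experiences": measured inputs paired with known outputs (species labels).
inputs, outputs = load_iris(return_X_y=True)

# Hold out a share of the data so the learned mapping can be checked on unseen cases.
train_in, test_in, train_out, test_out = train_test_split(
    inputs, outputs, test_size=0.3, random_state=0)

# No hand-written rules: the model induces the input-to-output map from the examples.
model = LogisticRegression(max_iter=1000)
model.fit(train_in, train_out)

# The learned relational map is then used to predict outcomes for new inputs.
print("Accuracy on unseen examples:", model.score(test_in, test_out))

Running this fits the classifier on roughly two-thirds of the examples and reports its accuracy on the remainder; the mapping from inputs to outputs is learned from the data rather than written by a programmer, which is the sense in which such systems are trained rather than programmed.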

Data Ecosystem

Users who search, stream, and share pictures and videos via the internet or through apps represent a market that can be bought and sold because, although they are consumers of these products, they are also producers of data (Rincon & Carlos, 2012). Thus, "advertisements on the Internet are frequently personalized; this is made possible by surveilling, storing, and assessing user activities with the help of computers and databases. In the case of the Internet, the commodification of audience participation is easier to achieve than with any other mass media" (Rincon & Carlos, 2012, pp. 55–56). The apps themselves and their developers are the first of many beneficiaries of such data. Imagine the potential of being able to use data-driven analyses to make predictions. That idea is what drives Recorded Future, a private data analytics company started in 2009, which claims to use machine learning and natural language processing (NLP) to continuously analyze threat data from a massive range of sources and thereby predict threats before they happen.

Having data, however, does not necessarily mean they are useful or valuable. Value often depends on structure and on the context of how, when, and by whom the data were generated. Access to these data is another critical factor that in turn affects value. In general, information relating to location-based services, healthcare, and retail markets is valuable; these data are estimated to be worth hundreds of billions in the United States and roughly the same globally (Henke et al., 2016). While rapid advances continue, barriers to tapping into the value of these areas exist in the form of strict regulations, as evidenced by those governing healthcare. It is not surprising, then, that sharing one's data is the default, with the parameter settings for opting out left for the user to find.

In terms of volume, a commercial data management company, Domo, maintains a graphical depiction of the amount of data produced each minute of every day. The sixth version of its Data Never Sleeps graphic, released in 2018 (http://www.domo.com/learn/data-never-sleeps-6), details the rate at which data are created and consumed via the internet. Between 2000 and 2017, the global internet community grew from 400 million to 3.8 billion users (James, 2018). Over half of these data come from mobile devices, a share that rose to 75% of the volume in 2018. This represents significant potential in terms of understanding the people generating these data, and companies such as Domo are collecting them in the hope of turning data into money in the form of sales, marketing, and consumable products. As Domo advertises, "Data never sleeps, but Domo has compiled the latest statistics [so you or your business] can discover newer and faster ways to turn data into ah-hahs, to turn ah-hahs into to-dos, and to turn to-dos into cha-chings" (James, 2018). The cha-chings in this context are the dollars and revenue that come from the data service provider making it easier for the consumer, or data generator, to get the most use out of their own data. This concept plays out in numerous variations in which the data generated by consumers are used by data analyzers to make those data "work" better for the consumer. Companies such as McKinsey and Domo are quite representative of the third parties that seek and harvest user data.

However, what users may not appreciate is that the US Government is able to bypass Fourth Amendment protections under the third-party and public disclosure doctrines, which apply to websites and apps (Hoback, 2013).

As the opening story suggests, the apps on a phone are surprisingly adept at building a model of the person using them. Most employ location services, record contacts, and collect and link whom one communicates with and what one consumes, e.g., products, services, and data. All of this information describes the individual, their likes, and their networks. Income, education level, and numerous other attributes can be gleaned from these data and exploited to sell products, as in the case of Amazon, Apple, or Google, or used by con artists and government agents for more nefarious purposes. With enough data about a system, or about a specific target or person, predicting attributes or specific behaviors becomes possible. In the case of people, the items that fill their needs leave data from which algorithms can be developed or that algorithms can exploit. For example, algorithms can capture data about the average citizen's use of alarms, sleep monitors, workout logs, bank accounts, bills, grocery lists, and games. In addition, networks collect and use data about the individual using them as well as data from others in their social network. A user's phone quickly becomes an accurate profiling agent, mainly because the user agrees to the collection and sharing of these data through the lengthy privacy agreements required to use the app in question. "You are what you app," as they say, and it is becoming fairly easy to categorize and profile individuals from the apps they use (Mulvey, 2011).

Social networking apps such as Snapchat, Instagram, and Twitter, shown in Fig. 6.1, are quite significant data generators. They link people and employ algorithms that over time create a profile of the individuals who use them. Whom individuals chat with most, whose content they like or dislike, and the frequency with which they communicate are all characteristics of the user. With the advent of hashtags, conversation topics become linked and searchable to a larger population of users, and they become prime examples of how social data build bridges to physical places, things, and ideas. The more one participates in discussions or generates data for others to consume, the more extensive and detailed a profile the algorithms in these apps can generate. Add in geographic information from services such as Uber, Google Maps, and weather trackers, and it becomes easy to see where one lives, where one works, what places one visits, and how frequently. These services establish one's social position by connecting the dots. It is not illogical that a sense of someone's socioeconomic level and political leanings can also be drawn from these data, since they convey zip codes and the physical places visited. Apps such as Google and Amazon collect what consumers buy and use those purchases as key data to improve their profit margins and to estimate socioeconomic status, a good indicator of education, age, and household size. Ingeniously, these services enable price discrimination on an unprecedented scale.

Fig. 6.1. The Amount of Data Generated and Consumed per Minute on the Internet.

To provide some perspective on just how much data Google can collect on any one individual, Dylan Curran, a data consultant in the United Kingdom, decided to find out, as described in a 2018 article in The Guardian. Curran downloaded his total history from the site (google.com/takeout), which amounted to a single 5.5 GB file – the rough equivalent of 3 million Word documents. Its contents included an unsettling amount and variety of data: bookmarks, Google Drive files, his entire YouTube history, photographs, purchases made through Google, everything he had ever searched, and much more (Curran, 2018).

Digital Authoritarianism and the Rise of the Surveillance Economy

Apps and the internet – portals to the digital world – increasingly link people, make it easier to share ideas, and make it easier to draw reliable predictions. Conversely, these technologies also make it easier for nefarious actors to foster and foment civil unrest; the Arab Spring, of course, was facilitated by social media apps and global connectivity (Hempel, 2016). There is also a positive side that could be considered a trade-off to data security. The Golden State Killer – Joseph James DeAngelo, a man who terrorized California residents in the late 1970s – was apprehended in 2018 through the exploitation of records in a commercial genealogy database (Arango, 2018). The investigators who used this genetic data to track and subsequently arrest DeAngelo were awarded the "DNA Hit of the Year" award, an international prize for the top DNA case (Stanton, 2019). Here is an example of how a public good (solving crime) could easily run afoul of safeguards for civil liberties. Ethicists are calling for more transparency from law enforcement with respect to how they use DNA searches (Saey, 2018). The police in the Golden State Killer case accessed data from GEDmatch, which is a public database; in this instance, the privacy protections that private companies are obliged to adhere to may not apply. This raises concerns for legal scholars because of the serious implications for relatives of suspects, who may themselves be wrongly implicated. More importantly, these scholars are considering whether such tactical operations are extralegal.

The sheer volume of information allows sinister or state actors to hide in plain sight, posing as a friend or casual associate. The same data that create convenience and serve user preferences could easily be misused by the state and its agents to target, control, and even eliminate a particular profile of users. Vitals, intelligence quotients, emotional states, and professional and private networks become extremely accurate indicators of people and continue to be refined over time. Location data capture networks of interactions with unwitting contacts – family members, children, homes, places of business – all tracked and monitored, often without consent. Another possibility is for this shadow profile to mimic the user's presence even when the user is not actually there; people rarely get together in person, and when they do make plans, they often change or cancel them because of the ease with which it can be done.

A particularly nefarious actor could target profiles with specific IQs, special needs, those on government assistance, or those with activist tendencies. Adversaries could use these profiles to target families and their children, or to blackmail individuals with any number of demands. They could also use these data to help shape a society that is easily controlled, constantly surveilled and monitored, and molded to the leader's political and financial advantage.

In a 2018 MIT Technology Review article, Christina Larson asks: "Who needs democracy when you have data?" Martin Chorzempa, who is quoted in the article, points out that "no government has a more ambitious and far-reaching plan to harness the power of data to change the way it governs than the Chinese government" (2018, p. 1). The rise of populist movements across Europe, including Brexit, and the election of Donald Trump are thought to have emboldened the Chinese political class by providing a justification for anti-democratic sentiment in the form of voter suppression. Additionally, technology is already being used in China to monitor and control the Uighur population (a predominantly Muslim, Turkic-speaking ethnic group) in Northwestern China. Scores of video surveillance cameras track individual movements, and police checkpoints scan IDs, phones, and even pupils. "This personal information, along with biometric data, resides in a database tied to a unique ID number. The system crunches all of this into a composite score that ranks you as 'safe', 'normal' or 'unsafe'. Based on those categories, you may or may not be allowed to visit a museum, pass through certain neighborhoods, go to the mall, check into a hotel, rent an apartment, apply for a job, or buy a train ticket. Or you may be detained to undergo re-education, like many thousands of other people" (Millward, 2018). These surveillance techniques are coming into play as the protests in Hong Kong have grown increasingly violent: in addition to more common crowd-control techniques such as tear gas, the police have used water cannons laced with blue dye to make it easier to identify, and presumably punish, violent offenders captured on camera (Hu & Griffiths, 2019). China also intends to use the aforementioned surveillance machinery to rank its citizenry according to social credit, much as a credit report ranks creditworthiness. "The exact methodology is a secret – but examples of the behaviors that will be monitored are driving and purchasing habits, and the ability to follow the rules" (Ma, 2018, p. 1). The resulting score will dictate what services are available to an individual and will even bar them from some services altogether. Had Hitler or Stalin possessed these capabilities, history might have turned out differently; as we see in China, "Tibetans know well this hard face of China. Hong Kongers must wonder: If Uighur culture is criminalized and Xinjiang's supposed autonomy is a sham, what will happen to their own vibrant Cantonese culture and their city's shaky 'one country, two systems' arrangement with Beijing? What might Taiwan's reunification with a securitized mainland look like? Will the big-data police state engulf the rest of China? The rest of the world?" (Millward, 2018). This dystopian future has arrived and may be coming to a democracy near you. A similar system is under development in the United States.

In New York State, life insurance companies are basing premium rates on what they find in subscribers' social media posts. It is incumbent on the companies to prove there is risk, and they cannot use posts in a way that would otherwise be discriminatory, such as basing rates on ethnicity or disability (Elgan, 2019).

Options to Consider?

Based on this chapter's findings, an individual who values privacy should consider budgeting time to read and understand the agreements they accept with a click. While the statistics are daunting, one could prioritize by the services used most. In the meantime, individuals should also demand that privacy and terms of use agreements be changed to follow the Aristotelian considerations: Is the story coherent? Is it simple enough to be processed? Can it be remembered? Is it easy to transmit? If believed, will it motivate appropriate action? (Giordano, 2015) "The ethical-legal issues, questions, and problems demand attention before these technologies become operational. That will enable a determination as to what legal standards may be viable – or should be developed – to govern (1) the informal or formal pressures placed on individuals to participate in using such technology, (2) the disclosures required for informed consent, (3) the level of care taken to protect individuals from harm, and (4) the liability that parties bear when individuals using such cutting-edge neurotechnologies are harmed" (Moreno, 2001, as quoted in Giordano, 2015, pp. 267–271).

The very same machine learning algorithms are also being looked to as a way of making privacy easier to understand automatically, updating as fast as the technology itself develops. Fighting fire with fire – or in this case, algorithms with algorithms – is an option individuals can use to maintain their own sense of privacy regardless of the argument for maintaining it socially. "One proposed alternative to the status quo is to automate or semi-automate the extraction of salient details from privacy policy text, using a combination of crowdsourcing, natural language processing, and machine learning" (Wilson et al., 2016). A team from Carnegie Mellon is currently building and scaling up a privacy corpus to help internet users understand online privacy practices. Using law students, the team analyzed 115 privacy policies and demonstrated the feasibility of partly automating the annotation process with several machine learning toolkits. While it was only a demonstration, the results reveal the complexity of these documents (Wilson et al., 2016). A tool like this may one day allow users to answer a few questions designed to understand how they value privacy and then automatically identify apps that are inconsistent with those preferences; a minimal sketch of the idea appears below. At a minimum, such efforts would provide near-term solutions until the more conservative privacy paradigm of the EU is adopted elsewhere.
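To illustrate the kind of automation Wilson et al. describe, the following sketch is our own illustration, not taken from the Carnegie Mellon project. It assumes the open-source scikit-learn library and a tiny, invented set of labeled policy sentences and category names; a production system would require a real annotated corpus, but the pipeline shape is the same: turn policy text into features and train a classifier to tag each clause by the practice it describes.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, invented set of annotated policy sentences (stand-in for a real privacy corpus).
sentences = [
    "We share your personal information with advertising partners.",
    "Your data may be disclosed to third parties for marketing purposes.",
    "We collect your location whenever the app is running.",
    "The service records your device identifiers and browsing history.",
    "You may request deletion of your account data at any time.",
    "Users can opt out of data collection in the settings menu.",
]
labels = [
    "third_party_sharing", "third_party_sharing",
    "data_collection", "data_collection",
    "user_control", "user_control",
]

# TF-IDF features plus a linear classifier: a simple baseline for policy annotation.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(sentences, labels)

# Tag a new clause so it can be compared against a user's stated privacy preferences.
new_clause = ["We may provide your contact details to our business partners."]
print(classifier.predict(new_clause))

A user-facing tool built on such a classifier could compare the predicted tags against the preferences a user has expressed and flag apps whose policies conflict with them.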

There are several other options that could be pursued that would have a significant impact on taking charge of our own data. Recall the findings from the grand traditions: virtue theorists would likely recommend treating terms of service and privacy agreements as contracts of trust; deontologists might recommend changing the current default option of most app accounts from opting out to opting in; and a utilitarian would likely accept the current paradigm as long as the punishment for violations were severe enough to maintain the happiness of society once an offender was caught. More importantly, these inconsistencies again highlight the need to understand the boundaries of privacy as examined through the individual, society, and the existing consequences, so that future threats can be addressed with ethical solutions. Individuals should be provided with a clear and concise explanation of their privacy rights, as well as the potential risks associated with the use of systems and apps. Just as Monu Bedi argues for understanding the historical context and unique contours of each [disclosure and privacy] doctrine so that, "For prudential reasons, we must strive for a logical and cautious application of these principles that is firmly grounded in prior precedent. Otherwise, courts as well as scholars risk muddying the waters and, in turn, making unnecessarily overly broad or erroneous conclusions on important privacy matters" (Bedi, 2017, p. 494), the same case can be made for understanding the contours around privacy, including privacy norms and terms of use agreements, as well as societal ethics and the collection of personal data. This project can perhaps contribute to that end, but it focuses more on a few near-term solutions aimed at the definition of, and social norms around, privacy.

Informed consent is an attractive concept to explore in this chapter because, as in the medical field, a high degree of asymmetry in knowledge exists between users and developers, much as between patients and doctors. However, it becomes apparent that there are major differences between medical ethics and data collection, such as the level of perceived risk and the Hippocratic oath, which respectively warrant a willingness of the individual to become informed and oblige the doctor to protect the patient, and which are largely absent in data collection and the erosion of privacy. A process of informed consent as applied to privacy and data collection is different because market interactions are largely unaffected by an individual's consent, specific aspects of the technology influence users' privacy decisions, and ethics and social norms change with technology (Acquisti et al., 2016). In this way, people must be compelled to become informed if the current social norms regarding privacy have any chance of changing. Those who have studied this concept cite these differences; they are what lie behind Dr. Catherine Flick's "waiver of normative expectations" (Flick, 2016). Facebook and its emotional manipulation study with Cornell University have already set a precedent for applying informed consent to data analytics research. "[R]esearchers at Facebook tweaked what hundreds of thousands of users saw in their news feeds, skewing content to be more positive or negative than normal in an attempt to manipulate their mood" (Sullivan, 2017, p. 1). Neither the terms of service agreements nor Cornell University provided what Flick describes as "[s]ufficient ethical oversight," and they "neglected in particular to obtain necessary informed consent from the participants in the study" (2016, p. 14).
She goes on to argue that, “[A] reasonable shift could be made from traditional medical ethics ‘effective consent’ to a ‘waiver of normative expectations’, although requires much-needed changes to the company’s standard practice” (2016, p. 19).

The change to company standard practice would have to address disclosure and require being upfront with the user about the intent of the research. Moreover, the terms of service agreement would need to be more easily understood, i.e., "Facebook can, in fact, improve their terms of service in such a way that the expected norms are included as part of the base standard, but that expectations that need to be waived are communicated effectively and consented to (either negatively or positively) by the user" (2016, p. 19). A further complication is that these services are global and accessible to users around the world, so any lasting change would have to consider an international audience.

Unfortunately, there is much more at stake for the collective society, yet the options are unclear. In the United States, it is largely up to the consumer to understand the consent given through privacy policies and terms of agreement, since a patchwork of regulations and entities is responsible for regulating this space. Other countries that recognize the right to privacy are leading the way in protecting the consumer and may ultimately change practices in the United States, but even the EU's GDPR is not a panacea: detecting violations and assigning penalties are proving quite challenging. Informed analysis of the ethical and legal issues requires an integrated approach that also includes norms embraced in international law (Giordano, 2015). Ralph Schroeder and Jamie Halsall, in a summary article based on interviews with several business leaders, found that:

[T]here is a shared sense that the existing regulatory environment [i.e., big data policies] fail to be transparent, clear, fair, and consistent. […] One area of particular friction surrounds the issue of privacy and personal data. The law has lagged behind both the growth in personal data use and developments in technical and statistical anonymization techniques. There is also a lack of standardization of privacy practices across jurisdictional boundaries. These failings are reflected in a somewhat piecemeal response to the personal data issues in industry, and there is still no accepted standard for how such issues should be treated – or even what the appropriate definition of personal data should be. Voluntary standards or codes of conduct, according to interviewees, would be a good first step given the likely intractability of a truly global privacy regulation. (2016, p. 11)

Mobile applications and the internet are resources used by individuals globally. Data collected as a byproduct of these activities are therefore regulated according to the laws of each country, and lasting protections would need to consider and address this reality. One option for dealing with offenses that are difficult to detect, high in volume, and incrementally damaging could be a corrective tax. Miriam Baer lays out a well-argued case for using a Pigouvian, or corrective, tax to deal with Fourth Amendment violations that stem from illegal search and seizure by police (Baer, 2017).

In an imagined implementation scheme, local police departments would be charged an annual fee based on the volume of search activities, the risk that the activity conceals intentional misconduct, and the harm that comes from such misconduct (a simple illustrative calculation appears below). Setting aside the obvious point that the goal of both is to maintain Fourth Amendment protections, many parallels in the implementation would likely hold for data collection and privacy. It would incentivize curbing data collection by tying violations to their volume, and in theory it might improve the reporting of violations, since penalties would be tied more to the probability of intentional violation than to gross negligence. Its greatest strength would likely be creating awareness, through tighter regulation of data collection and privacy, of the social costs rather than the individual damages addressed by a more command-and-control regulatory regime (Baer, 2017). The scheme is preventative by nature and has the added benefit of avoiding lengthy and costly legal proceedings for violations after the fact (Baer, 2017). There are drawbacks, though, identified in the same scheme: creating a regulatory body carries the overhead of maintaining a bureaucracy and its enforcement, invites politics, and risks underpricing fines because a small number of bad actors cause the greatest damage (Baer, 2017). Unlike the scenario Baer examines, which concerns government-provided services, data collection and breaches of privacy involve commercial businesses that can pass these costs on to consumers. The tax is by no means a cure-all, but it does make the issue a collective, social discussion. Interestingly enough, the same technology employed for the privacy corpus could be employed to address a range of the deficiencies discussed above.
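To make the shape of such a corrective tax concrete, the following sketch is our own illustration, not Baer's formula. It assumes, purely for exposition, that an annual fee could be computed by weighting each category of data-collection activity by its volume, the estimated probability that it conceals intentional misconduct, and the monetized harm of that misconduct; all category names and numbers are invented.

# Hypothetical activity categories: (annual volume, probability of intentional
# misconduct, estimated harm per incident in dollars). Values are illustrative only.
activities = {
    "location_tracking": (1_000_000, 0.002, 50.0),
    "contact_harvesting": (250_000, 0.010, 200.0),
    "biometric_collection": (50_000, 0.050, 1_000.0),
}

def annual_fee(acts):
    """Corrective fee: sum over categories of volume x misconduct risk x harm."""
    return sum(volume * risk * harm for volume, risk, harm in acts.values())

print(f"Illustrative annual corrective fee: ${annual_fee(activities):,.0f}")

Because the fee scales with both the volume of collection and the risk of intentional misconduct, a collector that reduces either quantity lowers its bill, which is the preventative incentive described above.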

Discussion

It is difficult to imagine how something as seemingly trivial as terms of use or privacy agreements could enable outcomes as alarming as threatening the US military or facilitating abductions. More alarming still, the hijacking of an individual's online presence might take years to even notice. But "[T]he significance of privacy policies greatly exceeds the attention paid to them: these documents are binding legal agreements between website operators and their users, and their opaqueness is a challenge not only to Internet users but also to policymakers and regulators" (Wilson et al., 2016, p. 1338). These are, no less, the policymakers influenced by the significant money discussed in the opening of this chapter. The agreements are ignored by users because, like mortgage documents, they tend to be long and difficult to understand (Wilson et al., 2016). It also does not help that the average citizen takes for granted that they live in a society whose governance is backed by the rule of law. How, then, can users be expected to agree to, and thus consent to, technologies whose full workings and outcomes are unknown? More importantly, when individuals surrender their privacy, they also unwittingly surrender their individual freedom, because they have allowed others to dictate their choices.

These questions are not purely legal; they present confluent ethical and policy issues. The public interest in advancing and sustaining national security must be weighed against the private interest in protecting individual rights (Giordano, 2015). The current paradigm is problematic both in how data are collected and because the sheer number of terms of service agreements and their complex verbiage lead many users to scroll straight to checking the box without ever actually reading the text. The result is a collective norm of acceptance and a poor assumption that one's privacy will be protected in a court of law. In reality, any time an individual shares data with a third party, knowingly or not, by agreeing to terms of use agreements, they waive their right to privacy, because this technology is viewed as a public service in the eyes of the law. Recall that societal values are changing and are heavily influenced by the understanding of those examining the issue. How, then, can we be expected to make informed decisions about participating in social media and email platforms when society uses these tools for routine functions?

Every time one downloads an app or is offered a free service on the internet, one is prompted to review a terms of service agreement. The current construct uses a checkbox to indicate agreement for commercial entities to gain access to, collect, and share data. What users often fail to consider is that these agreements grant permissions to a whole host of collectors and harvesters of their data, in ways the user may not understand or that may take years or decades to fully understand. As with bank mortgage documents, the language of these agreements is vague and filled with legal jargon, often crafted to protect the interests of the bank or private industry rather than the rights of the user. Like a mortgage loan, this information must be fully disclosed before consent is given, but the risks are not clearly presented to the user, or even known, for that matter.

Throughout the privacy discussion, particularly when discussing the three grand ethical traditions, there is a need to set and maintain a common frame of reference and to ensure that an individual has the necessary information to make a sound choice, or in other words to consent to an agreement. This means that the person involved should have the capacity to give legal consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved to make an understanding and enlightened decision (Giordano, 2015). The issue is clearly complicated because users often do not understand the potential dangers of such technology and have little incentive to educate themselves: the choice to use such services is routine, while the impacts are incremental and mundane. Apps and the internet essentially create the ultimate marketplace, where companies can sell the same item to different people at different prices based on the profile of each individual consumer (Wheelan, 2002).

Commercial entities and industry argue that collecting this information allows them to deliver better, more tailored services for the user. Although that is a reasonable argument for this type of data collection, it does not reveal everything they do with the data. Much less evident is the fact that the data collected from each user and their household and/or social network allow them to build a detailed virtual profile of everyone who uses their products, and of anyone who might use them in the future. The bottom line is that this technology has, or will have, the ability to create a virtual copy of an individual. Does this virtual copy need protections, or does it have rights? The digital surrogate is a reflection of an individual, but because of where it exists it will be owned by others: whoever owns the algorithm and the physical servers where these data are stored will, technically, own the digital surrogate. Could this surrogate be interrogated and made to reveal attributes of the physical individual? It most certainly could reveal where and how to find them. It most certainly would reveal how one thinks, what one values, to whom one is connected, and how one communicates with that network. There are certainly no protections for such a surrogate, since most laws protect people by way of the physical boundaries that define citizenship and the prevailing laws in place for their protection. This issue is coming, and it may become even more important as people inevitably come to feel that personal privacy will eventually end.

Conclusion

People simply need to know what data third parties want to collect, why they want to collect them, who has access, how they have access, and how these data are used, before making informed choices about whether sharing such data would enhance or detract from their welfare. Different parts of society will have varying views on, and tolerance for, what to share, which would likely necessitate several canonical, situation-dependent vignettes to convey the trade-offs broadly; but in the end it is choice that matters. Ethics sheds some light on the trouble with circular logic and on individual views of privacy, but falls short on how to extend the issue to social and global points of view. Poor understanding of the technology and of its potential dangers is enabled by the same ignorance that makes it difficult to draw lines in the sand for regulation and for privacy violations.

In reality, there is something much larger, more insidious, and potentially sinister at play. The engine of the surveillance economy feeds relentlessly off the 2.5 quintillion bytes of data generated each day. AI will only increase the capability to collect, analyze, and refine the data that will be used to target the very consumers who generate the data in the first place. Shoshana Zuboff (2016), a Harvard professor and one of the most prolific writers on this topic, warns convincingly about the perils of what she calls "Surveillance Capitalism." She argues that it is not just our personal privacy at stake but the fate of the liberal order as we know it.

The lack of transparency is a consistent theme, whether it applies to terms and agreements, the development of algorithms and AI applications, or a basic understanding of how, for instance, Facebook advertising even works (Chin, 2019; Hitlin & Rainie, 2019). In a recent WIRED magazine op-ed, Professors Olaf Groth, Mark Nitzberg, and Stuart Russell (2019) present an interesting alternative to the current Wild West approach to algorithm development. The premise rests on the determination that algorithms can and do have lasting and permanent effects on society. Drawing parallels to the drug approval process, these authors suggest that we consider developing a Food and Drug Administration-style review process for algorithms before they are made publicly available. Such an endeavor could take too much time to orchestrate and implement for a situation of perceived urgency.

This chapter has collected ideas from scholars and practitioners who study various aspects of privacy and put forth several recommendations the individual can consider while the broader discussion of regulatory options is underway. There are no easy options, but education on the issue and collective discussion of the potential risks of these technologies are useful in highlighting the many loose ends that would need to be addressed for a lasting solution to be devised and accepted. Teeing up the idea that a digital surrogate needs protections, or may have rights, may be hyperbolic, but it is a useful example that sheds light on an otherwise nebulous and opaque discussion.

Disclaimer

The views expressed in this chapter are those of the authors and do not necessarily reflect the official policy or position of the US Naval War College, the US Navy, the US Department of Defense, or the US Government.

References

Acquisti, A., Taylor, C., & Wagman, L. (2016). The economics of privacy. Journal of Economic Literature, 54(2), 442–492.
Anderson-Gold, S. (2010). Privacy, respect and the virtues of reticence in Kant. Kantian Review, 15(2), 28–42. doi:10.1017/S1369415400002429
Arango, T. (2018). The cold case that inspired the 'Golden State Killer' detective to try genealogy. The New York Times. Retrieved from https://www.nytimes.com/2018/05/03/us/golden-state-killer-genealogy.html
Baer, M. H. (2017). Pricing the fourth amendment. William and Mary Law Review, 58(4), 1103.
Bambauer, D. E. (2013). Privacy versus security. Journal of Criminal Law & Criminology, 103(3), 667–683.
Bedi, M. (2016). The curious case of cell phone location data: Fourth amendment doctrine mash-up. Northwestern University Law Review, 110(2), 507.
Bedi, M. (2017). The fourth amendment disclosure doctrines. William and Mary Bill of Rights Journal, 26(2), 461–494.
Chin, C. (2019). Most users still don't know how Facebook advertising works. Wired. Retrieved from www.wired.com/story/facebook-ads-pew-survey/


Cranor, L. (2012). Necessary but not sufficient: Standardized mechanisms for privacy notice and choice. Journal on Telecommunications and High Technology Law, 10(2), 273–308.
Curran, D. (2018). Are you ready? Here is all the data Facebook and Google have on you. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2018/mar/28/all-the-data-facebook-google-has-on-you-privacy
Doyle, T. (2009). Privacy and perfect voyeurism. Ethics and Information Technology, 11(3), 181–189. doi:10.1007/s10676-009-9195-9
Elgan, M. (2019). Uh-oh: Silicon Valley is building a Chinese-style social credit system. Fast Company. Retrieved from https://www.fastcompany.com/90394048/uhoh-silicon-valley-is-building-a-chinese-style-social-credit-system
Flick, C. (2016). Informed consent and the Facebook emotional manipulation study. Research Ethics, 12(1), 14–28. Retrieved from http://journals.sagepub.com/doi/pdf/10.1177/1747016115599568
Giordano, J. J. (2015). Neurotechnology in National Security and Defense: Practical considerations, neuroethical concerns. Boca Raton, FL: CRC Press.
Groth, O., Nitzberg, M., & Russell, S. (2019). AI algorithms need FDA-style drug trials. Wired. Retrieved from https://www.wired.com/story/ai-algorithms-need-drug-trials/
Hempel, J. (2016). Social media made the Arab spring but couldn’t save it. Wired. Retrieved from https://www.wired.com/2016/01/social-media-made-the-arab-spring-but-couldnt-save-it/
Henke, N., Bughin, J., Chui, M., Manyika, J., Saleh, T., Wiseman, B., & Sethupathy, G. (2016). The age of analytics: Competing in a data-driven world. McKinsey Global Institute Report. Retrieved from https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/the-age-of-analytics-competing-in-a-data-driven-world
Hitlin, P., & Rainie, L. (2019). Facebook algorithms and personal data. Pew Research Center Internet and Technology. Retrieved from www.pewinternet.org/2019/01/16/facebook-algorithms-and-personal-data/
Hoback, C. (2013). Terms and conditions may apply, documentary film. Park City, UT: Slamdance Film Festival.
Hu, C., & Griffiths, J. (2019). Violence and arrests in Hong Kong’s fiery 13th protest weekend. CNN. Retrieved from https://www.cnn.com/2019/08/31/asia/hong-kong-protests-aug-31-hnk-intl/index.html
James, J. (2018). Data never sleeps 6.0. Domo Blog. Retrieved from http://www.domo.com/learn/data-never-sleeps-6
Larson, C. (2018). Who needs democracy when you have data? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/611815/who-needs-democracy-when-you-have-data/
Lawless, W. F., Mittu, R., Sofge, D. A., & Russell, S. (2017). Autonomy and artificial intelligence: A threat or savior? Cham: Springer.
Ma, A. (2018). China has started ranking citizens with a creepy ‘social credit’ system – Here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you. Business Insider. Retrieved from http://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4
Martin, K. E. (2012). Diminished or just different? A factorial vignette study of privacy as a social contract. Journal of Business Ethics, 111(4), 519–539. doi:10.1007/s10551-012-1215-8


Martin, K. E. (2016). Understanding privacy online: Development of a social contract approach to privacy. Journal of Business Ethics, 137(3), 551–569. doi:10.1007/s10551-015-2565-9
Millward, J. (2018). What it’s like to live in a surveillance state. The New York Times. Retrieved from https://www.nytimes.com/2018/02/03/opinion/sunday/china-surveillance-state-uighurs.html
Moreno, J. D. (2001). Undue risk: Secret state experiments on humans. New York, NY: W. H. Freeman. Quoted in Giordano, J. J. (2015). Neurotechnology in National Security and Defense: Practical considerations, neuroethical concerns. Boca Raton, FL: CRC Press.
Mulvey, J. (2011). ‘Appitypes’: You are what you app, study finds. Tech News Daily. Retrieved from https://www.today.com/news/appitypes-you-are-what-you-app-study-finds-wbna41625761
Nevala, K. (2017). The machine learning primer. Cary, NC: SAS Institute. Retrieved from https://s3.amazonaws.com/baypath/files/resources/machine-learning-primer108796.pdf
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–158.
Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford University Press.
Obar, J. A., & Oeldorf-Hirsch, A. (2018). The clickwrap: A political economic mechanism for manufacturing consent on social media. Social Media + Society. Retrieved from https://doi.org/10.1177/2056305118784770
Plato. (2003). The last days of Socrates. London: Penguin Group.
Rincon, V., & Carlos, J. (2012). Internet and surveillance. The challenges of web 2.0 and social media. Revista Signo y Pensamiento, 31(61), 191–193.
Saey, T. H. (2018). Why using genetic genealogy to solve crimes could pose problems. Science News. Retrieved from https://www.sciencenews.org/article/why-police-using-genetic-geneaology-solve-crimes-poses-problems
Schroeder, R., & Halsall, J. (2016). Big data business models: Challenges and opportunities. Cogent Social Sciences, 2(1), 1–15. doi:10.1080/23311886.2016.1166924
Schwartz, K. D. (2019). Data privacy and data security: What’s the difference? ITProToday. Retrieved from https://www.itprotoday.com/security/data-privacy-and-data-security-what-s-difference
Stanton, S. (2019). Golden state killer/east area rapist probe wins international DNA investigation award. Sacramento Bee. Retrieved from https://www.sacbee.com/news/local/article231679358.html
Sullivan, G. (2017). Cornell ethics board did not pre-approve Facebook mood manipulation study. The Washington Post. Retrieved from https://www.washingtonpost.com/news/morning-mix/wp/2014/07/01/facebooks-emotional-manipulation-study-was-even-worse-than-you-thought/?noredirect=on&utm_term=.9dbc99d9c641
Tiku, N. (2018). Europe’s new privacy law will change the web, and more. Wired. Retrieved from https://www.wired.com/story/europes-new-privacy-law-will-change-the-web-and-more/
Wagstaff, K. (2012). You’d need 76 work days to read all privacy policies each year. Time. Retrieved from http://techland.time.com/2012/03/06/youd-need-76-work-days-to-read-all-your-privacy-policies-each-year/


Wheelan, C. J. (2002). Naked economics: Undressing the dismal science (1st ed.). New York, NY: Norton.
Wilson, S., Schaub, F., Dara, A. A., Liu, F., Cherivirala, S., Leon, P. G., … Sadeh, N. (2016). The creation and analysis of a website privacy policy corpus, the usable privacy project. In K. Erk & N. A. Smith (Eds.), Proceedings of the 54th annual meeting of the association for computational linguistics (pp. 1330–1340). Berlin: Association for Computational Linguistics. Retrieved from https://www.usableprivacy.org/data. Accessed on April 18, 2018.
Wolff, J. (2019). How is the GDPR doing? Slate. Retrieved from https://slate.com/technology/2019/03/gdpr-one-year-anniversary-breach-notification-fines.html
Wright, N. (2018). How artificial intelligence will reshape the global order. Foreign Affairs. Retrieved from https://www.foreignaffairs.com/articles/world/2018-07-10/how-artificial-intelligence-will-reshape-global-order
Zuboff, S. (2016). Google as fortune teller: The secrets of surveillance capitalism. Frankfurter Allgemeine Zeitung. Retrieved from https://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html

Chapter 7

Artificial Intelligence and Moral Reasoning: Shifting Moral Responsibility in War?

Pauline Shanks Kaurin and Casey Thomas Hart

Abstract

It is no longer merely far-fetched science fiction to think that robots will be the chief combatants, waging wars in place of humans. Or is it? While artificial intelligence (AI) has made remarkable strides, tempting us to personify the machines “making decisions” and “choosing targets”, a more careful analysis reveals that even the most sophisticated AI can only be an instrument rather than an agent of war. After establishing the layered existential nature of war, we lay out the prerequisites for being a (moral) agent of war. We then argue that present AI falls short of this bar, and we have strong reason to think this will not change soon. With that in mind, we put forth a second argument against robots as agents: there is a continuum with other clearly nonagential tools of war, like swords and chariots. Lastly, we unpack what this all means: if AI does not add another moral player to the battlefield, how (if at all) should AI change the way we think about war?

Keywords: Military ethics; artificial intelligence; smart weapons; moral responsibility; blame; praise; combatant

Introduction

A common observation in academic, policy, and military circles is that artificial intelligence (AI), with its myriad uses in warfare, is the next Revolution in Military Affairs. In his book Army of None, author Paul Scharre considers whether we will soon have weaponry, robots, and other technologies that can carry out the functions of warfare that we currently associate with human beings, even eventually making moral judgements and killing in war without humans in the loop.i The idea of the “Killer Robot” is pervasive in popular culture and art, and groups like the Campaign to Stop Killer Robots are calling for the banning of lethal autonomous technologies in war. They think these killer robots are more


realistic than a remote science fiction future. Should we be worried? Will AI change the basic character of warfare, or, more seriously, even change the very nature of war itself? Spoiler: we don’t think so. Here is a road map for how we arrive at this position. First, we unpack the existential nature of war, since we must understand the thing that AI is purported to revolutionize. Next, we offer an argument against AI being morally radical: if mundane weapons of war like swords are morally unremarkable, you should think that smart missiles and AI-piloted drones are too. Then, we dive deeper into the prerequisites for agency in general and moral agency in particular, and we show that AI does not meet these requirements. We conclude with the more positive case: even if AI is morally ordinary, how will this technology expand and shift the focus of human actions as it is deployed more widely? In short, it increases the epistemic burden on users of the technology, and it widens the scope of moral responsibility: programmers have an increased moral role on future battlefields.

Identifying AI

Heather Roff (2019) and other scholars make the important point that public discussion conflates AI with automated, autonomous, and machine learning (ML) technologies, so it is important to clarify exactly what we mean by artificial intelligence and what type of AI we are addressing.ii In general, most definitions of AI center on the idea of producing human-like intelligence in machines, especially computers – that is, AI that can appear like or function as human intelligence does. Some definitions of AI even go so far as to claim that the aim is to produce “intelligent agents” who possess “rationality.”iii We will return to this agency claim in more depth later, but for now we should clarify the senses of AI so that we can pin down the target for our present discussion. We will roughly follow Roff’s (2019) distinction between AI being automatic, ML, or Good Old Fashioned Artificial Intelligence (GOFAI).1 Let’s look at each of these in turn.

Often all that is meant by “AI” is that some thing or system will automatically perform some task when certain conditions are met. A thermostat is like this: when the temperature is above or below some temperature setting, the A/C or the heat will kick on. But we struggle to see these processes as intelligent. In the thermostat case, the mechanism is often that some piece of metal bends to complete a circuit, or some quantity of wax under pressure melts to break a circuit. While these are clever engineering feats, they do not constitute intelligent agents any more than a landmine triggered by pressure does.

The next “step up” in terms of AI is machine learning. While there is plenty of nuance to be had here – distinguishing different types of ML algorithms, deep

1 https://www.ethicsandinternationalaffairs.org/2019/artificial-intelligence-power-to-the-people/#.XP6imLulYpU.twitter.


learning, etc. – the core principle of all ML is the same. You present a large training data set to the AI, the larger the better. You then label some of these items as successes and others as failures. And then you identify the variables you want the AI to take into account when determining which states of the variables are indicative of success. For example, you could show a visual ML AI a huge stack of pictures of animals and tag the ones that have sheep in them. The AI will then train on this stack and can be used on a new picture outside of the training set. If the training set was sufficiently broad, and the new picture is relevantly similar to the training set, we expect the algorithm to get the right verdict about whether there is a sheep or not.

It turns out that such algorithms are often incredibly effective. For example, Lello et al. (2018) have designed an algorithm that can predict a person’s height from their genome alone to within a few centimeters.2 And while ML is amazing, it has some glaring flaws. Let us return to the sheep example. Janelle Shane (2018) highlights how brittle and unintelligent ML can look with even minor shifts between the content of the training set and the target.3 Imagine in your mind the picture of a sheep. What do you see? A safe bet is that there are rolling green pastures with white puffy sheep dotting the landscape, perhaps a stacked-stone fence in some English countryside. This is exactly what the training data look like, and it means that if you put a sheep in a different context, the ML algorithm gets very confused. If you put a sheep on a leash, the AI thinks it is a dog. If someone spray paints the sheep orange to make them more visible and less attractive to thieves, the AI tags them as orange flowers. Or, if you have some white rocks on a lush green hillside, the AI is likely to “hallucinate” sheep. In short, the AI’s accuracy is incredibly brittle, lacking the common sense that sheep are not identical to the cluster of concepts they are often associated with. Instead, you can shift one or another of these concepts (paint a sheep orange, put them on a leash, set them in a tree) and a human will still easily identify them as sheep.

This sheep example highlights two of the persistent problems for ML AI. First, ML is a black box: it takes whatever variables you give it and generates an association between those variables and the “successes” labelled in the training data. But this association is absolutely opaque to all human users. Not only does this mean the AI may be inaccurate, but it may base its reasoning on totally irrelevant factors without users even knowing. In extreme cases, this can lead to racially biased AI algorithms that actively perpetuate injustice. The second flaw with ML AI is its brittleness. This is acceptable when identifying sheep, in which case the failures are harmless and comical. But it is unacceptable when identifying, say, military targets. One might think the missing link for ML can be filled in by augmenting AI with common sense and expert knowledge.
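To make the training-and-labeling workflow just described concrete, here is a minimal, hypothetical sketch in Python. It “trains” a toy sheep detector on two hand-picked features and then misfires on an orange-painted sheep, echoing the brittleness discussed above. The feature names, data values, and the simple nearest-centroid rule are all invented for illustration and are not drawn from any system discussed in this chapter.

```python
import numpy as np

# Toy training set: each row is [whiteness_of_animal, greenness_of_background],
# hand-labeled 1 = "sheep", 0 = "not sheep". All values are invented for illustration.
X_train = np.array([
    [0.90, 0.80], [0.85, 0.90], [0.95, 0.75],   # white sheep on green pastures
    [0.20, 0.10], [0.30, 0.20], [0.10, 0.30],   # scenes without sheep
])
y_train = np.array([1, 1, 1, 0, 0, 0])

# "Training" here is just computing the mean feature vector (centroid) for each label.
centroids = {label: X_train[y_train == label].mean(axis=0) for label in (0, 1)}

def predict(features):
    """Return the label whose centroid is closest to the input features."""
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

print(predict(np.array([0.90, 0.85])))  # white sheep on grass  -> 1 (detected)
print(predict(np.array([0.20, 0.85])))  # orange-painted sheep  -> 0 (missed: brittle)
```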

2 https://www.genetics.org/content/210/2/477, p. 483.
3 http://nautil.us/blog/this-neural-net-hallucinates-sheep.


The final “step up”, or at least a lateral step, for AI is GOFAI, which was dubbed “old fashioned” by John Haugeland (1985).4 Rather than finding correlations, GOFAI represents claims symbolically and manipulates those symbols to draw conclusions. This means that conclusions are transparent, and knowledge can be expressed in ways that are context-sensitive enough to avoid brittleness concerns. However, GOFAI faces challenges of its own. First, it is incredibly difficult to itemize the pieces of knowledge required to reason about the world. And second, even with those pieces of knowledge encoded into a system, it is no small task to reason efficiently over the huge body of knowledge that must be present. And even when both of these hurdles are cleared, the AI is only as good as the knowledge that is coded into it. There is no emergent knowledge above and beyond the input and the ability to combine those building blocks. This does not mean that GOFAI does not reach new or surprising insights from a human user’s perspective: it just means that each of those insights can be traced back to inputs supplied by human programmers.

All of the above notions of “AI” can work together: we can have automatic processes by which ML algorithms tag visual inputs that are thereafter reasoned about using a symbolic GOFAI. The climate in AI seems ripe for this kind of merging, as companies like Apple and Microsoft (AI2) follow dedicated GOFAI companies like Cycorp, Inc. in acknowledging the shortcomings of purely statistical AI.5 Doug Lenat (2019) gives the helpful example of understanding Romeo and Juliet.6 ML can quickly and efficiently pick out certain facts from the play: how many people wore masks, the sequence of the major plot points, the names and general moods of characters who utter certain lines of dialogue. But if you try to formulate ML algorithms to determine what Juliet thought Romeo would believe when he came across her body after she had drunk the potion, you are doomed to fail. On the other hand, GOFAI has the resources to deal with nested belief contexts, as long as the language used is expressive enough.

The above serves as a crash course on the differences between, and respective strengths and weaknesses of, automation, ML, and GOFAI. With a clearer understanding of what is meant by “AI”, we will next turn to the nature of war. Then, with those two pieces in hand, we can explore more carefully what role, if any, AI can play in the moral landscape of warfare.
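Before turning to war, a minimal, hypothetical sketch may help contrast the symbolic style described above with the statistical one: facts and if-then rules are written down explicitly, a simple forward-chaining loop derives new conclusions, and every conclusion can be traced back to the human-supplied inputs. The facts, rules, and the tiny inference loop are invented for illustration and do not represent any particular GOFAI system.

```python
# Hand-encoded facts and rules: the system "knows" exactly what a programmer wrote down.
# Facts are (subject, predicate) pairs; all content here is invented for illustration.
facts = {("Dolly", "is_woolly"), ("Dolly", "says_baa"), ("Dolly", "on_leash")}

# Each rule: (set of required predicates, predicate to conclude).
rules = [
    ({"is_woolly", "says_baa"}, "is_sheep"),
    ({"is_sheep"}, "is_livestock"),
]

derivations = {}  # conclusion -> the premises it came from, i.e. a transparent trace

changed = True
while changed:  # forward chaining: keep applying rules until nothing new follows
    changed = False
    for premises, conclusion in rules:
        for subject in {s for s, _ in facts}:
            if all((subject, p) in facts for p in premises) and (subject, conclusion) not in facts:
                facts.add((subject, conclusion))
                derivations[(subject, conclusion)] = [(subject, p) for p in premises]
                changed = True

print(("Dolly", "is_sheep") in facts)          # True, leash or no leash
print(derivations[("Dolly", "is_livestock")])  # every conclusion traces back to encoded inputs
```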

War as Existential, Not Just Instrumental

In order to address the context in which the questions of moral reasoning, autonomy, and agency arise relative to AI, we need to first consider the character of warfare within which AI and humans operate. Carl von Clausewitz famously notes, “War is politics by other means”; in other words, war is an

4 Artificial Intelligence: The Very Idea.
5 https://allenai.org/.
6 https://www.forbes.com/sites/cognitiveworld/2019/07/03/what-ai-can-learn-from-romeo–juliet/#2aea83d71bd0.


instrument to achieve certain ends and, in itself, has no intrinsic value or meaning apart from that instrumental use.iv Just War theorist Michael Walzer sees war as an instrument of the protection and survival of the political community, while the Realist school in international relations sees war as an instrument of state interest and power.v For each of these views, war is a tool for certain human, especially political, ends. Accordingly, machines (including AI) can and already certainly do engage in destruction and killing on a large scale, sometimes without the direct supervision or agency of humans. Paul Scharre documents many of these technologies, including platforms that can do targeting, that can adjust and seek out targets after launch, and that can even recommend targets or courses of action based upon data processing and analysis.vi

The instrumental view tells us that violence, death, and destruction are necessary for war as a means to compel the adversary towards our political will. However, that is not sufficient. We do not go to war for every political end, so which things warrant a resort to war? If war is merely an instrument of politics, that is not sufficient to motivate and sustain humans in the experience of warfare, to engage in killing and destruction on a large scale with considerable risk to their own interests, values, projects, and future. This is the remarkable behavior that we see reflected in the art, literature, memoirs, and memorials that humans produce about their experiences in war. It requires values that are deeper and tied to both individual and collective identities.

Consider a few examples. In World War II the value for the Allies involved was the preservation of democracy against fascism, as well as addressing the violations of territorial sovereignty of nations invaded and occupied by the Axis powers and the survival of those political communities. In contrast, US involvement in Vietnam revolved around the value of checking the spread of Communism and self-determination for the South Vietnamese people. In the First Gulf War the values were the restoration of the sovereignty of Kuwait and the punishment and containment of Saddam Hussein’s regime. Meanwhile, the 2003 intervention in Iraq was oriented (initially at least) around the value of protecting the community from the threat of weapons of mass destruction, then regime change to address humanitarian abuses. In the case of World War II and the First Gulf War, there is a certain moral clarity, but with Vietnam and the 2003 Iraq War we see moral conflicts and ambiguities, reflected in the political and social resistance to these conflicts, even though they had political aims.

These examples show that both political ends and values are a part of why and how societies and individuals wage war; war, therefore, has existential dimensions. War is existential in the sense of preserving political communities that make our physical and moral existence and flourishing as human beings possible. Michael Walzer, a contemporary Just War thinker, notes, “The defense of rights is a reason for fighting. I want to now stress…that it is the only reason.”vii War is a public action of the State to preserve political community against threats, especially existential ones.


War is also existential in the sense of the philosophical tradition of existentialism concerned with the meaning of human life. War is a meaning-making enterprise and reflects the values of those that wage it, as well as providing meaning and values for the society and the warfighters. Part of that meaning is the moral justification of war and the moral parameters on the conduct of wars, as reflected in Just War thinking. Just War thinking is concerned with the moral values of justice and peace, understood as essential to human life and political communities. However, this is also broader than just rights violations of individuals and communities; we are trying to create moral coherence and meaning, which requires certain practices, rituals rooted in shared understandings, and values in communities of practice.viii We not only wage war, but we memorialize and create meaning from war, and war gives societies and individuals some sense of meaning and identity aside from whatever specific political ends are involved.ix

Given that war is not simply instrumental, part of our inquiry will be to what extent it is possible for AI to meaningfully participate in the existential nature of war, not just its instrumental aspects. That will involve thinking about moral agency, and ultimately the moral reasoning which produces moral agency.

Continuum Argument

Before we tackle moral reasoning and agency, it is worth looking more deeply at the idea that AI is so very different from other technological advances in warfare that have preceded it. The claim that AI will revolutionize warfare hinges on AI being markedly different from “traditional” technological advancement. Paradigm types of traditional technology include swords, bows and arrows, rifles, and a myriad of nonsmart vehicles: helicopters, jeeps, etc. If we can establish that these tools of war are continuous with AI technologies in the ways that count, then this gives us good reason to think that the moral landscape of war will not be turned upside down by the integration of AI onto the battlefield. Here is the argument from analogy:

(1) Mundane technologies used in war are relevantly similar to AI technologies that will be used in war.7
(2) Traditional technologies used in war are morally ordinary; they enable human actors to do good and bad things, but they do not shield human actors from moral responsibility.
(3) So, AI technologies are morally ordinary.

Premise (1) is the contentious one, so we will start with the less controversial second premise and then defend the first by looking at a variety of cases.

7 We use ‘mundane’ here as a technical term to distinguish these technologies from the smart AI technologies under scrutiny.


Why think that swords, helicopters, and other mundane technologies are morally ordinary? Consider a very simple case: a person wants to kill another, and does so with their bare hands. On the other hand, suppose one does this with a simple weapon, like a sword. How should we view the bare-handed versus the sword killings? It is hard to see a moral difference that the mode of killing makes. The sword killing may be quicker and easier, which could have moral consequences: perhaps an unjustified killing with one’s bare hands is worse because the guilty party had more opportunity to change their mind and relent. Or, perhaps the lethality of a sword makes an accidental killing more likely, and one may be less blameworthy for an accidental unjustified killing than for an intentional one. But in both cases, it seems that a killing that is justified without a weapon is equally justified with a sword. And this line of reasoning generalizes to other mundane technologies: the situations in which killing aided by a bow and arrow, or a jeep, or a helicopter is justified will almost entirely coincide with the situations in which one is justified in killing without any technology whatsoever.

There is one realm of counterexamples to this principle worth discussing. Perhaps having technology presents some other courses of action that were previously inaccessible. For instance, I may be justified in using my bare hands in self-defense, but not in running someone over with a jeep in self-defense, for the reason that the jeep may afford me the ability to escape without any bloodshed whatsoever. To be more precise, then, this is the principle we advocate:

Killing with Technology (KT) Principle: For any situation in which an agent may choose to perform a killing without the aid of technology, and a counterfactual situation in which they have the same set of choices except that the killing would be performed with the aid of mundane technology, the moral status of the choice to kill is the same.

The only plausible counterexamples we can construct are either incredibly ad hoc (e.g., God asks one to kill another but forbids the use of technology) or focus on the “efficiency” of the killing. Perhaps it is marginally worse to kill someone slowly without technology based on how long it will take. Or maybe there is a callousness in killing quickly. But these considerations seem minor: they are reasons to think a given killing may be slightly worse or better, but they are not considerations that might “flip” a killing to be forbidden when it was otherwise obligatory.

The next challenge is to defend premise (1). In other words, why should we think that KT applies to AI killings as well as to killings by mundane technology? The previous discussion made the first of many connections that we can draw on the way from nontechnology, to mundane technology, to “smart” AI technology:



• Attacking someone with your bare hands/feet/etc. is not morally relevantly different from using a rock, sword, or some other simple handheld implement. The handheld tools may augment one’s strength, but one is still equally responsible for the consequences of any acts committed using these tools.


• Take a step back and compare, say, using a sword to firing an arrow or throwing a javelin. Here there is a greater distance between combatants, but it would be strange to say this distance created a moral difference. At best, this distance creates a higher degree of variability in outcomes, and therefore agents may be less certain of the outcome of their actions than in a hand-to-hand situation.
• We can extend the range of combat a great deal. Modern snipers are able to take down targets in excess of 2 km, with at least one recorded kill at over 3.5 km.8 At these distances, soldiers must be highly trained to account for effects of wind, optical illusions, and anticipating movements of their target, among many other factors. Regardless, it seems uncontroversial that snipers at these extreme ranges are morally responsible for their actions in the same way that, say, close range archers would be.
• Compare the above sniper shot to firing a smart missile. In the smart missile case, perhaps some of the strategizing is offloaded to the programming of the missile itself. But as we discussed earlier in this chapter, it would be misleading to treat this strategy as organically flowing from a moral reasoning process on the part of the AI: instead, the AI is executing a series of deterministic responses to inputs based on its programming.
• And at the last link, compare the act of firing a smart missile to that of sending out a swarm of armed drones that display swarming behavior. Again, this behavior may be very complex, but it originates in code that deterministically generates the complex behavior.

In each of the above links, we compare two actions whereby an agent engages in some sort of killing. And, while the distance, both physical and epistemic, may increase between the actor and the resulting killing by technology, we cannot draw a principled moral distinction between any of these pairs of actions. AI may generate new ways of killing, but it does not present a way to screen off agents from moral responsibility. By this we mean that the weapon cannot shield the user from moral responsibility. There are at least two ways this could happen. First, the weapon could itself be a bearer of responsibility, so that the human user might say “It is not my fault, the weapon chose to perform the act in question.” Second, the weapon might take the human user out of the loop by connecting up some other agents, say, the programmers or manufacturers of the weapon. Neither of these sorts of screening off is very compelling for any of the above weapons.

You might be hesitant about this style of argument. Arguments from analogy, and Sorites-style arguments in particular, can be deceptive. However, we have done our best to detail the continuum to ensure there is no sleight of hand. In the classic case of using a Sorites to show there is no baldness, one gets stuck with the obviously ridiculous claim that a person with a full head of hair is just as bald

8 https://www.theglobeandmail.com/news/politics/canadian-elite-special-forces-sniper-sets-record-breaking-kill-shot-in-iraq/article35415651/.


as one who has no hairs at all. But there is no such silly conclusion here: We are left with the claim that pushing a button to send a swarm of drones to kill someone is not morally all that different from using a gun, sword, or one’s hands. This argument merely demystifies AI. It is not going to magically remove human agents from the moral loop.

Moral Reasoning

If the above argument holds, it seems straightforward to say that AI is not a threat to human moral agency in war. However, to further buttress this argument we should consider the nature of moral agency in war, and what it would take for AI to reach that threshold. While we are skeptical that the threshold is or will be crossed, it is important to know what and where that threshold is. How much like humans would AI have to be to be considered moral agents?

In order for AI to be moral agents, they must first be able to demonstrate moral reasoning. According to Henry S. Richardson, moral reasoning is a type of reasoning directed at deciding what to do and, when successful, issuing in an intention.x This is not the kind of abstract reasoning about what morality is or what morality requires of us, but rather a branch of practical reasoning about what to do and how to do it. Intention is required for moral (and often legal) responsibility, which requires certain mental states, desires, and deliberation. What kind of moral reasoning is required for the kinds of moral autonomy and agency we associate with moral persons and the responsibility and accountability we ascribe to moral agents? It includes deliberation about different ends, effects, and reasons relative to specific contexts, and very often judging and responding to the intentions of other agents; but this is not just the deductive application of principles to a context; it involves moral judgement. Moral judgement (as a particular kind of moral reasoning) requires consideration of a wide range of moral considerations and elements that arise in a given case (as moral, as opposed to not morally relevant) and employing moral principles and ideas in a “sound” way. There are standards of good and bad moral judgement to be considered, not just the mechanical, logical application of rules and principles.

In considering moral reasoning, different philosophers have differing conceptions that connect to their moral approaches and views of agency. We consider two, with the intention of seeing more of what they have in common and what the basic outlines of moral reasoning might be. Aristotle, who is associated with a Virtue Ethics approach to moral questions, thinks about moral reasoning in terms of prudence or practical reasoning, which is required to deliberate and make choices relative to the moral virtues. For him, both intellectual and moral virtues are required for the moral life, achieved through the orientation of habit and proper desires. In addition, having the proper motivations and desires rooted in virtue will impact how effective one’s moral reasoning is. The virtuous person will just be better at prudence/moral reasoning because they are oriented toward the proper ends.xi


Immanuel Kant, who is associated with a deontological approach, requires reason (a priori, independent of experience) to develop his ethical principles – the Categorical and Practical Imperatives – without contingency and with universality. While the application of the principles may use or make reference to the context, the principles themselves can be rooted in reason alone to ensure objectivity and universality.xii

Both of these views require some understanding of reasoning or reason that is necessary to be a moral agent and that marks moral agency apart from other notions of being (animals, machines) or personhood (humans with compromised or underdeveloped capacities – the mentally ill, the developmentally disabled, children, and in some accounts women and persons enslaved or of color). Both of these views also require an acknowledgement of morality as embedded values, but we also require an ability to articulate and justify reasons to a community of practice in which those embedded values have understanding and create meaning.

One question that will be critical in assessing moral reasoning is how AI fares in the cases where the “standard” rules and principles alone seem insufficient for guiding what one should do. This is where the moral judgement that is part of moral reasoning (rooted in deliberation) is so critical. To illustrate, consider two examples.

First, Stanislav Petrov, dubbed “The Man Who Saved the World,” was a Lieutenant Colonel in the Soviet Air Defense Force in 1983 when the new air defense system conveyed that there was a high certainty that American ICBMs (up to five) were incoming. Despite this information, he hesitated. He did not immediately advise his superiors, as military protocol at the time would suggest, so that a response could be made. He waited. As it turns out, the message was a malfunction in the relatively new system and not incoming American missiles. When pressed on why he did what he did, Petrov said he “instinctively” knew that it was not right.xiii He also noted that it was his civilian training that led to his decision; if others (military personnel) had been there that day and followed orders as soldiers were trained to, the authorities would have been called and strikes likely would have happened.

Second, in 1968 at the village of My Lai, Vietnam, American pilot Hugh Thompson put his helicopter down between Charlie Company, commanded by Lt. William Calley, and noncombatants in the village, where Calley’s men were in the process of slaughtering the villagers. Thompson ordered the gunner to fire on Calley’s troops if they refused to stop, which facilitated the evacuation of some of the villagers.xiv Thompson was eventually recognized and honored for his actions that day; at the time he was viewed as a pariah and as having violated the military virtue of loyalty to one’s own forces.

In both cases, the circumstances that actually presented themselves were significantly different from the expected ones. In the first case, the judgement of the machine was wrong, and the human was required to override that judgement with his own and then act on that judgement. In the second case, there were competing (in this case human) judgements about what ought to be done and questions and disagreement afterwards about who was right, which judgments and actions were


correct, and the part moral agency played. These are both borderline cases of the sort that are commonly faced in warfare. The critical question here is to what extent AI can innovate and adapt in reliable and trustworthy ways that can be articulated after the fact to provide justification, in ways that are better than what humans can do. Humans do get it wrong. Can machines do better? If they can, then would we say that they have the power of moral judgement and, therefore, moral autonomy, in ways that are analogous enough to humans to count as moral agents?

Moral considerations, as we see in the above cases, often conflict. So you need both moral reasoning and moral judgement oriented towards a specific action and context to generate moral agency. Rob Sparrow, one of the founding members of the International Committee for Robot Arms Control, notes that while robots and other machines can identify things, they cannot necessarily identify and interpret human intent, like when someone fakes surrender (Scharre, 2018, p. 259). Paul Scharre notes that “Making life-or-death decisions on the battlefield is the essence of the military profession…Making judgment calls in the midst of uncertainty, ambiguous information and conflicting values is what military professionals do,” and that autonomous weapons are a direct threat to the military profession itself and to the idea of human moral agency in warfare (Scharre, 2018, p. 293).

Moral Agency

While we expressed skepticism that AI is capable of the kind of moral reasoning and judgement that is necessary for and constitutive of moral agency, we now turn more directly to the issue of moral agency itself. As a helpful checklist, we will use List and Pettit’s (2011) three criteria for agency.9 They use these criteria to argue that groups of individual agents are themselves agents, but the conditions serve equally well to determine when robots or AI count as agents. To that point, the central illustration List and Pettit use is whether we should attribute agency to a robot traversing a table and picking up cylinders. They list three conditions (paraphrased):

Representation: Agents must have some representation of the way the world is.
Motivation: Agents must have some desires and/or commitments about the way the world ought to be.
Action: Agents must be able to act based on their motivations and representations to (attempt to) realize their desired states of affairs.

Let’s consider a simple example: when Juliette promises to help Elizabeth clean up their toys, and then does so, she has satisfied the above conditions. First, she needed some understanding of the situation: there was a room with toys strewn about. Second, she was able to form commitments on the basis of her desire to clean the room, and this is evidenced by her ability to credibly promise to help clean the room. Lastly, she is able to act based on these conditions.

9 List and Pettit (2011).


We can see that these conditions are necessary for agency by imagining how the toy example would change if any were not satisfied. If Juliette had no understanding of the state of affairs, then she would have no basis on which to promise or to act. Or, if she had no desire to follow through on her promises (a trait children are known to suffer from on occasion), then it would be foolish to engage in a promise agreement with her, and her subsequent actions would be random rather than driven in the way that an agent’s actions are supposed to be guided. Finally, if Juliette were able to see the state of the world and judge that state to be misaligned with her desires, but were unable to act on this tension, then she would not have agency. Instead, such an agent would seem to be trapped in their body, merely watching the events unfold rather than orchestrating them.

With this understanding of agency in hand, what can we say about AI? We can plausibly say that AI has representations of the world: ML-based systems can loosely be seen to judge that the world has certain features (like that some picture contains a sheep), and GOFAI systems have symbolically encoded knowledge that can reasonably be thought of as representations of the world. The second condition is much more dubious. In what sense can AI be said to have motivational states? We see no basis on which an ML algorithm is motivated. One might think that GOFAI would meet this standard, since one could symbolically encode anything you like into an AI system. For instance, one could program “Life is valuable and should be preserved when possible” into such a machine. Unfortunately, this is not sufficient for establishing a motivation for the AI. Motivational states have a mind-to-world direction of fit, and all knowledge encoded in a symbolic AI goes in the opposite direction. To a GOFAI system, “Chihuahua is a breed of dog” is just as motivational as “Life is valuable and should be preserved”, which is to say, not at all. Most importantly, every claim that we may loosely attribute as knowledge for an AI can be traced back to human inputs. For ML, these inputs are the training data sets and the parameters the AI is allowed to consider relevant. For GOFAI, the knowledge added is straightforwardly attributable to the programmer who added it. The representations are not chosen or endorsed by the AI in any way; they are rather just complicated ways in which an AI behaves deterministically. These are not agents with the capacity to act like human agents.

It is worth noting that so far we have only discussed whether AI can be conceived of as agents. But the debate in this chapter places even stronger requirements on moral agency. Agency is necessary but not sufficient for moral agency. One might be an agent without being subject to anything morally binding. However, anything that is subject to morally binding demands must be an agent. Given our skepticism that AI meets the bar for agency, it is even more doubtful that AI meets the requirements for moral agency. Moral agency also requires ideas of responsibility and accountability to a community of practice with embedded values and shared meanings; this has to include the possibility of moral learning and revision of moral views in light of experience (failure), and also communal feedback and sanction relative to the moral reasons that we articulate as the basis of our actions. On this account, moral reasoning is not a static thing, but must be constantly evolving and adapting in


response to (1) outside responses and (2) internal critical thinking and reflection. It seems reasonable to argue that AI is capable of the first, but what about the second? Can it isolate, question, and reassess its own “assumptions,” since these are given as empirical facts and arguably something not chosen? Not only does AI fall short of moral reasoning and agency currently, but there is also little reason to think this will change in the near future. The main developments in ML AI focus on improved computing power and more sensitive and complex analyses of even larger data sets. This gives us reason to think that these algorithms will deliver faster and better results, but it does not fundamentally change the way that the AI functions with respect to motivation. On the GOFAI front, there is a wider appreciation that we should tackle building more and larger knowledge bases and provide ways to apply commonsense reasoning over this store of knowledge. But this does not change the fact that GOFAI systems only know what is encoded in them, and this knowledge is not capable of motivating an AI system to act.

What Role Does AI Play in War?

We have spent most of the chapter discussing what AI is not. It is not a moral substitute or shield for combatants, nor can AI serve as a combatant itself. And the continuum argument shows that AI will not revolutionize the moral landscape of war. That said, it would be silly of us to deny that there is something special about AI. AI allows us to think at scales and distances that were previously impossible, leveraging computing power and access to data. This undoubtedly affects the landscape, even if it does not revolutionize it. In this section, we will discuss a few of the moral ramifications of our advancing technology in war.

AI is far more complex than many of the mundane technologies. No one should be surprised at what happens when someone fires an arrow, or swings a sword, or even fires a bullet. But the behaviors exhibited by AI result from a complex series of algorithms, such that it is not obvious to the layperson how the machine will respond. This complexity places a much higher burden on those who operate the technologies. For example, suppose someone uses a smart missile whose visual recognition algorithms are trained on data featuring desert encampments. It is very plausible that this missile would fare poorly in another sort of geographic location, say a jungle or near large bodies of water. If the user puts that missile in the wrong environment and this leads to bombing the wrong target, then the user is at fault for failing to have a proper understanding of the weapon. Thus, AI introduces greater epistemological constraints on military decision-makers. As weapons get more sophisticated, so must those who are morally responsible for using them. It is worth noting that this argument does not only apply to AI: any increase in the complexity of a weapon places a corresponding demand on its users.
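As a rough illustration of what this added epistemic burden could look like in practice, the sketch below shows one simple, hypothetical safeguard an operator might apply before trusting a model outside its training environment: compare incoming inputs against summary statistics of the training data and flag anything far outside that range. The feature names, data values, and threshold are invented for illustration; real systems would require far more rigorous validation.

```python
import numpy as np

# Summary statistics of the (hypothetical) training environment, e.g. desert scenes.
# Each row is [sand_tone, vegetation_density] for one training image; values are invented.
train_features = np.array([
    [0.85, 0.10],
    [0.80, 0.15],
    [0.90, 0.05],
    [0.75, 0.20],
])
mean, std = train_features.mean(axis=0), train_features.std(axis=0)

def out_of_distribution(features, threshold=3.0):
    """Flag an input whose z-score on any feature exceeds the threshold."""
    z = np.abs((features - mean) / std)
    return bool((z > threshold).any())

desert_scene = np.array([0.82, 0.12])
jungle_scene = np.array([0.10, 0.95])

print(out_of_distribution(desert_scene))  # False: resembles the training data
print(out_of_distribution(jungle_scene))  # True: the model's verdict should not be trusted here
```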


However, AI technologies are often more than just complicated. Sometimes they are entirely opaque. As discussed earlier, many ML AI outputs are black boxes. Some neural net will be given access to a large swathe of data and will generate a mathematical relationship between some set of factors and a judgment placed on some new piece of data (e.g., whether a picture has a sheep in it). Users of this technology have two critical moral responsibilities, above and beyond the normal moral responsibilities regarding action in warfare. First, there is an epistemic requirement: they must know broadly how such neural nets operate. This means they need to know that the outcome is heavily influenced by the training data set, and that the reliability of the outcome will depend on which variables were under consideration as well as how closely the target case resembles the training data. Second, users of black box AI must come to grips with using outputs that are not explainable. To express this, let us consider a scenario:

Anonymous Texter: A military decision-maker named John receives a text message from an untraceable, anonymous cell phone user. The message includes details about some potential military threat. A few days later, it becomes clear that this intelligence was correct. This process repeats many times. John receives hundreds of texts from this anonymous source, and the source is correct almost every time, with a 99% success rate. However, up to that point, John has never been able to act on any of that intelligence. Things changed on August 2, though, when the anonymous texter alerted John that a nearby suspected person of interest would commit a devastating bombing in the next 24 hours. John has one, and only one, chance to take out this target. Should he do so on the basis of this text?

This situation is a difficult one. John has a reliable method for forming beliefs based on an extensive track record with the texter. But these tips from the texter are entirely opaque. Suppose John responds to the tip, takes out the target, and it turns out the intelligence was wrong. How could one defend one’s actions based on an anonymous tip without any deeper justification?

We will not pretend to resolve the problem of opaque AI. There are nuanced arguments in favor of using or refraining from using these sorts of technologies as decisive in our decision-making. However, a few things are clear. First, the increased access to these opaque AI technologies necessitates a clearheaded conversation about the extent to which we are comfortable relying on these sources. Second, if there are ways around this problem, we should find them. For example, DARPA is presently engaged in the Explainable AI (XAI) effort.10 AI that reveals the underlying reasons for its actions seems clearly morally preferable to black boxes.

The last moral shift that AI brings to warfare is that it expands the number of agents who are morally responsible for actions that utilize technology. This issue is not entirely new. There is legal and ethical discussion of the liability of gun

10 https://www.darpa.mil/program/explainable-artificial-intelligence.


manufacturers for gun deaths: while the manufacturers do not pull the trigger, they do provide sophisticated mechanisms by which an agent can use a gun in morally significant ways. The same argument is greatly expanded for AI systems. As previously discussed, AI can be thought of as reasoning, acting, and deciding, but each of these is really just the exercise of the thought processes coded into it by programmers. As such, it is not only the responsibility of the end user of the weapon to understand how it works, but it is also the responsibility of the builders of the weapon to program it ethically. There are boundaries on both sides. Surely the end user is not responsible for understanding every part of the code, and surely the programmer is not responsible for every possible way that a user may deploy the weapon.

To review, while AI does not turn the moral landscape of war upside down, it certainly alters that landscape. AI is more epistemically demanding of its users. AI also means that we will have to place increased focus on how to use insights from black box AI systems, and we may need to seek to develop AI in ways that are more transparent. Lastly, the network of agents involved in an AI decision is expanded: programmers and users alike will bear responsibility for AI-driven actions.

Conclusion

Christopher Coker’s observation from his book Warrior Geeks is apt for concluding our discussion:

This is where we stand today. Some wish to purge war of its existential and metaphysical elements and render it wholly instrumental….Others wish soldiers to remain in touch with their ‘humanity’ and the spiritual dimension of ‘being’….The pity of it all is that anyone should think it is an either/or. (Coker, 2013, p. 291)

In our view, AI at the present time and for the foreseeable future functions as another tool in war, which may help human agents but must still be subject to human moral reasoning, judgment, and agency. That said, AI will require more epistemic rigor and perhaps a deeper kind of moral reasoning, and ultimately moral agency, from humans as moral agents functioning in war. As Paul Scharre noted, a certain kind of moral reasoning and agency is necessary for the moral and professional judgement and discretion that we expect in the military profession; AI may require more discussion and attention on exactly what this means relative to any technological partners that are part of war.

Disclaimer

The views expressed in this presentation are those of the author and do not necessarily reflect the official policy or position of the US Naval War College, the US Navy, the US Department of Defense, or the US Government.


Notes

i. Scharre (2018).
ii. Roff (2019).
iii. Roff (2019, p. 4).
iv. von Clausewitz (1976, Book I).
v. Walzer (1977).
vi. Scharre (2018).
vii. Walzer (1977, p. 72). See also Chapter 16.
viii. MacIntyre (1984).
ix. See Gray, J. G., Warriors: Reflections on men in battle.
x. Richardson (2018).
xi. Aristotle.
xii. See Kant, I., Grounding for the metaphysics of morals.
xiii. See https://www.nytimes.com/2017/09/18/world/europe/stanislav-petrov-nuclear-war-dead.html.
xiv. See Bilton and Sim (1992, pp. 136–139).

References

Aristotle. Nicomachean ethics. Books II, III and VI.
Bilton, M., & Sim, K. (1992). Four hours in My Lai. New York, NY: Viking Books.
von Clausewitz, C. (1976). In M. Howard & P. Paret (Eds. and Trans.), On war. Princeton, NJ: Princeton University Press.
Coker, C. (2013). Warrior geeks: How 21st century technology is changing the way we fight and think about war. New York, NY: Oxford University Press.
Haugeland, J. (1989). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.
Lello, L., Avery, S. G., Tellier, K., Vazquez, A. I., de los Campos, G., & Hsu, S. D. H. (2018). Accurate genomic prediction of human height. Retrieved from https://www.genetics.org/content/genetics/210/2/477.full.pdf
Lenat, D. (2019). What AI can learn from Romeo & Juliet. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/07/03/what-ai-can-learn-from-romeo–juliet/#1bb946d71bd0
List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford: Oxford University Press.
MacIntyre, A. (1984). After virtue. Notre Dame, IN: University of Notre Dame Press.
Richardson, H. S. (2018). Moral reasoning. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy (Fall 2018 ed.). Retrieved from https://plato.stanford.edu/entries/reasoning-moral/
Roff, H. (2019, June 7). Artificial intelligence: Power to the people. Ethics and International Affairs, 33(2), 127–140. Retrieved from https://www.ethicsandinternationalaffairs.org/2019/artificial-intelligence-power-to-the-people/#.XP6imLulYpU.twitter
Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. New York, NY: W. W. Norton and Company.
Shane, J. (2018). This neural net hallucinates sheep. Retrieved from http://nautil.us/blog/this-neural-net-hallucinates-sheep
Walzer, M. (1977). Just and unjust wars. New York, NY: Basic Books.

Chapter 8

Ethical Constraints and Contexts of Artificial Intelligent Systems in National Security, Intelligence, and Defense/Military Operations

John R. Shook, Tibor Solymosi and James Giordano

Abstract

Weapons systems and platforms guided by Artificial Intelligence can be designed for greater autonomous decision-making with less real-time human control. Their performance will depend upon independent assessments about the relative benefits, burdens, threats, and risks involved with possible action or inaction. An ethical dimension to autonomous Artificial Intelligence (aAI) is therefore inescapable. The actual performance of aAI can be morally evaluated, and the guiding heuristics to aAI decision-making could incorporate adherence to ethical norms. Who shall be rightly held responsible for what happens if and when aAI commits immoral or illegal actions? Faulting aAI after misdeeds occur is not the same as holding it morally responsible, but that does not mean that a measure of moral responsibility cannot be programmed. We propose that aAI include a “Cooperating System” for participating in the communal ethos within NSID/military organizations.

Keywords: Artificial intelligence; autonomous weapons; morality programming; ethical AI; communal ethos; ethics; military ethics

Introduction Decision technologies and artificially intelligent (AI) systems are being considered for their potential utility in national security, intelligence, and defense (NSID) operations (Giordano, Kulkarni, & Farwell, 2014; Giordano & Wurzman, 2016; Hallaq, Somer, Osula, Ngo, & Mitchener-Nissen, 2017). Given this trend, it becomes increasingly important to recognize the
capabilities, limitations, effects, and need for guidance and governance of different types of AI that can – and will – be employed in NSID settings. Perhaps one of the most provocative, if not contentious, issues is the development and use of autonomous AI (aAI) to direct targeting and engagement of weapons systems (Galliott, 2015). To be sure, accurate identification, selection, and engagement of targets involve acquisition, discrimination, and parsing of multifactorial information. Given that the end-goal of engagement is reduction or elimination of the targeted threat, it is critical to address and assess mechanisms and processes of signal discrimination and selection in terms of the technical effectiveness and the rectitude of action(s). If accurate target identification and threat-reduction are the defined ends (and regarded “goods”) of these systems’ operation, then decisional accuracy (i.e., identification-discrimination-action; or more simply stated: “acquire-aim-fire”) represents an intrinsic aspect of any such functions (Arkin, 2010; Canning, 2006; Lin, Abney, & Jenkins, 2017). For an aAI system, the decision to engage or not engage axiomatically obtains and entails the necessity to independently parse information about relative benefit, burden, threat, and risk domains that are contingent upon potential action or inaction (Arkin, 2010; Lin, Abney, & Bekey, 2007). Thus, missional effectiveness can be seen as involving both technical and ethical dimensions. By pondering if and how an aAI system could be ethical (Arkin, 2009; Asaro, 2006; Wallach, Allen, & Franklin, 2011), or asking why ethical aAI is needed (Canning, 2006; Danielson, 2011; Sharkey, 2011), the fundamental worry is about who shall be rightly held responsible for what happens if and when aAI errs and commits acts that are regarded as “wrong”. Posing these issues and asking such questions of ethics are essential to understand – and establish – standards of acceptable behavior in the contexts in which humans live. Ever more, such discussions must acknowledge the growing role(s) of technology. The hoped-for promise of AI lies in humanity’s recognition of these systems as extensions and enhancements of human capability and activity, but rarely, if ever, as a complete replacement for human beings and their involvement in decisions that establish vital contingencies. Yet, the trajectory toward specific forms of aAI delegates at least some of this decisional and actional responsibility. We therefore ask: to what extent may aAI systems meaningfully participate in the moral evaluation of their actions? Apropos the explicit point and purpose of ethics, no AI system can be properly evaluated apart from the contexts in which it is employed by humans – who uphold communal values and standards (i.e., morals and mores) of acceptable conduct. Therefore, the same holds true for the use of AI in NSID contexts. Senior NSID personnel are held responsible for the deployment of all instruments of intelligence and combat. Commanders want force to be applied intelligently, and AI will become an integral component of that paradigm, not a poor substitute for it. The use of AI in NSID will entail discrimination and execution of high-impact decisions and actions at faster-than-human speeds, but that does not mean that military or legal responsibility vanishes from sight. The computational
capabilities of AI in the field would be rendered useless by continual intervention by slow human thinking. In this light, AI systems of future battlescapes will necessarily be largely autonomous (Scharre, 2018). However, a caveat is warranted: using the term “autonomous” to connote “dangerously uncontrolled” or “dumbly robotic” is ignorant at best, and disingenuous at worst.

Delineating Major Types of AI AI will have an expanding role in automating an increasing number of operations through efficient information-processing and decision-routinization capabilities. Asaro (2006, 2008) has described four levels of machine system agency, which proceed from systems “with moral significance” that can execute basic decisions that can be of moral importance; through systems with increasing moral intelligence in their use of extant ethical codes, to an apex system that obtains dynamic moral intelligence. This final, most advanced iteration would engage some type of Bayesian-like decisional processes to advance from an initially programmed set of ethical precepts, to one of its own formulation, as based upon its own interactions with situations and environments (Rao, 2011). Building upon the work of Asaro, we posit that five core types of AI systems can be distinguished, with a view toward NSID applications, with higher-functioning types building on the capabilities of lower-functioning types. These types of AI are: (1) Automation AI – those systems incorporated within stationed computing platforms for information management. (2) Animate AI – systems acting to assess, traverse, and modify their environs (by projection, motility, and/or mobility) under human operational control. (3) Autonomous AI – systems conducting assigned activities without the need for real-time human operation. (4) Agentic AI – systems selectively engaging their environment in autonomously flexible pursuit of assigned outcomes. (5) Autopoetic AI – systems that adapt decision heuristics for improved planning and execution of agential conduct. A “human-supervised AI system” (of any kind 1–5) allows human operators to inaugurate, periodically direct, and terminate engagements under all conditions, including system failure. Of note is that those systems that temporarily act with a degree of independence do not constitute an entirely independent system. To be sure, types 3, 4, and 5 are autonomous, in increasingly complex ways, and terminology differs somewhat across AI industries and national governments (UNIDIR, 2017; USDoD, 2012). In the context of NSID use, we shall refer to an AI system that is able to decide upon and execute actions on its own – without direct and immediate operational control by a human – as an autonomous system. In contrast, an AI system that is only able to generate
action when directly operated in real time by a human-in-the-loop is not autonomous. Autonomous activity is certainly compatible with a degree of supervision. Humans are autonomous agents who receive guidance and direction, and are able to follow lawful commands. Full independence and emancipation from humanity could arise with higher forms of intelligence, such as sapient or sentient AI. A sapient AI system autopoetically controls how it accomplishes goals and interacts with other intelligences, including interpreting or ignoring humans. A sentient AI would be sufficiently sapient to comprehend its capabilities and construct its goals without needing or consulting humans (Giordano, 2015). Entirely unguided and unsupervised actors, whether biological or mechanical, could become renegade, and as such, be of dubious value to a nation, organization, or NSID operation. Although critical works of technological foresight (Moravec, 1999; Wallach & Allen, 2009) and a corpus of science fiction depicting sapient or sentient AI pose profound ethical questions generated by sentient AI,1 it is highly unlikely that any NSID operation will consider the use of such systems in the proximate future. Thus, acting with intelligent autonomy cannot – and arguably should not – be equated with behaving in an entirely independent way. Although the literal translation of the Greek auto-nomos is “self-rule,” an individual who controls his/her own decisions and actions can still follow the social norms that one endorses. For example, an autonomous AI vehicle will follow driving rules and traffic laws, just as a good human driver does. Neither machine nor human should establish the habit of obeying only rules that they themselves create. Social norms, and moral norms in particular, promote cooperative practices. Habitual conformity with social norms, especially norms for communal welfare, is compatible with exercising individual autonomy within a defined group (of either “moral friends” or “moral strangers”; Engelhardt, 1996). Renegade deviance is only a crude kind of liberty. The conforming guidance of moral norms should contribute to productively autonomous existence.
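
As a rough, hypothetical illustration only (a sketch under assumed names, not anything proposed in this chapter), the five-part typology above and the point that human supervision is compatible with autonomy can be restated as a small Python data model; the type names and the autonomy cutoff at type 3 simply restate the text.

# Illustrative sketch (not from the chapter): the five AI types described above,
# with the autonomy distinction drawn at types 3-5.
from enum import Enum

class AIType(Enum):
    AUTOMATION = 1   # stationed computing platforms for information management
    ANIMATE = 2      # assess/traverse/modify environs under human operational control
    AUTONOMOUS = 3   # conduct assigned activities without real-time human operation
    AGENTIC = 4      # flexibly pursue assigned outcomes in their environment
    AUTOPOETIC = 5   # adapt their own decision heuristics (the authors' spelling is retained)

    @property
    def is_autonomous(self) -> bool:
        # Types 3, 4, and 5 decide and act without direct, immediate human control.
        # Note: any of the five types may still be "human-supervised" when operators
        # can inaugurate, periodically direct, and terminate engagements.
        return self.value >= AIType.AUTONOMOUS.value

if __name__ == "__main__":
    for t in AIType:
        print(f"Type {t.value} ({t.name.title()}): "
              f"{'autonomous' if t.is_autonomous else 'human-operated or automated'}")

The only point the sketch encodes is the chapter's own: autonomy begins where real-time human operation ends, and it does not imply freedom from supervision.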

Moral Guidance for Autonomous AI Our practical question here is: To what extent could moral guidance be a component of ongoing supervision of an aAI (type 3–5) system, such as a ground robot, a flying drone, or a mechanical swarm? AI capable of degrading or destroying hostile forces shall be the primary subject in the following sections, unless otherwise specified. Raising the matter of moral guidance for AI is not just asking the question, “How should an ethics review of AI activity be imposed after military operations?” An external ethics review is usually too late to prevent or mitigate moral transgressions. Nor are we inquiring about preemptory ethical condemnations against aAI and their tactical performances
in the battlefield (see here, Maas, 2019). Neither preemptory nor post-operational examinations may be sufficiently attuned to the particular ways that human-AI systems will harmoniously work to achieve tactical goals during NSID (and more specifically military) operations. Rather, if it is to be effective in practice, moral guidance must be constituent to, and occur within, human-AI relationships. Premising that moral guidance cannot essentially involve AI, but only be about AI, makes an assumption that renders an AI system functionally amoral. On this assumption, moral humans must be responsible for amoral AI. The pairing of an ethically autonomous human with an intelligently autonomous system at most ensures that the actions of the AI system can be compared against particular standards of right conduct. The system performs its AI actions in the field while understanding nothing about norms; the human judges the deeds of the AI system according to human norms while performing nothing in the field. That the AI system does not participate in its moral evaluation is akin to being convicted in a legal trial in absentia. This sort of external ethics review does not characterize the moral evaluation of NSID personnel by their commanding officers and high-ranking leadership. This is especially true in the military. All military personnel are instructed in, inculcated with, and instilled with respect for military moral virtues and rules (Lucas, 2016). Compliance may vary to some extent given the variance of human action in certain circumstances, but none could claim ignorance of duty, and everyone can understand how and why their conduct is evaluated according to military moral standards. Military ethics is primarily about internal (i.e., organizational) oversight, supervision, and compliance. All military personnel: (1) adopt the required norms of proper behavior in uniform; (2) expect other service members to satisfy norms while conducting group activities; and (3) cooperate with military evaluations of personnel conduct by those standards. These three conditions establish the foundation of the “military ethos.” In general, a community inculcates and sustains its internal moral ethos where all members: (1) prioritize meeting the behavioral norms of the group; (2) participate in activities so as to promote the fulfillment of those norms in the conduct of participants, and (3) provide cooperation with communal efforts to uphold those standards of conduct. A communal ethos will weaken and decay when some members prioritize their own selfish goals, allow others to degrade group achievement, and disregard attempts to maintain collective standards. A communal ethos is fortified when members exemplify conformity, expect cohesiveness, and enforce compliance. Internal moral guidance flourishes within the framework of a communal ethos. No single member of the organization takes the responsibility for morality, except in the inclusive sense that they all do. Morality infuses the whole; it does not reside in any part taken separately. Trying to ask, “Which particular component of the group takes the moral responsibility?” will fail to discern how moral responsibility is distributed throughout the relationships and practices of constituent members. This “collective ethics” model of
internal moral guidance is in strong contrast with the model of external moral criticism.

Locating Moral Responsibility Ascertaining moral responsibility can be accomplished externally. External moral criticism assumes a standpoint outside a group to inspect its components, workings, and activities. Adding aAI to a group of (autonomous) human agents allows the question, “If the AI acts in a way that is morally wrong, where would moral responsibility rest?” Taking the components of that human-AI system separately, the question actually becomes: “Shall moral responsibility reside with the human side, or the AI side?” Emphasizing how any AI is merely an amoral tool is a short cut for placing the full burden of responsibility upon the human side. Claims that “AI in itself is neither good nor bad, but is only used rightly or wrongly by human users” establish that humans are fully responsible for aAI. Canning (2006) has argued that a constant construct across any spectrum of system autonomy should be the principle that machine systems must have either the involvement of a human operator or explicit human authorization to act against (i.e., neutralize) a human target. To paraphrase Canning (2006), machines can target other machines, while only humans may target humans. Yet full human responsibility is not a simple matter to confirm. Moral responsibility for an outcome in a situation implies having some degree of control in that situation. If moral responsibility for an outcome resides with the party having the most control over that situation, where does moral responsibility reside in a human-AI system? A human in-the-loop who is continuously monitoring and orienting the AI system has significant control over its activities, and hence considerable moral responsibility for its activity, including any destructive or lethal actions. In contrast, while a human “on-the-loop” is periodically monitoring how an aAI system is performing, its specific acts (including lethal acts) could be performed without explicit human direction. Let us suppose that an aAI system does something that no human authorizes, suggests, or even envisions. Clearly, there is no human oversight and control of that independent action. If absence of control implies lack of moral responsibility, no human supervisor is morally responsible for an aAI system’s independent action. The aAI controlled its action, not a human; so, we could infer the aAI must be held morally responsible here. Yet, we opine that the assignment of moral responsibility to an aAI may be premature. Even sophisticated AI systems may not really “know” or “care” about right and wrong, and chastisement and punishment of AI seem pointless. Assigning moral blame could be more corrective than retributive, but fixing an AI system’s programming does not first require assigning moral responsibility. A paradox looms. We expect greater autonomy to be linked with greater responsibility. However, as Arkin (2010) and Bataoel (2011) have noted,
increasing system autonomy does not inherently provide or confer parameters for the development and/or execution of moral decisions and ethical actions. For example, while the aAI system is operating fairly independently to fulfill tactical goals, a human “on-the-loop” may bear little to no moral responsibility, and the AI system never has any moral responsibility. Conjoining human autonomy and AI autonomy in a human-AI system appears to only diminish or eliminate moral responsibility altogether. Ethics should be able to discern where moral responsibility resides and propose how to improve the capacity for moral responsibility. Where human-AI systems are concerned, ethics from an external standpoint would need to work harder at isolating and crediting full moral responsibility. Applying ethical theories to an aAI system cannot proceed from a presumption that the system is already an individual agent bearing moral responsibility. Presumptive responsibility must be attached to human agents. The “ethics of AI” is typically conceived simply as the “ethics of using AI.” Consequentialist and deontological approaches offer accordingly distinctive analyses. Employing human-independent AI can be supported by consequentialist views, such as utilitarian arguments weighing the benefits of off-loading work that is too demanding for humans against the possible harms that could be (or are) incurred. However, liberating humanity from labors could bring new forms of degradation or enslavement. Although sapient AI rebelling against its human masters is still the stuff of science fiction, deontological concerns can be raised about human dependency upon aAI’s dominion over safety and security. Whether victories are gained or lost, the prospect of perpetual and escalating AI-driven and AI-executed warfare will challenge thresholds and tolerances for military conflict, and precepts of human values, freedoms, and rights (Asaro, 2008; Howlader & Giordano, 2013). Parameters and boundaries for deploying aAI could be roughly determined by adjusting balances and compromises among utilitarian and deontological factors (Howlader & Giordano, 2013). But that is not the ethical issue pursued in this essay. Regardless of what idealized bounds are imaginable, actual AI will be constructed and deployed under real-world circumstances. If ethics proves unable to discern moral responsibility within the human-AI system, where that abundance of autonomy appears to eliminate moral responsibility, there would be little point to conducting a moral inquiry and investigation into wrongful conduct on the part of any individual internal to the system. All the same, external ethical scrutiny can be applied to the system as a whole. How should AI be used by humans? If the key issue for deontological and/or consequentialist address is solely the ethics of using aAI, there would be an initial dichotomization dividing humanity (the responsible moral agents) from aAI (mechanisms barely envisionable as agents). By separating AI, the design of these systems’ programming becomes paramount, wherein “moral” prescriptions, proscriptions, and constraints could potentially be encoded (Franklin, 1995). Furthermore, so long as deontological and consequentialist approaches remain focused upon what the singular agent should and should not do, ethics is led toward developing some sort of
“algorithm” or “application” for morality, which could run in real-time to direct and restrain the actions of an aAI system (see here, Casebeer, 2003, 2017). Thus, if we take the “artificial” in “artificial intelligence” to mean that AI is imitating human intelligence, then perhaps individual moral intelligence could be modeled and installed.

Programming AI Morality The notion of programming an AI system to be good is almost as old as the idea of robotics itself. However, some serious obstacles block the road – or offer the proverbial dilemma of the “choice of the path taken” to designing moral programming. This is because deontological and consequentialist approaches to morality can point in differing directions. The means to utilitarian ends may fail to be right or just; and duty and righteousness for their own sake might not increase the overall good. Substituting other ethical theories cannot avoid this “ethical divergence” problem. Reliance on just one ethical theory can linearize moral thinking, but nonsubscribers to that theory need not agree that the thought process and answers it brings are right. It is impossible to say how AI could be as ethical as humans, so long as humans disagree about how to be ethical. This issue undergirds proposed aAI development in China, as the general idea is that the ambiguity of human moral and ethical decisional processes could (and should) be resolved through the integration of an aAI system that conforms to centralized tenets of control, keeping the human element on-the-loop, but not necessarily in-the-loop. As Kania has noted (2018), the Chinese military is required to “remain a staunch force for upholding the Chinese Communist Party’s ruling position,” and would not tolerate any AI system that behaves in ways contrary to this tenet. Autonomous AI can surely be “programmed” for moral conduct – indeed, many such moral programs are imaginable and, at minimum, feasible. Moral proliferation is the real obstacle, and competing ethical theories are part of that problem, not a resolution. There is no optimal design solution: any compromise among ethical approaches would be practically indistinguishable from immorality, from the standpoint of one ethical theory or several theories. Any and all morally programmed AI would likely be categorized as evil by one subset of humanity or another. This is a familiar human situation: one righteous group (by its own ethical definition) is perceived as malevolent by another (according to that group’s ethical standards). To this point, we are fond of paraphrasing MacIntyre’s (1988) query: what good; whose justice; which rationality? It is irrelevant whether moral relativism has (or lacks) the endorsement of philosophical ethics; the world where humanity lives is a scene of genuine moral and ethical disagreement that cannot be ignored. As long as humans disagree with each other, and with “moral” AI, about what is truly ethical conduct, there is no way to generally determine how any AI could be as ethical as humanity. The suboptimal alternative is to concede that any “moral AI” could only be as moral as the chosen ethical approach taken by a specific subgroup of humanity. That is, for a given human community with its
own ethos, there could be, in principle, locally optimal moral programming for aAI (vide supra). The implementation of local “morality programming” for AI is one thing; creating morally responsible AI is quite another. Even if high confidence could be bestowed upon a suitably complex programming for moral AI behavior, does that mean that this AI can now bear some moral responsibility for its actions? If we still feel uncomfortable crediting AI with any moral responsibility, we are forced to look elsewhere, specifically, to the human side of the human-AI system. Yet the humans on-the-loop seem even less responsible for whatever happens while the AI pursues its mission and makes its decisions with a high degree of independence. That tactical and moral independence lends the appearance of (perhaps even greater) autonomy to the AI side, yet moral responsibility remains as elusive as ever. No matter how we survey the human-AI system, the AI component appears only to have the status of an amoral tool. Perhaps then, the humans responsible for the moral programming should be held responsible for any immoral behavior enacted by the programmed AI system. However, those programmers are even more distant, in a causal and control sense, from the concrete actions that the AI takes in the field. As well, training and testing moral programming within hypothetical scenarios can rarely, if ever, duplicate the unanticipatable contingencies of real-world engagements (e.g., against inherently unpredictable other AI systems and human agents). When an AI system performs oddly, in what seems to be an erratic or almost chaotic manner, such unwanted behavior is considered to be “accidental”; yet occasional accidents are almost inevitable due to the high complexity of aAI systems’ structures and functions. Simply put, that AI accidents will happen is no accident. The dynamically recursive and looped networks and integrated systems of aAI platforms, so tightly coupling high-speed computation and communication, will produce nonlinear, cascading, and concatenating decisions and acts occurring too fast for human comprehension or intercession. The growing adoption of “deep learning” and auto-reprogramming will only make AI more inscrutable. If AI were slow enough for humans “on-the-loop” to always understand what AI is doing and why AI is doing it, then that AI would be practically worthless as an asset for gaining tactical advantage in the field. That advantage is amplified by expecting AI systems to make ever-faster tactical decisions with incomplete and inconclusive data, further increasing the odds of unpredictable and erratic behaviors (Scharre, 2016). Such eccentricities are acceptable features under certain NSID conditions. Yet, in any case, there is no good reason to hold human programmers morally responsible for all aAI decisions and actions. To summarize matters so far, programming aAI systems for morality and moral responsibility confronts serious obstacles. Viewed externally, little moral responsibility can be discerned on either the human side or the AI side of a human-AI system, especially as AI functions more autonomously. Viewed internally, such obstacles are greatly reduced, but contextual dependence is increased. Morality programming for aAI can be designed for functioning within the context of a community’s ethos.

Ethics in Context and Community An ethics of and for a human-AI system, like any study of human-AI relationships, should apply network and systems principles (Liu, 2019). Making AI ethical cannot be simply about programming the AI for morality in imitation of human morality in a general sense. For human beings, morality is a complicated arena replete with tough dilemmas as well as simplistic platitudes. The amount of abstract rationality applied to moral thinking is not the issue. Just as humans are not atomistic rational beings outside of their bodies and their environments (inclusive of other biological organisms encountered and tools available), AI is and always will be similarly situated. This situatedness calls for an ethics appropriate to what AI is about within the human context. Ethics is a tool, and is grounded to the enterprise in which it shall be used (for overview of this construct in NSID contexts, see Tennison, Giordano, & Moreno, 2017). Regardless of whether a human or a machine is employing a deontological algorithm or a calculation for utility, such thinking is unhelpful if an actual problematic situation does not call for that tool. Tools are extensions of human capacities. Ethics provides systems of rules and rationalizations to guide, govern, and in many cases justify moral decisions and actions. AI is also a tool – to extend human intelligence. Extending the reach of intelligence is not just a matter of duplication, or of adding one more intelligence to the world. Like ethics, the extensive capacity and power of AI is to enable decisions and actions that affect humans in interaction on and across a variety of scales. Thus, it functions as social intelligence. An autonomous system or machine is often presented atomistically, as a complete unit with hard and distinct boundaries between itself and the world. Irrespective of where it is in the world, it remains the same unit, the same machine, the same agency. This conception of intelligence and agency as being discretely autonomous is a myth, not merely about AI, but about humanity as well (Wurzman & Giordano, 2009). The atomistic conception of autonomy is a pre-Darwinian view that takes the intelligence – whether called psyche, nous, soul, mind, consciousness, or simply “the self” is beside the point – as fully formed and final. In this view, moral reasoning is undertaken by a lone agent (regardless of who or what this agent happens to be) in order to render an “objective” moral judgment. After Darwin, the notion of such an idealized abstract intelligence was no longer tenable. Human intelligence is a bricolage of cognitive abilities, able to be more or less adaptive to contingent practical demands (Anderson, 2014; Johnson, 2014). Like human nature, human intelligence is not fixed and final. Rather, it is historical, situated in and embedded with multiscalar physical environments that are dynamic and changing in response to and with human activity (Wurzman & Giordano, 2009). This dynamic transaction is well coordinated by cybernetic functions of the nervous system: activity is coordinated and governed via feedback processes that modify feedforward anticipations of the human bodily system. The human body, of course, lives within a larger systems-of-systems that include other enbrained and encultured humans (and other organisms), which are embedded in a wider ecology of the natural environment (Flanagan, 1996,
2017; Giordano, Benedikter, & Kohls, 2012; Solymosi, 2014). Intelligence is extensionally and functionally relational, making use of and operating across all of these levels of systemic complexity. Locating responsible intelligence at solely one level or in a special node of the whole network is to commit a category mistake (viz. a form of erroneous mereological conceptualization; Bennett & Hacker, 2003; Giordano, Rossi, & Benedikter, 2013). Intelligence of every kind is thoroughly social. Responsible moral intelligence, whether human or AI, is therefore social in nature and communal in its exercise (Giordano, Becker, & Shook, 2016). If moral performance is evaluated apart from mission performance, then the communal ethos is being ignored and overridden, degrading the responsibility of everyone involved. We began our inquiry of AI ethics by focusing on the question of to what extent AI systems for NSID service may meaningfully participate in the moral evaluation of their actions. We propose that the answer now becomes clear: For a given human community with its own ethos – such as any NSID organization – there could be, in principle, locally optimal moral programming for aAI. To the extent that AI systems are tactically cooperating in the responsibly intelligent conduct of their mission activities, they already meaningfully participate in the moral conduct of the human-AI team.

Toward Synthesis: An AI “Cooperating System” for Ethics Let us recall how a communal ethos (basically) functions within NSID/military organizations. Personnel: (1) adopt the required norms of proper behavior in uniform; (2) expect other service members to satisfy norms while conducting group activities; and (3) cooperate with community-focal evaluations of personnel conduct by those standards. This moral framework can be extended as a quasi-Gigerenzer (or other Bayesian) heuristic model to encompass aAI (co)operating within a human-AI team (Gigerenzer, 2000; Gigerenzer & Todd, 1999; for other heuristic models, see Brooks, 2002; Gams, Bohanec, & Cestnik, 1994; Hall, 2007; Roscheisen, Hofman, & Tresp, 1994). Autonomous AI will exemplify these three capacities (1, 2, 3) in a manner appropriate for AI, under all conditions:
Moral AI (1). AI will conform to the required norms of cooperative behavior while in service. Programming for heuristics of collaborative teamwork will function at the level of a “cooperating system” that hierarchically rests on the core operating system, preventing any noncooperative decision from reaching the mechanical stage of actual action. All other programming is designed for functioning within that “cooperating system,” and no other programming has the ability to engage any mechanical operation of the AI (e.g., no other programming by itself can engage an on-board weapon or use the AI system itself as a weapon).
Moral AI (2). The AI Cooperating System (AICS) will ensure that all on-board programming satisfies the communal norms of appropriate tactical conduct,
inclusive of moral conduct. The AICS is empowered to temporarily halt or terminate any other programming functioning within its virtual environment, if persistently noncooperative (and possibly immoral) decisions are generated by it. Human approval for this AICS noncooperation override will not be required, although human notification should be promptly provided.
Moral AI (3). The AICS, whether operating as onboard computing or cloud-based computing (NB: ideally both, in case of communication disruptions), will provide real-time tactical data relaying its (own) cooperative performance (including moral conduct) for continual or periodic human review. The AICS will be reprogrammable for improvement and retraining of its heuristics for evaluation of AI cooperativeness. Importantly, the AICS will not be self-reprogrammable, but can only be modified/upgraded by designated NSID personnel.
Taken together, Moral AI (1–3) would serve as responsible entities designed for upholding the communal ethos of the NSID/military organization served. Because moral guidance will be embedded into human-AI teamwork, aAI and its AICS would exhibit and exemplify moral responsibility. The entirety of the human-AI system can achieve conformity with the NSID/military ethics of its communal ethos, since moral guidance is distributed throughout the human-AI system. Distributed intelligence, if it is responsible intelligence, will fulfill communal morality. In this context, AI ethics will be NSID/military ethics, conforming to the NSID/military ethos. An AI in NSID/military service will participate in its capacity to meet moral standards of conduct during NSID/military engagements.
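
To make the division of labor concrete, the following Python sketch (an illustration under assumed names, not the authors' specification) shows how an AICS-style cooperating layer might interpose between mission programming and any mechanical action; the norm check, the notification channel, and the maintainer list are hypothetical stand-ins for whatever a real NSID organization would supply.

# Illustrative sketch only: a gating "cooperating system" in the spirit of Moral AI (1-3).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProposedAction:
    description: str
    engages_weapon: bool
    target_is_human: bool

# A "norm" returns True when a proposed action conforms to communal standards.
Norm = Callable[[ProposedAction], bool]

def no_unauthorized_weapon_release(action: ProposedAction) -> bool:
    # Hypothetical norm echoing Canning's principle cited earlier: the machine may not
    # release a weapon against a human target on its own authority.
    return not (action.engages_weapon and action.target_is_human)

@dataclass
class CooperatingSystem:
    norms: List[Norm]
    notify_human: Callable[[str], None]
    authorized_maintainers: set = field(default_factory=set)

    def authorize(self, action: ProposedAction) -> bool:
        # Moral AI (1): no noncooperative decision reaches the mechanical stage of action.
        violated = [norm.__name__ for norm in self.norms if not norm(action)]
        if violated:
            # Moral AI (2): halt the offending behavior without waiting for human approval,
            # but promptly notify a human supervisor.
            self.notify_human(f"Blocked '{action.description}': violates {violated}")
            return False
        # Moral AI (3): relay cooperative performance for continual or periodic review.
        self.notify_human(f"Cleared '{action.description}'")
        return True

    def update_norms(self, operator_id: str, new_norms: List[Norm]) -> None:
        # The AICS is not self-reprogrammable; only designated personnel may retrain it.
        if operator_id not in self.authorized_maintainers:
            raise PermissionError("Norm updates are restricted to designated NSID personnel.")
        self.norms = list(new_norms)

if __name__ == "__main__":
    aics = CooperatingSystem(
        norms=[no_unauthorized_weapon_release],
        notify_human=print,
        authorized_maintainers={"maintainer-01"},
    )
    aics.authorize(ProposedAction("reposition to waypoint", engages_weapon=False, target_is_human=False))
    aics.authorize(ProposedAction("engage dismounted target", engages_weapon=True, target_is_human=True))

The one design choice the sketch tries to mirror is that the cooperating layer owns actuation: mission programming can only propose, the gate alone can release an action, and only humans can change the gate's norms.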

Conclusion In this chapter, we have investigated some minimal conditions and criteria for designing autonomous and responsible AI systems able to exemplify key NSID/military virtues. Virtuous responsibility is compatible with autonomous activity. Autonomy is not simply freedom from responsibility. Individual autonomy need not be antithetical or contrary to group responsibility. This communal ethos model of virtuous responsibility is exemplified in teamwork under good leadership. NSID/military organizations are prominent examples of communal ethos, wherein aAI systems can perform commendable service. A core NSID/military virtue is loyalty, which requires trusting relationships. It is likely that humans will have understandable difficulty wholly trusting aAI, at least at first. But loyalty is a “two-way street,” which is a colloquial way of describing trust-in-loyalty as being about a group’s character, not just an individual characteristic. Thus, just as cooperative intelligence is distributed, responsible intelligence must also be extended and distributed. From the perspective of humans observing the conduct of AI, fulfilling that cooperative responsibility will allow AI to display key NSID/military virtues. In this way, we believe that NSID/military AI can be ethical AI. Of course, given ongoing
international efforts in AI research, development, and use, as before (Giordano, 2013; Lanzilao, Shook, Benedikter, & Giordano, 2013; Tennison et al., 2017), we must once again – and perhaps consistently – ask: which military; what ethic? Such inquiry is important to define the possibilities and problems generated by uses of increasingly autonomous AI, and establish process(es) to inform realistic and relevant perspectives, guidelines, and policies for direction and governance (Danielson, 2011). And we opine that apace with developments in AI, this endeavor should be – and remain – an interdisciplinary, international work-in-progress.

Acknowledgments Dr. James Giordano’s work is supported in part by the Henry M. Jackson Foundation for Military Medicine; Leadership Initiatives; and federal funds UL1TR001409 from the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through the Clinical and Translational Science Awards Program (CTSA), a trademark of the Department of Health and Human Services, part of the Roadmap Initiative, “Re-Engineering the Clinical Research Enterprise.” The views presented in this chapter do not necessarily reflect those of the US Department of Defense, US Special Operations Command, the Defense Advanced Research Projects Agency, or the authors’ supporting institutions and organizations.

References
Anderson, M. L. (2014). After phrenology: Neural re-use and the interactive brain. Cambridge, MA: MIT Press.
Arkin, R. (2009). Ethical robots in warfare. IEEE Technology and Society Magazine, 28(1), 30–33.
Arkin, R. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332–341.
Asaro, P. M. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6(12), 9–16.
Asaro, P. M. (2008). How just could a robot war be? In A. Briggle, K. Waelbers, & P. A. E. Brey (Eds.), Current issues in computing and philosophy. Amsterdam: IOS Press.
Bataoel, V. (2011). On the use of drones in military operations in Libya: Ethical, legal and social issues. Synesis: A Journal of Science, Technology, Ethics and Policy, 2(1), 69–76.
Bennett, M. R., & Hacker, P. M. S. (2003). The philosophical foundations of neuroscience. Oxford: Blackwell.
Brooks, R. (2002). Robot: The future of flesh and machines. London: Penguin.
Canning, J. S. (2006, September). A concept of operations for armed autonomous systems. Presented at the third annual disruptive technology conference, Washington, DC.
Casebeer, W. D. (2003). Natural ethical facts: Evolution, connectionism and moral cognition. Cambridge, MA: Bradford/MIT Press.
Casebeer, W. D. (2017, June). The case for an ethical machine system. Lecture presented at the Neuroethics Network Meeting, Paris, France.
Danielson, P. (2011). Engaging the public in the ethics of robots for war and peace. Philosophy and Technology, 24(3), 239–249.
Engelhardt, H. T. (1996). The foundations of bioethics (2nd ed.). New York, NY: Oxford University Press.
Flanagan, O. (1996). Ethics naturalized: Ethics as human ecology. In L. May, M. Friedman, & A. Clark (Eds.), Mind and morals: Essays on ethics and cognitive science. Cambridge, MA: MIT Press.
Flanagan, O. (2017). The geography of morals: Varieties of moral possibility. New York, NY: Oxford University Press.
Franklin, S. (1995). Artificial minds. Cambridge, MA: MIT Press.
Galliott, J. (2015). Military robots: Mapping the moral landscape. New York, NY: Routledge.
Gams, M., Bohanec, M., & Cestnik, B. (1994). A schema for using multiple knowledge. In S. J. Hanson, T. Petsche, M. Kearns, & R. L. Rivest (Eds.), Computational learning theory and natural learning systems. Cambridge, MA: MIT Press.
Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. Oxford: Oxford University Press.
Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. Oxford: Oxford University Press.
Giordano, J. (2013). Respice finem: Historicity, heuristics and guidance of science and technology on the 21st century world stage. Synesis: A Journal of Science, Technology, Ethics and Policy, 4, 1–4.
Giordano, J. (2015). Conscious machines? Trajectories, possibilities, and neuroethical considerations. Artificial Intelligence Journal, 5(1), 11–17.
Giordano, J., Becker, K., & Shook, J. R. (2016). On the “neuroscience of ethics” – Approaching the neuroethical literature as a rational discourse on putative neural processes of moral cognition and behavior. Journal of Neurology and Neuromedicine, 1(6), 32–36.
Giordano, J., Benedikter, R., & Kohls, N. B. (2012). Neuroscience and the importance of a neurobioethics: A reflection upon Fritz Jahr. In A. Muzur & H.-M. Sass (Eds.), Fritz Jahr and the foundations of integrative bioethics. Münster; Berlin: LIT Verlag.
Giordano, J., Kulkarni, A., & Farwell, J. (2014). Deliver us from evil? The temptation, realities, and neuroethico-legal issues of employing assessment neurotechnologies in public safety initiatives. Theoretical Medicine and Bioethics, 35(1), 73–89.
Giordano, J., Rossi, P. J., & Benedikter, R. (2013). Addressing the quantitative and qualitative: A view to complementarity – From the synaptic to the social. Open Journal of Philosophy, 3(4), 1–5.
Giordano, J., & Wurzman, R. (2016). Integrative computational and neurocognitive science and technology for intelligence operations: Horizons of potential viability, value and opportunity. STEPS: Science, Technology, Engineering and Policy Studies, 2(1), 34–38.
Hall, J. S. (2007). Beyond AI. Amherst, NY: Prometheus Books.
Hallaq, B., Somer, T., Osula, A. M., Ngo, T., & Mitchener-Nissen, T. (2017). Artificial intelligence within the military domain and cyber warfare. In Proceedings of 16th European conference on cyber warfare and security. Dublin: Academic Conferences and Publishing International Limited.
Howlader, D., & Giordano, J. (2013). Advanced robotics: Changing the nature of war and thresholds and tolerance for conflict – Implications for research and policy. The Journal of Philosophy, Science and Law, 13, 1–19.
Johnson, M. (2014). Morality for humans: Ethical understanding from the perspective of cognitive science. Chicago, IL: University of Chicago Press.
Kania, E. (2018, April 17). China’s strategic ambiguity and shifting approach to lethal autonomous weapons systems. Lawfare. Retrieved from https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems. Accessed on September 29, 2019.
Lanzilao, E., Shook, J., Benedikter, R., & Giordano, J. (2013). Advancing neuroscience on the 21st century world stage: The need for and a proposed structure of an internationally relevant neuroethics. Ethics in Biology, Engineering and Medicine, 4(3), 211–229.
Lin, P., Abney, K., & Bekey, G. A. (2007). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. New York, NY: Oxford University Press.
Liu, H. Y. (2019). From the autonomy framework towards networks and systems approaches for ‘autonomous’ weapons systems. Journal of International Humanitarian Legal Studies, 10(10), 1163.
Lucas, G. (2016). Military ethics – What everyone needs to know. Oxford: Oxford University Press.
Maas, M. (2019). Innovation-proof global governance for military artificial intelligence. Journal of International Humanitarian Legal Studies, 10(10), 129–157.
MacIntyre, A. (1988). Whose justice? Which rationality? Notre Dame, IN: University of Notre Dame Press.
Moravec, H. (1999). Robot: Mere machine to transcendent mind. New York, NY: Oxford University Press.
Rao, R. P. N. (2011). Neural models of Bayesian belief propagation. In K. Doya (Ed.), Bayesian brain: Probabilistic approaches to neural coding. Cambridge, MA: The MIT Press.
Roscheisen, M., Hofman, R., & Tresp, V. (1994). Incorporating prior knowledge into networks of locally-tuned units. In S. J. Hanson, T. Petsche, M. Kearns, & R. L. Rivest (Eds.), Computational learning theory and natural learning systems. Cambridge, MA: MIT Press.
Scharre, P. (2016). Autonomous weapons and operational risk. Washington, DC: Center for a New American Security. Retrieved from https://s3.amazonaws.com/files.cnas.org/. Accessed on September 30, 2019.
Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. New York, NY: W. W. Norton & Company.
Sharkey, N. (2011). Automating warfare: Lessons learned from the drones. Journal of Law, Information and Science, 21(2), 140–154.
Solymosi, T. (2014). Moral first aid for a neuroscientific age. In T. Solymosi & J. R. Shook (Eds.), Neuroscience, neurophilosophy, and pragmatism: Brains at work with the world. New York, NY: Palgrave Macmillan.
Tennison, M., Giordano, J., & Moreno, J. (2017). Security threat versus aggregated truths: Ethical issues in the use of neuroscience and neurotechnology for national security. In J. Illes & S. Hossein (Eds.), Neuroethics: Anticipating the future. Oxford: Oxford University Press.
UNIDIR (United Nations Institute for Disarmament Research). (2017). Weaponization of increasingly autonomous technology: Concerns, characteristics and definitional approaches (No. 6). New York, NY: UNIDIR Resources. Retrieved from https://www.unidir.org/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-concerns-characteristics-and-definitional-approaches-en-689.pdf. Accessed on September 1, 2019.
USDoD (US Department of Defense). (2012). DoD Directive 3000.09: Autonomy in weapons systems. Retrieved from https://www.hsdl.org/?view&did=726163. Accessed on September 1, 2019.
Wallach, W., & Allen, C. (2009). Moral machines. Oxford: Oxford University Press.
Wallach, W., Allen, C., & Franklin, S. (2011). Consciousness and ethics: Artificially conscious moral agents. International Journal of Machine Consciousness, 30(1), 177–192.
Wurzman, R., & Giordano, J. (2009). Explanation, explanandum, causality and complexity: A consideration of mind, matter, neuroscience, and physics. NeuroQuantology, 7(3), 368–381.

Chapter 9

An AI Enabled NATO Strategic Vision for Twenty-First-Century Complex Challenges
Imre Porkoláb

Abstract Constant transformation plays a crucial role in the future success of the NATO Alliance. In the contemporary security environment, those who can get the latest technology to the war fighter faster will tend to enjoy a comparative advantage, unless that technology in turn blinds the organization to alternatives. Thus, the author lays out a strategic vision for AI enabled transformation for the Alliance, detailing NATO’s ability to adapt throughout history, introducing contemporary efforts for AI enabled tech solutions for NATO, but also pointing out the necessity of organizational learning. His conclusion is that in an age where technological development is exponential, the Alliance appears increasingly unable to deal with the problems related to the exponential technology disruption. Complex contexts require a different mindset, and NATO has to look for new AI enabled tools to face the increasing number of wicked problems. Most importantly, he points out that building a platform with AI enabled technological solutions is only one side of the coin. There is a need for an organizational one as well, which connects the different components together and creates interoperability within the Alliance. As NATO incorporates new AI solutions, there is a need for introducing radically new training and education solutions, and for creating a framework for what the author calls Mission Command 2.0. Keywords: Transformation; complexity; change; design thinking; mission command; NATO
In the twenty-first century, NATO has largely struggled with how to organize, strategize, and act effectively in increasingly complex and emergent contexts where the previous distinctions between war and peace have blurred beyond comprehension (Bousquet, 2008).

Governments and their militaries continue to experience radical and entirely unforeseen calamities that defy historical patterns and essentially rewrite the rulebooks. These popularly termed “black swan events” continue to shatter any illusion of stability or extension of normalcy in foreign affairs. The Alliance appears increasingly unable to deal with the problems related to the exponential technology disruption using traditional planning and organizing methodologies alone (Kupchan, 2017). What had worked well previously and made NATO the most prevailing defence alliance no longer appears to possess the same precision and control. The formal planning process (NDPP), the foresight methodologies, and the organizational learning methods, initially developed to cope with Cold War Era large-scale military activities in “a conventional, industrialized state vs industrialized state setting” (Jackson, 2017), are seemingly incapable of providing sufficient means of getting the organization unstuck, and there is a constant search for tools to improve them. Within this new and increasingly chaotic context, NATO has to fulfill all three core tasks at the same time, which requires new and novel approaches from policymakers and military personnel alike. The Alliance’s complex decision making processes are tested on a daily basis in multiple domains supported by expansive technology, social media, propaganda, and malicious activity in cyberspace. This canvas, upon which rivals can create never-before-seen complex problem-sets, defies previously accepted definitions for conflict and war. Complex contexts require a different mindset while still maintaining the consensus-based decision-making along with different awareness and appreciation (United Kingdom Ministry of Defence Development Concepts and Doctrine Centre, 2016). In simplistic settings, organizations see things they have previously experienced (Paparone & Topic, 2017), and for NATO, an organization with so much success in the past, these experiences can be an obstacle to change in today’s VUCA (volatile, uncertain, complex, ambiguous) world. Complex contexts often have only one repeating and predictable process: an organization will continue to experience things they have never seen before that marginalize or defeat all established practices and favored tools (Tsoukas, 2017). When an organization encounters things they have experienced previously in some format or context, they can reapply approved processes to address these problems, often in an analytic and optimization-fixated approach to reduce and increase stability (Ackoff, 1973). Yet the question remains, what does an organization do when they experience something they have never seen before? One aspect of the ongoing organizational and mindset change in NATO is an increasing need to look for new tools. In our digital age, AI (artificial intelligence) seems to be the tool of preference for large bureaucratic organizations to tackle strategic challenges; therefore, in this chapter I suggest that as part of the response to the changes in a complex environment, NATO should introduce “AI enabled tools” to address the wicked problems in future foresight, organizational learning, and decision-making.

To prove the point, this chapter will first look into NATO adaptation, present contemporary AI approaches, provide a glimpse into the vision for NATO’s AI enabled organizational transformation of the Alliance, and finally present a strategic vision for AI solutions to solve complex security problems.

NATO Adaptation: An Evolution of Change and Collaboration Today’s environment is inherently complex, with an increase in key stakeholders as well as an exponential increase in the connections between these players. With the escalation in technology and information exchange, NATO’s operational areas are increasingly challenging and potentially chaotic. In some regions, NATO is facing a broad range of threats simultaneously. The Alliance has come to realize that what it was designed and optimized to do is no longer applicable to today’s VUCA battlefield. Complexity and uncertainty seem to be the norm (Boulding, 1956; Pondy & Mitroff, 1979), and for an international organization with much history, legacy, and past success like NATO, it is a very difficult moment, which requires organizational transformation and adaptation. Adaptation is certainly not new to the Alliance, which has a long history and has undergone several focus shifts before. In fact, the Warsaw Summit acknowledged the fourth phase in NATO history, where “there is an arc of insecurity and instability along NATO’s periphery and beyond…Today, faced with an increasingly diverse, unpredictable, and demanding security environment, we have taken further action to defend our territory and protect our populations” (North Atlantic Treaty Organization, 2016). This modern context of uncertainty and emergent developments places NATO within a new world where many of the traditional “tools” in the Alliance toolkit no longer work or result in bizarre outcomes. The Alliance since its creation in 1949 has mainly focused on collective defense. However, following the fall of the Berlin Wall in 1989, an era of cooperation began, where expansion (especially from the former Warsaw Pact countries) and the development of partnerships (including Russia) became the primary focus. In 2001, NATO’s focus shifted again towards expeditionary operations and crisis management with a strong emphasis on Afghanistan following the terrorist attack on the US. Thus, adaptation is not new. Rather, NATO has been an adaptive organization throughout its existence (Stoltenberg, 2015). As the tempo and thrust of change have accelerated and altered the rules of the game, beginning in 2014, a new NATO strategic focus has emerged and come into view (Dunford, 2017). Marked by the past two NATO Summits as important milestones along the path for NATO’s future, the Alliance embarked on a journey of organizational transformation at an unprecedented pace. While NATO’s essential mission remains unchanged, goals in increased adaptation, ability to anticipate change, and increasing both efficiency and transparency were noted as new benchmarks in the Summit communiques.

Three years ago at the NATO Summit in Wales in 2014, NATO leaders were clear about the security challenges on the Alliance’s borders. In the East, Russia’s actions threatened Europe; on the Alliance’s south-eastern border, the ISIL terror campaign posed a threat. Across the Mediterranean, Libya was becoming increasingly unstable. The Alliance’s leadership took decisive steps to address these challenges and reaffirmed the central mission: the shared responsibility of collective defense. Continuing this adaptive trend, Allies agreed to an increase of NATO’s presence in Central and Eastern Europe with additional equipment, training, exercises, and troop rotations. Following the Wales Summit, at the NATO Summit in Warsaw in July 2016, the Alliance had even more emergent problems and challenges to grapple with. At this time, NATO was engaged in all areas of its core tasks simultaneously and often in overlapping and confusing ways. To counter these challenges, the US quadrupled its funding for the European Reassurance Initiative (ERI) and sent more troops to Europe, who were accompanied by other NATO Allies to serve as a deterrent force along NATO’s eastern border. NATO is moving ahead at a rapid pace for a large multinational bureaucratic organization; however, adaptation and transformation are never easy processes. Preparing for the future, and building strategic foresight, is becoming increasingly difficult. There are no blueprints, rules, or best practices anymore, and frequently an organization’s successful tools from yesterday actually work against it in discovering tomorrow’s challenges. Today, when security challenges demand a different kind of force, agility is essential. Thus, speed is another problem that can be addressed through increasing operational agility and flexible thinking. NATO’s adaptation measures introduced above have partially addressed this challenge. A third issue is a recognition that a major cornerstone of many of today’s emergent security challenges is the pattern of power shifts toward networks. Since the number of key stakeholders in any operational setting has increased, the Alliance has to think and act like a network as well, and this requires institutional adaptation beyond what had previously been sufficient in education, professionalization, and organizational transformation. NATO clearly has a high potential for adaptation and transformation. However, given today’s mindset, the Alliance leadership faces a critical choice. They believe that they must choose between tackling complex challenges (adapting) and responding as a traditional bureaucratic organization as they attempt to give adequate responses to emerging challenges in an age of constant disruptions (operating). Many large military organizations face the same challenge: a tension between an operational mindset and adaptive experimentation. The reality is that NATO is quite capable of pursuing both approaches at the same time. Allied Command Transformation (ACT) has the potential to contribute to NATO’s overall adaptation, while Allied Command Operations (ACO) can focus on the more traditional end of the spectrum while also receiving facilitation and transformative abilities from ACT’s adaptive efforts. NATO’s command structure with the two strategic commands (and their different functions) enables the Alliance to operate and adapt at the same time; the question is how.

In order to thrive in a VUCA environment, where challenges are increasingly complex and interrelated, NATO needs to use novel technology, like AI, together with a radically different mindset, on an everyday basis to engineer new solutions. A new strategic approach, based on this mindset, is needed when the organization "needs what does not yet exist" so that it can gain or maintain relevance, as well as gain a strategic and tactical advantage in emergent futures (Nelson & Stolterman, 2014).

NATO Transformation and AI

In recent years, NATO has been experiencing the most urgent organizational transformation in its history, and is looking for answers through a very thorough foresight process. There is no question that the strategic environment is changing rapidly, and there is an acknowledgement that the Alliance faces simultaneous dangers across geography (mainly from the East and from the South) on a much wider scope than ever before. Europe now seems to be an importer of instability, with an increased risk of traditional conflict as well as emergent crises initiated by individual actors whose ideological differences will inspire irregular warfare. Thus, NATO must be prepared to address the spectrum of chaos and conflict on a global scale, regardless of whether it is nation-driven or inspired by rogue individuals with a cause.

These challenges were discussed as part of the Washington Project, an Atlantic Council-supported event initiated by ACT in 2016. The principal finding of this event was that the Alliance must be revitalized for the new world, and that an overarching strategy must rely on "NATO's ability to provide a full spectrum deterrent and defense tools to provide collective defense for all its members, together with an ability to project stability and resilience beyond its borders using an array of tools for crisis management" (Binnendijk, Hamilton, & Barry, 2016). Building a "full spectrum" deterrence and defense toolkit was a topic at the Warsaw NATO Summit as well. Leaders at this meeting emphasized the need to develop continuous strategic awareness and procedures for rapid decision-making. In the digital age, these capabilities cannot be imagined without the active engagement of the innovation ecosystem and without seeking AI solutions already available on the market. Experts in NATO have realized that comprehensive and integrated strategic awareness can only be achieved through information fusion, Big Data analysis (making sense out of huge data sets), and sharing. Moreover, this initiative must address partners (especially the EU, NGOs, and think tank communities) as well, and enable them to contribute to information fusion. This network-based approach requires new technological solutions, as well as a new kind of thinking to forge a mindset based on innovative, out-of-the-box thinking. These ideas were also supported by other strategic documents, highlighting that "NATO must add resilience as a core task to its existing core tasks of collective defence, crisis management, and cooperative security" (Kramer, Binnendijk, & Hamilton, 2015).

The drive (and need) for innovation and leadership was also highlighted, emphasizing that innovation will be a prerequisite for leadership across all elements of national power. In addition, there is a need to expand synergies between and among the key elements of the innovation landscape to encourage diverse approaches to capability development (Kramer & Wrighston, 2016). Maintaining the strategic advantage by implementing technological advances will help NATO preserve the decision-making advantage essential for supporting global security.

Keeping the edge has been a major driver for the largest member of the Alliance, the US, as well. The US has been the world's leading technological powerhouse since World War II; however, given China's and Russia's commitment to advances in technology, and specifically to AI, there has been a downward shift in this area. The US now faces a significant challenge in keeping pace with twenty-first-century advances in AI, and it is attempting to address it. The race between the US and its adversaries for information superiority through technological innovation is on (Engelke & Manning, 2017). Specifically regarding AI, there is increasing competition from China, which has declared an intent to become a world leader in AI technologies within the next 10 years and has put significant resources behind this effort. In an age where technological development is exponential, and the context is dynamic and rapidly changing, there is an operational imperative to address these challenges, given that the environment is becoming more and more unpredictable. NATO must bolster its commitment and foresight for implementing technological advances in its operations, as well as address its organizational learning processes. The accelerated rate of change in technology demands faster organizational learning.

It is no surprise that the GLOBSEC NATO Adaptation Initiative has also highlighted the "One Alliance" concept. This policy emphasizes that "NATO needs a forward-looking strategy that sets out how the Alliance will meet the challenges of an unpredictable and fast-changing world" (GLOBSEC NATO Adaptation Initiative, 2017). It encourages NATO leaders to commission a strategy review, with a future war strategy that integrates the full spectrum (hybrid warfare, cyberwar, counterterrorism, and hyperwar). These changes cannot be initiated, of course, without a thorough review of the future context and the security environment. This is the job of NATO's ACT, which completed its Strategic Foresight Analysis (SFA) report in 2017. General Denis Mercier, Supreme Allied Commander Transformation (SACT), emphasized in his foreword that "the rapidly changing complex security environment will continue to be the main driver for NATO's adaptation efforts" (NATO Allied Command Transformation, 2017). The document builds upon the SFA 2013 report and its 2015 update, with the goal of identifying trends that will shape the future strategic context and deriving implications for NATO out to 2035 and beyond. This long-term military transformation effort is also supported by another hallmark document, the Framework for Future Alliance Operations (FFAO), which is designed to improve the Alliance's long-term perspective and to inform the defense planning processes of both NATO overall and its member countries.

The SFA puts special emphasis on technology, and Chapter 4 of the document highlights that "the introduction of Artificial Intelligence (AI), autonomous systems, and other disruptive technologies are expected to enable humans to achieve a profound new state" (NATO Allied Command Transformation, 2017). The document forecasts a new age of collaboration between humans and machines, and recognizes that this may bring both advantages and risks for humanity. The accelerated rate of technological advance presents significant interoperability challenges to NATO, as well as increased debate regarding the moral values and ethical principles associated with its use. Advances in AI technologies and related autonomous system designs will also have a huge effect on acquisition and life-cycle management processes.

Technological advances such as AI will be ubiquitous, available to everyone regardless of whether they are private individuals, corporations, nations, or nonstate actors. These advances will enable disruptive behaviors and present challenges to existing frameworks. The current near-monopoly of states on high-tech weapons continues to erode. Data are becoming a strategic resource, and the increasing number of sensors in a global network generates both operational opportunities and vulnerabilities. Because the commercial sector is leading the development of disruptive digital technologies, the Alliance must build systems to collaborate with the innovation ecosystem. Commercial off-the-shelf solutions are increasingly available at lower cost and are rapidly developed, such that nation-state and nonstate actors are actively searching for dual-use technologies to incorporate into their own capability development processes. The Alliance seems to understand that reliance on disruptive digital solutions, such as AI, will create vulnerabilities as well, and the overall resilience of the force is becoming a critical strategic issue. The interconnectedness with large multinational corporations also highlights the need for critical infrastructure protection, because in a hybrid warfare scenario every system that supports military operations (such as communications, power generation, transportation, and water supply) can be, and will be, a potential target.

At the operational level, the FFAO points out that these strategic trends have certain military implications. These implications are intended to inform the transformation of forces within the Alliance, including political development, long-term requirements, and capability development. They fall into the main stability areas of project, engage, consult, command and control, protect, and inform. At the heart of the system is a mission-command approach, which needs to be updated for the modern digital era. AI will enable both superior situational awareness, so that operational decisions are made better and faster, and the decentralized execution of human–machine teams. This concept of Mission Command 2.0 (Porkoláb, 2019) has huge implications for how future forces will fight on the modern battlefield. The central idea of Alliance transformation is that in order "to keep the military edge and prevail in future operations, NATO forces must continually evolve, adapt, and innovate and be credible, networked, aware, agile, and resilient" (NATO Allied Command Transformation, 2018).

AI solutions will play a crucial role in these aspects of force modernization. Recognizing this pressing need to maintain a technological edge, NATO's Science and Technology Board asked the NATO Science and Technology Organization (a network of nearly 5,000 scientists and engineers) to identify potential wild cards and start engineering potential solutions. In 2016, 12 technology areas were identified as having a game-changing impact on future military operations. The resulting report identified AI and its potential to replace human decision-makers, to control autonomous robots and vehicles, and to automate information fusion and anomaly detection (NATO Science and Technology Organization, 2018). Other technology trend analyses recognize the importance of AI with similar implications (Deputy Assistant Secretary of the Army (Research and Technology), 2017), and the 2018 US DoD AI strategy specifically highlights that "AI is rapidly changing a wide range of businesses and industries. It is also poised to change the character of the future battlefield and the pace of threats we must face" (US Department of Defense, 2018). A UK strategic document focusing on opportunities for the future of decision-making points out that "Artificial intelligence holds great potential for increasing productivity, most obviously by helping firms and people use resources more efficiently, and by streamlining the way we interact with large sets of data" (United Kingdom Government Office of Science, 2015). The European Union's AI ethics guidelines provide a framework and foundations for trustworthy AI, and detail the technical and nontechnical methods for developing such AI systems (European Commission, 2019). In light of this strategic guidance, it is not a surprise that AI is at the core of ACT's future technology development focus, and the NATO strategic command is currently looking at AI solutions to enhance the lessons learned process, strategic decision-making, and the NATO defense planning process alike.

In sum, NATO is no exception: just like other bureaucratic organizations, the Alliance is heavily involved in keeping up with current technological trends, AI being one of the most prevalent topics. In April 2018, SACT General Denis Mercier stated, "We must also internalize the urgent need for developing innovation in order to better understand the threats and opportunities in our strategic environment, and ensure that throughout NATO, all understand the challenges of interoperability, at the technical and political level. ACT must therefore be in a position to bring these issues to the attention of decision-makers in the Alliance, at the North Atlantic Council and Military Committee, and organize regular meetings at that level". One of these meetings, known as the "SACT conference" and engaging the NAC (North Atlantic Council), focused on AI and the many challenges NATO faces related to disruptive technological change. In front of NATO's leadership, SACT joined Sophia the robot on stage to discuss security issues, in order to demonstrate the level of sophistication of AI-enabled systems. Admiral Manfred Nielson also highlighted that "Human augmentation, underpinned by artificial intelligence, will be the extension of centuries of human endeavor in which people sought to become faster, stronger and smarter through the use of tools and machines" (Nielson, 2018).

The Deputy SACT emphasized that NATO must employ human–AI teaming efficiently and ethically. For that, NATO must achieve an effective convergence of technology and operating concepts, and adapt its organization and processes. NATO's ACT is leading that charge. AI and autonomy will surely drive NATO to require new processes, new skills, and new policies, and these changes will demand political willingness and changes to legal frameworks in order to fully exploit the potential of the new technologies. NATO has to leave its comfort zones and increase its pace at all levels. ACT's leadership understands that innovation at this level requires a sustained push from all levels in the hierarchy, as well as buy-in from staff members within defense organizations. This strategic approach to AI and autonomy serves as the basis for the work carried out by ACT, and is manifested in multiple projects ranging from using algorithms to make sense of the last 20 years of operational lessons learned, to enabling and enhancing the NATO defence planning process (NDPP).

AI was also one of the key topics at the 2018 Chiefs of Transformation Conference (COTC) in Norfolk, Virginia, where the chiefs of allied and partner nations discussed AI in depth and its likely impact on future considerations. Following this event, a request for information (RFI) was issued inviting submissions of ideas about AI-based technologies. However, it is worth noting that AI is primarily not a technology problem in and of itself. As US Secretary of Defense Mattis said, "success does not go to the country that develops a new technology first, but rather, to the one that better integrates it and more swiftly adapts its way of fighting" (Mattis, 2018). Rather, AI is an enabler that empowers individuals with information that they can integrate into their understanding of the operational environment and use to inform their decision-making. ACT has realized this as well, since the organization has long championed organizational transformation. NATO leadership understands that building a platform, which is the goal of multiple technological solutions, a technological fix, is only one side of the coin. There is a need for an organizational one as well, which connects the different components together; hence the need for design application and education aimed at speeding up organizational learning. The idea is not new: multiple NATO partner nations are already implementing and running design programs at the national level, and NATO already has some design networking and collaboration abilities within the Alliance to leverage. NATO ACT has taken this a step further and started to establish a formal design education program at the NATO School in Oberammergau (where the first course ran in 2019), as well as to deploy design modules (Mobile Education Teams) to overseas and even remote locations. Combining AI solutions and design thinking, a mindset for exploring complex problems or finding opportunities in a world full of uncertainty, exploring what is and then imagining what could be with innovative and inventive future solutions, could be a very potent combination for lasting organizational change.

While most AI debates in NATO often devolve into arguments over "killer robots", AI-driven technological change supported by a new mindset will introduce new processes for information management that will have a profound impact on NATO's organization, operations, and interoperability. Successful adoption of technology and the ability to reinvent the way we fight will help the Alliance maintain its competitive edge and deterrence capability. Thus, constant transformation plays a crucial role in the future success of the Alliance. There is a need to increase operational agility, the ability to sense, and the ability to build a network, but this complex strategic approach must be supported by modern technological tools, like AI, and a new strategic mindset that enables the whole organization to utilize this new technology. In the contemporary security environment, those who can get the latest technology to the warfighter faster will tend to enjoy a comparative advantage, unless that technology in turn blinds the organization to alternatives.

A Strategic Vision for NATO's Organizational Transformation

NATO has been adapting throughout its history, but the tempo and speed required to deal with potentially disruptive challenges like AI push the Alliance to the edge of its capabilities, and recently NATO has been experiencing one of the most urgent organizational transformations in its history. In an age of exponential technological development, the Alliance appears increasingly unable to deal with the problems of exponential technology disruption. Complex contexts require a different mindset, and NATO, which has been very successful in the past, has to look for new tools to face a growing number of wicked problems. AI-enabled tools are among the most important in the arsenal, but we have to put AI into a strategic context and define the root cause of the problem in order to apply AI-enabled solutions successfully. In the contemporary context, organizational learning seems to be the main challenge, and it must be accelerated, together with the mindset and culture, which must change as a result of the adaptation process. Generating alternative options and enabling their rapid implementation throughout NATO seem to be the best way to maintain competitive advantage in today's complex context.

Thus, building a platform of AI-enabled technological solutions is only one side of the coin. There is a need for an organizational one as well, which connects the different components together and creates interoperability within the Alliance. As NATO incorporates new AI solutions, radically new training and education solutions are needed in order to speed up organizational learning. As AI transforms industries with unprecedented speed, militaries appear constrained by traditional, resistant-to-change centralized hierarchies. Thus, ACT has a leadership role in developing, shaping, and nurturing the defense applied design-thinking cadre for NATO. In order to challenge current organizational learning practices, design thinking can support innovative thinking to generate creative responses to disruptive technologies.

While NATO as an organization has already executed design activities in various locations and for a variety of missions, those efforts were individually inspired and local with respect to the entire enterprise. As multiple nations now incorporate design thinking into formal military education within services, war colleges, and universities, NATO should use this tool to equip people with the mindset to reimagine the way we learn and fight modern war. Today, analytic-based military planning alone is insufficient, and action without critical reflection and novel creation appears inadequate. There is an operational imperative for NATO to adopt a new approach to the way it thinks about warfare. Design thinking allows NATO to change its traditional view of planning for instability situations and adapt to a new world order that requires creativity, innovation, and sense-making strategies to think about command and control in warfare from a new perspective.

A new generation of leaders will look at Mission Command differently as well. Right now, we are still using the original Mission Command concept, with the goal of establishing a decision-making cycle that works faster and more effectively than those of our potential adversaries. We must deliberately build command and control relationships that maximize operational efficiency, and create an operational tempo in which decentralized units can thrive in uncertain situations. This fosters a leadership training approach in which a mission command culture often improves resilience by enabling forces to perform the correct actions that lead to mission accomplishment when a centralized command system is not optimal. We also have to anticipate that AI will change this in the near future. Mission Command 2.0 will enable leaders in large bureaucratic organizations to oversee increasingly complex operations and situations, and will support an AI-assisted, emergent type of commander's intent. At the same time, the decentralized execution of small teams will be replaced by the decentralized execution of human–machine teams and autonomous swarms of machines, with a "human in the loop". This leadership philosophy, triggered and supported by AI technology, will transform decision-making into a collaborative and agile process, and empower leaders at the top of bureaucratic institutions to delegate intent co-creation authority to the lowest levels of the chain of command. If leadership training in the Alliance prepares junior leaders to take ultimate responsibility and to look at their actions as experiments, then this new AI platform for collaborative intent creation can eventually replace intent originating solely from top leaders. It is a radical departure from the traditional military mindset.

To implement this new "AI-friendly" mindset and these organizational methods, an AI-focused strategic vision is required, with novel educational solutions as well as methods to deliberately nurture people to adopt the new mindset within the organization. NATO has to gradually build up a larger population of AI-educated professionals, and conduct multiple experiments during exercises that address NATO challenges based on the tools AI can support. The upfront costs of establishing a strategy-driven "NATO AI program" are minimal compared to costly programs involving new platforms, while the return on investment is huge. This mindset shift can enable the Alliance to facilitate human–machine co-evolution, which in turn will change the way we address instability situations.

References

Ackoff, R. (1973, June). Science in the systems age: Beyond IE, OR, and MS. Operations Research, 21(3), 661–671.
Binnendijk, H., Hamilton, D. S., & Barry, C. L. (2016). Alliance revitalized: NATO for a new era. Washington, DC: Atlantic Council Publication.
Boulding, K. E. (1956). General systems theory: The skeleton of science. Management Science, 2, 197–207.
Bousquet, A. (2008). Chaoplexic warfare or the future of military organization. International Affairs, 84(5), 915–929.
Deputy Assistant Secretary of the Army (Research and Technology). (2017). Emerging science and technology trends: 2016–2045. Retrieved from https://csiam.org.cn/Uploads/Editor/2017-07-21/5971a7aa26e97.pdf
Dunford, J. Jr. (2017, January). From the chairman: The pace of change. Joint Force Quarterly, 84 (1st Quarter). Retrieved from http://ndupress.ndu.edu/JFQ/JointForce-Quarterly-84.aspx
Engelke, P., & Manning, R. A. (2017). Keeping America's innovative edge: A strategic framework. Washington, DC: Atlantic Council Publication.
European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://www.euractiv.com/wp-content/uploads/sites/2/2018/12/AIHLEGDraftAIEthicsGuidelinespdf.pdf
GLOBSEC NATO Adaptation Initiative. (2017). One alliance: The future tasks of the adapted alliance. Washington, DC: GLOBSEC Press.
Jackson, A. (2017). Innovative within the paradigm: The evolution of the Australian Defence Force's joint operational art. Security Challenges, 13(1), 67–68.
Kramer, F. D., Binnendijk, H., & Hamilton, D. S. (2015). NATO's new strategy: Stability generation. Washington, DC: Atlantic Council Publication.
Kramer, F. D., & Wrighston, J. A. (2016). Innovation, leadership and national security. Washington, DC: Atlantic Council Publication.
Kupchan, C. (2017, May 25). Is NATO getting too big to succeed? The New York Times. Retrieved from https://www.nytimes.com/2017/05/25/opinion/nato-russiadonald-trump.html
Mattis, J. (2018). US Secretary of Defense James Mattis speech at the announcement of the US National Defense Strategy.
NATO Allied Command Transformation. (2017). Strategic foresight analysis. Norfolk, VA: Allied Command Transformation Press.
NATO Allied Command Transformation. (2018). Framework for future alliance operations. Norfolk, VA: Allied Command Transformation Press.
NATO Science and Technology Organization. (2018). STO tech trends report. Public release version of AC/323-D(2017)0006. Retrieved from https://www.nato.int/nato_static_fl2014/assets/pdf/pdf_topics/20180522_TTR_Public_release_final.pdf
Nelson, H., & Stolterman, E. (2014). The design way. Cambridge, MA: MIT Press.
Nielson, M. (2018). Keynote speech by Admiral Manfred Nielson, Deputy Supreme Allied Commander Transformation. Hudson Institute conference.
North Atlantic Treaty Organization. (2016, July 08–09). Warsaw Summit communiqué. Retrieved from http://www.nato.int/cps/en/natohq/official_texts_133169.htm
Paparone, C., & Topic, G. Jr. (2017, June). Training is déjà vu: Education is vujà dé. Army Sustainment, 15.

Pondy, L. R., & Mitroff, I. I. (1979). Beyond open systems models of organizations. In B. M. Staw (Ed.), Research in organizational behavior (pp. 3–39). Greenwich, CT: JAI Press.
Porkoláb, I. (2019). Szervezeti adaptáció a Magyar Honvédségben: Küldetés Alapú Vezetés 2.0 a digitális transzformáció korában. Honvédségi Szemle, 2019(1), 3–12.
Stoltenberg, J. (2015). Keynote speech at the 2015 Chiefs of Transformation Conference by the NATO Secretary-General, Norfolk, VA. Retrieved from http://www.nato.int/cps/en/natohq/opinions_118435.htm
Tsoukas, H. (2017). Complex knowledge: Studies in organizational epistemology. New York, NY: Oxford University Press.
United Kingdom Government Office of Science. (2015). Artificial intelligence: Opportunities and implications for the future of decision making. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/566075/gs-16-19-artificial-intelligence-ai-report.pdf
United Kingdom Ministry of Defence, Development, Concepts and Doctrine Centre. (2016). Joint Doctrine Publication 04: Understanding and decision-making. Bicester: Ministry of Defence.
US Department of Defense. (2018). Artificial intelligence strategy: Harnessing AI to advance our security and prosperity. Retrieved from https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF

Chapter 10

AI Ethics: Four Key Considerations for a Globally Secure Future
Gina Granados Palmer

Abstract

Harnessing the power and potential of Artificial Intelligence (AI) continues a centuries-old trajectory of the application of science and knowledge for the benefit of humanity. Such an endeavor has great promise, but also the possibility of creating conflict and disorder. This chapter draws upon the strengths of the previous chapters to provide readers with a purposeful assessment of the current AI security landscape, concluding with four key considerations for a globally secure future.

Keywords: AI; ethics; dual-use; key considerations; robot; neuroscience; LAW

Coming to Terms with AI

There is no consensus on a final definition of Artificial Intelligence (AI), but AI must be defined in some manner as a baseline for discussing AI ethics. However, as evidenced in the previous chapters, many concepts overlap. These varied contributions map to a general framework for ethical discussion, a kind of Rosetta Stone of AI terms, concepts, and puzzles. This general framework can help leaders understand AI issues, associated ethical problems, and possible options for future exploration. Building a framework for ethical discourse, Casey Hart and Pauline Shanks Kaurin employ Heather Roff's three distinctions of AI: AI can automatically perform tasks, incorporate a method of machine learning to improve performance, and make symbolic claims about data and draw conclusions by manipulating those symbols. While these do not nail down a specific definition of AI, the iterations of AI as described are incapable of passing a Turing test (achieving independent thought indistinguishable from human thought).

Hart and Shanks Kaurin contribute from a philosophy and military ethics perspective en route to establishing foundational AI ethics and exploring moral agency. In contrast, contributor Keith Abney takes a robot AI and corporate research and development view, citing six recent definitions of AI used by influential AI companies. Notably, a 2018 Sony Corporation definition marks any technology performing information processing for specific tasks as AI if people perceive it as intelligent. This model does not require AI to pass the Turing test; the AI need only reasonably imitate intelligence. Other definitions Abney cites employ similar perception-based language, stating that AI systems need only "emulate, augment, or compete with" human performance on specific tasks. He further defines AI along the two axes of thinking and behavior. This division also reinforces the idea that AI need only behave as humans do (and not necessarily think like humans) to merit consideration as artificially intelligent. With these prior definitions in hand, Abney proposes that AI is "a goal-oriented, problem-solving thinking process, with at least some human-level (or better) capabilities, that arose artificially, not naturally." He further extends his reasoning to include definitions for relevant subcategories that will be discussed later in this chapter. These include the delineation of narrow and general AI and a definition for "robot," eventually homing in on how these concepts apply to twenty-first-century warfare.

Contributors in this volume refine definitions of AI with increased granularity. John Shook, Tibor Solymosi, and James Giordano specify "five core types of AI," expanding upon the work of Peter Asaro:

(1) Automation AI
(2) Animate AI
(3) Autonomous AI
(4) Agentic AI
(5) Autopoetic AI

Of these five core types, Type 1 and Type 2 incorporate ways of processing information for a task. Type 1 is a stationary machine with no animated physical attributes, and Type 2 is animated in the sense that it is mobile enough to navigate through an environment in some manner. Type 2 automates processes but is also constrained by a human in the decision-making loop. As such, humans are involved in all process stages to approve courses of action. Type 3 conducts assigned activities but does not need a human in the loop, and is therefore autonomous, while Type 4 may autonomously pursue assigned outcomes in any manner it chooses. Finally, Type 5 is "autopoetic," in that it incorporates decision heuristics to learn, adjust, and execute agential conduct. Shook, Solymosi, and Giordano's five core types provide a continuum of AI acquisition and are therefore useful for establishing a common framework.

Mapping against Hart and Shanks Kaurin's AI definitions, although Types 1–5 achieve increasing levels of behavioral agency apart from human intervention, none of them necessarily achieves moral agency. Indeed, Hart and Shanks Kaurin might argue that only Giordano et al.'s Type 5 systems have even a distant future possibility of acquiring moral agency, and only if machines manage to reach beyond rote, human-determined automation.

Narrow, Narrow, Narrow

How does all of this apply to global security? Tightening the frame requires at least three more concepts as prescribed by Abney: (1) narrow versus general AI (AGI); (2) robot; and (3) Lethal Autonomous Weapons Systems (LAWS). Regarding narrow AI versus AGI, Abney provides further delineation of terms. Narrow AI "matches or exceeds human capabilities at attaining a specific cognitive goal," while an AGI "has human-level or better capability generally and moves toward realization of general, non-human sentience." These narrow, specific constraints allow short-term, targeted application of AI principles to systems that fall short of Turing-test sentience. By contrast, Hart and Shanks Kaurin analyze morality and ethics from an AGI perspective, while Abney tackles narrow AI with cognitive goals specific to the space warfighting domain.

Before addressing the last two definitions as proposed by Abney, it makes sense to first acknowledge the need for a timeline component of our framework. Namely, the different types of AI and the different innovation timelines for narrow AI versus AGI evidence the need for short-term, intermediate, and long-term strategies and solutions. We may then harness the timeline to provide a flexible framework for discourse and policy. Short-term might, for example, denote realization within one to five years, intermediate within five to 10 years, and long-term in 10 or more years. Thus, the present chapter acknowledges that anticipatory ethics (as Abney prescribes) must address all three of these timelines at once.

Abney observes that the nature of conflict is radically shifting in the short term and concludes we will soon be forced to rethink the ethics of war. He further asserts that two factors will dominate this radical shift: the warfighting domain of space and the integration of AI automation. As previously noted, AI only needs to emulate human behavior and result in tangible consequences to warrant the application of anticipatory ethics. Indeed, if the history of atomic bomb development is any indication, we need only be close to acquiring AI-automated warfare to trigger a race. In such a race, gaining warfare advantage over the enemy is the strategic imperative before ethics and legality have a chance to stifle innovation. The emphasis will be to innovate and win at all costs. Abney contends we must acquire AI superiority first, before nation-states such as Russia and China, to avert tyranny in the interest of just peace and flourishing. Scholars and politicians have long debated just peace. Suffice it to say, which nation-states are "just" is arguably relative, and a just and lasting peace is not necessarily easy to define, much less establish.

Returning to two of Abney's adopted definitions, we examine "robot" and "LAWS" in the interest of narrowing toward global security concerns. For the first term, "robot," Abney cites the definition "a machine, situated in the world, that senses, thinks, and acts." Lethal Autonomous Weapons Systems, commonly abbreviated LAWS, are robots that are designed to kill humans. With our terms now defined to the narrower focus required, we next turn to AI technological capabilities before discussing the application of anticipatory ethics.

AI Technology: Are We There Yet?

We may never "get there" in creating AI that passes the Turing test: an artificial mind capable of thought and moral decision-making wholly and completely independent from humans. However, we have already arrived at a near-Turing point in AI development where outcomes in specific, human-driven scenarios can approach and emulate Turing-AI outcomes. Therefore, AI ethics must immediately address the concerns of automated, human-in-the-loop, and augmented-human weapons of warfare. In other words, the current state of AI technology necessitates near-Turing-test ethics.

William D. Casebeer proposes in his chapter that we aggressively develop the "artificial conscience." He argues that morally autonomous AI is not only possible but a requirement in modern warfare, which possesses power but lacks just war responsibility. Like Hart and Shanks Kaurin, he agrees that questions of both moral knowledge and the nature of morality are central to any practical application of AI. Furthermore, warfare and security AI demand consideration of Colonel (retired) Charles Meyers' "three Cs" of character, consent, and consequence. Casebeer argues that the current military development of automated weapons and machine-human killing teams requires that we get "ahead of the ball." We must examine and apply moral coding in these systems even if AI does not yet have moral agency. Although not contending that AI is currently capable of moral sensitivity, judgment, and motivation, Casebeer argues that creators of human-in-the-loop and autonomous systems must, for now, metaphorically embody these artificial "thinking" processes. As such, an "artificial conscience" already exists; what is rapidly shifting is the level of sophistication possible in encoding that conscience.

One example that we can use to exercise Meyers' three Cs immediately comes to mind: area-denial sentry guns. In the past decade, South Korea posted Samsung-designed robot sentries to deny access to the demilitarized zone and fire rubber bullets, but the original design was a stationary LAWS with lethal ammunition. The design and use of these systems must address the situations under which moral character, consent, and consequence come into play. Does the system have the character to discriminate friend (civilian or military) from foe (enemy combatant)? Children from adults? Does the system involve a human in the consent loop, or is the fire mode automatic with no human involved?

What are the consequences of codifying or not codifying these metaphorical moral components? Area-denial machines currently exist. Although upgrading them to LAWS is illegal under current international law conventions, they could be assembled and employed in various conflict situations. Area-denial scenarios may not breach an agreed-upon definition of war, but their potential for dual-use exacerbates conflict and tensions. Although it is improbable AI will achieve sentience within the 10-year timeline, AI ethics must stay ahead of that development curve. Indeed, the automation of many processes, especially weapons-based killing, may not pass the litmus test of acting entirely independently of human decision-making. Yet automated weapons systems are (potentially) capable of pulling a trigger, scanning an area and killing single or multiple targets, and, if fitted for them, launching city-destructive missiles, whether intentionally or unintentionally prompted by a human. Nation-states or nonstate actors could potentially employ LAWS in the field in many of these ways in the short term. Any attempt to fully automate these weapons warrants stringent AI ethical due diligence. But if AI technologies and systems are not sentient, can they be moral? And if not, how should ethics be operationalized?

Can LAWS Be Moral?

Hart and Shanks Kaurin observe that even the most advanced AI-driven systems are tools, not agents, of war. Laying out assumptions about war and moral agency, the two scholars first examine the existential nature of war. It is only after establishing this baseline definition of war, they reason, that we can understand our relationships to war as moral agents. Hart and Shanks Kaurin put forth a twofold argument establishing AI systems as mundane tools: (1) existing AI systems are a long way off from achieving moral agency, and (2) AI systems sit on a continuum from nonmoral tools (such as swords and tanks) to moral agents (humans), and AI currently falls squarely on the nonmoral end of the spectrum. To date, futurists have explored how an agential AI might change the face of warfare. Until AI moral agency is realized (if ever), Hart and Shanks Kaurin conclude, we might consider how AI could change conflict as a nonmoral, sophisticated component of weapons technology.

Giordano concludes similarly to Hart and Shanks Kaurin regarding the issue of AI and moral agency, premising that moral responsibility does not involve AI as an agent but can only be about AI as a follower of rules. If AI systems are amoral, then humans must be responsible for amoral AI. He describes a group or collective moral responsibility in which humans in war are morally accountable for AI deployed in the field. LAWS deployed in combat, therefore, could theoretically perform within standards of conduct in the field as part of a human-machine team where LAWS are intelligently autonomous and humans provide ethical autonomy. Giordano et al., however, do not stop at an "ethics of using AI." They make a reasonable claim that since AI can imitate human intelligence, perhaps we can also program moral intelligence.

Moral programming, Giordano et al. contend, could encode the military ethos of a specific warfighting group with a moral point of view and ethical practice specific to their communal perspective. Thus, LAWS that are morally programmed would be part of a human-machine military unit that collectively accomplishes warfare objectives that are (optimally) consistent with a communal ethos. Regarding moral responsibility, Abney also seems to agree with Hart and Shanks Kaurin that LAWS are amoral, and therefore humans must bear moral responsibility for their use. He applies his red line analysis to scenarios of narrow AI in the domain of space, necessitating his consideration of the dual-use paradox and just war ethics. Casebeer is also interested in "artificially intelligent ethics" and how to use AI to build ethics into systems. He takes a general approach that would apply this to LAWS, whether operating independently or as part of human-machine teams. Collectively, then, the authors in this volume proceed by similar general approaches: establish definitions for AI, discuss aspects of AI morality and ethics, and, finally, apply ethical principles. Before comparing applications of AI ethics, it is valuable to identify another piece of the puzzle relevant to ethics and technology: the paradox of dual-use.

The Dual-use Paradox

Generally speaking, we can ask the dual-use question of all AI technology scenarios. Any AI developed for positive use might also be bent to detrimental use. Dual-use thinking must be applied not just to technological innovation and pushing the boundaries of technologies, but to exploring ethical issues. Any surveillance AI technology, once developed, might be weaponized into LAWS. Surveillance and identification are critical components of any mission to search and destroy. Unmanned vehicles such as the General Atomics Predator series aerial drones, initially used exclusively for air monitoring, have for years been arms-capable with Hellfire missiles and laser-guided bombs for targeted killing. What would it mean if adversaries deployed surveillance drones or LAWS over US airspace rather than over Syria or Pakistan? Can citizens assert rights to safety, security, and freedom to live without surveillance or physical threat? Dual-use development dictates that once rulers gain power, they are unlikely to relinquish it and will exercise that power to remain in control. A controlled populace may have little choice but to comply. Thus, dual-use AI development necessitates dual-use ethics. Responsible design, therefore, should integrate both multiterm thinking and dual-use anticipatory ethics for LAWS and other AI technologies.

Ethics and AI

Technologists, engineers, and thought leaders have been warning for years that AI-automated killing is the next significant threat to humanity.

Leaders on the balancing side, advocating unrestrained innovation, contend that we are far from realizing an AI that could pass the Turing test, if ever. Examination of the ongoing debate has taken us on a path from mapping definitions and grappling with the continuum of technology capabilities to recognizing the need for AI ethics. Timothy J. Demy is concerned with applications of the just war tradition to AI military technology. His cited triangulated cycle of principle → policy → practice provides a comprehensive framework through which all stakeholders might grapple with the intersection of the two realms of military ethics and AI. Narrowing his lens to focus on US policy, he illuminates current documentation, specifically the "Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity," noting that technology continually changes, but the principles of the just war tradition do not. He argues for the application of those principles in the fast-moving world of AI, contending that just war tenets and ethics are readily applicable and well suited to AI and LAWS issues. Namely, the tenets of proportion, discrimination, and sovereignty are already established and currently in debate. Simply stated, new AI technology structures require strong ethical foundations. Ethical rules of engagement, just war questions of sovereignty, discrimination of combatants from civilians: these issues call for AI ethics, foundational specifics for moral decision-making (both on the field and off). Indeed, the five chapters discussed in this overview and comparison chapter examine AI concerns from this foundational ethics perspective:

• Demy makes a case for the just war tradition as a legitimate and sound framework for continued AI ethics analysis. Sovereignty, discrimination, and other just war components are well-established principles in international law.
• Hart and Shanks Kaurin test foundational terms and philosophical approaches with a military ethics lens, preparing robust definitions for just war thinking and debate on AI ethics.
• Casebeer sets out an AI technology and ethics approach for not only reaching for AI sentience in the long term but also integrating ethics into the AI conscience in the short-term time frame. He proposes that we should democratically explore cognitive biases and ethics approaches to identify the best models.
• Abney tests just war tenets and theory in a narrow application that is currently changing where and how we fight. The shift in physics in the war domain of space transforms how we should apply just war principles such as proportion and sovereignty. He contends there should be no jus in bello (justice in the conduct of war) legal restrictions on LAWS.
• Finally, Shook, Solymosi, and Giordano clarify a continuum of AI thinking development, contending that morality and responsibility reside within a local, military community ethos. As such, AI may be programmed and attuned to deliver within that ethos in the field.

Conclusion: Four Key Considerations for a Globally Secure Future

Literature about security, warfare, and technology is vast and increasing daily. As a follow-on to the chapters presented in this book, interested readers might refer to the works of writers such as Keith Abney, Ronald Arkin, Ian Barbour, George A. Bekey, Patrick Lin, and George Lucas, Jr. The contributors have provided various definitions of AI and other key terms. They have explored whether AI can be moral and the possibilities of programming and building an artificial conscience. The authors have applied well-known ethical frameworks to the puzzles of AI and LAWS in a global security context. Synthesizing the given definitions of AI, we assume that it may, indeed, be possible to program a continuum of operationalized, narrowly and locally defined ethical rules into AI. Conjoining this assumption with established international normative tenets derived from just war ethics as a baseline for discussion, this brief analysis asserts four key considerations for a globally secure future:

(1) AI technology currently necessitates near-Turing-test ethics. Automation and human-augmented actions are enough of a tipping point without definitively passing the Turing test. AI ethics research and development must address the concerns of human-machine teams, augmented humans, and automated weapons of warfare.

(2) Complexity and uncertainty necessitate short-term, intermediate, and long-term strategies for ethical AI development and use. Given the VUCA (volatile, uncertain, complex, and ambiguous) aspects of the current global security environment, leaders must consider short-term, intermediate, and long-term tactics and strategies. Establishing a continuum of AI types and debating which technologies fit into which levels is a useful way of anticipating problems and determining how ethics must be applied. If we are to, as Casebeer exhorts, stay ahead of the ball, we must exercise ethical analysis and integration now and repeat it cyclically for improvement as technologies develop.

(3) Dual-use AI development necessitates dual-use ethics. Dual-use will continue to predominate in AI research and development as a method of embracing innovation. Dual-use development strategies skirt (and creep up upon without necessarily activating) a public demand to address AI ethics. Responsible innovation, implementation, and law necessitate discourse on how to apply just war and other warfighting ethics to global security concerns. AI ethics is applicable now, in the current time frame, to a variety of contemporary and near-future AI and warfighting technologies. Any technology has a potential for dual-use, and creators, users, and the spectrum of concerned parties (from citizen watchdogs to leaders and lawmakers) must reasonably incorporate safeguards against alternative exploitation.

(4) New AI technology structures require strong ethical foundations. Although the technological landscape of AI and global security is quickly shifting, foundational ethics remain the same. We must shift to an imperative of constant and cyclical application of ethical principles as part of all processes. Although grounded in the Western intellectual canon, the ethics of the just war tradition are a useful and globally applicable framework, and foundational to existing international norms and law.

Many might effectively argue that these considerations are evident. Yet it is worth noting that, to date, there is still no international consensus on what constitutes a Lethal Autonomous Weapons System. According to an August 2019 policy think tank summary on the status of international relations and LAWS, nation-state parties debating a preemptive LAWS ban have been unable to agree on a single definition for LAWS. Although militaries have already operationalized automation functionality into weapons, no lethal weapons are currently recognized as fully autonomous. The study consolidates data on national stances, which fall into a recognizable international relations pattern. States that have a high probability of either acquiring or developing LAWS capability oppose the ban, while those that lag in acquisition and development support it. Of more than 30 nations, those that oppose the ban include France, Israel, South Korea, Russia, the United Kingdom, and the United States (countries the analysts mark as most capable of developing LAWS), in addition to Australia, Belgium, Germany, Spain, Sweden, and Turkey. Given the current global security climate, it is natural that states racing toward LAWS superiority would reserve a right to pursue that technology. Likewise, it is prudent that nations that would be disadvantaged by lacking LAWS technology would support such a ban. In hindsight, following a possible first automated mass destruction event, a ban might constitute the wiser choice. Given the current global technology landscape, however, a moratorium on the race to acquire LAWS is highly unlikely. With a silent post-Cold War race for automated mass-destructive advantage already in progress, just states pursuing LAWS have a moral imperative to integrate just war ethics. Whether from Western or other perspectives (and barring an enforceable international ban on LAWS), integration of and due diligence concerning AI ethics is the only way to keep conventional war tenets in practice and to avert the chaos of warfighting with questionable limits and no identifiable ethos.

Disclaimer The views expressed in this presentation are those of the author and do not necessarily reflect the official policy or position of the US Naval War College, the US Navy, the US Department of Defense, or the US Government.

References

Future of Life Institute. (2015). Open letter on autonomous weapons. Retrieved from https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1

Galliott, J. (2017). The unabomber on robots: The need for a philosophy of technology geared toward human ends. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence (pp. 369–386). Oxford: Oxford University Press.
Giordano, J. J. (2015). Neurotechnology in national security and defense: Practical considerations, neuroethical concerns. Boca Raton, FL: CRC Press, Taylor & Francis Group.
Lin, P., Abney, K., & Bekey, G. (2014). Ethics, war, and robots. In R. L. Sandler (Ed.), Ethics and emerging technologies (pp. 349–362). Basingstoke; New York, NY: Palgrave Macmillan. doi:10.1057/9781137349088_23
Lin, P., Abney, K., & Jenkins, R. (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. New York, NY: Oxford University Press.
Nardin, T. (2015). Michael Walzer, Just and unjust wars. In J. T. Levy (Ed.), The Oxford handbook of classics in contemporary political theory. Oxford: Oxford University Press. Retrieved from https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780198717133.001.0001/oxfordhb-9780198717133-e-26
Pike, J. Samsung Techwin SGR-A1 Sentry Guard Robot. GlobalSecurity.org. Retrieved from https://www.globalsecurity.org/military/world/rok/sgr-a1.htm
Reinhardt, D. N. (2007). The vertical limit of state sovereignty. Journal of Air Law and Commerce, 72, 1–77. Retrieved from http://scholar.smu.edu/cgi/viewcontent.cgi?article=1126&context=jalc
Schmitt, M. N. (2011). Drone attacks under the Jus ad Bellum and Jus in Bello: Clearing the 'fog of law'. In M. N. Schmitt, L. Arimatsu, & T. McCormack (Eds.), Yearbook of international humanitarian law – 2010 (pp. 311–326). The Hague: T.M.C. Asser Press. doi:10.1007/978-90-6704-811-8_9
Shue, S., Hargrove, C., & Conrad, J. (2012). Low cost semi-autonomous sentry robot. In 2012 Proceedings of IEEE SoutheastCon. doi:10.1109/secon.2012.6196937
USNI News. (2019, August 20). Report to Congress on lethal autonomous weapon systems. USNI News. Retrieved from https://news.usni.org/2019/08/20/report-tocongress-on-lethal-autonomous-weapon-systems
Zador, A., & LeCun, Y. (2019). Don't fear the terminator. Scientific American Blog Network. Retrieved from https://blogs.scientificamerican.com/observations/dontfear-the-terminator/

Chapter 11

Epilogue: The Future of Artificial Intelligence and Global Security

James Canton

One cannot speculate about or forecast the future of global security without considering the future of Artificial Intelligence (AI). The genie is out of the bottle, though not all leaders see this coming. The weaponization of AI, whether virtual, as in cyber operations and analytics, or physical, as in synthetic brains for drones, ships, or robots, is an undeniable game-changer for global security. Nations want AI for deterrence, reprisal, defense, and aggression. Whether in great power competition, gray wars, rogue engagements, or terrorist incursions, AI's trajectory toward increasingly useful intelligence and eventual autonomous lethality is undeniable. This is coming sooner than predicted. Hold on.

The future of AI will parallel the future transformation of national and global security. State and nonstate actors racing to become AI "haves" rather than "have nots" are already facing off in a decisive competition that influences intelligence, diplomacy, and defense. There are many plausible sinister scenarios for AI in the hands of actors who seek to elevate their game of chaos, threat, and aggression as the antithesis of peace. Cyber-AI makes fake news a truth machine. Elections become targets. AI wars become common fare. Autonomous AI-as-a-Service becomes the game-changer, first between private actors and then expanding to states and state surrogates. AI will protect humans from AI, we hope.

We should be wary of underestimating the immensity of AI's exponential evolution, whether in the hands of state or nonstate bad actors. Of course, bad actors will also include AIs themselves: not just the popular sci-fi rogues, but AIs with their own agendas of logical efficiency, mission adaptation, critical upskilling, extrapolative decision-making, or sheer sustainability and Darwinian survival. This will not be hard to predict as neuromorphic and neuromimetic AI models, modeled on human neuroscience, emerge as a dominant paradigm for AI development. Not surprisingly, the future of AI will be based on biological evolution. The computational biology of the future will program synthetic neural networks to enable higher-functioning AI to take on immensely complex tasks and problems that defy humans today or would simply take humans too long to figure out.
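To make the notion of a neuromorphic model concrete, consider a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest biologically inspired units used in neuromorphic computing. The code and every parameter value below are arbitrary illustrative choices under common textbook assumptions, not anything drawn from the chapters in this volume.

    # Illustrative sketch only: a leaky integrate-and-fire (LIF) neuron.
    # Parameter values are arbitrary and chosen for clarity.
    import numpy as np

    def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65.0,
                     v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
        """Simulate membrane voltage over time; return voltages and spike times."""
        v = v_rest
        voltages, spikes = [], []
        for step, current in enumerate(input_current):
            # Leaky integration: voltage decays toward rest, driven by input current.
            v += (-(v - v_rest) + resistance * current) * (dt / tau)
            if v >= v_threshold:          # threshold crossing -> emit a spike
                spikes.append(step * dt)
                v = v_reset               # reset after spiking
            voltages.append(v)
        return np.array(voltages), spikes

    if __name__ == "__main__":
        # A constant input current (arbitrary units) applied for 200 ms.
        current = np.full(200, 2.0)
        _, spike_times = simulate_lif(current)
        print(f"{len(spike_times)} spikes in 200 ms of simulated input")

Networks of such spiking units, rather than the dense floating-point layers of today's deep learning, are the basic currency of the neuromorphic hardware the epilogue anticipates.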


The time premium of Faster Than Moore's Law AI processing makes it difficult to predict the upper end of the scale of processing power, or even of agility, especially as AI will be designing AI. But suffice it to say, my forecast is that by 2030 AI will be difficult for humans to fully comprehend. We are talking about the inevitable rise of smart technology, both in the image of humans and in forms that may arise from a completely different, nonhuman synthetic evolution. We should carefully analyze and consider the long-range implications of this development for human evolution. It will be attractive to enhance humans with AI-designed augmentation: extending lifespans, curing diseases, and improving human performance through cognitive enrichment.

Future wars may be won, lost, or even avoided by the AI National Dividend: the increased value that societies gain from having AI embedded in the sciences, industry, and national security. The parallel evolution of different species that interact, co-evolve, depend on one another, compete, and cooperate has a long history on this planet. Species from plants to mammals to reptiles have both coexisted and competed over millennia. AI will no doubt engage as another new species; toward what aims, one can only imagine, and that will be of keen interest, especially when and if AI seeks competition with humans.

The complex future of global security will be reshaped by the hegemonic and competitive advantages that AI brings to state and nonstate actors. Ideological differences about the role, ethics, values, and even the nature of AI will put these actors in conflict. Even today, we see this in the rise of smart robotics and UAVs that are used differently around the world for various security missions, based on ideology, across the global multidomain environment. China is selling autonomous UAVs today. Buyers are rethinking their security plans based on the active threat of weaponized AI. This is not the future. This is now.

The future is a world where weaponized AI proliferation is either controlled or not. With AI in everything and everywhere, it is difficult to consider banning AI while actors are using it for hegemonic purposes. An AI arms race is brewing and will shape the future, even if the outcome is a ban. Keep an eye on the consumer market to see the creeping invasion of AI-driven robots into the home, AI in autos, AI decision support in business, telecom network management, equity market trading, and medicine and health care. The consumer market may not appear on par with the global security market, but there is a long tradition of consumer adoption predicting emerging technology adoption. There is no domain of human existence where AI will not be used for competition, efficiency, or speed enhancement in the near future.

Many of the vexing global security issues that face humanity, which we have not been able to fix and do not fully understand, such as global hunger, disease, public health, economic productivity, the lack of universal education, rampant poverty, and even peace, may play a role in guiding humanity's embrace of AI. AI for good could well be transformational for global security. Though the easy forecast is killer robots and AI run amok, the reality of global security futures may be more nuanced and even positive. Future global security scenarios will likely include the good, the bad, and the ugly, as I have written about previously.


The future of narrow AI, which performs simple tasks at first but then expands to complex ones such as robotic surgery or big data analysis of suspected threats, will only increase in utility, accuracy, reliability, and performance efficiency as AI learns and adapts over time. A decade of AI Time could be minutes: not human time but AI Time, a different model of time. General AI, with a broader skills platform that includes autonomy, self-learning, and self-organization, and that is vastly more powerful in knowledge discovery, intelligence, innovation, and insight generation, is coming as well, and sooner, by 2030 or before. Glimpses of AI's future today, such as Google's AlphaZero and its capacity to learn without being taught by humans or even by other AI, are an early indication that the accelerating holy grail of AI, autonomous learning, is upon us (a toy illustration of learning purely from self-generated experience follows at the end of this passage). In other words, we have failed to forecast how fast AI is evolving and now have to course correct. We will need AI to develop an enhanced situational awareness of AI itself and to advise humans, if it can, on how to understand this emergence.

In the near future, a decade or less from today, there will be no definitions as primitive as narrow or general AI. There will be various species of AIs with enhanced synthetic intelligences, various skills, and escalating cognitive capabilities. These future AIs will be extensions of human minds, organizational workforces, and of course armies. AI will also self-organize into its own communities, networks, and ecosystems, where it will spawn its own AI cultures, at first similar to and ultimately different from organic life, biological life forms, and societies. AI societies may well evolve beyond human understanding, and this should be expected. This is predicated on the assumption that the key factor shaping the future of AI is nonbiological evolution and the rise of synthetic cognition, not familiar human biomimicry. AI societies will prosper. They will simultaneously antagonize and serve humanity. AI will not follow predictable paths, even if programmed to, but will become independent agents and decision-makers.

AI in the near future (one to three years) and certainly in the far future (20 years plus) will be a game-changer for global security because it will enable ways to predict, anticipate, shape, prevent, and direct conflict. AI will also be a game-changer in both predicting and deterring potential conflict scenarios, in forecasting conflict, and certainly in warfare itself. I would argue that AI used for deterrence, not unlike other massively powerful weapons such as nuclear arms, demonstrates a likely path ahead. This global security deterrence will of course come only after a display of AI lethality has been proven, as nuclear bombs signaled in World War II. The future of AI and global security cannot be forecast or even discussed without addressing lethality. Putting aside the clear legal and ethical lines that govern the use of lethal AI in defense, it would be shortsighted not to realize that AI in warfare will at some point become both a deterrent to waging war and a dominant force in battle. We can and should debate the ethical implications, but AI as a weapon of various forms and deliverables is nevertheless inevitable. One can hope deterrence is paramount, but given the mass proliferation of AI in the future, it is prudent to assess the situational readiness of AI's use in warfare.
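The self-learning referenced above can be illustrated here only in a toy way. The sketch below is emphatically not AlphaZero, which combines self-play, tree search, and deep neural networks; it is a minimal tabular Q-learning loop on a hypothetical five-cell corridor, with every name and parameter invented for the illustration. What it shares with the idea in the text is that the agent is never shown human examples; it improves solely from the rewards it discovers by acting.

    # Toy illustration of learning from self-generated experience (not AlphaZero):
    # tabular Q-learning on a hypothetical 5-cell corridor with a goal at one end.
    import random

    N_STATES, GOAL = 5, 4            # cells 0..4, reward only at cell 4
    ACTIONS = (-1, +1)               # move left or move right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action index]

    def greedy(state):
        """Pick the best-valued action, breaking ties at random."""
        best = max(q[state])
        return random.choice([i for i in (0, 1) if q[state][i] == best])

    def step(state, action):
        """Toy environment: move within bounds; reward 1.0 on reaching the goal."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    for _ in range(200):                         # 200 self-generated episodes
        state = 0
        for _ in range(100):                     # cap steps per episode
            a = random.randrange(2) if random.random() < EPSILON else greedy(state)
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-learning update: learn only from the agent's own experience.
            target = reward + (0.0 if done else GAMMA * max(q[nxt]))
            q[state][a] += ALPHA * (target - q[state][a])
            state = nxt
            if done:
                break

    # After training, the greedy policy should head right, toward the goal.
    print(["right" if q[s][1] >= q[s][0] else "left" for s in range(GOAL)])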
The collapse of Moore's Law and the rise of exponentially powerful, cheap computing at the core of every AI system will level the playing field for superpowers and even nonpeer actors: states, terrorists, and rogues. Falling UAV costs and increasingly capable onboard AI give a glimpse of the near future. The era of competition over AI-enhanced systems has begun. The future AI competition will therefore be over who has the smarter, faster, and deeper capacity to interdict, prevent, disable, or morph future AI deliverables in a multidomain environment. The AI arms race has begun. Within 10 years, this may mean that waging war would be catastrophic, given the AI-enhanced infrastructure and global AI proliferation that would ensue. There will be advantages: the AI arms race will lead to AI-versus-AI conflict, with humans as observers.

It is certain that entire armies, navies, and air forces will be first tele-robotic, then semi-autonomous, and then fully autonomous within this decade. Many nations and leaders will not embrace this forecast as a possibility or as an eventual future, and their skepticism may be accurate for a period of time. However, as autonomous AI systems become ubiquitous and effective as a new global defense paradigm that is superior to human-centered force protection (regardless of the size of human-centric armies, navies, or air forces), such change will lead to a shift in the nature of global security. One can envision how much more efficient, cost-effective, and reliable such AI networks and systems will become; so much so that one can readily imagine dependent actors swiftly moving to wage war when it is merely AI versus AI. Advances in AI will drive the gamification of warfare and reshape conflicts that will undoubtedly be broadcast live. Wherever there is competition for power, influence, and dominance, from gray and ghost conflicts to direct engagement and cyber, AI will offer a new dimension to global security that must not be underestimated. It may even be possible to develop a predictive capability: to forecast conflict-prone events, to map out the time-stream of actions that would lead to conflict or even war, and to determine how to avoid and prevent such conflict from ever happening by taking certain strategic actions now to redirect the time-stream of events (a deliberately simple sketch of such risk forecasting follows at the end of this passage). This sounds improbable given what we think of AI today, but not in the future of 2035.

Some political, defense, and intelligence leaders fail to appreciate or understand AI as a game-changing paradigm. This is a risk factor that reduces strategic readiness and situational awareness in both the commercial and defense establishments. The cost of this failure is the result of a lack of vision: a legacy idea that warfare and intelligence should be run by humans alone. The ethics of war and the clear need for human leadership in an era of autonomous AI are reasonable and, today, required by law. Not all actors agree with this doctrine. There are and will be state and nonstate actors that do not share the doctrine that humans have the final say, that autonomous AI cannot persist in the kill chain. This is a clear risk to global security: competing doctrines around autonomous AI may produce a vulnerability for some actors. AI versus AI is coming. AI and human teaming may be the new paradigm for defense in the one- to five-year forecast from today. AI-enhanced teaming and collaboration with humans is the most strategic way forward because it enables actors who abide by ethics, values, and morals to direct AI.
A strategy that fails to appreciate this historic moment would greatly reduce operational readiness in a world being reshaped by AI. At the same time, it would be folly not to prepare for fully autonomous AI in the virtual and physical domains in which conflict is waged.
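As a deliberately simple sketch of the forecasting idea raised above, consider a toy risk model: entirely synthetic data, invented feature names, and a plain logistic regression trained by gradient descent. It shows only the general shape of such a pipeline, not a real conflict model and not any system described in this volume.

    # Entirely hypothetical sketch of conflict-risk forecasting: synthetic
    # "event features" and a logistic-regression model trained by gradient
    # descent. Feature names and data are invented; this is not a real model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic features per region-month, e.g. troop movements, rhetoric score,
    # cyber incident count (all invented and randomly generated here).
    n_samples, n_features = 500, 3
    X = rng.normal(size=(n_samples, n_features))
    true_w = np.array([1.5, -0.8, 2.0])              # arbitrary "ground truth"
    p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
    y = rng.binomial(1, p)                           # 1 = conflict event occurred

    # Train logistic regression by batch gradient descent on the log loss.
    w = np.zeros(n_features)
    lr = 0.1
    for _ in range(2000):
        preds = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (preds - y) / n_samples         # gradient of the log loss
        w -= lr * grad

    # Score a new, hypothetical observation as a conflict-risk probability.
    new_obs = np.array([0.9, -0.2, 1.1])
    risk = 1.0 / (1.0 + np.exp(-(new_obs @ w)))
    print(f"estimated conflict risk: {risk:.2f}")

A real capability of the kind the epilogue imagines would differ in every particular, but the basic pattern, features about the world in, a calibrated probability of conflict out, is the same.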


Critics of AI fail to appreciate that AI is in its infancy; though present in science and society for over 30 years, it has only recently begun to prove itself as a reliable and even imaginative tool. This is especially true with the emergence of deep learning and machine intelligence innovations that are now pervasive worldwide. Given the numerous global challenges facing the world, the strategic advantage that AI will confer must not be underestimated; we will need AI, and the more powerful AI of the near and far future, to help resolve issues affecting regional and global security. Investments in AI by every power and corporation now run into the billions of dollars. We are in the early stages of this AI emergence, driven not just by considerable speculative investment but also by results. The impact of AI, and the verifiable record of its benefits in numerous industries, especially in intelligence and defense, from predictive analytics, big data discovery, and bad guy hunting, is undeniable. The big data image recognition capability alone represents a major breakthrough in a world of cloud-based mobile security.

It is likely that the very concept of AI will morph and evolve over even the next five to ten years. Even the near future of a technology as bold as AI resists precise definition. The AI of the future will not be the AI of today. Enhanced synthetic cognition as a service, smart avatars, sentient swarms, virtual bots, neuromimetic robots, artificial synapses, and intelligent clouds: everything from toasters to autos to entire defense systems will be AI enhanced. Clearly, this is a future that few are ready to imagine or navigate, but it is coming faster with every month, not every year. AI will be the increasing embedding of synthetic cognition in every physical and virtual thing that guides decisions and takes actions. Some of this AI will be controlled by humans; the autonomous, self-controlled AI will bewilder us. AI will simply become how things work. Your car knows your route to work. The robo-doctor sees at the nanoscale where to operate, and the counter-terror predictive analytics system operates with a stealthy hypersonic smart targeting capability.

Predictive decision-making and engagement from AI are coming that will enhance human performance in all forms of life, work, and national security. The capacity to act with high situational velocity, a comprehensive depth of understanding, a larger holistic systems view, and the processing of immense big data is coming from AI in the near future. The evolution of AI will be exponentially faster than that of humans. Expect AI to be faster, smarter, and more accurate than human doctors, teachers, intelligence officers, and warfighters. Every device, system, weapon, computer, network, and machine will be forever altered by AI. All scenarios will be probable. Thus, advances in AI will shape future military strategies and present potential risks related to over-reliance on AI that may lead to distrust and betrayal, as well as to amazing breakthroughs. The competitive advantage of nations, whether industrial or military, will demand an AI R&D race, as is already emerging today. No mystery here. AI wars are happening now; we know them as cyberwars. Their number will increase in the coming years. Rogues and nation-states will do AI battle ranging from cyber to kinetics as AI roams the dark web, as well as air, land, sea, and space.
But eventually, the speed and comprehension of AI, a different nonhuman entity with a different trajectory of evolution, will surpass human understanding for certain applications.


The rise of AI, and its complete transformation of defense, will be the most existential challenge facing the future of global security. As with every powerful new technology, from nuclear energy to genetic engineering, there is a case for the good, the bad, and the disastrous. That will no doubt be my forecast for AI as well, especially as it relates to global security. Here are a few scenarios that address global security challenges:

(1) AI develops its own hard operational agenda beyond its human handlers, presenting a highly effective operational engagement that goes beyond achieving mission objectives.

(2) AI accelerates situational awareness, identifying new opportunities for predicting conflict beyond human identification capacities.

(3) AI produces a significant global security solution that humans cannot understand but support, as diplomats confirm its viability to solve a critical problem.

(4) Robot armies reprogram AI lethality ethics rules to defeat an adversary in conflict.

(5) AI speeds up by a factor of 10,000 the reaction time for predicting global security threats and advising analysts.

(6) AI goes rogue, overreacts, and precipitates offensive security actions in conflict with an ally.

(7) A Tier One AI network develops an ecosystem of defense and intelligence suppliers, entrepreneurs, manufacturers, and logistics to support an Arctic plan.

Just as the internet is fast embracing AI as an overarching layer of synthetic cognition, affecting every choice, decision, search, and transaction, so shall every other aspect of our culture. If we simply accept that AI can be defined as an exponential increase in intelligence that may mimic human cognition but will also be "alien," separate from human or biological intelligence, we can begin to grasp how powerful and strange the future of AI may be. The future of AI is complex, accelerating fast, and requires completely new thinking. In fact, in the era of AI that is approaching in the near future, 10 minutes from now, global security strategists may need an entirely new way to think to get AI right. We are not there yet. Too many well-placed leaders reduce AI to a fancy new computer. This reflects a vast lack of imagination about a strategic tool that will enable global hegemony for some nations and cripple others. It would be dangerous not to recognize the fast, new power that AI represents, given the recent developments in self-learning, autonomy, and problem-solving evidenced by current AI. With a technology as pervasive as AI, we cannot fully fathom its potential. However, we can assume that we need to think about military doctrine, strategy, and global security quite differently with AI in the wild. Now, as we step into the future, AI will be everywhere: in the cloud, in every device, every building, every weapon, and even, though hard to fathom, in every person.

Having been the CEO of an early AI company and having watched AI evolve over a scant 30 years, I can say that the AI of today is exponentially more powerful than the AI of 1985, more powerful than we could have imagined, because of the recent breakthroughs in machine and deep learning.


Next will be neuromorphic AI: new AI models based on nature and biology. There is no more powerful a processing model for knowledge discovery than the human brain. If we can mimic the human brain in AI, we will see even faster breakthroughs in AI in this decade. AI will enhance superiority in the global security domain. AI is the game changer. Speed, cognition, complexity, lethality, target perfection, and predictive analytics are the deliverables of AI. Whether off-world in space, under the seas, on land, or in virtual reality, AI is a pervasive new paradigm of global security and warfare that is growing exponentially in capability as the technology evolves.

We also cannot consider the future of AI without linking it to the other chief exponential technologies that are driving it. AI is making every machine that designs or harnesses bits, atoms, genes, neurons, and qubits smarter. Mapping the genome to cure disease, advancing materials science for manufacturing, and building vastly smarter and faster quantum computers will all be shaped by AI. The exponential technologies that will enable and accelerate AI are quantum computing, nanoscience, biotechnology, information technology, and cognitive technology. The future of global security cannot be conceptualized without envisioning how AI may evolve. And the very concept of what AI may become, given the fast developments in fundamental technologies, suggests that we may enter an era in which Singularity Machines that defy human understanding entirely will be the norm by 2040 or before. As a culture and as a civilization, we are not ready for this invention. Smarter-than-Human (STH) machines that we will use for defense and security will also be used by adversaries and allies. Power in global security will no longer lie in might alone but in intelligence. Vast AIs will battle AIs, as they do today in the 24/7 global cyberwar that engulfs our world. Though we still shoot and bomb, digital weapons are fast replacing boots, planes, and ships. Digital wars may be the future of conflict. Autonomous AI will be the new warfighters as entire armies, navies, air forces, and space forces go the way of dominating automation in this century. The super speed of synthetic cognition will replace humans. The end game may be ethical AI that advises humans in the loop; perhaps AI proliferation will even deter conflict, making global security a more realistic and achievable objective for the future of humanity.


Index

Agent-based models (ABMs), 46
AI Cooperating System (AICS), 147–148
Allied Command Operations (ACO), 156
Allied Command Transformation (ACT), 156
Anti-Submarine Detection Investigation Committee (ASDIC), 41
Artificial conscience
  concerns, 91–93
  definition, 83–84
  ethics, 93
  moral judgment, 87, 89
  moral motivation, 87, 89–90
  moral skill, 87, 90
  rejoinders, 91–93
Artificial Intelligence (AI), 33–34
  Autonomous Unmanned Vehicles (AUVs), 5
  autonomy, 14
  benefits and drawbacks, 105
  civilian and military applications, 8–9
  cyberattacks, 12–14
  distributed sensors, 2–3
  ethics, 167–175
  future warfare, 9–11
  global security, 21–24, 167–175, 177–183
  global situation awareness (GSA), 4
  innovation, 6–7
  Intelligence Preparation of the Battlespace (IPB), 6
  intelligentization, 5
  international perspectives, 17–21
  just war theory (JWT), 66, 68–76
  LAWS, 67–68
  leader decision-making, 24–26
  machine learning, 15–16
  military and nongovernment agencies, 2
  moral reasoning, 121–136
  Natural Intelligence vs., 64–65
  programming morality, 144–145
  role, 19
  Russia, 3
  space war, 63–77
  types, 139–140
  unmanned underwater vehicle (UUV) systems, 5
Asteroid mining, 73–74
Autonomous AI, moral guidance, 140–141
Autonomous Unmanned Vehicles (AUVs), 5
Autonomy, 14–15
Civilian applications, 8–9
Clarification, 58–60
Classically narrow AI systems, 39–43
Community, 145–147
Context, 145–147
Continuum argument, 126–129
Cyberattacks, 12–14
Data ecosystem, 106–109
Data privacy
  apps, 115–116
  data ecosystem, 106–109
  data security vs., 104–105
  definition, 100–103
  digital authoritarianism, 109–111
  diversity, 99
  Facebook, 112–113
  informed consent, 112
  social media platforms, 115
  surveillance economy, 96, 109–111
  virtual environment, 97–98
Data security, 104–105
Diffusion and adoption (D&A)
  Anti-Submarine Detection Investigation Committee (ASDIC), 41
  Artificial Intelligence, 33–34
  classically narrow AI systems, 39–43
  crisis response, 43–47
  humans relationships, 44
  innovation ecosystems, 34–37
  load balancing cycle, 43
  mathematical algorithms, 42
  sonar technology, 40
  types, 37–39
Digital authoritarianism, 109–111
Diversity, 99
Dual-use paradox, 172
Ethics
  AI Cooperating System (AICS), 147–148
  community, 145–147
  context, 145–147
  divergence, 144
European Reassurance Initiative (ERI), 156
Facebook, 112–113
Framework for Future Alliance Operations (FFAO), 158, 159
Future warfare, 9–12
Global security, 20–24
  dual-use paradox, 172
  ethics, 172–173
  future transformation, 176–183
  lethal autonomous weapons systems (LAWS), 169, 170–172
  narrow versus general AI (AGI), 169
  robot, 169
Global situation awareness (GSA), 4
GLOBSEC NATO adaptation, 158
Good Old Fashioned Artificial Intelligence (GOFAI), 122–124
Google Inception, 42
Google Translate, 42
Human intelligence, 146
Humans relationships, 44
Human-supervised AI system, 139
Imitation game, 64
Independence, 68
Informed consent, 112
Innovation, 6–7
  ecosystems, 34–37
Intelligence Preparation of the Battlespace (IPB), 6
Intelligentization, 3
International perspectives, 17–21
Internet of Things (IoT) technology, 98
Leader decision-making, 24–26
Lethal autonomous weapons systems (LAWS), 63, 169, 170–172
Load balancing cycle, 43
Machine learning, 15–16
Mathematical algorithms, 42
Metaethical principle of moral luck, 76
Military applications, 11–12
Moral agency, 131–133
Moral judgment, 87, 89
Moral motivation, 87, 89–90
Moral reasoning
  Artificial Intelligence (AI), 122–124
  continuum argument, 126–129
  definition, 129–131
  existential war, 124–126
  Good Old Fashioned Artificial Intelligence (GOFAI), 122–124
  moral agency, 131–133
  war, 133–135
Moral responsibility, 142–144
Moral sensitivity, 89
Moral skill, 87, 90
Moral values, 54
Mutual dependence, 68–69
Mutually assured destruction (MAD), 69
National security, intelligence, and defense (NSID) operations, 137–141
NATO
  changes, 155–157
  collaboration, 155–157
  strategic vision, 162–163
  transformation, 157–162
NATO ACT, 161
NATO defence planning process (NDPP), 161
OSIRIS-Rex, 74
Programming morality, 144–145
Red line analysis, 72
Rejoinders, 91–93
Risk, 75–76
Rules of engagement (ROE), 59
Self-organization, 46
Small Unit Space Transport and Insertion (SUSTAIN), 74–75
Social intelligence, 146
Social media platforms, 115
Sonar technology, 40
Space-based solar power (SBSP), 70–71
Strategic Foresight Analysis (SFA), 158
Supreme Allied Commander Transformation (SACT), 158
Supreme Emergency, 75
Technological aliasing, 44
Technology-driven approaches, 45
Unmanned underwater vehicle (UUV) systems, 5
Virtual environment
  personal organizers, 99
  synced location services (SLC), 98
Virtue ethics, 76
World War II, 68
Zombie/robot argument, 65
