Autonomous Weapons Systems and International Norms

Autonomous Weapons Systems and International Norms

Ingvild Bode and Hendrik Huelss

McGill-Queen’s University Press Montreal & Kingston • London • Chicago

© McGill-Queen’s University Press 2022
ISBN 978-0-2280-0808-8 (cloth)
ISBN 978-0-2280-0809-5 (paper)
ISBN 978-0-2280-0924-5 (ePDF)
ISBN 978-0-2280-0925-2 (ePUB)
Legal deposit first quarter 2022
Bibliothèque nationale du Québec
Printed in Canada on acid-free paper that is 100% ancient forest free (100% post-consumer recycled), processed chlorine free.

Library and Archives Canada Cataloguing in Publication
Title: Autonomous weapons systems and international norms / Ingvild Bode and Hendrik Huelss.
Names: Bode, Ingvild, author. | Huelss, Hendrik, author.
Description: Includes bibliographical references and index.
Identifiers: Canadiana (print) 20210306270 | Canadiana (ebook) 20210307358 | ISBN 9780228008095 (softcover) | ISBN 9780228008088 (hardcover) | ISBN 9780228009245 (ePDF) | ISBN 9780228009252 (ePUB)
Subjects: LCSH: Autonomous weapons systems. | LCSH: Autonomous weapons systems (International law) | LCSH: Autonomous weapons systems—Moral and ethical aspects. | LCSH: International relations. | LCSH: Security, International. | LCSH: Artificial intelligence—Military applications. | LCSH: Artificial intelligence—Moral and ethical aspects. | LCSH: Weapons systems—Technological innovations. | LCSH: Weapons systems—Technological innovations—Moral and ethical aspects.
Classification: LCC KZ5645.5.A98 B63 2022 | DDC 341.6/3—dc23

This book was typeset in 10.5/13 New Baskerville ITC Pro.


Contents

Tables and figures
Acknowledgements
Abbreviations
Introduction
1 Autonomous Weapons Systems and International Relations
2 New Technologies of Warfare: Emergence and Regulation
3 International Law, Norms, and Order
4 Norms in International Relations
5 How Autonomous Weapons Systems Make Norms
Conclusion
Notes
References
Index

Tables and Figures

Tables
1.1 Selected weapons systems with automated or autonomous features in their critical functions
1.2 Spectrum of autonomy
2.1 Articles 48, 49, and 50 of the 1909 London Declaration
2.2 Treaty relating to the Use of Submarines and Noxious Gases in Warfare, Washington Naval Conference, 1922
2.3 Limitation and Reduction of Naval Armament (London Naval Treaty), London Naval Conference, 1930
2.4 Declaration (IV,2) concerning Asphyxiating Gases, 1899 Hague Peace Conference
2.5 Washington Treaty 1922, Article 5
2.6 Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare, Geneva, 17 June 1925
2.7 Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction
2.8 Partial Test Ban Treaty 1963, Article I
2.9 Key elements of the NPT’s non-proliferation dimension
2.10 The NPT’s disarmament dimension
2.11 The NPT’s peaceful use of nuclear energy dimension
2.12 Treaty on the Prohibition of Nuclear Weapons, Article I
2.13 Proposals for a CCW protocol by Sweden and the ICRC
2.14 UN-CCW Protocol on Blinding Laser Weapons, Articles 1–4
3.1 Diverging attribution thresholds
4.1 Norm types
5.1 Operational definitions of meaningful human control
5.2 Levels of human control
5.3 Sequencing of an air defence system operation
5.4 Two perspectives of appropriateness
5.5 Arguments made to support the appropriateness of integrating automated and autonomous features into air defence systems

Figures
1.1 NATO’s Joint Targeting Cycle (Source: NATO 2016, section 2-2)
1.2 Formal contributions to debates on LAWS at the CCW, 2014–20 (Sources: Bode 2019; authors’ 2020 calculations)
1.3 Humanoid killer robots (top) versus X-47B demonstrator with autonomous features (bottom) (Sources: Mykola Holyutyak/Shutterstock; US Navy via Wikipedia)
3.1 Relationship between international law and normative order
3.2 Emergence of contested areas outside of current normative order

Acknowledgements

The journey of writing this book began in late 2017, when we developed initial ideas for a manuscript that would give us more space and freedom than conventional research articles to develop our research agenda on autonomous weapons systems and norms. While our endeavour took some (unexpected) diversions, such as the arrival of our son, Espen, moving institutions and countries, as well as a pandemic, the book is the result of four years of reflection on the normative dimension of the AI revolution in warfare. We are grateful for the support we received from our colleagues, friends, and families throughout this process. For discussions and intellectual exchange, we are thankful to the small community of researchers on technology, norms, and warfare that we have had the pleasure to meet: especially Maya Brehm, Merel Ekelhof, Mathias Leese, Elvira Rosert, Frank Sauer, Birgit Schippers, Elke Schwarz, and Maaike Verbruggen, as well as the participants of the EWIS Groningen and Belfast workshops on technologies, ‘killer robots’, and security in 2018. We also want to thank the diplomats, civil society representatives, practitioners, UN staff, and AI specialists for the time they spent talking to us, thereby giving us important insights into the practical-political debate on autonomous weapons systems. We are grateful for the excellent support that we have received from Denise Garcia ever since she attended our first paper presentation on autonomous weapons systems at the ISA 2017 convention in Baltimore. This support continued into the completion stage of this book, where her constructive comments, as well as those of a second, anonymous reviewer, helped to sharpen our arguments. We also want to thank Tom Watts for his research support, especially for digging deep into
the (sometimes obscure and always complex) open source data on air defence systems and autonomy. We thank our editor at MQUP, Richard Baggaley, for his interest in our research and his patience in waiting for this book to emerge from our desks. At a practical level, the final stages of writing this book also benefited from funding that Ingvild received from the European Research Council for a five-year project on autonomous weapons systems and norms (AutoNorms).1 Starting in August 2020 (Bode 2020), this funding has given us the necessary time to finally wrap up this writing project and is a privilege at a time that put additional pressure on our colleagues with teaching and administrative roles. The ERC grant will also allow us to explore answers to the many open analytical and empirical questions that this book raises. Further, funding from the Joseph Rowntree Charitable Trust enabled Ingvild to attend GGE sessions in Geneva as well as to engage in the deep empirical work on air defence systems that went into chapter 5. We dedicate this book to Espen – whose boundless energy and spirit of adventure are a constant source of inspiration, despite the occasional sleepless night.



Abbreviations

AAW anti-air warfare
TAO tactical action officer
AI artificial intelligence
AWS autonomous weapons systems
BLW blinding laser weapons
CCW Convention on Certain Conventional Weapons
CIC combat information centre
CIWS close-in weapons systems
CTBT Comprehensive Nuclear-Test-Ban Treaty
CWC Chemical Weapons Convention
DoD US Department of Defense
GGE Group of Governmental Experts
IAEA International Atomic Energy Agency
ICRAC International Committee for Robot Arms Control
ICRC International Committee of the Red Cross
IFF identification friend or foe system
IHL international humanitarian law
IHRL international human rights law
IMF International Monetary Fund
IR International Relations
LAWS lethal autonomous weapons systems
LOAC law of armed conflict
MHC meaningful human control
NAM Non-Aligned Movement
NATO North Atlantic Treaty Organization
NGO non-governmental organisation
NPT Treaty on the Non-Proliferation of Nuclear Weapons (also Non-Proliferation Treaty)
OPCW Organisation for the Prohibition of Chemical Weapons
PTBT Partial Test Ban Treaty
SIPRI Stockholm International Peace Research Institute
START Strategic Arms Reduction Treaty
TBM tactical ballistic missiles
TN track number
TPNW Treaty on the Prohibition of Nuclear Weapons
UN United Nations
UN-CCW UN Convention on Certain Conventional Weapons
UNSC UN Security Council
WTO World Trade Organization

Introduction


We live in an era of enormous technological advances. Our daily lives are lived in close interaction with technologies that we happily use for different purposes and reasons. In consumer electronics, many of us strive to get our hands on the latest smartphones, tablets, gadgets, and gimmicks, while ‘the Internet’ has become a dominant and indispensable factor in our professional and private lives. Digitalisation, automation, and robotics prevail in manufacturing and are changing how businesses and governments work. For centuries, human inventions and the use of technology have brought tremendous benefits to people’s lives. The story of human–technology interaction is, however, neither recent nor conflict-free. From nineteenth-century fears about the negative bodily effects of travelling by train to the social consequences of automating manual labour or worries about mass surveillance via social media, technological developments entail fears, dangers, and risks. This applies, in particular, to the dialectical relationship between war and technology. The interplay of sophisticated technology and weapons systems can trigger completely novel sets of fears and risks previously known mainly from popular science fiction. This book considers technological advancement in a specific domain by examining the emergence of so-called autonomous weapons systems (AWS) and the consequences of this development for International Relations. Weapons systems with an increasing number of automated and autonomous features are emerging as game-changing technologies of future warfare. In contrast to remote-controlled platforms,
such as drones, AWS use data from on-board sensors to navigate and, potentially, target without human input.1 AWS have therefore been defined as ‘weapons that, once activated, can select and engage targets without further human intervention’ (Heyns 2016a, 4); their incremental development will see humans move further and further away from immediate decision-making on the use of force. AWS signal a weaponisation of artificial intelligence (AI). This implies various complex forms of human–machine interaction and a drastically distributed agency, decisive parts of which are not likely to be accessible to human reasoning. This looming absence of meaningful human decision-making in warfare makes scrutinising the challenges associated with AWS a matter of great importance. In this regard, this book investigates how AWS can have an impact on how the use of force evolves, thereby altering a fundamental aspect of international security policy. Specifically, the book considers how AWS can change our understanding of what appropriate use of force is: chiefly, whether, when, and how the use of force by weapons systems with autonomous features is appropriate. These are questions that pertain to norms, defined broadly as standards of appropriateness, which are decisive for how the development and deployment of AWS take shape. Within the broad category of AWS, the types of autonomous functions differ in important ways. These can, for example, include features relating to mobility, i.e. the capacity of systems to direct their own motion, or health management, i.e. securing their own functioning. In this book, we are interested in autonomous functions related to the targeting process and, in particular, in so-called critical functions, including identification, tracking, prioritisation, selection, and engagement (Boulanin and Verbruggen 2017, 24; Ekelhof 2018). AWS are being developed to perform extremely divergent tasks in the air, on land, and at sea.
Basically, such ‘[m]ilitary robots … undertake tasks that are difficult for human beings to handle’ (Lele 2019, 52). But, as we argue, they should be seen not only as tools or instruments that militaries use in practice but also as more fundamentally shaping what is seen as appropriate in applying force. The importance of these functions for changing processes, procedures, and practices only materialises in human–machine interaction, which is therefore at the centre of our analytical perspective. AWS are on track to become established as ‘the new normal’ (Singer 2010) in warfare. In its ‘Unmanned Systems Integrated
Roadmap 2013–2038’, the US Department of Defense, for example, sets out a concrete plan for developing and deploying weapons with ever-increasing autonomy in the air, on land, and at sea over the next twenty years (US Department of Defense 2013). While the US strategy on autonomy is the most advanced, more countries, including many of the top ten arms exporters, are developing or planning to develop some form of autonomous weapons system (see chapter 1). For example, media reports have repeatedly pointed to the successful inclusion of machine learning techniques in weapons systems developed by Russian arms maker Kalashnikov (Busby 2018; Vincent 2017). China has likewise reportedly made advances in developing autonomous ground vehicles (Lin and Singer 2014) and, in 2017, published an ambitiously worded government-led plan on AI with markedly increased financial expenditure (Metz 2018a; Kania 2018a). In recent years, not only the political community but also the wider public has become increasingly aware of the emergence of AWS. In popular terms, AWS are often referred to as ‘killer robots’ (see Sample 2018), as the name of the Campaign to Stop Killer Robots, an influential coalition of non-governmental organisations (NGOs), underlines. However, knowledge about AWS, their status as weapons systems in security policy, and the wider implications of their development and use remains limited, while popular imagery overdramatises the imminent rise of humanoid monsters akin to the Terminator T-900. This futuristic, science fiction framing of AWS, presenting them as a problem that has not yet materialised and maybe never will, risks obscuring the profound impact that the incremental emergence of automated and autonomous features in weapons systems is already having on international relations, international security policy, and the international order governing the use of force.
The emergence of AWS has not gone unnoticed in the international political arena: the issue first entered the agenda of the Convention on Certain Conventional Weapons (UN-CCW) at the United Nations (UN) in Geneva in 2014.2 In 2016, the 125 states parties to the UN-CCW created a Group of Governmental Experts (GGE) with a discussion mandate on what they term ‘lethal autonomous weapons systems’ (LAWS). These GGE meetings have seen growing support among CCW contracting parties for considering options for the regulation or prohibition of LAWS. During the most recent, ninth, meeting in September 2020, ‘more than 65 CCW states parties … endorsed
group statements calling for a legally-binding instrument to prohibit and restrict such weapons systems, including the Non Aligned Movement (NAM)’ (Campaign to Stop Killer Robots 2020c). These ongoing debates, and the comparatively high speed with which this topic has entered the international political agenda, clearly illustrate its topical nature, as countries around the world are developing positions on the possible regulation, restriction, or prohibition of AWS. UN Secretary-General António Guterres has also taken a strong position in this debate, arguing inter alia that ‘machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law’ (UN News 2019). However, as we argue in this book, considering AWS only within the framework of international law (governing the use of force) has serious limitations. These limitations concern not only the slow progress of (potentially regulative) discussions – the CCW meetings have not even settled on a shared definition of what LAWS are (see Ekelhof 2017) – but also the structural shortcomings of international law. The example of drone warfare shows that the use of force in terms of small-scale but long-term interventions often takes place in contested areas of international law. Drones are not prohibited as a weapons category: their lawful use in warfare therefore depends on whether they are used in accordance with the requirements of international humanitarian law (IHL), such as proportionality or distinction. Without going into the legal details at this point, access to drone technology appears to have made so-called targeted killing seem an ‘acceptable’ use of force for some states, thereby deviating significantly from previous understandings (Haas and Fischer 2017; Bode 2017a; Warren and Bode 2014). Indeed, technologies have long shaped and altered warfare and thus how force is used (Ben-Yehuda 2013). 
It is therefore important to recognise that warfare conducted with new technologies sets precedents and influences use-of-force norms in terms of what is considered appropriate. These norm-making practices take place beyond formal deliberations and outside of international law. Often, they do not relate to commonly established sources of international law: they are neither a deliberate act of codifying standards of appropriateness in the form of treaties, conventions, or agreements, nor do they qualify as customary international law, as they remain far from ‘a general practice accepted as law’ (ICRC
2010). This is why we consider the role of norms, clearly differentiated from law, with regard to AWS in this book. Addressing this topic comprehensively is a matter of political, academic, and public urgency in at least three ways.

First, the political debate on AWS has so far produced insufficient results, while the development, testing, and use of weapons systems with autonomous features have continued and indeed accelerated. Without descending into technological determinism, we note that this makes the widespread deployment of systems with an increasing number of autonomous features in their critical functions a potential fait accompli. Their manner of deployment has (and will continue to have) significant effects on the use of force as an instrument of international politics. A perspective on how norms emerge in practices, defined as patterned ways of doing things, is particularly well suited to comprehending and highlighting the impact of AWS on international relations, as it provides new angles for studying the dynamic processes of dual-use technological innovation associated with the AI revolution. Advances in AI are a (fast-)moving target surrounded by considerable uncertainty even among expert communities. Studying how new understandings of appropriateness evolve in this setting therefore requires a similarly dynamic analytical approach to norms.

Second, the issue of AWS is of increasing public interest, but comprehensive and accessible knowledge about it is rare. Although our book was conceived as an academic work contributing to the discipline of International Relations (IR), it is also timely in that it covers important aspects relevant to gaining more insight into AWS, and it aims to increase the critical knowledge base by demystifying the possible consequences of weaponised AI.

Third, the book contributes to the academic consideration of AWS across disciplines such as law, philosophy, science and technology studies, and sociology.
The body of work on AWS in IR is slowly increasing but still marginal. While there is a more substantial literature on the legal and ethical implications of AWS, to the best of our knowledge there is no extensive study of AWS and norms. This norms-focused perspective is important not only because it broadens our view of the impact of AWS as advanced security technologies in politics and society beyond the legal perspective, but also because the question of broad normative appropriateness – in terms of what is considered both ethically right and practically appropriate – lies at the heart of human decision-making. Investigating and
understanding how the emergence of new weapons systems with autonomous features can shape the way force is used and affect our understanding of what is appropriate is therefore crucial. We review the main developments regarding AWS and outline, in each of the five chapters, their implications for the use of force.

Chapter 1, entitled ‘Autonomous Weapons Systems and International Relations’, provides an overview of AWS and their role, and discusses the implications for international relations and international security. First, we introduce what we understand by AWS, paying close attention to competing definitions and to AWS’s main features, such as autonomy, technological capabilities, and human–machine interaction. How to define the autonomy of weapons systems is a matter of debate not only at the political level, as outlined with regard to the UN-CCW, but also in academia. Some scholars (Sartor and Omicini 2016) seek to refine the conceptual understanding of autonomy via specific aspects (e.g. independence, cognitive skills) or by relating it to core functions of weapons systems, such as trigger, targeting, and navigation (Roff 2016), while others debate appropriate levels of human control and the extent to which machine autonomy undermines this (Heyns 2016a; Sharkey 2016; Carr 2016). The notion of ‘meaningful human control’, in particular, has become increasingly important in the international political debate on AWS, shifting the focus away from finding a universal definition of AWS, which has led to considerable foot-dragging in UN debates. In chapter 1, we also distinguish AWS from remote-controlled systems such as drones. Second, chapter 1 introduces the main factors in the political and academic debate on AWS, outlining core arguments and contested points, such as the legality of specific weapons systems and the potential of international law to regulate or prohibit AWS.
So far, academic research has explored AWS along two main, often interconnected lines: their (potential) legality, and the ethical challenges attached to their usage. In principle, a dense network of domestic and international legal and normative structures provides standards for how and when the deployment of different types of weapons is lawful. International law, enshrined in the UN Charter and various treaties of international humanitarian and human rights law, sets rules for what kinds of weapons are prohibited (e.g. nuclear, biological, and chemical weapons, cluster bombs), when the use of force is legal, and how weapons ought to be used. The normative structure comprises domestic and internationally
accepted norms, understood in this volume as standards of appropriateness, as well as values and ethical concerns, including the question of what is ‘right’ and ‘wrong’ in warfare (Leveringhaus 2016; Fleischman 2015; Roff 2014). The relevant literature has therefore primarily assessed the deployment and emergence of weapons systems against existing legal and ethical standards and investigated further options for regulation (Sehrawat 2017; Kastan 2013; Grut 2013; Husby 2015; Brehm 2017). Third, chapter 1 also sheds light on the ongoing UN discussions of AWS in Geneva under the framework of the CCW. Civil society actors, chiefly the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control (ICRAC), have been vital in getting this topic on to the UN’s agenda and promoting a preventive ban on such systems. By January 2021, a total of 30 states, including Argentina, Brazil, China, Egypt, Iraq, Mexico, Pakistan, Peru, and Uganda, had followed this appeal. But while a growing number of states have expressed support for novel legal regulation of AWS, debates at the CCW do not indicate consensus on this. In fact, the international community remains divided and uncertain about how this important topic should be tackled. Debates at the CCW in Geneva are hampered by the issue’s technical complexity. Some governmental experts even dispute whether ‘fully’ autonomous weapons systems exist or will ever be developed, thereby side-stepping the challenges raised by incrementally increasing autonomous features in weapons systems. In representing the current political debate, the chapter focuses on contestation among state and non-state actors when it comes to defining AWS. We discuss both the progress of the international debate on AWS and the reasons for the slow political response.
Chapter 1 therefore provides an empirical basis for the remainder of the book, introducing readers to the core fault lines in the debate on AWS, the extent of their current scholarly examination, and the gaps therein.

In chapter 2, on ‘New Technologies of Warfare: Emergence and Regulation’, we contextualise the development of AWS by providing a historical account of how new technologies of warfare emerged, how practices have influenced perspectives on the ‘appropriate’ use of force, and what factors have led to the regulation or prohibition of weapons systems in the past. We start with a historical overview of weapons systems, focusing on systems that can be defined as ‘game changers’ when it comes to
the nature of warfare. We discuss whether and how these weapons systems were regulated. In particular, we consider four twentieth-century weapons systems that played an important part in the emergence of novel, regulative norms: submarines, chemical weapons, nuclear weapons, and blinding lasers. The individual sections in chapter 2 discuss the development of these weapons systems and how the international community reacted. In this way, the chapter differentiates between ex ante and ex post regulation: while submarines, nuclear weapons, and chemical weapons were regulated or banned only after their use in warfare, blinding lasers were banned before they became fully operational. This discussion highlights the different effects that practices on the use of force centred on the deployment of particular weapons can have on the emergence of norms. Comprehensive regulation and stigmatisation of nuclear or chemical weapons, for example, emerged only after these weapons had been used in the First and Second World Wars. However, the analytical focus of existing scholarly studies on weapons regulation and norms typically remains on public deliberations after their usage. Yet technological advances below the radar of international legal regulation might set norms before debates have unfolded. Chapter 2 therefore links the legal perspective on weapons systems with the specific practices that explain why regulation and prohibition emerged, and it provides the contextual background for a new perspective on studying norms and their emergence.

Chapter 3, entitled ‘International Law, Norms, and Order’, examines the relationship between international law and international norms in the context of AWS in more detail. It thus provides a conceptual backdrop for our empirical analysis of how AWS may change norms in practices (chapter 5). We are particularly interested in how international order governs the use of force.
While international order is one of the most important and perennial issues of IR scholarship, the constitution of international order is often thought through exclusively from the perspective of international law, in that the emergence and impact of legal rules on the use of force are at the heart of scholarly interest. We suggest a diversification of perspectives by differentiating between a legal international order, constituting rules, and a normative international order, representing the regularity of practices and containing the diversity of accepted understandings of
appropriateness. Our main point here is that normative order goes above and beyond international law by including practices that can establish patterns of what is considered to be appropriate. Such understandings are not necessarily codified or formally agreed (what we call non-verbal). We develop our argument in two steps. First, we provide further insights into how international law governs the use of force. We chiefly cover jus ad bellum, the law relevant to the resort to armed force by one state against another, by considering the legal standards of imminence and attribution as well as examining targeted killing as a practice in more detail. We connect this discussion both to the political debate on AWS (chapter 1) and to the historical regulation of weapons systems (chapter 2), highlighting the problems of permissiveness and ambiguity in jus ad bellum. We argue that, through the use of new technologies such as armed drones throughout the 2010s, state justifications have significantly increased the number of contested areas in international law on the use of force by introducing competing readings of vital standards, thereby putting the international legal and normative framework under a great deal of strain. There is now considerably less agreement between states about the precise legal content of core standards than there was ten years ago. The development of AWS threatens these shared expectations further because AWS are likely to introduce more disputes over how states interpret international law. Interpretation is a matter not only of state deliberation but also of how the use of force takes place. The possible emergence of a normative order altering legal understandings of what ‘appropriate’ use of force is lies at the centre of our interest. In this chapter, we connect current examples of international legal order and the use of force with the question of what the emergence of AWS could mean for the future of using force in international relations.
In examining these often detrimental dynamics, we also argue for, and develop, greater conceptual clarity in terms of the relationship between law and norms. We note that the origin of norms in law is overemphasised and obfuscates important processes that we should pay much closer attention to: how norms transform but can also emerge in practice; how normative substance can change due to the implementation of norms; and how norms as standards of appropriate action only come to exist through patterns of ‘doing
things’. Critical legal scholarship accepts the essential indeterminacy of international law, noting that substantive decisions within international law always imply political choice (Koskenniemi 2011), but does not explore this critique further. We argue that a deeper engagement with how norms emerge in practices can address this point. These considerations provide the basis for a conceptual approach developed in chapter 4. Chapter 4, ‘Norms in International Relations’, concisely introduces the main assumptions of conventional norms research in IR as a discipline. It explains how constructivist norm research conceptualises the emergence of norms in often rather static and sequential models. This means that the function of norms, how they affect behaviour, and under what conditions state compliance is most likely are well-researched themes in IR (see, for example, Risse, Ropp, and Sikkink 1999a; Adler 2013), in contrast to in-depth examinations of norm emergence and change. While critical studies of norm contestation argue for a diversity of normative meaning, these contestations still refer back to a deliberative expression of a shared normative kernel, rather than its reconceptualisation in practice (e.g. Garcia 2006; Wiener 2014; Zimmermann 2016). Research therefore predominantly deploys the concept of a stable normative structure (such as international law) that shapes actions. We agree that norms based on international law have both structural and constitutive effects, as our discussion of how weapons systems have been regulated in the past demonstrates (see chapter 2). At the same time, we aim to introduce a broader understanding of norms as standards of appropriateness beyond law. In this chapter, we develop this core analytical argument further. This provides the conceptual framework we will apply to study what kind of norms may emerge in how states develop, test, and deploy AWS. 
Apart from providing this analytical framework, chapter 4 also represents a conceptual contribution to norms research in IR. We introduce the concept of procedural norms that emerge in practice as a novel category. Procedural norms contest the conventional perspective that limits studies on norms in IR to fundamental, deliberative norms. Norms, when defined as ‘understandings of appropriateness,’ do not only emerge and change in open, public debate through deliberative processes in institutional forums; the development, testing, training, or deployment of weapon technologies such as AWS may also shape norms in practices. These practices



typically privilege so-called procedural, non-verbalised norms that define standards of perceived procedural–organisational appropriateness, such as efficiency. The difference between procedural norms and the fundamental norms enshrined in international law amounts to whether promoted actions are perceived as functionally suitable or as morally ‘right’. Examining practices around developing, testing, and deploying AWS can therefore capture how procedural norms may alter the content of fundamental norms by pushing forward novel standards of perceived appropriateness that might become dominant in state practice as the ‘right’ ways of using force. What appears to be useful and efficient in procedural terms and in a specific organisational context could therefore become the new standard of what is normatively legitimate.

We propose studying AWS in the context of two different but interrelated normative spheres: the legal–public sphere (the primary realm of fundamental norms), and the procedural–organisational sphere (the primary realm of procedural norms). While these spheres are not independent of each other, we use them to examine a broader notion of appropriateness: (1) legal–public appropriateness, and (2) procedural–organisational appropriateness. Legal–public appropriateness represents fundamental norms, including public expectations in terms of (political) accountability, lawfulness, or transparency. In contrast, procedural–organisational appropriateness represents procedural norms: considerations of appropriateness in specific organisational settings, such as the armed forces, or in specific situations of warfare. Here, appropriateness is concerned with functional legitimacy, specific regulations, and accountability hierarchies, such as a chain of command. In this sense, appropriateness is linked to following clear procedural structures regardless of their content. 
In representing different contexts of appropriateness, this model accounts for diverging, interplaying layers of appropriate actions. A comprehensive research framework on the normativity and normality of AWS should therefore consider both legal–public and procedural–organisational appropriateness. To gain an understanding of procedural norms, we suggest focusing on practices. We assume that these practices can construct new or different standards of appropriateness that turn into procedural norms when they become widespread. Practices can therefore become decisive for studying both how procedural norms emerge
and how they relate to fundamental norms. Importantly, AWS do not refer only to a specific type of weapon platform, such as nuclear missiles. Rather, autonomous features span from simple to very complex platforms. This wide range of platforms for use at sea, in air, and on land greatly increases the number of development and testing practices, in the course of which diverging standards of ‘appropriate’ use may emerge. Further, they are characterised by complex human–machine interactions that have particular relevance in the context of considering procedural norms. Decisions about when and how the use of force is proportionate and discriminate, as well as how targets are selected and engaged – decisions that substantiate basic norms – may increasingly be made by machines.

Finally, chapter 5, entitled ‘How Autonomous Weapons Systems Make Norms’, examines the so-called constitutive impact of AWS on norms by providing a detailed discussion of the emerging norm of meaningful human control, using a case study of air defence systems as a widely used type of system with automated and autonomous features in its critical functions.3 Here, we examine selected cases of how practices in operating air defence systems exemplify the problematic character of human–machine interaction to the point of making human control meaningless. These cases comprise the unintentional downing of civilian or combat aircraft (friendly fire) and underline the extent to which meaningful human control – one of the potentially central norms supposed to govern AWS – is compromised by technological autonomy, particularly in specific use-of-force situations. Chapter 5 uses the analytical arguments on procedural norms introduced in chapter 4, as we aim to show how specific practices of operating air defence systems lead to the emergence of standards of appropriateness in the use of force. 
We argue that procedural norms may emerge around perceived operational advantages associated with practices surrounding autonomous features of weapons systems. Such operational advantages – not limited to air defence systems – may include their ability to safeguard military personnel, their ‘superior’ ability to process a large quantity of information, their ‘agency’ on the battlefield, their capacity to overcome the problem of transmission speed faced by remotely operated drones, their supposed precision, and their promise of easing budgetary pressures (Noone and Noone 2015; Horowitz, Kreps, and Fuhrmann 2016; Adams 2001; Haas and Fischer 2017; Crawford 2016). While there are also distinct operational disadvantages associated with AWS, current development and deployment trajectories of military powers point towards increasingly autonomous features. Further, once introduced by one actor, their (future perceived) procedural advantages may shape new norms of procedural appropriateness, putting pressure on other actors to follow suit.

In the final section of chapter 5, we use these results to discuss the possible ramifications of AWS for the future of the use of force in political and legal terms – and especially their possible adverse consequences for international peace and security. Based on the evidence provided by the use of air defence systems over decades, we expect that AWS can have a significant impact on IR norms through the practices surrounding them. This impact can be discovered not through deliberations in a norm-setting arena such as the CCW, but via AWS development, testing, training, and deployment practices, which establish a pattern for the use of force by AWS. We call this a process of silent norm-making. Established practices of human–machine interaction that show significantly compromised levels of human control alter standards of appropriateness when using force. The intensity and complexity of current developments in the AWS sector make political and academic responses a matter of considerable urgency.

The book’s conclusions summarise our findings and analytical insights and return to the overall question of what AWS mean for international relations norms governing the use of force. Our central argument is that increasing technological autonomy in weapons systems is not a future scenario but already a reality of warfare. While deliberative discussions about the nature of autonomy and meaningful human control continue in the setting of the CCW in Geneva, practices of operating so-called ‘legacy’ systems such as air defence systems have already established ‘acceptable’ standards for the quality of human control. 
The silent making of norms on the use of force by weapons systems with an increasing number of autonomous features including state-of-the-art AI is already underway. And it has the potential to profoundly change how warfare – in particular, the harming and killing of humans – takes place.


Autonomous Weapons Systems and International Relations

Throughout the twentieth century, the story of modern, industrial-scale warfare undertaken by highly developed states revolved around increasing the physical distance between soldiers and their enemies or targets: from air campaigns in the Second World War to the development of cruise missiles during the Cold War; from networked warfare in the Persian Gulf War’s Operation Desert Storm to remote warfare via drones. This technology-mediated process changed the character of warfare significantly, as Peter Singer summarised in the context of drone operators: ‘Going to war has meant the same thing for 5,000 years. Now going to war means sitting in front of a computer screen for 12 hours’ (Helmore 2009). In fact, ‘direct human involvement has been reducing in modern warfare over time’ (Lele 2019, 51). Autonomous weapons systems not only continue this trajectory of the physical absence of humans from the battlefield but also introduce their psychological absence ‘in the sense that computers will determine when and against whom force is released’ (Heyns 2016a, 4). As this broad understanding implies, autonomous features in weapons systems can take many different forms: we find them in loitering munitions, in aerial combat vehicles, in stationary sentries, in counter-drone systems, in surface vehicles, and in ground vehicles (Boulanin and Verbruggen 2017). Even military technology that has been around for much longer, sometimes decades, such as air defence systems and active protection systems, has automated or autonomous targeting qualities. Table 1.1 provides an overview of some weapons systems in current use or development that include automated or autonomous features in the
use phase of the targeting process. In other words, they have such features in their critical functions. This does not mean that these systems are used without human control. In fact, we typically find that systems with autonomous features can be operated in distinct modes with different levels of human control. To give an example, the US-employed Aegis air defence system ‘has four modes, ranging from “semiautomatic”, where a human operator controls decisions regarding the use of lethal force, to “casualty”, which assumes that the human operators are incapacitated and therefore permits the system to use defensive force independently’ (Crootof 2015, 1858–59). But what matters is that these systems can, in principle if not (currently) in practice, detect, engage, and attack targets autonomously, without further human intervention. Weapons systems with autonomous features are not a topic of the distant future – they are already in use (see also Scharre 2018, 4).

These diverse systems are somewhat uneasily captured by the catch-all category of autonomous weapons systems (AWS), because they weaponise artificial intelligence (AI) and apply it in varying combat situations. They signal the potential absence of immediate human decision-making on lethal force and the incremental loss of so-called meaningful human control, a concept that has become a central focus of the transnational debate on lethal autonomous weapons systems (LAWS) at the Convention on Certain Conventional Weapons (CCW), as we will demonstrate in more detail in the course of this book. While lethality is a potential outcome of AWS usage, what is problematic about integrating autonomous features applies in general to ‘acting with the intent to cause physical harm, i.e. violence’ (Asaro 2019, 541; see also Rosert and Sauer 2020, 14; Heyns 2016b, 355; Crootof 2015, 1836). 
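The logic of mode-gated human control described above can be sketched in a few lines of code. This is a purely illustrative toy, not a model of Aegis software: the mode names echo Crootof’s description quoted above, while the function and data-structure names are our own hypothetical inventions.

```python
# Toy sketch of operating modes that gate the same detection logic
# behind different levels of human control. All details hypothetical.
MODES = {
    "semiautomatic": {"human_confirmation_required": True},
    "casualty": {"human_confirmation_required": False},
}

def may_engage(mode: str, threat_detected: bool, human_approves: bool = False) -> bool:
    """Return whether the system may use force in the given mode."""
    if not threat_detected:
        return False
    if MODES[mode]["human_confirmation_required"]:
        # A human operator remains the immediate decision-maker.
        return human_approves
    # In "casualty" mode, the system may act without further human input.
    return True

print(may_engage("semiautomatic", True, human_approves=False))  # False
print(may_engage("casualty", True))                             # True
```

The point of the sketch is that the detection pipeline is identical across modes; only the position of the human in the decision chain changes.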
We therefore use the more general term ‘AWS’ throughout the book and only refer to LAWS when speaking to the transnational debate at the CCW, as the discussion there is specifically focused on lethal autonomous weapons systems. The first section introduces AWS and their role by way of conceptual definitions as well as the implications for international relations in three steps. First, we introduce AWS and discuss the competing definitions of autonomy and human–machine interaction. We also clarify the associated conceptual terminology, such as AI and machine learning. Second, we shed light on the ongoing transnational, political debate on AWS at the United Nations (UN) in Geneva under the auspices of the UN-CCW. Third, we review the academic literature on AWS, characterising it as chiefly interested in


Table 1.1 Selected weapons systems with automated or autonomous features in their critical functions.

Active protection systems
  Iron Curtain (US): in operation
  Korean Active Protection System (Republic of Korea): in operation
  Trophy/ASPRO-A/Windbreaker (Israel): in operation

Anti-personnel sentry weapons
  Samsung SGR-A1 (Republic of Korea): in operation
  Sentry Tech (Israel): in operation

Air defence systems
  Iron Dome (Israel): in operation
  MIM-104 Patriot (US): in operation
  Phalanx (US): in operation

Combat air vehicles
  X-47B (US): tech demonstrator
  Taranis (UK): tech demonstrator

Ground vehicles
  Uran-9 (Russia): in operation
  Robotic Technology Demonstrator (US): tech demonstrator

Counter-drone systems
  HEL effector (Germany): in operation
  Drone Dome (Israel): in operation
  Silent Archer (US): in operation

Guided munitions
  Dual-Mode Brimstone (UK): in operation
  Mark 60 CAPTOR (US): in operation

Loitering munitions
  Harpy, Harop (Israel): in operation
  KARGU-2 (Turkey): in operation
  FireFly (Israel): in operation

Surface vehicles
  Sea Hunter II (US): development completed
  Protector USV (Israel): in operation

Sources: Boulanin and Verbruggen (2017), Boulanin (2016), Roff (2016), and Holland (2019).

their (potential) legality, ethical challenges attached to their usage, and options for their (legal) regulation. Thus, this chapter provides an empirical basis for the remainder of the book, introducing readers to the core fault lines in the debate on AWS.

autonomous features, artificial intelligence, and autonomous weapons systems

Autonomous features are what make the development and deployment of AWS significant. Yet, what autonomy is or should refer to is a matter of contestation and remains ‘poorly understood’ (Haas
and Fischer 2017, 285). Likewise, the discourse about AWS frequently uses ‘automation’ and ‘autonomy’ interchangeably. It is easy to get bogged down in the ongoing debate about definitions – in fact, this lack of consensus has hampered discussions among states at the CCW (Ekelhof 2017). We should also recognise that how participants at the CCW define autonomy is deeply political, as it ‘affects what technologies or practices they identify as problematic and their orientation toward a potential regulatory response’ (Brehm 2017, 13). Contributors to the debate on AWS – be they states, institutions, or defence manufacturers – invariably have a stake in defining autonomy in ways that advance their interests. To illustrate, actors may use the terms ‘automated’ or ‘highly automated’ rather than ‘autonomous’ in referring to a weapons system’s critical functions because they imply a greater level of human control. Likewise, they may add stringent requirements to any definition of autonomy in order to avoid the regulation or condemnation of systems currently in development (Moyes 2019). Defence companies often ‘play up the sophistication and autonomy of their products in marketing, and downplay them when scrutinised by international bodies such as the United Nations’ (Artificial Intelligence Committee 2018, 26).

To navigate these ambiguities and contextualise our subsequent analysis, we provide only basic, workable definitions of autonomy and automation. Autonomy is a relative concept and can be broadly defined as the ‘ability of a machine to perform a task without human input’ (Scharre and Horowitz 2015, 5). On this basis, an autonomous system is one that ‘once activated, can perform some tasks or functions on its own’ (Boulanin and Verbruggen 2017, 5). Automation overlaps with these understandings and is often used synonymously with ‘autonomy’. The difference between the two is not always clear (Hagström 2016, 23). 
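The contrast drawn in the following paragraphs – automation as fixed, deterministic if–then rules versus autonomy as sensor-driven, probabilistic action selection – can be previewed with a toy sketch. Both controllers below are hypothetical illustrations of the conceptual difference, not models of any real system.

```python
import random

def automated_controller(range_m: float) -> str:
    # Automation: a fixed, deterministic if-then rule. The same sensory
    # input always produces the same output.
    return "respond" if range_m < 100 else "hold"

def autonomous_controller(range_m: float, rng: random.Random) -> str:
    # Autonomy (toy version): the action is selected probabilistically
    # from the sensory input, so identical inputs can yield different
    # outputs across runs - behaviour becomes less predictable.
    p_respond = min(1.0, 50.0 / max(range_m, 1.0))
    return "respond" if rng.random() < p_respond else "hold"

print(automated_controller(80.0))  # always "respond" for this input
rng = random.Random(0)
print({autonomous_controller(80.0, rng) for _ in range(20)})  # may contain both actions
```

The design point: the automated controller’s output is fully determined by its rules, whereas the autonomous controller’s repeated outputs for the same input can vary.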
Basic definitions found in robotics offer some distinctions. According to robot ethicist Alan Winfield, automation means ‘running through a fixed preprogrammed sequence of actions’, while autonomy means that ‘actions are determined by its sensory inputs, rather than where it is in a preprogrammed sequence’ (Winfield 2012, 12). To further unpack this, while automated systems follow clearly defined, deterministic ‘if–then’ rules, autonomous systems select probabilistically defined best courses of action – this makes the output of automated systems predictable, while conversely implying
that autonomous systems will produce uncertain outputs rather than consistently producing the same results (Cummings 2018, 8). As Heyns notes, autonomous systems ‘are unpredictable in the sense that it is impossible to foresee all of the potential situations that they will encounter during their programming’ (Heyns 2016b, 356; see also Crootof 2016, 1350). Autonomous systems combine software and hardware components that fulfil three key functions: perception via different sensors (e.g. electro-optical, infrared, radar, sonar) that allow the system to sense its environment; ‘decision’/cognition, through processors and software that assess ‘collected data about the environment and plans courses of action’ (IPRAW 2017a, 18); and actuation, allowing the system to physically respond in line with its planned courses of action (e.g. a motor triggering movement, or weapons release) (Welsh 2015; IPRAW 2017a, 18–19).

It is clear that basic levels of autonomy are comparatively easy to achieve. Indeed, a robotic vacuum cleaner, such as the popular Roomba, is a good example of such autonomy. It can navigate (i.e. change direction), make decisions, and take actions (i.e. clean particular spots on the floor more or less thoroughly) on the basis of its sensor inputs rather than where it is in a preprogrammed sequence. This makes it ‘autonomous but not very smart’ (Winfield 2012, 13; see also Roff 2015a). Autonomy therefore does not necessarily imply a high level of ‘sophistication’ or intelligence. As Sharkey highlights, ‘the autonomous robots being discussed for military applications are closer in operation to your washing machine than a science fiction Terminator’ (Sharkey 2010, 376). Consequently, prominent AI researchers such as Toby Walsh argue that giving autonomy to such stupid, yet already available systems1 is what constitutes the primary problem when considering lethal autonomous weapons systems (T. Walsh 2018, 189).

ai: a basic understanding

We can shine more light on this discussion about smart and stupid weapons by defining what we mean by AI in more detail. Defined in simple terms, AI is the ‘attempt to make computers do the kinds of things that humans and animals do’ (Boden 2017). In other words, ‘AI is the capability of a computer system to perform tasks
that normally require human intelligence, such as visual perception, speech recognition and decision-making’ (Cummings 2018, 7), while the constituent components of intelligence also remain a matter of debate (see T. Walsh 2018, 61–81). There are a wide range of computational techniques summarised under the term ‘AI’ that are based on ‘applications of mathematical logic, [and] advanced statistics’ (IPRAW 2017b, 9) – any autonomous system is likely to require different techniques at different levels. We can further distinguish between weak (also called ‘narrow’) AI and strong AI. Weak AI refers to applications that are capable of executing a single, particular task within a narrow domain in a way that ‘equals or exceeds “human” capabilities’ (T. Walsh 2018, 126). This goal has been reached in games such as chess, and to some extent in (much-hyped) speech and facial recognition (Dickson 2017). Context-specificity is an important feature of narrow AI, as even small changes to context and task specifications prevent the AI system from ‘retain[ing] its level of intelligence’ (Goertzel 2014, 1). This is because of the way much of narrow (weak) AI, which is typically based on variations of machine learning algorithms, learns: rather than being able to learn across problems the way humans do, ‘[m]achine learning algorithms tend to have to start again from scratch’ (T. Walsh 2018, 94). This failure to generalise makes algorithms inherently brittle (Cummings 2018, 12–13). We will return to some of these specifics around machine learning shortly. In contrast to weak AI, strong AI refers to ‘machines that will be minds’, including such essential features as ‘self-awareness, sentience, emotion, and morality’ (T. Walsh 2018, 126). 
AI researchers also refer to ‘artificial general intelligence’, which is supposed to approximate human-level intelligence: it comes with the interactive capability to self-adapt to different circumstances by generalising knowledge across tasks and contexts (Goertzel 2014, 2; Dickson 2017). This is also referred to simply as having common sense. In other words, machines with artificial general intelligence would have ‘the ability to work on any problem that humans can do, at or above the level of humans’ (T. Walsh 2018, 128). As noted, weak AI – in stark contrast to strong AI – has not only been achieved in some fields; it is also the focus of most research in the AI community (Roberts 2016; T. Walsh 2018, 127). Artificial general intelligence has been an unachieved research goal for decades, and its challenges are often highlighted by the so-called
Moravec Paradox, which continues to influence the field: ‘it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility’ (Moravec 1988, 15). This makes the hard tasks easy to automate while making the easy tasks hard to automate. Finally, there are also ideas about an artificial super intelligence that could exceed human-level intelligence, leading to the so-called technological singularity, an idea that has been around since at least the 1950s (T. Walsh 2018, 163–4). Most recently, Kurzweil, a prominent futurist, has popularised this line of thinking in associating the coming singularity primarily with the pace at which ‘human-created technology’ and, in particular, AI is growing (Kurzweil 2006, 7). As a result of this development, ‘information-based technologies will encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain itself’ (Kurzweil 2006, 8). At the point of the singularity, ‘we build a machine that is able to redesign itself to improve its intelligence – at which point its intelligence starts to grow exponentially, quickly exceeding human intelligence by orders of magnitude’ (T. Walsh 2018, 164). The philosopher Nick Bostrom has expressed similar ideas about the ongoing intelligence explosion that we find ourselves in, which is associated with coming machine superintelligence (Bostrom 2016). Yet, as Walsh argues, ‘the singularity is … an idea mostly believed by people not working in artificial intelligence’ (T. Walsh 2017b; our emphasis). Instead, there continue to be significant doubts about the prospects of linear expectations of growth and progress as well as fundamental limits to innovation associated with any scientific field (IPRAW 2017b; LeVine 2017). 
Indeed, throughout its history, the AI research community has gone through various intermittent phases of progress euphoria and significant setbacks (T. Walsh 2017a). Walsh summarises a range of further arguments that fundamentally question this idea of a runaway, uncontrollable development of AI (T. Walsh 2017a, 166–78). Rather than entering into (potentially obstructive) speculations about the likely developmental trajectory of AI, Roberts (2016) offers a succinct summary of the state of the art in AI research: ‘Most of the work in the field for the past 40 years has focused on refining
[artificial narrow intelligence] and better incorporating it into the human realm, taking advantage of what computers do well (sorting through massive amounts of data) and combining it with what humans are good at (using experience and intuition)’. This leads us directly to consider various complex forms of human–machine interaction that will most likely be based on different forms of narrow AI and have the greatest potential to pose challenges. Here, we should briefly engage with machine learning algorithms that are at the heart of (most) current applications in the area of narrow AI. This ‘involves programming computers to teach themselves from data rather than instructing them to perform certain tasks in certain ways’ (Buchanan and Miller 2017, 5). Machine learning revolves around producing probabilistic outputs that not only enable forms of diagnosis and description but are also increasingly used for prediction and prescription based on having identified features and patterns in data. It is useful to differentiate between supervised and unsupervised learning. In supervised learning, various types of learning algorithms (e.g. support vector machines, decision trees, Bayesian networks) attempt to predict what connects input and output by using a labelled training data set (i.e. combining training examples with correct outputs). The goal is to find patterns within the data, allowing the learning algorithm to connect the correct input and the correct output. This is referred to as supervised learning for two reasons. First, ‘each piece of data given to the algorithm also contains the correct answer about the characteristic of interest, such as whether an email is spam or not, so that the algorithm can learn from past data and test itself by making predictions’ (Buchanan and Miller 2017, 6). In other words, all training data used is correctly labelled, i.e. ‘this is a cat’, ‘this is a dog’, etc. 
Second, humans can correct algorithmic outputs in real time during the learning phase. Supervised learning is conducted until the machine learning algorithm reaches a certain, reliable performance level (Brownlee 2016; IPRAW 2017b, 10–11). Conceptually, this is also connected to deep learning via neural networks that work through interconnections at multiple levels simultaneously. Supervised learning accounts for around 90% of machine-learning algorithms2 and thus, perhaps unsurprisingly, most of the recent, well-publicised advances in AI have been in supervised learning (T. Walsh 2018, 95).
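A minimal sketch can make the logic of supervised learning concrete. The example below is a toy 1-nearest-neighbour classifier in plain Python, echoing the spam illustration quoted above; the feature values (counts of suspicious words and links in an email) are invented for illustration.

```python
# Toy supervised learning: 1-nearest-neighbour classification.
# Each training pair couples an input with its correct label -
# the labelled data that makes the learning "supervised".
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(labelled_examples, x):
    # Predict by copying the label of the nearest training example.
    _, label = min(labelled_examples, key=lambda pair: euclidean(pair[0], x))
    return label

training_data = [
    ((8, 5), "spam"), ((7, 6), "spam"),  # many suspicious words/links
    ((1, 0), "ham"), ((2, 1), "ham"),    # few
]
print(predict(training_data, (6, 5)))  # -> spam
print(predict(training_data, (1, 1)))  # -> ham
```

Real systems use far richer models (support vector machines, decision trees, neural networks), but the dependence on correctly labelled training data is the same.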


By contrast, in unsupervised learning, learning algorithms are fed with unlabelled input data that does not, therefore, come with correct outputs. In contrast to supervised learning, algorithms are charged with finding potentially interesting patterns, called clusters, within the data on their own (Brownlee 2016); ‘they receive no input from the user of algorithm as to what the categories are or where the boundaries or lines might lie in the data’ (IPRAW 2017b, 10). Unsupervised learning is considered to be beneficial in unstructured situations when ‘there is not a clear outcome of interest about which to make a prediction or assessment’ (Buchanan and Miller 2017, 8). As this brief summary demonstrates, all machine-learning solutions are, by definition, data hungry and data dependent. This is typically characterised as machine learning’s major problem because labelled data is not readily available, as ‘in many application domains … collecting labels requires too much time and effort’ (T. Walsh 2018, 95). Further, machine-learning solutions that are the outcome of supervised learning only work reliably with data that has been gained in the exact same manner as their training data sets:3 ‘the performance of the algorithm on further data, or real-world data, depends on how representative the training and test data sets are of the datasets in the application domain’ (IPRAW 2017b, 11). To illustrate this with an example, autonomous cars that were trained with data gained in specific weather conditions and surroundings in California cannot be assumed to function in safe and reliable ways in other countries or regions: in fact, studies show that even small alterations to the training environment, such as changing weather and road conditions, make it hard to ensure safe and predictable driving behaviour by autonomous cars (Himmelreich 2018). 
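Returning to the supervised/unsupervised distinction above, the unsupervised case can be sketched in the same toy style. No labels are supplied; a simple two-means clustering routine (all values hypothetical) finds the ‘clusters’ in the data on its own.

```python
# Toy unsupervised learning: 2-means clustering of unlabelled 1-D points.
# The algorithm receives no labels and no category definitions; it finds
# the grouping itself by repeatedly re-estimating two cluster centres.
def two_means(points, iterations=10):
    c1, c2 = min(points), max(points)  # crude initial centroids
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

unlabelled = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(two_means(unlabelled))  # -> ([0.8, 1.0, 1.2], [8.7, 9.0, 9.5])
```

Real clustering algorithms generalise this to many dimensions and an unknown number of clusters, but the absence of labelled outputs is the defining feature.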
Apart from this reliance on particular types of training data, machine learning algorithms are faced with another, well-known fundamental problem that makes their integration into critical applications highly contestable from a safety and trust standpoint: they are a closed or black box. This means that they ‘cannot meaningfully explain their decisions, why a particular input gives a certain output’ nor can they ‘guarantee certain behaviours’ (T. Walsh 2018, 92). These insights bring us to a wider set of practical questions: while civilian research advances in dual-use technologies such as AI are clearly relevant for autonomous features in weapons systems, we should not assume automatic spillover (Verbruggen 2019). Civilian
applications differ contextually in important ways from military applications: for example, the unstructured environments of urban battlefields within which an autonomous ground vehicle would have to operate are significantly more challenging to navigate than the comparatively structured environment of highways that autonomous vehicles are currently trained for. And even here, as we have learned above, training data does not equal training data.

Both civilian advances in AI and their direct connection to weaponised AI are typically over-hyped. This is clearly visible in transnational discourse on AWS: at UN-CCW meetings in 2017 and 2018, everyone was buzzing about the rapid advances of DeepMind’s AlphaGo Zero, a Go-playing AI, towards ‘superhuman’ competency, as if this were a game-changing moment for the UN-CCW debate on the military applications of AI (Suchman 2018). In fact, the comparatively late timing of AlphaGo Zero’s supposed breakthrough has rather surprised AI researchers, given that machines enjoy a home advantage in this tiny field. Games have long been a popular testing ground for AI because they offer access to a ‘simple, idealized world’ (T. Walsh 2018, 114): weak AI’s superhuman capacity to number crunch makes it perfectly suited to finding rules-based solutions for one clearly contained goal in a constrained environment that can easily be iterated multiple times (Heath 2017).

defining autonomy along a spectrum

For our purposes, we follow the definition of autonomy provided by researchers at the Stockholm International Peace Research Institute (SIPRI): ‘the ability of a machine to execute a task, or tasks, without input, using interactions of computer programming with the environment’ (Boulanin and Verbruggen 2017, 5). Autonomy should be thought of broadly as a ‘relative independence’ (Lele 2019, 55) and, importantly, as referring to particular features and functions of autonomous weapons systems, rather than to the system as a whole. 
This is an important distinction because some weapons systems differ significantly in their make-up of autonomous features: these can, for example, relate to their mobility, i.e. the capacity to direct their own motion, or health management, i.e. securing their functioning (Boulanin and Verbruggen 2017, 21–32). In connecting autonomy to core functions of weapons systems, such as trigger, targeting, navigation, and mobility, we can evaluate the


Autonomous Weapons Systems and International Norms

Table 1.2 Spectrum of autonomy

Remote controlled | Automated features | Autonomous features | Fully autonomous systems

(Automated and autonomous features, in the middle of the spectrum, constitute the zone of complex human–machine interaction.)

degree to which a weapons system operates autonomously (Roff 2015a). The crucial aspects of autonomy relate to autonomous features in trigger and targeting, that is, in the critical functions of weapons systems to 'target select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy)' (Davison 2017, 5). Therefore, while the topic is often framed using the language of AWS as if only one, clear version of autonomy exists, what we see is an inclusion of autonomous features along a spectrum of autonomy (table 1.2). At one end of the spectrum, we find remote controlled systems, such as drones, where humans remain in manual control of the targeting functions. Such systems require human input for executing their tasks. At the other end are what are often referred to as fully autonomous systems (see Heyns 2016a, 6). Here, humans are no longer involved in the specific use decisions. These are instead administered by the system, which operates completely on its own. But we see the most significant developments in the middle of the spectrum: a zone that we term complex human–machine interaction. Here, systems exhibit automated and autonomous features and operate under the supervision of a human. This supervision differs in quality, depending on the range and type of tasks 'performed' via automated and autonomous features. We explore what this implies for meaningful human control in more detail in chapter 5.

The inclusion of more automated and autonomous features in the critical functions of weapons systems is likely to see humans move further and further away from immediate decision-making on using force. Often, this is highlighted by the image of the control loop, referring to human control in the so-called 'use phase', that is, in specific targeting situations (Human Rights Watch 2012, 2; J. Williams 2015, 183).
Typically referred to as the observe, orient, decide, act (OODA) loop,4 this image helps to visualise the relationship between the human and the system in specific situations when targets are
selected and engaged, rather than in earlier phases of the targeting process, e.g. strategic planning (Burt 2018, 11). In ‘in-the-loop’ systems, humans actively participate in selecting specific targets and making decisions to use force. By contrast, in ‘on-the-loop’ systems, the role of the human operator is significantly reduced: they monitor system actions and can intervene when necessary, but ultimately only react to targets suggested by the program. As this demonstrates, understanding the autonomy of weapons systems (almost always) involves various forms of human–machine interaction and the extent to which machine autonomy may (or does) undermine human autonomy. We will therefore consider human–machine interaction more closely, alongside the evolving concept of meaningful human control.

meaningful human control and the targeting process

The term meaningful human control was originally coined by the non-governmental organisation (NGO) Article 36 (Article 36 2013a), but there are different understandings of what it implies (see also chapter 5). Many states and other actors consider the application of violent force without any human control as unacceptable and morally reprehensible. The term has gained significant currency in the transnational debate on LAWS (see section 1.3), yet it can refer to hugely different aspects. Sharkey's five levels of human supervisory control, for example, range from humans deliberating about specific targets before initiating an attack at the highest level, via humans choosing from a list of targets suggested by a program, to programs selecting targets and allocating humans a time-restricted veto at the lowest level (N. Sharkey 2016, 34–37). Brehm (2017, 8) offers a helpful indication of the contours of meaningful human control from a legal perspective: 'the requirement of meaningful human control over AWS would seem to entail that human agents involved in the use of an AWS have the opportunity and capacity to assess compliance with applicable legal norms and to take all legally required steps to respect and ensure respect for the law, including preventive and remedial measures'. Further, even if we only concentrate on meaningful human control with regard to the critical weapons functions of selecting and attacking, targeting itself is a complex, multi-dimensional process
Figure 1.1 NATO’s Joint Targeting Cycle.

in military terms (Ekelhof 2018). Following, for example, the North Atlantic Treaty Organization’s doctrinal documentation, the (joint) targeting cycle goes through six phases: objectives/guidance; target development; capabilities analysis; commander’s decision and assignment; mission planning and force execution; and assessment (see figure 1.1). Importantly, this conceptualisation encompasses ‘deliberate planning phases of the targeting process’ (Ekelhof 2018, 29, phases 1–4 and 6), rather than just their execution (phase 5). This more holistic approach to targeting indicates that, while some critical functions of weapons systems are not necessarily directly connected to the kinetic use of force, they can still contribute to target development. This is an important consideration when thinking about how meaningful human control can be usefully defined in operational terms. To give an example: the United States has tested a machine-learning algorithm (the ill-named Skynet, another reference familiar to those acquainted with the Terminator franchise) to develop potential targets for drone operations from mobile phone metadata in Pakistan. While these pattern-identifying functions do not stand in direct connection to target engagement and attack,
they are firmly, and importantly, part of the targeting cycle in terms of target selection and should therefore be included when we talk about critical functions of weapons systems.

Operational advantages and disadvantages of reducing human control

Retaining any form of meaningful human control is likely to be challenging in the light of operational considerations of effectiveness and efficiency that identify the human as the 'weakest link' in the targeting process. Pressure to remove human decision-making, in an immediate sense, from the planning and execution of targeting will therefore potentially increase on account of three push factors. First, a perceived advantage of granting full autonomy to AWS lies in their 'superior' ability to process a large quantity of information without facing human cognitive overload (Noone and Noone 2015, 33; Haas and Fischer 2017, 295). Second, AWS have 'agency' on the battlefield, in advantageous contrast to remote-controlled weapons, which are steered by a remotely positioned, decision-making human and are therefore more vulnerable to countermeasures on account of transmission delays and more susceptible to interference (Sparrow 2016, 96; Horowitz, Kreps, and Fuhrmann 2016, 26). Third, while AWS require the investment of considerable financial resources from development to testing and training, they are projected to turn out cheaper than human soldiers or pilot-flown jets in the long run (Gubrud 2013; Crawford 2016). This trajectory is already obvious when comparing, for example, the cost of manufacturing a single system: the US-manufactured Sea Hunter (I and II), autonomous submarine-hunting vessels (which at the time of writing remained unarmed), cost US$2 million, compared with the US$1.6 billion that acquiring an Arleigh Burke-class destroyer entails (Scharre 2018, 79).
By 2021, four Sea Hunter prototypes are expected to be under the command of Surface Development Squadron 1 (SURFDEVRON), a squadron established in 2019 that 'will be dedicated to experimenting with new unmanned vessels, weapons, and other gear to propel the surface force forward' (Eckstein 2020). Rather than only considering potential push factors for the development and deployment of AWS, we should also note that AWS come with operational disadvantages that may hinder their spread. Generally, it bears remembering that technological trajectories are rarely, if
ever, linear, but instead subject to constant change and contestation. As an organisation, albeit not a monolith, the military is structured around control, which makes the idea of ceding it to AWS highly contested. The history of developing AWS in the US military, for example, includes various programmes that were cancelled despite considerable investment (Gubrud 2013) and even led to one report identifying ‘a cultural disinclination to turn attack decisions over to software algorithms’ (Watts 2007, 283). Even in a context that is overly enthusiastic about robotics and autonomy, these arguments still remain current. Therefore, even though the US Department of Defense’s Third Offset Strategy is centred around deep learning systems and human–machine combat teaming as strategic components of retaining the country’s technological advantage, ‘[t]here is intense cultural resistance within the US military to handing over combat jobs to uninhabited systems’ (Scharre 2018, 61). We will now summarise the extent to which these arguments have shaped the ongoing international debate on LAWS at the United Nations.

lethal autonomous weapons systems at the un

Since 2014, the international community has been discussing LAWS under the framework of the CCW. Following earlier advocacy work by the International Committee for Robot Arms Control (ICRAC), this process took off properly with the publication of Losing Humanity: The Case Against Killer Robots, a report co-authored by Human Rights Watch and the Harvard International Human Rights Clinic (Human Rights Watch 2012), which set in motion a broader public debate on a potential, preventive ban of 'killer robots'. In 2013, Human Rights Watch became one of the founding members of the Campaign to Stop Killer Robots, an international coalition of non-governmental organisations dedicated to promoting a legal ban on LAWS. The same year saw the publication and presentation of another influential report on 'lethal autonomous robotics', authored by Christof Heyns, then UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, to the UN Human Rights Council (Heyns 2013). In the ensuing Human Rights Council debate, many states expressed the desire to continue discussing LAWS in the context of the CCW, which then started its first informal debates in May 2014. The CCW is something of an unlikely forum in which to discuss such an emerging, highly publicised topic. On the fortieth
anniversary of its adoption, in 2021, the CCW was not only positively cast as an ‘(unintended) incubator for ideas’ but also, more cynically, as a ‘place where good ideas die’ (Branka 2021, see also Carvin 2017). Entering into force in 1983, the CCW was initially composed of an umbrella document as well as three protocols on weapons with non-detectable fragments (Protocol I), landmines (Protocol II), and incendiary weapons (Protocol III) (Carvin 2017, 57). Two further protocols have since been added: a preventive prohibition of blinding laser weapons (Protocol IV, entered into force in 1998) and explosive remnants of war (Protocol V, entered into force in 2006). Deliberations at the UN-CCW firmly frame the issue of LAWS as a potential problem for international humanitarian law, with civil society actors, chiefly the Campaign to Stop Killer Robots, hoping to convince states parties to commence negotiating new binding international legislation. In 2016, the CCW formalised its deliberations through the creation of a Group of Governmental Experts (GGE) on LAWS. As of the time of writing in January 2021, the GGE had met six times, typically for five days at a time: in November 2017, in April and August 2018, in March and August 2019, and in September 2020. As the GGE mandate was renewed for a further two years in 2019, its discussions will continue at least until 2021. The GGE only has a discussion mandate, but similar mechanisms have, in the past, led to regulation agreements. The CCW has been and continues to be the only international deliberative forum where LAWS are substantially and regularly discussed. Therefore, it has become the focal point of transnational debate as well as of norm-promotion and lobbying activities of civil society actors. This significance of the CCW for international debate on LAWS can be visualised by how many states have participated between 2014 and 2020. 
Seventy-five [out of 125] high contracting parties contributed formally5 to the nine meetings on LAWS held between 2014 and 2020. Figure 1.2 goes into further detail, presenting data on the numbers of states parties contributing per year, categorised into three groups: Global North (GN), Global South (GS),6 and states that support a preventive ban on fully autonomous weapons (ban share). The data summarised in figure 1.2 point to three interesting observations (Bode 2019, 361). First, the number of formal contributions by states increased over time and across both the Global North and

Figure 1.2. Formal contributions to debates on LAWS at the CCW (2014–20). The figure shows total contributions per year, broken down into Global North (GN) share, Global South (GS) share, and ban share.

the Global South until 2018. The higher numbers for 2017 indicate a growing interest in the newly created GGE. Numbers stabilised in the four GGE years from 2017 to 2020, and we see the same countries continuing to participate in debates each year. However, both 2019 and 2020 participation numbers were below 2018 levels. This could indicate a growing fatigue with the lack of progress on the issue at the CCW, but only a review of the numbers for future meetings will confirm whether this signifies a trend; as 2020 saw the beginning of the Covid-19 pandemic, these numbers are not representative.7 Second, the number of contributions by the Global South doubled between 2014 and 2018. Third, the data also demonstrate the growing voice of ban supporters at the CCW: while only five supporters of a ban on LAWS contributed to the debate in 2014, this number grew to seventeen [out of thirty who supported a preventive ban8] in 2020. Apart from Austria, which joined the 'ban group' in April 2018, this group of ban supporters is primarily composed of states from the Global South, and does not include any of the major actors developing such technologies – with the (possible) exception of China. At the GGE in April 2018, China became the first 'great power' to call on states 'to negotiate and conclude a succinct protocol to ban the use of fully autonomous weapons systems' (Kania 2018b). It has since repeated this commitment on several occasions. As others have argued, China 'wishes to ban the battlefield use of AWS, but not
their development and production’ (Haner and Garcia 2019, 335). However, doubts remain about the quality of its commitment. Discussions at the CCW have been hampered by diverging viewpoints on autonomy and a resulting lack of conceptual consensus on LAWS. While some states parties have yet to share even a working definition, those put forward, for example, by the United States and the United Kingdom differ significantly. The United States defines LAWS as a ‘weapon that, once activated, can select and engage targets without further intervention by a human operator’ in US Department of Defense Directive 3000.09 (US Department of Defense 2012, 13). This language has become widely used in discourse on LAWS. But the Directive also includes a distinction between so-called semi-autonomous and autonomous weapons: semi-autonomous weapons can ‘employ autonomy’ for a full range of ‘engagement-related functions’ for ‘individual targets or specific target groups that have been selected by a human operator’ (US Department of Defense 2012, 14). The utility of this distinction is contested as target selection, under-defined in the Directive, becomes the only marker to distinguish semi-autonomous from autonomous weapons systems (Gubrud 2015), making it ‘a distinction without difference’ (Roff 2015a). The United Kingdom differentiates between an ‘automated’ and an ‘autonomous system’: ‘an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable’ (UK Ministry of Defence 2017, 72); in contrast, an autonomous system is defined as ‘capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. 
It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be' (UK Ministry of Defence 2017, 72). By including the qualifier 'higher-level intent and direction', this definition arguably defines away the challenges associated with AWS in the short term. Further, the use of this specific phrase has political and legal implications: it sets a 'futuristic' and 'unrealisable' threshold for what
qualifies as autonomy (Article 36 2018), is 'out of step' with how autonomy is defined by practically all other states (Noel Sharkey, quoted in House of Lords Select Committee on Artificial Intelligence 2018, 103), and makes it difficult to determine the United Kingdom's position on the use of less sophisticated AWS nearing development (H. Evans 2018; Article 36 2016, 2). Below, we summarise the substance of the debate at the CCW in the GGE years from 2017 to 2020, pointing to seven key changes and challenges and providing an account of negotiation dynamics and issues of substance raised at the GGE.

Key changes and challenges debated at the CCW from 2017 to 2020

First, in the light of the persistent definitional issues surrounding autonomy, various states parties as well as the three chairpersons of the GGE sought instead to move the debate towards exploring human–machine interaction. The concept of meaningful human control, also referred to inter alia as appropriate or effective control of LAWS, gained particular traction. As Brehm (2017, 8) notes, 'what that involves, concretely, remains to be clarified', and it continues to be a matter of debate. There have been efforts by various states parties as well as civil society actors and independent experts to offer more concrete ways of defining meaningful human control, including expanding it to cover the entire life cycle of LAWS. A prominent example is a slide identifying so-called human touchpoints, circulated by then chairperson Ambassador Amandeep Singh Gill of India. These touchpoints for the exertion of human control or supervision (also referred to as the sunset diagram due to the graphic the slide depicted) involved four stages: research and development; testing, evaluation and certification; deployment, training, command and control; and use and abort (Singh Gill 2018).
This distributed approach to human control, to be distinguished from a previous focus on the use (and abort) phase only, has since gained traction at the GGE (e.g. Permanent Mission of the United States 2018b). Other stakeholders in the debate, such as the International Committee of the Red Cross (ICRC), SIPRI, and the Campaign to Stop Killer Robots have also put forward detailed and practical operationalisations of meaningful human control (Boulanin et al. 2020; Campaign to Stop Killer Robots 2020a, see also chapter 5). Further, in
2019, the GGE included a reference to human–machine interaction in its so-called Guiding Principles (UN-CCW 2019) and discussions in 2020 continued to converge around this issue (GGE on LAWS 2020). That said, the continued attractiveness of meaningful human control in deliberations may lie precisely in retaining its ambiguity, a recurring feature of (UN) diplomacy (Berridge and James 2003, 51). While GGE discussions on LAWS have not entered a negotiation stage, negotiations at the UN generally provide plentiful incentives for actors to leave core normative notions under-defined, vague or ambiguous, thereby creating the possibility for compromise (see, for example, Rayroux 2014). Second, states parties’ positions have become clearer, more substantial, and, incidentally, more polarised over time. At the first GGE in November 2017, voices of caution and critique dominated: no state party spoke explicitly in favour of developing LAWS, and many states parties followed a ‘wait and see’ approach. However, since the August 2018 meeting, some states parties (chiefly the United States but also Australia) have listed potential, and one must say supposed, benefits of LAWS in the area of precision targeting. Specifically, the United States, for example, argues that LAWS can strengthen compliance with IHL by effectuating the intent of commanders (Mission of the United States 2018c). 
States parties to the CCW in 2020 can be more or less neatly split into three groups: the ban states (with Brazil, Chile, Costa Rica, Pakistan, and Austria being among the most vocal), calling consistently for an immediate transition to negotiating novel international law on LAWS in the form of a ban; the moderates (such as Germany, France, and Switzerland), who are generally supportive of novel regulation but consider a political declaration or a strengthening of Article 36 weapons reviews, rather than a legislative instrument, the most appropriate form at this stage of the debate; and the sceptics, including the United States, the United Kingdom, Australia, Israel, and Russia, who are united by their critical attitude towards novel regulation rather than by their substantive positions on the issue. As noted above, the group of sceptics includes states parties that argue expressly in favour of developing and using weapons systems with greater autonomy. But it also includes those, for example Russia, who have cast doubts on the very existence of LAWS for years and since the start of the GGE process appear to be increasingly interested in slowing down discussions at the (often procedural) opportunities that present themselves. Other
states parties, such as the United Kingdom, have argued that existing international humanitarian law is sufficient to deal with the technological evolution represented by AWS (Brehm 2017, 10). Generally, over the period 2017–19 the atmosphere at the GGE grew increasingly polarised, which arguably had an effect on the positions expressed by academic experts observing the meetings.9 We can underline this increased polarisation with the fact that finding consensus for the GGE's final report (a document with no legal character in the positivist sense) has taken ever more time since 2019. On the final day of deliberations, interpreting services stop at 6 p.m. due to budget constraints. The states parties then move into a different room, where the remainder of deliberations and discussions on the final report take place in English only. While the discussion time was only exceeded by around two hours in November 2017, these discussions dragged into the early hours of the morning in both 2018 and 2019. In the September 2020 meeting, the GGE debate saw less polarisation, with more states parties occupying a kind of middle ground and showing apparent interest in furthering discussion around the substance, in particular the extent to which IHL is sufficient, as well as aspects of human–machine interaction (GGE on LAWS 2020). That said, as the current GGE mandate runs until the end of 2021, states parties did not have to discuss the text of a final report.
Third, as noted above, states parties talk about very different technologies in the context of LAWS, making the very scope of the debate deeply political: whereas some exclude existing weapons systems from the debate on LAWS, others stress that past and present violent practices involving mines, torpedoes, sentry guns, air defence systems, armed drones, and other technologies with autonomous features offer important insights into the changing modes and locales of human agency in the use of force and should be part of the debate (Brehm 2017, 13). A split has therefore emerged between those states parties that base their arguments in opposition to LAWS on these technological precedents and others who seek to clarify that the present discussions should only deal with 'fully autonomous' systems rather than so-called legacy systems or semi-autonomous systems that they portray as acceptable (see also chapter 5). Fourth, participants (and observers) of the GGE debate draw on fundamentally different images of technological futures, referred to as 'socio-technical imaginaries' in the academic literature (Jasanoff
2015). A first group embraces the development of LAWS, arguing that they represent a 'technological fix' for problems associated with current modes of warfare and that 'increasing autonomy in weapon systems enables conducting war in ever more moral and legal ways' (Brehm 2017, 14). Here, the inevitable deployment of LAWS is presented as an eventual empirical fait accompli that policymakers should simply make the most of by adopting adequate ethical and safety standards, not least for using 'AI for good in war' (L. Lewis 2019b; see also L. Lewis 2019a). By contrast, a second group challenges the supposedly inevitable journey towards LAWS (N. Sharkey 2012), highlights the fact that policymakers around the world have a decided scope for agency and choice in the matter, and, instead, problematises the ever-decreasing role of the human being in contemporary warfare. As Brehm (2017, 71) argues succinctly: 'Increasing autonomy in weapon systems is neither automatic nor inevitable. Inevitability is purposefully constructed by human agents. It is an ethical question and a political act when human agents attribute agency to a technological device or system rather than to people. This returns responsibility to us as representatives of institutions that deploy technology, who are involved in its design, who use the equipment or, perhaps most significantly, who are subjected to its operation'. This is exactly the line of reasoning we support in writing our contribution to the debate. Fifth, civil society participation has always been strong at the GGE, in particular via the Campaign to Stop Killer Robots. In fact, the very discussion of LAWS in this forum and the creation of more formalised discussions in the GGE are entirely due to civil society's successful lobbying efforts (see Bahcecik 2019). As the number of civil society organisations joining the Campaign has grown, so too has the number of contributions delivered as part of the GGE discussions.
In 2018, for example, fourteen organisational representatives and academics spoke in favour of the Campaign's overall goal of preventively banning LAWS. However, from 2019 onwards, there has been a sense of growing disillusionment among many of the civil society participants. Having argued since the beginning that technological developments are outpacing diplomacy, civil society representatives increasingly voice frustration with the slow rate of progress at the GGE when it comes to deciding on a way forward, which is not least due to the forum's consensus-based decision-making. Since 2019, suggestions to change
the forum have therefore abounded. Civil society representatives hope to negotiate an international treaty outside the CCW, and thus outside the UN's platform and its institutional constraints. (In fact, major treaties of international humanitarian law, such as the Mine Ban Treaty (also called the Ottawa Treaty), the Cluster Munitions Convention, and the Nuclear Ban Treaty, have been successfully negotiated outside of UN auspices in the past three decades.) The downside of this route would be that major developers of LAWS, such as the United States, China, and Russia, are not very likely to be part of such a negotiation process, raising doubts about the eventual effectiveness of a future treaty in curbing LAWS development. But there are also important normative benefits to having a piece of international law that clearly expresses a moral desire to ban LAWS. Further, the global proliferation of drone technology over the 2010s, to a point where at least 102 countries have military drone programmes, underlines the potential for a similar arms race in the field of weaponised AI (Gettinger 2020). Importantly, this process demonstrates that not only the major military powers are relevant in proliferation processes: the list of countries with military drone programmes includes many with lesser military capabilities. A treaty negotiation process outside of the CCW is still likely to include their participation. At the same time, many campaigners still see a benefit in at least keeping major developers as part of the continued discussion.
Aside from the discussions around autonomy, two further issues have kept popping up in this way: LAWS and dual-use technology; and public opposition versus potential acceptance of LAWS. Sceptics of entering into a negotiation phase with regard to LAWS have pointed to the dual-use nature of many of the technologies that sustain them as being a major obstacle when it comes to regulating them, fearing important technological innovation could be stifled. However, given that most military technology has elements that could be put to significant civilian use, including, for example, chemical weapons or blinding laser weapons, ways around this problem clearly could be, and have been, found if states parties are willing to consider them.
Concerning the supposed eventual public acceptance of LAWS as time passes, campaigners have outlined how delegating the kill decision to machines violates basic standards of human dignity and the public conscience. This line of reasoning finds legal grounding in the Martens Clause. Such arguments are further sustained by successive public opinion polls demonstrating that a solid majority of the general public (including in China, Russia, and the United States) opposes the development and use of LAWS (Campaign to Stop Killer Robots 2019a). At the same time, proponents of LAWS, or sceptics of their legal regulation, suggest that the general public will eventually become more inclined to accept LAWS as familiarity with AI in daily life increases. The problem with this argument lies in its deterministic reasoning: why should increasing familiarity lead to increasing acceptance when the reverse outcome is just as logical? In fact, the growing rollout of AI-driven technologies such as facial recognition has led to a more sustained public backlash over time. Further, there is a significant difference between using AI in weapons systems and using AI for functions not related to the use of force. As one interviewee noted: 'just because people are increasingly using laser pointers for their [PowerPoint] presentations does not mean that they are more likely to accept blinding lasers'.10 Seventh, when the format of CCW discussions moved towards a greater degree of formalisation with the GGE, many participants in the debate had hoped for a focus on more substantial discussion among states parties. The loose discussions on LAWS from 2015 to 2017 had instead been dominated by expert presentations. However, the first two years of the GGE (2017 and 2018) still included a significant number of expert panels, including presentations on seemingly abstract and far-removed uses of AI with little bearing on the major matters at hand.
Further, calls continued to be heard for more expert input, even in 2020. While expert advice should and does continue to complement GGE discussions, for example via policy reports and GGE side events (held before and after sessions or during lunch breaks), devoting precious GGE debating time to lengthy expert panels is not likely to help push the process forward.

In summary, the years of discussing LAWS under the auspices of the CCW have led to a greater clarity of purpose and a significant deepening of the substance of discussions, especially around issues of
compliance with IHL and human–machine interaction. At the same time, the debate appears to have stalled since 2019 in the sense that the three blocs of states – proponents of a ban, moderates, and sceptics – remain more or less tied to their positions and there appears to be little room for moving forward on specifying the key notions of the debate. In the meantime, in many countries, technological development and planning for weapons systems with autonomous features in their critical functions continue to progress.

literature on aws: questions of law and ethics

Starting in the late 2000s, AWS have been the subject of a lively scholarly debate, even if few of these contributions speak explicitly to the discipline of IR (Bode and Huelss 2019).11 Research has explored AWS chiefly along two, often interconnected, lines: questions surrounding their (potential) legality, and ethical challenges attached to their usage. This section summarises this debate and comments on the gaps in this literature.

Studies of legality examine the extent to which AWS can or could potentially comply with international humanitarian law and, to a lesser degree, international human rights law. Chiefly, these legal concerns revolve around whether machines can technologically ‘make the required judgment calls, for example, about who to target and how much force to use’ (Heyns 2016b, 351). Below we tackle the concerns of international humanitarian law (IHL), international human rights law (IHRL), and jus ad bellum. Often, scholars in this field draw on one of two analogies to frame AWS: they either treat AWS instrumentally, as weapons that can or cannot be used lawfully, or endow AWS with agency, casting them as ‘combatants’ that can or cannot ‘act’ in adherence with IHL or IHRL (Crootof 2018). It is useful to read the following section with this critical framing in mind.

International law and AWS

All states are required to review any new weapons system that they seek to introduce, to demonstrate that the weapon itself can, in principle, be used lawfully. This requirement is primarily based on Article 36 of the 1977 Additional Protocol to the Geneva Conventions. While not all states are parties to the 1977 Additional Protocol (the United
States is a notable exception), its stipulations are considered generally binding as part of customary international law. Article 36 states that ‘[i]n the study, development, acquisition or adoption of a new weapon, means or method of warfare,’ states are ‘under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law’ (ICRC 1977). These reviews should rule out the employment of weapons that are inherently unlawful for either of two reasons: if they ‘cannot be directed at a specific military objective … or are of a nature to strike military objectives and civilians or civilian objects without distinction’ (ICRC 1977, Paragraph 51 (4)); or if they ‘cause superfluous injury or unnecessary suffering’ (ICRC 1977, Paragraph 35 (2)).

While these legal requirements are clear, there is some dispute in the literature about the role of human decision-making in weapons law. Some commentators note that these requirements do not immediately make the use of autonomous weapons systems unlawful if such systems are able to comply with international humanitarian law: ‘the mere fact that an autonomous weapons system rather than a human might be making the final targeting decision would not render the weapon indiscriminate by nature’ (Thurnher 2013; see also Anderson and Waxman 2013, 11). On this view, what matters is not who complies with these legal principles, be it a human or a machine, but whether they are applied correctly. On the other side of the debate, human decision-making is described as essential in order to adhere to current standards of IHL compliance, ‘i.e. a human commander must make a specific legal determination such as with proportionality’ (Talbot Jensen 2018). Determining this would clearly entail investigating the precise role of human decision-making in international law. 
But the pivotal role of human decision-making can also be assumed to be located in the spirit, if not (necessarily) in the letter, of international law, resting on a monotheistic approach that privileges humans (Noll 2019). As Heyns (2016a, 8) argues, ‘it is an implicit assumption of international law and ethical codes that humans will be the ones taking the decision whether to use force, during law enforcement and in armed conflict. Since the use of force throughout history has been personal, there has never been a need to make this assumption explicit’.

Important jus in bello principles governing the use of force – so-called targeting principles – raise fundamental concerns about the implications of increasing autonomy in complex aspects of weapons systems. Distinction and proportionality are the key principles here, and we will consider these in turn.

Distinction, that is, the obligation and ability to distinguish between civilians and combatants as well as between civilian and military objects, is the most fundamental principle of IHL, enshrined in Additional Protocol I to the Geneva Conventions and part of customary international law (ICRC 1977, Paragraph 57 (2) (iii)). Distinction guarantees civilian protection by clearly prohibiting deliberate attacks on civilians, making the capacity to distinguish civilian from combatant an essential protective assurance of IHL. Ultimately, the ability of AWS to adhere to distinction depends on the extent to which distinguishing between civilians and combatants is something that their targeting algorithms are capable of, or the extent to which we see this as programmable at all.

Some scholars emphasise that whether AWS will be able to meet the requirements of distinction also depends on the contexts of their envisaged usage, differentiating between less and more complex contexts. In less complex contexts, such as battles between declared hostile forces or in remote areas (e.g. underwater, desert), AWS ‘could satisfy this rule with a considerably low level ability to distinguish between civilians and combatants’ (Thurnher 2013; see also Anderson and Waxman 2013, 11). We can see how this ‘envisioning of AWS operating in empty spaces far away’ (Brehm 2017, 40) explicitly and implicitly figures in graphics employed by diplomatic missions in Geneva. To illustrate, a presentation delivered by the Swedish delegation to the August 2018 GGE meeting included a depiction of an anti-armour warhead with an autonomous trigger function targeting tanks in an empty desert setting. 
In selecting what is visible, these images arguably seek to shape what appears ‘appropriate’ in terms of AWS practices by limiting the imagination to their deployment in militarily clean, but unrepresentative, scenarios.

However, in complex environments, these requirements are considerably more demanding and scholars question whether AWS will ever be able to meet them (Sparrow 2016; Thurnher 2013; Asaro 2009). To argue that targeting algorithms may gradually become ‘good enough’, in whatever terms that may be defined, remains
highly speculative as current programmers ‘are a long way off, even in basic conceptualizing, from creating systems sufficiently sophisticated to perform … in situations densely populated with civilians and civilian property’ (Anderson and Waxman 2013, 13). This finding gains even more significance when we consider broader trends in warfare that have seen fighting in urban landscapes, as illustrated by the ongoing conflict in Syria, emerge as characteristic of modern warfare. Determining whether a vehicle is used for combat purposes or not, whether an armed individual is a fighter or a civilian, or whether a group comprising individuals also comprises civilians are questions of contextual human judgement. As these examples show, the legal definition of who is a civilian, for example, is not written in straightforwardly programmable terms, nor do machines have the situational awareness and inferential capacity required to make this decision (N. Sharkey 2010, 379). As former UN Special Rapporteur on Extrajudicial Killings, Philip Alston, summarised, ‘such decision-making requires the exercise of judgement, sometimes in rapidly changing circumstances and in a context which is not readily susceptible of categorization’ (UN General Assembly 2010b, Paragraph 39). Further, and importantly, although the targeting software included in AWS exceeds human capacities in terms of processing large data sets and other quantitative tasks, such systems are disadvantaged compared with humans when it comes to deliberative reasoning, interpreting qualitative data, or accurately judging complex social situations and interactions (N. Sharkey 2016).

The principle of proportionality allows civilians to be killed if their death is not deliberate and/or is justified by a proportionate response invoking military necessity, that is ‘the military effects outweigh the unintended effects on non-combatants’ (Kaempf 2018, 36). 
Distinction and proportionality therefore find themselves in an uneasy legal balance. Again, scholarly opinion about AWS and proportionality differs and we encounter questions surrounding the human element that are similar to those related to distinction. Some regard AWS as potentially more precise weapons, thus decreasing human suffering (Arkin 2010) as underlined by the benefits of ‘precision-guided homing munitions such as torpedoes’ (Horowitz and Scharre 2014). Crootof (2015, 1879) holds that current weapons systems with autonomous features, such as close-in weapons systems (CIWS), are being
used in adherence to the proportionality requirement. Further, she points to how US practice already uses a ‘collateral damage estimate methodology’ that is somewhat akin to pre-programming proportionality estimates (Crootof 2015, 1877). Others argue that any decision about the proportional use of force requires assessing and processing contextual and complex data that might be based on contradictory signals if measured against a preprogrammed set of criteria–action sequences of autonomous ‘decision-making’. Based on this, some scholars argue that proportionality is even more difficult to comply with than other IHL principles ‘because of its highly contextual applicability’ (Laufer 2017, 71). These tasks pose significant challenges for AWS: ‘To comply with the principle, autonomous weapons systems would, at a minimum, need to be able to estimate the expected amount of collateral harm that may come to civilians from an attack. Additionally, if civilian casualties were likely to occur, the autonomous systems would need to be able to compare the amount of collateral harm against some predetermined military advantage value of the target’ (Thurnher 2013). The fact that assessments of distinction and proportionality require highly complex reflection processes as well as value judgments therefore poses fundamental challenges to deploying AWS lawfully (N. Sharkey 2010). These considerations also highlight that, no matter how important advances in general forms of AI may be for future weapons systems, we should focus on ‘stupid’ autonomous weapons now (Bode and Huelss 2017).

The question of accountability has likewise seen significant coverage in the legal literature on AWS (Hammond 2015; Crootof 2016; Liu 2016; Jain 2016). This is often linked to public demands of individual and political responsibility when force is used, particularly in cases that violate norms of international humanitarian law (J.I. Walsh 2015). 
Failing to distinguish between civilians and combatants or using excessive force outside of proportional assessments of military necessity constitutes a war crime and triggers criminal liability (Sparrow 2007, 66). The spectre of such disastrous consequences looms particularly large in the case of AWS because of ‘their destructive capacity and their inherent unpredictability’ (Crootof 2016, 1350; see also Liu 2016, 330–31). What distinguishes fully autonomous weapons systems is their operation without meaningful human control. In other words,
AWS ‘possess discretionary autonomy over the use of force’ (Liu 2016, 328). More specifically, therefore, the accountability problem hinges on the absence of intent: ‘by definition, war crimes … must be committed by a person acting “wilfully”, which is usually understood as acting intentionally or reckless[ly]’ (Crootof 2016, 1350). There may be cases connected to using AWS where commanders or other military personnel can be held directly or indirectly accountable – either because they ordered a specific use for the purpose of committing war crimes or because they did not abort an AWS when its use was likely to lead to war crimes being committed (Schmitt and Thurnher 2013, 277). But who is responsible for war crimes committed by an autonomous system if they result from unanticipated consequences of the use of AWS rather than the intended action of a commander in charge of the operation? Crootof (2016, 1375) therefore holds that the use of AWS may undermine a foundational principle of international criminal law as ‘absent such a wilful human action, no one can … be held criminally liable’. It is clear that ‘legal obligations are addressed to human beings’ who are ‘accountable for harm done or infringements of the law’ (Brehm 2017, 21). The increasing autonomy of weapons systems therefore also raises the question of the extent to which different groups of individuals such as engineers, programmers, political decision makers, or military command and operating staff are accountable for decisions undertaken and mistakes committed by AWS (Hammond 2015; Sparrow 2007; J.I. Walsh 2015). In the case of fully autonomous weapons systems – those operating without meaningful human control – scholars therefore speak of an accountability vacuum because ‘it is uncertain who will be held accountable’ (Heyns 2016b, 373). Liu (2016) takes these questions one step further by identifying a conceptual gap between causal responsibility and role responsibility. 
The accountability gap discussed so far speaks to who holds causal responsibility ‘for the unlawful consequences that are caused by AWS’ (Liu 2016, 338). But there is also the notion of a ‘role responsibility’, tied to whether an individual adequately performed their role functions and obligations: ‘a programmer has discharged his/her role responsibility if he/she has taken sufficient care to ensure that his/her algorithms function according to the requirement’ (Liu 2016, 337). It follows that a human can have fulfilled their role responsibility in relation to AWS, but the use of AWS can
still result in unlawful outcomes. However, the human cannot be held responsible for these outcomes because they can ultimately only be held accountable for failures associated with the inadequate performance of their role responsibility – thereby resulting in an intractable responsibility gap (Liu 2016, 339). Attempts to close this gap can themselves have detrimental consequences. As Liu (2016, 327) cautions, ‘proximate human beings may become scapegoats in the zealous quest to plug the responsibility gap’. As we show in chapter 5, this is arguably already the case, conceptually, in the context of human operators of air defence systems. While failures typically arise from complex human–machine interaction – that is, from how humans and machines operate together – disasters in the operation of air defence systems with automated and autonomous features are often blamed on human error. Elish (2019, 41) coins the term ‘moral crumple zone’ to describe ‘how responsibility for an action may be misattributed to a human actor who had limited control over the behaviour of an automated or autonomous system’.

A final, more fundamental problem lies in the permissiveness of IHL when it comes to the use of force. While IHL provides an important limit-setting structure (e.g. it is not permissible to target civilians), it is permissible to kill civilians in missions that are militarily necessary and proportional: ‘considerations of humanity require that, within the parameters set by the specific provisions of IHL, no more death, injury, or destruction be caused than is actually necessary for the accomplishment of a legitimate military purpose in the prevailing circumstances’ (Melzer 2009, 77). It is comparatively easy to find justifications for using force in IHL (Kennedy 2006). 
This permissiveness is also visible in the particular context of new weapons reviews: ‘IHL has been tailored to assist states in obtaining their desired military outcomes while employing weapons compliant with international law’ (Laufer 2017, 64). This highlights that legal and ethical/moral concerns are fundamentally separate, and some deeply unethical types of behaviour are simply non-issues in IHL (see Scharre 2018, 6).

Some of the issues identified as concerns in IHL are also picked up in examinations of IHRL in the context of AWS. This body of literature is decidedly smaller in scale, ‘probably because many commentators and policy makers envision the use of AWS in the context of military combat, rather than policing, and because discussions
within the CCW are limited to the use of weapons as [a] means of warfare’ (Brehm 2017, 11). But when it comes to assessing whether autonomous weapons systems can be used in compliance with international law, adhering to the standards of IHRL is even more demanding: ‘International human rights law is much more restrictive about the use of force than IHL. … Although law enforcement officials are allowed to use force under certain circumstances, such powers are strictly limited. They have a positive duty to protect the population and their rights, and deadly force may only be used to serve that purpose’ (Heyns 2016b, 353; our emphasis). Scholars raise significant doubts about whether machines could ever be programmed to make the value judgments and assessments necessary to comply with IHRL, such as determining whether the use of force is necessary and whether a person represents an imminent threat (Heyns 2016b, 364–5). As Heyns (2016b, 366) summarises succinctly, ‘there is a considerable burden of proof for those who want to make this difficult case’.

Importantly, IHRL not only makes it necessary to critically evaluate the use of force on a case-by-case basis, but also articulates demands beyond the potential application of lethal force to the decision-making processes involved in targeting: ‘The algorithmic construction of targets draws on practices that are already considered deeply problematic from a human rights perspective, including secret mass surveillance, large-scale interception of personal data and algorithm-based profiling. The use of AWS is likely to sustain and even promote such practices, threatening human dignity, the right to privacy, the right not to be discriminated against and not to [be] subjected to cruel, inhuman or degrading treatment and the right to an effective remedy’ (Brehm 2017, 69–70). 
Finally, although the literature on the international legal considerations of AWS is chiefly concerned with jus in bello, there are also a few contributions dedicated to jus ad bellum questions, arguing that AWS increase the general recourse to violent instruments, affecting proportionality and making escalations more likely. Roff (2015b, 41), for example, argues that even using AWS in self-defence against an act of aggression cannot fulfil the requirements of the ad bellum proportionality principle, understood as weighing the benefits/just cause of waging war against its overall consequences. Among these, she highlights that using AWS ‘will adversely affect the likelihood of
peaceful settlement’ as the use of a publicly perceived robotic army will trigger deep resentment and further animosity among the population targeted (Roff 2015b, 47), as we have already seen in the case of drones (Kahn 2002; Oudes and Zwijnenburg 2011). In addition, the initial use of AWS will trigger an AWS arms race as ‘other countries may begin to justify or view as necessary their possession and use of AWS’ (Roff 2015b, 50). Other authors also identify this risk of escalation inherent in the unhindered proliferation and unfolding arms races involving AWS as a key negative consequence that would further lower overall thresholds to using force (Altmann and Sauer 2017; Sparrow 2009; Altmann 2013).

Ethics and AWS

Ethical studies cover a range of challenges associated with the potential deployment of AWS, centring on the question of whether autonomous weapons systems can ever legitimately harm humans or end human life (Johnson and Axinn 2013; Leveringhaus 2016; N. Sharkey 2010; Sparrow 2016; Schwarz 2018). This complements the legal question – can machines use force in compliance with international law? – with the more fundamental question – should machines make use-of-force decisions (Heyns 2016b, 351)? That is, even if using AWS were legal, could it ever be ethical? Many call for a comprehensive ban on AWS due to ethical issues and concerns for human dignity (N. Sharkey 2010; Heyns 2016a; Rosert and Sauer 2019; A. Sharkey 2019), while a few voices, most prominently Ronald Arkin, emphasise their potential to contribute to more humane warfare, as AWS are not governed by emotions such as fear, hate, or revenge (Arkin 2009, 2010). Essentialising AWS as benevolent instruments often goes hand-in-hand with highly futuristic scenarios distinct from near-term advances in AI. Such studies also tend to neglect that even supposedly virtuous autonomous systems are only as ‘good’ as the purpose or intentions they are designed to serve. 
This links the current debate on technological autonomy and human–machine relations to general themes with regard to the ethical consequences of AI. Before examining these ethical questions in the broader context of AI, technology, and human–machine interaction (which highlights some general problems), we discuss the main points of the contributions to the ethical debate on AWS. In general, the problems
discussed here fall into the domain of applied ethics (practical ethics), which is an established research area in international politics. Applied ethics are directly linked to formulating guidelines, rules, and regulations that can be used in practical settings. In that sense, applied ethics are intended to translate normative content into practically useable standards of action. There are certain parallels here to norms but, in our understanding, norms are broader sets of standards that go beyond what is thought to be morally ‘right’.

As mentioned above, one of the central, recent points of contestation is how AWS relate to human dignity. While the legal viewpoints on AWS are predicated on IHL and the principles of distinction, proportionality, and military necessity as benchmarks to debate whether the use of AWS could potentially be legal, ethical perspectives generally explore dimensions above and beyond the legal framework and consider whether AWS should be used even if they met principles of IHL. In this regard, authors tend to emphasise the lack of human deliberation and judgement (and, as a consequence, the lack of accountability) that would violate human dignity if AWS used force to harm humans. It should also be noted that human dignity straddles legal and ethical areas of concern because much of IHL, as well as IHRL, is grounded in this principle (Heyns 2016b, 367). Briefly, ‘underlying the concept of dignity is a strong emphasis on the idea of the infinite or incommensurable value of each person’ (Heyns 2016b, 369). In the words of Asaro (2012a, 708): ‘as a matter of the preservation of human morality, dignity, justice, and law we cannot accept an automated system making the decision to take a human life’.

In a highly informative take on the ethical questions raised by AWS, Amanda Sharkey identifies three arguments as dominant in the debate (A. 
Sharkey 2019, 78): ‘(i)  arguments based on technology and the current and likely near future abilities of AWS to conform to IHL (i.e. what they can do); (ii) deontological arguments based on the need for human judgement and meaningful human control of lethal and legal decisions, and on considerations of what AWS should do. These include arguments based on the concept of human dignity; (iii) consequentialist reasons about their effects on the likelihood of going to war. These reasons include political arguments about their effects on global security, and are not necessarily labelled as consequentialist’. Sharkey provides a detailed review of the human dignity argument in the context of AWS. She notes ‘that there is a lack of a
clear consensus about what dignity is’ (A. Sharkey 2019, 82), which is, as Sharkey argues, the main problem affecting how concepts of human dignity influence the debate. In the same vein, Schippers (2020, 318) points out that ‘[w]hile dignity appears to offer a fixed-point for philosophical conceptions of human nature and a benchmark against which to judge policy, its meaning, as indeed the effects of its usages, are fuzzy and remain contested, partly as a result of the wider disavowed history of the term and its diverse set of historical sources’.

Nevertheless, human dignity figures prominently in the ethical debate on AWS, as summarised by Sharkey. The first argument holds that AWS are unable to ‘understand or respect the value of life’ (A. Sharkey 2019, 82). This argument is widespread in the literature, for example, in contributions by Christof Heyns: ‘death by algorithm means that people are treated simply as targets and not as complete and unique human beings, who may, by virtue of that status, meet a different fate’ (Heyns 2016b, 370; see also Asaro 2012, 2019; Rosert and Sauer 2019). The second argument is that a human consideration of law-informing decisions is essential and that ‘lack of human deliberation would render any lethal decisions arbitrary and unaccountable’ (A. Sharkey 2019, 83). The third argument is that dignity is linked to a human rights ‘package’ and that ‘AWS could affect all of these: limiting freedom, reducing quality of life, and creating suffering’ (A. Sharkey 2019, 83). These are clearly valid arguments, but Sharkey also advises against arguing in opposition to AWS solely on the basis of human dignity due to its ambiguous meaning. She goes on to argue that meaningful human control is a useful concept for considering the status of technologies and for assessing the extent to which they might violate human dignity. We will return to a closer consideration and discussion of meaningful human control in chapter 5. 
In our view, current iterations of meaningful human control do not resolve the central point of contention revealed by ethical views on AWS: providing a precise definition of adequate human involvement and of the exact quality of human–machine interaction. The argument that a sufficient level of human control sets the threshold that makes AWS acceptable requires, of course, that this threshold can be clearly defined. But is this possible? Positions in this regard vary, but we argue that even present forms of compromised human control as
they can be found, for example, in the operation of air defence systems are ethically problematic, while they still set standards of appropriate use of force. We will elaborate on this point in chapter 5 and illustrate it empirically.

Raising a different point, Schippers (2020, 320) argues ‘that the metaphysically anchored, dignity-based critique of AWS disavows the ontological and ethical entanglement of humans with autonomous and intelligent systems’. She proposes ‘to read AWS through the lens of relational ethics … where the – real, perceived or projected – ontological qualities of autonomous systems generate conceptions of ethics, ethical responsibility, and ethical agency, and where human subjects, vis-à-vis practices of the self and collective engagement with others, constitute themselves as ethical subjects’ (Schippers 2020, 322). This perspective is in line with our emphasis on considering human–machine interaction as spaces where meaningful human control can be compromised. We argue here that increasing the technological ‘sophistication’ of weapons systems, including the integration of autonomous features, has crucial consequences. This does not refer to a potential future scenario but is rather an existing problem. Following the work of N.K. Hayles, Schippers (2020, 321–2) characterises her perspective of relational ethics in the following way: ‘ethical agency is not based on the free will of an autonomous (human) subject, but emerges from the interpretation of information: human-technic cognitive assemblages interact with as well as transform the terms and terrain where ethical agency is exercised’. These are important, albeit complex and challenging, insights that share our emphasis on practices to study how and what kind of meaning is produced.

Turning to the other end of the spectrum, literature criticising the emergence of AWS on ethical grounds typically focuses on those who would be affected by using force. 
However, arguments in favour of integrating more autonomy into weapons systems also posit that such systems would increase ‘precision’ and therefore improve adherence to IHL principles, such as distinction, e.g. by enabling a better differentiation between a hospital and a military target (see Galliott and Scholz 2018). But viewpoints on human dignity underline that even using AWS in line with IHL can still be unethical (see Asaro 2012; Schippers 2020). As Schwarz (2018a, 25) notes, the focus on AI as a technological fix and solution to complex moral questions ‘is ethics as a mere technical problem’. In that
regard, ethical arguments in favour of using AWS are criticised for disregarding more fundamental questions of human involvement, such as those outlined above. This is also a problem of applied ethics that overemphasises the importance of law for providing ethical guidelines and ‘seeks to establish certain ethical outcomes through regulatory frames, laws and codes’ (Schwarz 2018a, 158), which can arguably also make ethics more accessible as a programmable tool for AI applications.

Another argument in favour of AWS as providing ethically superior outcomes holds that the use of AWS would be ethically imperative if it contributed to protecting their users’ own combatants (see Strawser 2010). Armed drones are a prime example of a weapons system that has reduced the role of personnel on the battlefield. But, while this removes combatants from harm, the risk has been transferred to civilians, because using remote-controlled technology could mean that the use of force is less precise or actually takes place where it would not have done if this technology were not available (see ICRC 2014, 18). While we can only estimate the numbers of civilian deaths due to the lack of transparency surrounding drone warfare, these are significant: The Bureau of Investigative Journalism (2021) posits that since 2010, between 910 and 2,200 civilians were killed by drones, among them between 283 and 454 children. As Schwarz (2018b, 286) argues, ‘risk transfer from combatants to civilians in warfare is something that has been clearly observed in the US drone war. The foundation for this logic rests on the assertion that military lives matter just as much, if indeed not more, than civilian lives in warfare’.

Shannon Vallor offers an alternative take on AWS via the perspective of virtue ethics. This differs from deontological (rule-based, such as IHL) and utilitarian (such as an increase in or prevention of human suffering) ethical arguments that arguably dominate the discourse about ethics and AWS. 
Virtue ethics describe ‘a way of thinking about the good life as achievable through specific moral traits and capacities that humans can actively cultivate in themselves’ (Vallor 2016, 10). This leads Vallor to raise ‘important ethical questions about robots that only virtue ethics readily allows us to pose: How are advances in robotics shaping human habits, skills, and traits of character for the better, or for worse? How can the odds of our flourishing with robotic technologies be increased by collectively investing in the global cultivation of the technomoral virtues?
How can these virtues help us improve upon the robotic designs and practices already emerging?’ (Vallor 2016, 211). Virtue ethics offers an interesting perspective on the impact that AWS can have on users: ‘ethicists are starting to ask not just how robotic systems can be responsibly used by humans as tools of war, but also how robots themselves will alter, cooperate, or compete with the agency of human soldiers’ (Vallor 2016, 212). Here, the central question is (Vallor 2016, 214): ‘[h]ow might the development of autonomous lethal robots impact the ability of human soldiers and officers to live nobly, wisely, and well – to live lives that fulfill the aspirations to courage and selfless service that military personnel pledge?’. Vallor (2016, 217) argues that AWS ultimately also threaten the virtues of the military profession, which are predicated on the central norm of selfless service that manifests in ‘martial virtues’ such as courage, loyalty, or honor, while acting in the framework of IHL.

In summary, research on ethics and AWS provides important contributions to the overall question our book seeks to answer – what kind of normative change can AWS induce? This debate draws our attention to the fundamental, ethical consequences of deploying AWS for those at the receiving end as well as for those actors ‘using’ AWS. But it does not systematically address the extent to which new standards of the ‘appropriate’ use of force emerge in this context. Rather, the central question for ethical consideration is how our understanding and standards of protecting and upholding human dignity, human life, or ethical virtues change with the increasing importance of autonomous technologies or AI-driven decision-making in the military.

Beyond the specific context of AWS, there is also a much broader ethical debate about the role of AI, algorithms, and machine learning for human decision-making.
The challenge of ‘algorithmic regulation’ (Yeung and Lodge 2019) concerns both government by and government through algorithms, as well as the regulation of algorithms in the public sphere. Yeung (2019, 22), for instance, identifies three normative dimensions concerning algorithmic decision-making systems: processes, outputs, and predicting and personalising services for individuals. Ethical considerations and contestations of increasing machine autonomy in decision-making focus on a similar range of issues in terms of input, processes, and outputs of human–machine relations. It is important to take these
three dimensions to be interrelated but also to consider whether there is a hierarchy in their normative value. For example, an unethical process could compromise an arguably ethical outcome. In other words, arguments about possibly superior algorithmic decision-making in terms of accuracy, precision, or reliability would still be overridden by violations of ‘human dignity’ as a superior ethical category.

In our view, research on algorithmic decision-making motivated by ethical perspectives centres primarily on procedural-consequentialist ‘how?’ questions. Arguments brought forward here highlight the lack or opacity of accountability and responsibility, as well as the impossibility of appealing algorithmic decisions. In this regard, research on the ethical dimension of non-military AI has increased significantly in recent years. For example, the development of autonomous driving solutions raises a set of considerations symptomatic of the problem of robotic decision-making. Loh and Loh (2017, 37), for instance, provide a detailed discussion of the problem of responsibility, highlighting that responsibility is a relational concept consisting of five related elements: who is responsible; what x is responsible for; to whom x is responsible; the addressee defining the existence of responsibility in context; and the conditions under which x is responsible. They argue that, ‘as communication skills can vary, to say that someone is more or less able to act in a specific situation, more or less autonomous, more or less reasonable, and so on, it follows that responsibility itself must be attributed gradually according to the aforementioned prerequisites … assigning responsibility is not a binary question of “all or nothing” but one of degree’ (Loh and Loh 2017, 38).
This concept of distributed responsibility, which also implies a potential hierarchy of more or less responsible agents (human and non-human), underlines the difficulty of dealing with the complexity of human–machine interaction with regard to ethics. Even if we accept the moral and operational superiority of humans in distributed responsibility, the ethical problems remain complex. Bhargava and Kim (2017, 6) outline ‘the problem of moral uncertainty’ as follows: ‘how should autonomous vehicles be programmed to act when the person who has the authority to choose the ethics of the autonomous vehicle is under moral uncertainty?’. This perspective contests that a technological fix can easily solve ethical issues that stem in particular from the important argument that ‘robots
are not agents’ (Talbot, Jenkins, and Purves 2017, 258) and that ethical agency hence always depends on humans, who are, however, not necessarily capable of providing an adequate solution. The essence of the ethical problem of autonomous driving is predicated on the possibility that AI causes harm to humans. This also entails, for example, the question of whom to protect during crash scenarios if different humans are involved as drivers or pedestrians, and to what extent this can be a life and death decision in ways not dissimilar to the ethical challenges raised by AWS. Millar (2017, 20–34) highlights that ‘according to Lin (2014), ethics settings in autonomous vehicles could reasonably be interpreted, both legally and ethically, as targeting algorithms …. Because collision management ethics settings involve decisions that determine collision outcomes well in advance of the collision, they appear very different from snap decisions made by drivers in the moments leading up to the collision. Seen this way, an ethics setting could be interpreted as a premeditated harm perpetrated by whoever set it, which could result in that person being held ethically and legally responsible for the particular outcome of the collision’.

The question of responsibility and accountability is exacerbated by the increasing sophistication of machine learning in terms of self-learning systems, which already play a role in surveillance and targeting processes. The systems in question are able to perform ‘specific forms of task intelligence’ and ‘in many cases they not only compete with but handily outperform human agents’ (Vallor and Bekey 2017, 340, emphasis in original). Considerations of ethical questions raised by autonomous systems in both military and civilian contexts often show that applied ethics run the risk of becoming colonised by the logic of technological ‘solutionism’ (Morozov 2014).
As Schwarz (2018a, 165) notes, applied ethics ‘seeks to use principles external to the realm it deals with in order to solve internal problems. This turns ethics into a matter of problem-solving, for which a certain level of expertise is required to correctly identify and apply relevant external principles for a correct solution’. This is an important argument that highlights the relevance of a type of situational ethics or, in other words, of flexibility and interpretative capacity. The notion of a human–machine assemblage as promoted by Schippers (2020) and Schwarz (2018a) could point in the direction of such an understanding, in that ethics overall cannot
be programmed because judgements require a form of complex and sophisticated reasoning not yet delivered by AI. In a sense, we also adopt this line of thought in our view on norms as flexible and emerging in practice in contrast to being fixed and decided a priori. While we will not further expand on the ethics of AI, the issues outlined above point to important broader debates and considerations currently taking place in academia, which often address similar problems as scholarship on AWS. We will focus on the contribution of this specific body of research in the following.

gaps: what is missing from the debate?

Contributions to the study of AWS are rich and growing, but they currently come with two important gaps. First, debates on the legality of AWS and ethical challenges attached to their usage both operate under a fixed, often predefined understanding of what is right and appropriate. However, we arguably require an analytical perspective that accommodates the constitutive quality of AWS as emerging technologies. In the following chapters of the book, we therefore present viewpoints on the flexible constitutions of appropriateness that come along with considering how AWS work through practices. As is the case with all developing weapons technologies, there is no specific regulation in international law regarding the use of AWS. Further, while remote-controlled weapons such as drones have been used extensively by states such as the United States, their usage also remains largely unregulated. States continue to operate drones in contested areas of international law: drone warfare arguably violates the principles of distinction and proportionality, but is not covered by specific legal regulations (Kaag and Kreps 2014). Currently, the deployment of AWS remains legally permissible, with the added uncertainty that their technological autonomy is unaccounted for. The most problematic aspect of the legal perspective on AWS is due to their elusive character. Any type of regulation requires looking towards the future in defining technologies that are constantly evolving. In other words, the technical complexity, the dual-use character, and the variety of deployment scenarios make it challenging to formulate legal restrictions on technological developments: ‘History works against preventive norm-making. States usually react,
as technological developments usually outpace political agreements to tackle future potential problems’ (Garcia 2016, 101). Yet, standards of appropriateness regarding the usage of AWS are already emerging. The increasing recourse to drones in the United States’ approach to the use of force has led to the emergence of practices in the sense of routine ways of ‘how things are done’ (see Warren and Bode 2015). These are far from meeting the requirements of customary international law: they do not indicate ‘dense’ (behavioural or verbal) state practice, nor do they exhibit opinio juris, ‘a belief that such practice is required … or allowed as a matter of law’ (ICRC 2010). Yet, these practices create a status quo that makes it more difficult to challenge the further proliferation and usage of drones. This means that the emergence of customary law is even more restrained, despite the existence of practical precedents. The use and spread of the ‘unwilling or unable’ formula by the United States as a ‘test’ to inform drone targeting decisions is a case in point (Bode 2017a). Accordingly, discussing this issue in the context of (customary) international law does not provide an adequate framework because it does not allow researchers to capture emerging standards of appropriateness attached to AWS. In contrast to considering how norms prevent or regulate, this book therefore studies how norms may emerge outside of deliberative forums and the framework of international law making.

Second, the incremental development of weapons systems with automated and autonomous features has impeded deep public discourse on this issue. There has been media attention, and civil society organisations or campaigns such as Article 36, the Campaign to Stop Killer Robots, and the International Committee for Robot Arms Control (ICRAC) seek to raise the issue’s profile.
But the wider political discourse is not particularly concerned with the most fundamental questions and realities of AWS that are significant for the quality of future human–machine interaction. Technological advances are typically incremental and come with a low political profile. Hence, public acceptance, which plays an important part in debating and creating dominant understandings of appropriateness, is lacking. Further, media coverage on AWS taps into fictional representations of AI, typically envisioning smart, humanoid killing machines (such as the T-900 from the movie Terminator) when it would be more appropriate to lead with real-life images (figure 1.3).

Figure 1.3 Humanoid killer robots (top) versus X-47B demonstrator with autonomous features (bottom).

Online editors have accompanied short pieces by the authors, for example, with sensationalist images of a robotic scorpion (Bode 2017b) or a humanoid robot hand about to press a red button (Bode and Huelss 2017). Clearly, such images are preloaded with meaning, and (fictional) narratives about AI and ‘killer robots’ shape the public imagination on AWS (Odell and McCarthy 2017). Such speculations about future technological trajectories, including those centring around the ‘singularity’, are rife in public debate but are
undesirable, as they are often out of kilter with current capacities of weaponised AI (T. Walsh 2017b). This science fiction imagery serves to distance debate from what we should worry about in reality.

Building on the role of the public, the lack of discussion on developing AWS should also be seen in the context of overall lowering use-of-force standards when countering terrorism (see Bode 2016). Military efficiency and effectiveness, the main arguments for why drone warfare is appropriate, have become a source of public legitimacy (McLean 2014; M.W. Lewis 2013). The promise of ‘surgical’ strikes and the protection of US troops has turned drones into the most appropriate security instrument to counter terrorism abroad. AWS are ‘considered especially suitable for casualty-averse risk-transfer war’ (Sauer and Schörnig 2012, 375). This points to the important role AWS may play in democratic systems because they make the use of force appear more legitimate. In the case of drones, their broad acceptance across the military and political–public sphere marginalises their contested ethical and legal roles. A poll conducted in 2013 showed that 75 per cent of American voters approved drone strikes ‘on people and other targets deemed a threat to the US’ (Fairleigh Dickinson University 2013). The constitutive role that this security technology plays in international norms in terms of setting precedents and hence new standards of the appropriate use of force is, however, completely out of sight.

To conclude, as the conventional frameworks of law and ethics have difficulties in accommodating flexibility and change for structural reasons, we have demonstrated that it is necessary to consider other types of norms that are not accounted for in research on AWS.
A purely regulative perspective on AWS risks losing track of current developments and their future implications, in particular their possible role in shaping standards of appropriateness and our understanding of the ‘new normal’ in warfare.


New Technologies of Warfare: Emergence and Regulation

The emergence of new weapons has always shaped the conduct of warfare and repeatedly given one party the edge over an adversary, most often in rather asymmetrical and limited campaigns. An example from the nineteenth century is the Battle of Dybbøl in 1864, the decisive battle of the Second Schleswig War between Denmark and Prussia. The Prussian army was equipped with relatively novel Dreyse needle-guns, breech-loading rifles that could be loaded in a prone position, while the Danish soldiers were still using muzzle-loaders that required the shooter to stand up. The Dreyse gun was also capable of far more rapid fire than a muzzle-loader. The advantage of the Dreyse rifle also contributed to the Prussian victory in the Austro-Prussian war of 1866, resulting in the hegemony of Prussia in the German-speaking area.

While introducing a new rifle model seems a rather small step in the context of military innovation, particularly from a twenty-first century perspective, the political implications of the wars mentioned above were far-reaching, as they transformed the political landscape of alliances and borders, with new states emerging and others vanishing. The political consequences of weapons that create or increase asymmetry in warfare and the use of force can be far-reaching. In this regard, this book is interested in studying not only how adding autonomous features to the critical functions of weapons systems influences use-of-force practices but also how these come with an understanding of what ‘appropriate’ use of force is, thereby leading to novel ways of warfare becoming considered legitimate. While the introduction of the Dreyse rifle gave Prussian troops a technical,
strategic advantage, it did not trigger novel questions about whether the use of this new weapon was legal or legitimate. Weapons innovations throughout the twentieth century, however, were far more controversial and had a very different impact. This chapter provides a historical account of how new technologies of warfare emerged and how practices of using such weapons systems have influenced perspectives on the ‘appropriate’ use of force. The chapter will also outline how and whether new technologies of warfare were regulated to provide an understanding of the use, regulation, and impact of weapons systems, which can give us important insights into the powerful role of new weapons systems in shaping norms, as standards of appropriateness, beyond legal control and regulation. We start with a historical overview of selected weapons systems introduced in the past, focusing on systems that were of particular significance or could be defined as ‘game changers’ when it comes to the character of warfare and consequently wider implications for norm emergence. This chapter chiefly aims to discuss whether and how these weapons systems were regulated, how international legal norms were established, and how use-of-force practices were related to these norms – potentially creating standards of appropriateness that surpassed the limits of legal prohibition. We consider four weapons systems that played an important part in the emergence of novel, regulative norms in the twentieth century: submarines, chemical weapons, nuclear weapons, and blinding lasers. The individual sections in this chapter discuss the development of these weapons systems, whether and how they were used in practice, and how the international community reacted to their emergence. 
In this way, this chapter addresses cases of ex ante and ex post regulation: while submarines, nuclear and chemical weapons have been regulated or banned (comprehensively) only after their usage in warfare, blinding lasers were preventively banned before they had ever been used in combat operations. This discussion will highlight the different effects that use-of-force practices centred on deploying particular types of weapons can have on the emergence of norms. As our review of norms research in chapter 4 will show, the discipline of IR has diversified its perspective on norms substantially in the last two decades. However, there is still very limited work on the genuine norm-making character of practices, particularly if we think of mundane and non-verbalised ways of doing things and how micro-practices can feed back into emerging norms.

Current research on norm emergence concentrates almost exclusively on fundamental norms with regulative and constitutive qualities (see, for example, Tannenwald 1999; Finnemore and Sikkink 1998a) as well as on studies of how such norms emerge and are contested in deliberative processes (Rosert et al. 2013; Jefferson 2014). Yet, the comprehensive legal regulation and normative stigmatisation of chemical and nuclear weapons, for example, only emerged after these weapons had been used in the First and Second World Wars, respectively. This empirical fact stands in contrast to the analytical focus of existing studies on public deliberations after use. However, technological advances below the radar of international legal regulation might shape emerging norms long before any debates have unfolded. The transnational debate on LAWS at the UN is a rather unusual instance of discussing an ambiguous weapons system in a formalised setting even while its definition remains contested.

Our main argument – that norms emerge in the context of practices – opens up an entirely new perspective on studying norm emergence and contributes important viewpoints on AWS. As we have seen in chapter 1, critics of a pre-emptive ban argue that the emerging nature of LAWS as a technology does not allow governmental representatives to deliver a concise and comprehensive definition of such systems, seen by some as a necessary requirement for negotiating binding legal commitments. In this regard, it is therefore insightful to discuss how and why particular novel weapons technologies – also considered as key technologies of significant strategic value in their time – were regulated and prohibited. The history of warfare provides many examples of weapons technologies with a profound impact on military success when they were first introduced and that, as a consequence, proliferated (see Roland 2016).
Examples since ancient times include the chariot, mounted knights, the longbow and crossbow, gunpowder, cannons, tanks, missiles, and aircraft. Attempts to limit the use of specific weapons are not novel phenomena, and social norms, such as understandings of chivalry, were put forward to argue that killing from a distance, e.g. using a firearm, was unchivalrous conduct. However, only the twentieth century has seen the regulation and prohibition of certain weapons on a comprehensive, international scale, trying to create universally accepted, formalised rules and also introducing mechanisms to sanction violations. This chapter does not aim to deliver a detailed, historical account of regulating different weapons systems, which would only replicate
many extensive presentations in the relevant literature. Instead, we are interested in investigating the interplay of formal, deliberative norm-setting processes and evolving use-of-force practices in order to discuss why the legal response to a political-normative problem – the usage of weapons that conflict with basic humanitarian norms – may not be sufficient to prohibit the emergence of diverging standards of appropriateness. The chapter will also highlight the differences between the four cases (submarines, chemical weapons, nuclear weapons, and blinding lasers) and AWS, which represent weapons of unparalleled complexity in IR. This complexity consists not only in the technological sophistication of what the weaponisation of AI means, but also in the multi-faceted questions of what AWS are and what is acceptable in the context of their emergence. The novelty of AWS as an unprecedentedly broad weapons ‘category’ also necessitates developing different theoretical and practical approaches in IR, especially when it comes to answering our central research question: how can AWS change our understanding of what the appropriate use of force is?

submarine warfare and the quest of legality and legitimacy

Although the era of submarine warfare had commenced during the American Civil War with the deployment, successful attack, and immediate loss of the H.L. Hunley, submarines only emerged as a weapons platform of military significance in the late nineteenth century. Considerations of a possible regulation or prohibition of submarine warfare entered the international agenda at the First Peace Conference in The Hague in 1899. A Russian proposal to codify abstention from building submarines, provided it was adopted unanimously, was discussed, but this attempt to introduce a specific case of arms control failed due to a lack of consensus. Although a growing number of states acquired submarines in the following years, neither the Second Hague Peace Conference in 1907 nor the London Declaration concerning the Laws of Naval War (London Declaration) addressed the issue of submarine warfare explicitly (Hays Parks 2000, 342–4). In 1914, the United Kingdom was leading the acquisition of submarines with 73 boats, followed by France (55 boats), the United States (38 boats), and Germany (35 boats) (Delgado 2011, 123). Thus, although the development of submarines had quickly expanded, they were first used on a large scale

Table 2.1 Articles 48, 49, 50 of the 1909 London Declaration.

Article 48. A neutral vessel, which has been captured, may not be destroyed by the captor; she must be taken into such port as is proper for the determination there of all questions concerning the validity of the capture.

Article 49. As an exception, a neutral vessel which has been captured by a belligerent warship, and which would be liable to condemnation, may be destroyed if the observance of Article 48 would involve danger to the safety of the warship or to the success of the operations in which she is engaged at the time.

Article 50. Before the vessel is destroyed all persons on board must be placed in safety, and all the ship’s papers and other documents, which the parties interested consider relevant for the purpose of deciding on the validity of the capture must be taken on board the warship.

Source: ICRC (2018b).

during the First World War without either explicit, specific regulation or having been subjected to extensive practical experience in combat theatres. Notwithstanding the absence of specific regulation, the legal norms established by the London Declaration were also applicable to submarines, which, in the absence of more concrete rules, were treated as surface warships. Therefore, submarines were expected to abide by certain rules, particularly when engaging non-combatant or neutral vessels. The London Declaration was signed in 1909 by all major naval powers of the pre-war period: Austria-Hungary, France, Germany, Italy, Japan, the Netherlands, the Russian Empire, Spain, the United Kingdom, and the United States. However, none of these states ratified the treaty, which therefore never entered into force. The London Declaration is hence only important to show that a deliberative understanding of appropriateness in terms of submarine warfare emerged slowly. The most important rules were Articles 48–50 (ICRC 2018d; see table 2.1). As Gilliland (1985, 977) argues, however, the London Declaration was imprecise in its legal stipulations, inter alia because it ‘made no reference to armed or unarmed belligerent merchants operating in direct support of the war effort or on a purely commercial mission’ (emphasis in original).

With no clear legal regulation of submarines, let alone a prohibition, they played a more important military-strategic role during the First World War than previously anticipated. In particular, the political–legal implications of ‘unrestricted submarine warfare’ as practised by the German Kaiserliche Marine were immense. Not only were the extensive deployment of
submarines and US casualties resulting from submarine warfare a major reason for the United States entering the war in 1917, but they also served to underline the importance of regulating and restricting submarine warfare as a task for the international community. The historical case of unrestricted submarine warfare in the First World War is covered extensively in the literature (Delgado 2011, 129–41; Steffen 2004; McCaig 2013). At this point, it suffices to summarise briefly that the concept was introduced by Germany in 1915 when it declared the area around the British Isles a war zone in which merchant ships from neutral countries would also be attacked. The sinking of the British ocean liner Lusitania in 1915 – causing primarily British, Canadian, and US casualties – sparked major public outrage among the Western Allied Powers. This also led Germany to intermittently interrupt unrestricted submarine warfare, chiefly out of concern that this practice would draw the United States into the war. However, Germany resumed the practice in 1917, when its economic and military position had further weakened (not least due to the British naval blockade of Germany). Despite the highly controversial character of unrestricted submarine warfare, as evidenced, for example, by the portrayal of the Lusitania sinking as a barbaric act,1 the increasing lack of military and civilian supplies on the German side combined with the opinion that the United Kingdom could be decisively beaten contributed to the decision to broaden the range of ‘legitimate’ targets. The initial success of submarines in sinking an extensive tonnage of Allied shipping seemed to support the German change of strategy. However, as noted, unrestricted submarine warfare was a major contributing factor in drawing the United States into the First World War, an event eventually decisive for the Central Powers’ defeat.
During the interwar period, a series of conferences were held in an attempt to regulate and delimit armament. While these efforts were, in particular, a political response to German use-of-force practices during the First World War, they also served budgetary purposes after the economic and financial downfall associated with the Great War. On these occasions, states considered the legal liability and prosecution of those responsible for unrestricted submarine warfare, as well as the status of merchant and armed merchant vessels (a common occurrence during the First World War), and belligerent and neutral vessels. However, as Hays Parks (2000, 345) comments, states did not consider two central questions concerning
the ‘appropriate’ use of force: ‘(a) when does an enemy merchant ship forfeit its non-combatant status, and (b) what rules should apply to submarines in light of the changes brought about by (a)?’. The first major international conference to specifically address submarine warfare was the Washington Naval Conference (International Conference on Naval Limitation, 1921–22).2 While the United Kingdom went as far as proposing the complete abolition of submarines, and significant limitations on numbers of submarines were discussed, the participating states did not formalise an agreement. In the end, the major outcome of this conference, the Five-Power Naval Limitation Treaty, signed by France, the United Kingdom, Italy, Japan, and the United States on 6 February 1922, limited the practice of submarine warfare considerably (see table 2.2). But it failed to account for key characteristics of submarines as new technologies of naval warfare, such as the central role of the surprise and stealth attack by submerged submarines, which could be seen as either a decisive tactical advantage or a fundamental mechanism of submarines’ self-protection. In contrast, the treaty effectively ruled out the legality and normative legitimacy of targeting merchant vessels by submarines. Table 2.2 outlines the explicit provisions of a treaty regulating the use of submarines, discussed and signed by all powers present following an initiative by Elihu Root, a member of the US delegation. While the treaty was signed by all five powers, it failed to be ratified due to France’s inaction. Parts of the French public and political opposition understood the treaty as an unfair limitation of French naval power because it included a clause on capital-ship ratio that had been negotiated without French participation (Birn 1970, 301). However, it was still an important precursor to the regulations in the 1930 London Naval Treaty and contained detailed, albeit often imprecise, considerations of submarine warfare.
New Technologies of Warfare: Emergence and Regulation

Table 2.2 Treaty relating to the Use of Submarines and Noxious Gases in Warfare, Washington Naval Conference 1922.
Article 1. The Signatory Powers declare that among the rules adopted by civilised nations for the protection of the lives of neutrals and noncombatants at sea in time of war, the following are to be deemed an established part of international law:
(1) A merchant vessel must be ordered to submit to visit and search to determine its character before it can be seized. A merchant vessel must not be attacked unless it refuses to submit to visit and search after warning, or to proceed as directed after seizure. A merchant vessel must not be destroyed unless the crew and passengers have been first placed in safety.
(2) Belligerent submarines are not under any circumstances exempt from the universal rules above stated; and if a submarine cannot capture a merchant vessel in conformity with these rules the existing law of nations requires it to desist from attack and from seizure and to permit the merchant vessel to proceed unmolested.
Article 2. The Signatory Powers invite all other civilised Powers to express their assent to the foregoing statement of established law so that there may be a clear public understanding throughout the world of the standards of conduct by which the public opinion of the world is to pass judgment upon future belligerents.
Article 3. The Signatory Powers, desiring to ensure the enforcement of the humane rules of existing law declared by them with respect to attacks upon and the seizure and destruction of merchant ships, further declare that any person in the service of any Power who shall violate any of those rules, whether or not such person is under orders of a governmental superior, shall be deemed to have violated the laws of war and shall be liable to trial and punishment as if for an act of piracy and may be brought to trial before the civil or military authorities of any Power within the jurisdiction of which he may be found.
Article 4. The Signatory Powers recognize the practical impossibility of using submarines as commerce destroyers without violating, as they were violated in the recent war of 1914–1918, the requirements universally accepted by civilized nations for the protection of the lives of neutrals and noncombatants, and to the end that the prohibition of the use of submarines as commerce destroyers shall be universally accepted as a part of the law of nations, they now accept that prohibition as henceforth binding as between themselves and they invite all other nations to adhere thereto.
Source: Washington Treaty (2018).

A somewhat more successful attempt to establish legal norms regulating submarine warfare before the Second World War came at the 1930 London Naval Conference, which convened the five powers of the 1922 Treaty but again excluded Germany. The 1930 London Treaty largely reproduced the earlier attempts to codify rules for the use of submarines made at the 1922 Washington Naval Conference, without surpassing the 1922 Treaty qualitatively. Signed on 22 April 1930 by only the United Kingdom, Japan, and the United States, owing to differing views expressed by France and Italy, it also had only a limited legal duration, until 1936. The relevant article, Article 22, is shown in table 2.3. The Second London Naval Conference, held in 1936, likewise failed to replace the 1930 London Naval Treaty with a more comprehensive regulation of submarine warfare accepted by a larger group of states. By then, Japan had announced its withdrawal from the conference. At the same time, the 1936 conference at least adopted Article 22 of the London Treaty in a ‘procès-verbal’, a formal written record of agreement confirming the rules set out in Part IV of the 1930 treaty. Article 22 of 1930 therefore did not expire.


Autonomous Weapons Systems and International Norms

Table 2.3 Limitation and Reduction of Naval Armament (London Naval Treaty), London Naval Conference 1930.
The following are accepted as established rules of International Law:
(1) In their action with regard to merchant ships, submarines must conform to the rules of International Law to which surface vessels are subject.
(2) In particular, except in the case of persistent refusal to stop on being duly summoned, or of active resistance to visit or search, a warship, whether surface vessel or submarine, may not sink or render incapable of navigation a merchant vessel without having first placed passengers, crew and ship’s papers in a place of safety. For this purpose the ship’s boats are not regarded as a place of safety unless the safety of the passengers and crew is assured, in the existing sea and weather conditions, by the proximity of land, or the presence of another vessel, which is in a position to take them on board.
Source: London Naval Treaty (1930, Article 22).

The 1936 Protocol, as a confirmation of the 1930 Article 22, was and remains the only explicit regulation of submarine warfare, both before the Second World War and thereafter. Although only France, the United Kingdom, and the United States signed the treaty in 1936, forty-eight states, including Germany, Japan, and the Soviet Union, accepted the submarine rules before the outbreak of the Second World War by affirming the procès-verbal, which reproduced those rules in unabridged, word-for-word form (Legro 1997, 40; Miller 1980, 267). The reasons for Germany’s willingness to conclude agreements, particularly with the United Kingdom, in the interwar period can be explained as part of Hitler’s strategy and general position towards the United Kingdom until the late 1930s: ‘Hitler wanted to re-establish the Deutsches Reich in its “rightful” position of power. This was the non-negotiable core of Hitler’s concept. For this he wanted Britain as an ally. The 1935 Anglo-German naval agreement falls in the “mit-England [with England]” phase and had a special function in Hitler’s plans. It was the bridgehead for further steps with Britain on the common road towards the nation’s rightful position in the world’ (Hoerber 2009, 173). In this regard, Germany initially followed the rules of the 1936 Protocol for political-strategic reasons in the hope of coming to an early peace agreement with the United Kingdom, but resorted once again to unrestricted submarine warfare starting in 1941. In the course of the war, Germany used submarines extensively and relatively successfully, sinking 2,882 merchant vessels (Thiel 2016, 97).



That other countries also deployed submarines in similar ways received less political and public attention, both during the war and in the post-war period. The United Kingdom initially held an ambivalent position with regard to the existing rules when it ordered attacks on all vessels within defined ‘sink at sight’ zones in the North Sea, the Mediterranean, and the Baltic Sea. While Japan’s focus was on attacking navy ships for operational reasons, it also sank merchant ships and clearly did not follow the submarine rules established in the interwar-era treaties. The case of the United States is equally noteworthy: in response to the Japanese attack on Pearl Harbor on 7 December 1941, the United States issued the order to ‘execute against Japan unrestricted air and submarine warfare’, resulting in significant US assaults on different types of vessels identified as having a link to Japan all over the Pacific until 1945 (Holwitt 2013; Delgado 2011). Overall, the institutionalised interwar legal norms governing the use of submarines largely failed to have a guiding effect on the use of force during the Second World War. The apparent erosion, or simply practical irrelevance, of the interwar submarine rules was also confirmed by how they were discussed at the Nuremberg trials: violations of these norms formed part of the initial prosecution against Grand Admiral Erich Raeder and Fleet Admiral Karl Dönitz, who were inter alia accused of having deliberately breached the international rules of submarine warfare. In response, the defence argued that the British ‘reprisal doctrine’, the practice of compromising the status of merchant vessels, the technological development of submarines and anti-submarine measures, and the US adoption of unrestricted submarine warfare in the Pacific theatre had made following the interwar submarine rules impossible and uncommon (Burns 1971, 60–1).
In its judgement, the Nuremberg tribunal ruled that ‘[i]n view of all the facts proved and in particular of an order of the British Admiralty announced on the 8th May, 1940, according to which all vessels should be sunk at sight in the Skagerrak, and the answers to interrogatories by Admiral Nimitz stating that unrestricted submarine warfare was carried on in the Pacific Ocean by the United States from the first day that nation entered the war, the sentence of Doenitz is not assessed on the ground of his breaches of the international law of submarine warfare’ (Yale Law School 2008). While the legal norms were therefore weakened or even discarded by the ruling at Nuremberg, their initial consideration at the trial also showed that states regarded the provisions of the 1930 and 1936 London Treaties regulating submarine warfare as the guiding legal principles and that these principles had established some, albeit limited, normative substance (see Panke and Petersohn 2012, 728). Notably, however, US use-of-force practices associated with unrestricted submarine warfare, which resulted in the sinking of 1,113 merchant ships (Delgado 2011, 187) and included actions that might have qualified as war crimes, were not considered from a legal viewpoint in the aftermath of the Second World War. In particular, these actions include the attack on Japanese merchant and transport vessels and the killing of survivors (including British-Indian prisoners of war) by the crew of USS Wahoo under the command of Dudley ‘Mush’ Morton in 1943 (Holwitt 2013, 171–5; Sturma 2009). At the same time, German U-boat commander Heinz-Wilhelm Eck was prosecuted in the ‘Peleus Trial’ before a British military court for sinking the Greek merchant vessel Peleus in 1944 and for ordering the shooting at rafts, ostensibly on the assumption that survivors had abandoned them, killing almost all of the survivors. The rationale for this action, as argued during the trial, was to eliminate all traces of the vessel to protect the U-boat in this specific situation, which is an argument of operational necessity. While the actions and the line of argument presented by Eck as a defendant were very similar to those of Morton, Eck and two other officers were sentenced for war crimes and executed in 1945. The Judge Advocate in the Eck case argued that ‘[i]t was a fundamental usage of war that the killing of unarmed enemies was forbidden as a result of the experience of civilised nations through many centuries. To fire so as to kill helpless survivors of a torpedoed ship was a grave breach of the law of nations. The right to punish persons who broke such rules of war had clearly been recognised for many years’ (ICRC 2018a).
The judgement referred to basic laws of war regulating the protection of non-combatants and was not directly linked to the fact that a submarine was involved in the incident. This underlined the potential incompatibility of conducting submarine warfare in line with established legal norms for war at sea, particularly with regard to the requirement of evacuating a merchant vessel’s crew and passengers. The actions ordered by Morton and Eck represented the extreme pole of the logic of unrestricted submarine warfare, which was in itself a violation of the 1930 and 1936 London Naval Treaties.



However, the post-Second World War trials did not consider how to reconcile formal law and practice in light of the failure to prohibit submarine warfare, and they failed to define warships as the only legitimate and legal targets. While rules existed that were often disregarded during the First and Second World Wars, no clear, precise norm emerged that banned submarines, banned the sinking of merchant vessels under all circumstances, or treated the use of submarines as taboo. Ultimately, it seems that the practices of killing associated with submarines were not considered very different from how other weapons systems inflicted death and destruction. Vague moral norms and more specific legal norms on how to use force and protect human life existed independently of submarines, and these, too, were often violated. Importantly, submarines did not become a weapons system that affected public opinion and political debate in the ways that chemical or nuclear weapons did, and public perspectives on submarines seem to have become more permissive after the First World War. In addition, after the Second World War, the strategic role of submarines and associated practices for the major powers gravitated towards ballistic-missile-carrying submarines. The extensive sinking of vessels by submarines has never recurred in subsequent armed conflicts; unresolved questions with regard to legal norms have therefore been of little concern. In this regard, the case of submarines is not dissimilar to that of aerial warfare and the practice of indiscriminate carpet-bombing of civilian targets during the Second World War. Thus, somewhat counterintuitively, submarines were effectively normalised, legally and publicly, and ceased to be a controversial weapons system, not least because technological advancement changed their strategic role, turning them, for example, into mobile launch platforms for nuclear missiles.

chemical weapons (poisonous gas)

Chemical weapons (or ‘poisonous gas’ in the terminology of the time when they were first introduced) were the subject of discussion at the turn of the twentieth century and were reviewed at major conferences. This makes their case similar to that of submarines, but also different in the sense that they were considered more explicitly. The industrial production of gas shells only started in the late nineteenth century, thereby enabling this novel weapons technology to become (militarily) significant. However, the use and stigmatisation of poison has a much longer cultural trajectory in society and warfare, and a constitutive norm against its use was already deeply anchored (see Jefferson 2014). The first comprehensive consideration and regulation of chemical weapons took place during the 1899 Peace Conference at The Hague. Here, state representatives adopted three declarations, the second of which concerned poisonous gases (Declaration [IV,2] concerning Asphyxiating Gases, The Hague, 29 July 1899). The Declaration ‘prohibiting the use of projectiles, the only object of which is the diffusion of asphyxiating or deleterious gases’ was signed and ratified by most major powers present at this First Hague Conference, such as Austria-Hungary, France, Germany, Italy, Japan, Spain, and Russia. The United Kingdom only signed the declaration at the Second Hague Conference in 1907, and the United States abstained on both occasions (Pearce Higgins 1909, 493; ICRC 2018c). While the phrasing of the 1899 Hague Declaration on ‘asphyxiating gases’ was relatively clear in setting a new legal norm, it did not rule out the use of gas against non-signatories (see table 2.4).

Table 2.4 Declaration (IV,2) concerning Asphyxiating Gases, 1899, The Hague Peace Conference.
The Contracting Powers agree to abstain from the use of projectiles the sole object of which is the diffusion of asphyxiating or deleterious gases.
The present Declaration is only binding on the Contracting Powers in the case of a war between two or more of them.
It shall cease to be binding from the time when, in a war between the Contracting Powers, one of the belligerents shall be joined by a non-Contracting Power.
Source: ICRC (2018a).

In fact, practices of warfare during the First World War violated this norm in fundamental ways, as is well known and widely discussed in the relevant literature (see Coleman 2005; Jefferson 2014; Vilches, Alburquerque, and Ramirez-Tagle 2016; Tucker 2007; Jones 2014). France, Germany, and the United Kingdom, all signatories to the 1899 Hague Declaration, were the nations that used the most chemical weapons during the First World War. The traumatic experience of chemical warfare in the First World War – the term ‘gas hysteria’ (Jones 2014, 356) was used to describe the effect of chemical weapons on troops – had a strong influence on the public perception of poisonous gas as an inappropriate method of warfare and convinced many governments of the necessity of regulating or prohibiting chemical weapons in an internationally binding treaty. As Jones shows, the casualties caused by ‘gas’ in the First World War were low, in terms of both overall numbers and the specific weapons categories. However, ‘gas remained among the most feared weapons of the war and continued to exercise a powerful hold over the popular imagination such that anti-war campaigners focused on its use to mobilize support for their cause’ (Jones 2014, 357). A first step towards regulating chemical weapons was the 1919 Treaty of Peace with Germany (Treaty of Versailles), which confirmed the norm against poisonous gas established at The Hague. Article 171 of the Treaty of Versailles declared that ‘the use of asphyxiating, poisonous or other gases and analogous liquids, materials or devices being prohibited, their manufacture and importation are strictly forbidden in Germany’ (US Library of Congress 2018, 119). Obviously, this norm was severely circumscribed, as the entire Treaty was explicitly directed at Germany. However, only three years later, the aforementioned 1922 Washington Naval Conference included more detailed stipulations in the Treaty on the Use of Submarines and Noxious Gases in Warfare (see table 2.5).

Table 2.5 Washington Treaty 1922, Article 5.
The use in war of asphyxiating, poisonous or other gases, and all analogous liquids, materials or devices, having been justly condemned by the general opinion of the civilized world and a prohibition of such having been declared in treaties to which a majority of the civilized Powers are parties.
The Signatory Powers, to the end that this prohibition shall be universally accepted as a part of international law binding alike the conscience and practice of nations, declare their assent to such prohibition, agree to be bound thereby between themselves and invite all other civilized nations to adhere thereto.

As discussed in the preceding section, the Washington Treaty of 1922 had a limited scope and was only signed by the five major powers at the time, excluding Germany (which had, however, signed the Treaty of Versailles, clearly prohibiting the development, acquisition, and use of ‘gas’ weapons by Germany). The final treaty to prohibit chemical weapons comprehensively before the Second World War was the 1925 Geneva Protocol to the Hague Conventions. More precisely, it was the Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare, which entered into force in 1928 (see table 2.6).3

Table 2.6 Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare, Geneva, 17 June 1925.
The undersigned Plenipotentiaries, in the name of their respective governments:
Whereas the use in war of asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices, has been justly condemned by the general opinion of the civilised world; and
Whereas the prohibition of such use has been declared in Treaties to which the majority of Powers of the world are Parties; and
To the end that this prohibition shall be universally accepted as a part of International Law, binding alike the conscience and the practice of nations;
Declare:
That the High Contracting Parties, so far as they are not already Parties to Treaties prohibiting such use, accept this prohibition, agree to extend this prohibition to the use of bacteriological methods of warfare and agree to be bound as between themselves according to the terms of this declaration.
The High Contracting Parties will exert every effort to induce other States to accede to the present Protocol. Such accession will be notified to the Government of the French Republic, and by the latter to all signatories and acceding Powers, and will take effect on the date of the notification by the Government of the French Republic.
The present Protocol will come into force for each signatory Power as from the date of deposit of its ratification, and, from that moment, each Power will be bound as regards other Powers which have already deposited their ratifications.
Source: OPCW (2018b).

The adoption of the protocol represented a major step towards banning chemical weapons as a legal, but importantly also as a legitimate, means of warfare. Although the legal norm against the use of chemical weapons was violated in several instances during the interwar period (Sislin 2018; Warren 2012) – by Spanish and French troops using mustard gas in Morocco during the Third Rif War in 1924, as well as by Italy in Libya in 1928 and in Ethiopia in 1935 – the use of chemical weapons in the Second World War was marginal, particularly compared with First World War practices. In its Second World War campaigns, Japan used gas weapons in China, and Germany used chemical weapons on rare occasions at the Eastern Front and, most notably, in the Holocaust. But the Western Allies refrained from their use, although British and US political and military leaders prepared for retaliatory action against a possible German or Japanese chemical weapons attack with reciprocal means (van Courtland Moon 1996).

After the Second World War, new, typically comprehensive frameworks governing the use of force via international law were established, with the UN Charter at their centre. The use of chemical weapons further dwindled but never completely stopped, one notable instance being the US deployment of anti-plant and irritant agents during the Vietnam War (Martin 2016; Bonds 2013). In military terms, chemical weapons played a more important role in Middle Eastern conflicts, for example during the Iran–Iraq War from 1980 to 1988. The most notorious case was a chemical attack by Iraqi forces on the Kurdish city of Halabja on 16 March 1988. The attack killed between 3,000 and 5,000 civilians and injured many more (see Wirtz 2019, 793; Hiltermann 2007). In 2010, years after the fall of the Saddam Hussein regime in 2003, the Iraqi High Criminal Court recognised the attack as an act of genocide. The main figure responsible for the attack was identified as Ali Hassan al-Majid, also known as ‘Chemical Ali’, who was sentenced to death and executed in 2010. The Iraqi government refused to accept responsibility for the attack at the time, blaming Iran, and the international community’s response was insignificant and politically indifferent. Iraq had sourced the material and knowledge required for the production of chemical weapons from countries such as Singapore, the Netherlands, Germany, the United Kingdom, and France. While international sanctions were in place during the Iran–Iraq War, the United States, the Soviet Union, and other countries supported Iraq, partly as a traditional ally, partly because of the Islamic Revolution in Iran (Fredman 2012). The use of chemical weapons by Iraq during this conflict was extensive and resulted in approximately 45,000 direct casualties (Russell 2005, 197).
The Iraqi case underlines both the weakness of pre-established legal norms (Iraq had ratified the Geneva Protocol on Asphyxiating or Poisonous Gases in 1931)4 and the relevance of practices of use for creating a patterned set of deviant actions. These practices were widely ignored by the international community at the time and were potentially strengthened by this indifference – in particular that exhibited by the United States (Walker 2017). Nevertheless, many countries, including the United States, significantly reduced their chemical weapons stocks in the decades up to the end of the Cold War. By the late 1980s, chemical warfare had come to be widely regarded, and rhetorically framed, as a global ‘taboo’ (Price and Tannenwald 1996). But the international legal norm prohibiting the use of chemical weapons and making their use appear normatively problematic was apparently not universal or powerful enough to override all military strategic and tactical considerations. The norms on chemical weapons were only reaffirmed relatively late with the comprehensive Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction (Chemical Weapons Convention (CWC)) of 1993, which entered into force in 1997 and comprises twenty-four articles plus annexes. Table 2.7 summarises the stipulations of Articles 1–5 of the CWC.

Table 2.7 Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction.
Article 1. Each State Party to this Convention undertakes never under any circumstances:
(a) To develop, produce, otherwise acquire, stockpile or retain chemical weapons, or transfer, directly or indirectly, chemical weapons to anyone;
(b) To use chemical weapons;
(c) To engage in any military preparations to use chemical weapons;
(d) To assist, encourage or induce, in any way, anyone to engage in any activity prohibited to a State Party under this Convention.
Article 2. Each State Party undertakes to destroy chemical weapons it owns or possesses, or that are located in any place under its jurisdiction or control, in accordance with the provisions of this Convention.
Article 3. Each State Party undertakes to destroy all chemical weapons it abandoned on the territory of another State Party, in accordance with the provisions of this Convention.
Article 4. Each State Party undertakes to destroy any chemical weapons production facilities it owns or possesses, or that are located in any place under its jurisdiction or control, in accordance with the provisions of this Convention.
Article 5. Each State Party undertakes not to use riot control agents as a method of warfare.
Source: OPCW (2018a).

The CWC turned out to be a significant success in the attempt to establish international legal norms against the use, possession, and proliferation of chemical weapons (OPCW 2018a,b). To date, 193 countries are parties to the CWC (only Egypt, Israel, North Korea, and South Sudan remain outside the treaty framework), and almost all declared chemical weapons stockpiles have been destroyed under UN supervision. Nevertheless, the use of chemical weapons has not been completely eradicated: Syria had not signed the treaty before the outbreak of the civil war in 2011, and the Assad regime used chemical weapons repeatedly, as confirmed by the OPCW-UN Joint Investigative Mechanism5 (see also Arms Control Association 2018). The danger posed by the Syrian chemical weapons programme and its possible proliferation to other conflict parties was significant: ‘[a]ccording to US estimates, at more than 1,300 metric tons spread out over as many as 45 sites in a country about twice the size of Virginia, Syria’s arsenal of chemical weapons in 2013 was the world’s third-largest. It was 10 times greater than the CIA’s (erroneous) 2002 estimate of Iraq’s chemical weapons stash, and 50 times larger than the arsenal Libya declared it had in late 2011’ (Chollet 2016). The US response to the potential use of chemical weapons by Syrian government troops in 2012 was a decisive factor in shaping future events. During a press briefing at the White House on 20 August 2012, President Obama made the following statement when asked about the situation in Syria and whether the US military might be deployed to keep chemical weapons safe: ‘I have, at this point, not ordered military engagement in the situation. But the point that you made about chemical and biological weapons is critical. That’s an issue that doesn’t just concern Syria; it concerns our close allies in the region, including Israel. It concerns us. We cannot have a situation where chemical or biological weapons are falling into the hands of the wrong people. We have been very clear to the Assad regime, but also to other players on the ground, that a red line for us is we start seeing a whole bunch of chemical weapons moving around or being utilized. That would change my calculus.
That would change my equation’ (US Office of the Press Secretary 2012; our emphasis). The ‘red line’ drawn by Obama did not, however, make the Syrian regime refrain from further using chemical weapons: ‘[t]he Syrian Archive has documented 212 likely chemical attacks, but the OPCW-UN Joint Investigative Mechanism Fact-Finding Mission has only been able to confirm 16 cases as of June 2018. US Ambassador to the United Nations Nikki Haley claimed in April 2018 that the Bashar al-Assad regime conducted at least 50 chemical weapons attacks, while Human Rights Watch accused the Syrian regime of the majority of 85 documented chemical weapons attacks in Syria in an April 2018 report’ (Arms Control Association 2019b). Although there was therefore evidence for the use of chemical weapons, the international community, and the United States in particular, overall remained inactive during the twelve months following Obama’s ‘red line’ statement. However, on 21 August 2013, a chemical attack occurred in Ghouta, a suburb of Damascus controlled by opposition forces. The United Nations Mission to Investigate Allegations of the Use of Chemical Weapons in the Syrian Arab Republic concluded in its final report in December 2013 that it had ‘collected clear and convincing evidence that chemical weapons were used also against civilians, including children, on a relatively large scale in the Ghouta area of Damascus on 21 August 2013’ (UNSC 2013, 21). President Obama responded to the Ghouta attack in early September 2013 during a press conference in Stockholm. The statement makes some important points with regard to practices and norms, and it is hence useful to consider it at some length: ‘The world set a red line when governments representing 98 percent of the world’s population said the use of chemical weapons are abhorrent and passed a treaty forbidding their use even when countries are engaged in war. … The international community’s credibility is on the line … because we give lip service to the notion that these international norms are important. And so the question is how credible is the international community when it says this is an international norm that has to be observed? … And I do think that we have to act because if we don’t, we are effectively saying that even though we may condemn it and issue resolutions and so forth and so on, somebody who is not shamed by resolutions can continue to act with impunity.
And those international norms begin to erode and other despots and authoritarian regimes can start looking and saying that’s something we can get away with, and that then calls into question other international norms and laws of war and whether those are going to be enforced’ (Washington Post 2013). This statement is important because it constructs a clear discourse making arguments about Syrian norm-violation, about the responsibility of the international community to protect norms, and about the risk that practices erode norms – in this case that the international taboo represented by the broad agreement on the CWC by
almost all states is contested by the Syrian government's practice of using chemical weapons. The US Senate Foreign Relations Committee approved the use of military force against the government of Syria on 4 September 2013, but Syria agreed to a deal on 10 September 2013 to destroy all of its chemical weapons and to accede to the CWC. While the OPCW inspection teams could enter parts of the country and verify the destruction of some stockpiles (see Makdisi and Pison Hindawi 2017), reports about chemical weapons attacks continued throughout the hostilities and are documented until at least 2018 (BBC News 2018; Arms Control Association 2019a). Washington's 'red line' position, as well as the reluctant response to chemical warfare by the international community, particularly with regard to a possible military intervention, has triggered controversial assessments. A former official of the Obama administration, for example, argued that '[j]udged by what the red line was originally intended to do – address the massive threat from Syria's chemical weapons – it was a success. In fact, it has been perhaps the only positive development related to the Syria crisis' (Chollet 2016). Others, however, noted that 'around 90 percent of the use of chemical weapons took place after the "red lines" were drawn by Barack Obama's administration back in 2012' (Kasapoğlu 2019). While a substantial quantity of chemical weapons was destroyed or removed and the OPCW–UN Joint Mission completed its mission in September 2014, the use of chemical weapons in the Syrian conflict did not stop, and the Trump administration launched missile strikes against Syrian military infrastructure in 2017 and 2018 (together with the United Kingdom and France) as a response to new reports of chemical attacks. 
Overall, it can be concluded that the chemical weapons taboo is apparently not powerful enough to have had a constitutive effect that would lead the Syrian government to abandon its chemical weapons stockpiles voluntarily and completely, nor does it shape the actions of those individuals who are responsible for chemical attacks. The question of why this norm or other norms governing the use of force are less strong than constructivist research might argue is not the focus of this book, however. Rather, we are interested in the problem that Obama addressed in his statement: the risk and possibility that practices erode norms or that actions create precedents for use-of-force standards that could become more widespread.


Autonomous Weapons Systems and International Norms

Whether or not this scenario is probable in the case of the Syrian use of chemical weapons, and whether a military intervention would have prevented the continuation of chemical attacks and led to a preferable outcome in a complex conflict, is debatable. On the one hand, the CWC has established a strong legal norm and a regime that prohibits chemical weapons and builds a solid basis for counteracting violations of this norm legally and legitimately, as well as by military means. On the other hand, the case of Syria also shows the limits of international norms in cases of violation, whatever the reason for violation is. Even the strongest legal norms are fragile, depending on contexts, and the reach of international law is shorter than that of domestic criminal law, for instance. Moreover, the reaction to the string of chemical attacks over years in Syria is limited if judged against the initial 'red line' drawn by Obama as synonymous with a taboo. We have to ask ourselves: is there a tendency to accept a situation considered to be too complex and risky to address decisively? This tendency could, over time, create conditions for setting use-of-force standards in practices that do not conform to international, deliberative norms. In sum, the CWC as a specific and comprehensive treaty is a major step towards not only regulating but also prohibiting chemical weapons as means of warfare. However, the international community's initial response was slow and somewhat indecisive with regard to developing a comprehensive treaty banning the use, the production, and the proliferation of chemical weapons based on the Geneva Protocol. The Second World War and the ensuing Cold War created a political constellation in which major powers did not push for such a step and were politically unwilling to abandon their chemical weapons stockpiles for a long time. 
Although the use of chemical weapons has increasingly been considered as a key taboo in IR (Dolan 2013; Price 1995; van Courtland Moon 2008), both Iraq’s practices of the 1980s and the incidents in Syria in the 2010s show that the production and use of chemical weapons can still be part of military–political strategy and tactics. This not only questions the existence of a truly universal taboo but also underlines the risk of tacit acquiescence by the international community in cases where an open and decisive position against the use of chemical weapons seems to be politically inopportune. This also includes Russia’s position as a firm supporter of the Assad government.

New Technologies of Warfare: Emergence and Regulation


nuclear weapons

The history of regulating the use, production, and proliferation of nuclear weapons by establishing international legal norms differs from the previous two cases in at least three regards: nuclear weapons are a more recent weapons system, only having become operational in 1945; although nuclear weapons have been an academic, political, and societal focus since their emergence, they were used only twice in warfare, at the end of the Second World War; and only a small minority of the 193 UN member states possess nuclear weapons. The latter are the 'recognised'6 nuclear weapon states (China, France, the United Kingdom, the United States, and Russia), plus India, Pakistan, and North Korea (which have declared the possession of nuclear weapons) as well as Israel (which is generally believed to possess nuclear weapons but has never officially confirmed this). Nevertheless, the possession and possible proliferation of nuclear weapons is one of the decisive issues of international politics in the post-Second World War era. It dominated global security policy until the end of the Cold War and is of continued importance in international affairs, as the struggle over the North Korean and the Iranian nuclear armament programmes demonstrates. Nuclear weapons as a technology of warfare emerged during the Second World War, when US President Roosevelt approved the launch of the atomic programme (also known as the Manhattan Project) in 1941. It was mainly motivated by the fear that Nazi Germany could develop a workable nuclear weapon and use it against its adversaries.7 The first nuclear device developed by the Manhattan Project was tested in July 1945. Then followed the devastating attacks on Hiroshima on 6 August and Nagasaki on 9 August 1945, causing estimated casualties of 140,000 in Hiroshima and 70,000 in Nagasaki (these figures do not take into account the long-term effects of radiation exposure). 
Japan surrendered a few days after the Nagasaki bombing and thereby made a third, planned use of the atomic bomb redundant. With the acquisition and first successful test of a nuclear weapon by the Soviet Union in 1949, and tests by the United Kingdom in 1952 (followed by France in 1960, and China in 1964), nuclear weapons became a central military asset for major powers, an important strategic and tactical weapon, and an integral part of the global arms
race between the Western and Eastern blocs until the 1960s. Only then were the first steps to limit the proliferation and testing of nuclear weapons undertaken at the international level. During the post-Second World War era, nuclear weapons acquired the reputation of a system that seems impossible to use for political reasons and has only very limited military usefulness. However, while nuclear weapons were never used again after 1945, the US political and military leadership in fact sounded out options for their usage during the Korean War and the Vietnam War. In particular, the availability of tactical nuclear weapons with less explosive power since the 1960s made their use theoretically more appropriate and 'useful' from a purely military standpoint (see Tannenwald 1999). The lack of a regulation or prohibition of the use of nuclear weapons in international law in the first decades of their existence has led researchers to argue for the emergence of a strong moral–ethical norm against the use of nuclear weapons in the form of a taboo, even surpassing a possible chemical weapons taboo (Tannenwald 1999; Dolan 2013; Price and Tannenwald 1996). The existence of this taboo is helpful for understanding why states refrained from using this weapons system. Arguably due to this history of non-usage, the political focus was not on regulating the practice of nuclear weapons use but on steering the effects and implications of nuclear tests and averting nuclear weapons proliferation. While the number of nuclear tests remained low, at fewer than 20 per year until the mid-1950s, it skyrocketed to 116 tests in 1958 and 178 tests in 1962 (Arms Control Association 2017). 
Only one year later, the international community succeeded in concluding a treaty substantially limiting nuclear weapon tests: the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water or Partial Test Ban Treaty (PTBT) was first signed by the Soviet Union, the United Kingdom, and the United States on 5 August 1963. In fact, initiatives to control and regulate atomic power and nuclear weapons date back as early as 1945, but it took another two decades before the first important international treaty establishing legal norms emerged as an outcome of international deliberations. Several factors facilitated the conclusion of an agreement at this time. Among these were the effects of the Cuban Missile Crisis in 1962, as well as the growing public awareness and concern about the impact of nuclear weapon tests in terms of a global radioactive fallout: most tests



Table 2.8 Partial Test Ban Treaty 1963, Article I. 1. Each of the Parties to this Treaty undertakes to prohibit, to prevent, and not to carry out any nuclear weapon test explosion, or any other nuclear explosion, at any place under its jurisdiction or control: (a) in the atmosphere; beyond its limits, including outer space; or under water, including territorial waters or high seas; or (b) in any other environment if such explosion causes radioactive debris to be present outside the territorial limits of the State under whose jurisdiction or control such explosion is conducted. It is understood in this connection that the provisions of this subparagraph are without prejudice to the conclusion of a Treaty resulting in the permanent banning of all nuclear test explosions, including all such explosions underground, the conclusion of which, as the Parties have stated in the Preamble to this Treaty, they seek to achieve.

carried out during the test peak in the 1960s were atmospheric in nature, such as the US tests in the Pacific on the Marshall Islands ('Pacific Proving Grounds'), for example, at Bikini Atoll. Deliberations on an agreement to delimit nuclear weapons testing were complicated by the political constellation of the Cold War and by Soviet concerns that such a treaty would only serve the purpose of underpinning American nuclear hegemony. Furthermore, the United States and the United Kingdom intended to include a comprehensive verification and control mechanism, comprising on-site inspections, in a treaty to make sure secret tests were not conducted, while the Soviet Union decisively opposed this plan (US Department of State 2018). A breakthrough was reached by excluding underground tests from the treaty, thereby disregarding the possibility of secret tests that could not be detected by the technologies available to treaty parties. The ensuing PTBT was widely accepted as a first international attempt to construct legal norms regulating nuclear weapon testing – in 1963, 108 countries signed the treaty, which was eventually ratified by 94, while 23 further countries acceded to the treaty at a later point (US Department of State 2018). Consequently, the total number of nuclear tests indeed decreased slightly over the next few years after the peak of 1962 and started to oscillate around fifty at the end of the decade. However, France, having started nuclear testing in 1960, never signed the treaty, nor did the People's Republic of China, the new emerging nuclear weapon state that launched its first test in 1964, only one year after



Table 2.9 Key elements of the NPT’s non-proliferation dimension. Article I Each nuclear-weapon State Party to the Treaty undertakes not to transfer to any recipient whatsoever nuclear weapons or other nuclear explosive devices or control over such weapons or explosive devices directly, or indirectly; and not in any way to assist, encourage, or induce any non-nuclear-weapon State to manufacture or otherwise acquire nuclear weapons or other nuclear explosive devices, or control over such weapons or explosive devices. Article II Each non-nuclear-weapon State Party to the Treaty undertakes not to receive the transfer from any transferor whatsoever of nuclear weapons or other nuclear explosive devices or of control over such weapons or explosive devices directly, or indirectly; not to manufacture or otherwise acquire nuclear weapons or other nuclear explosive devices; and not to seek or receive any assistance in the manufacture of nuclear weapons or other nuclear explosive devices.

the treaty had entered into force. In contrast, the future nuclear weapons states India, Israel, and Pakistan signed the treaty, although Pakistan ratified it only in 1988. Although the PTBT could not completely eradicate the atmospheric fallout of underground tests, the number of tests that violated, or would have violated, the PTBT decreased significantly: the United Kingdom, the United States, and the Soviet Union/Russia did not conduct atmospheric tests after 1963 (Bergkvist and Ferm 2000, 9). Moreover, the depositaries of the PTBT – the United Kingdom, the United States, and Russia – did not conduct any tests after 1992, while all nuclear weapons states except North Korea refrained from nuclear tests after 1998. The reasons for this development are the negotiations on a Comprehensive Nuclear-Test-Ban Treaty (CTBT) in the early 1990s, as well as the fact that technological innovation made financially and politically costly nuclear tests largely redundant for the major powers. In this sense, '[i]t is now widely recognized that the United States no longer has any need for, nor any interest in, conducting nuclear explosive tests' (Collina and Kimball 2012). The next step in the emergence of legal international norms on nuclear weapons was negotiations on a Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which was signed in 1968 and became effective in 1970. The main purpose of this treaty was to prevent the proliferation and spread of nuclear weapons. The treaty was negotiated in the framework of the 'Eighteen Nation Committee

Table 2.10 The NPT's disarmament dimension.

Article VI Each of the Parties to the Treaty undertakes to pursue negotiations in good faith on effective measures relating to cessation of the nuclear arms race at an early date and to nuclear disarmament, and on a treaty on general and complete disarmament under strict and effective international control.

on Disarmament', initiated as a deliberative forum by a UN General Assembly resolution in 1961.8 The NPT covers three dimensions: non-proliferation, disarmament, and the right to use nuclear technology peacefully. The central non-proliferation dimension obliges the recognised nuclear weapon states, which are the five permanent members of the UN Security Council (UNSC), to refrain from proliferating nuclear weapons in any form to non-nuclear-weapon states. Moreover, Article I stipulates that nuclear weapon states are prohibited from assisting non-nuclear-weapon states in acquiring nuclear weapons. This proposition is mirrored by Article II, which stipulates that non-nuclear-weapon states must not receive the transfer of nuclear weapons. However, the norm is weakened by the fact that three declared nuclear weapon states are not party to the treaty. Regarding the disarmament dimension, the relevant article (Article VI) only amounts to a brief declaration, lacking the enumeration of more concrete measures or steps to promote the aim of nuclear disarmament. Article VI appears vague and fails to set any concrete obligations towards disarmament, referring to 'good faith' as the basis of such a development. However, in a contrasting perspective, the International Court of Justice unanimously ruled in an advisory opinion 'On the Legality of the Threat or Use of Nuclear Weapons' in 1996 (at the request of the UN General Assembly) that '[t]here exists an obligation to pursue in good faith and bring to a conclusion negotiations leading to nuclear disarmament in all its aspects under strict and effective international control' (ICJ 1996), referring explicitly to Article VI of the NPT. 
Given that a complete renunciation of the non-military use of nuclear energy by user countries would be unrealistic, the NPT opens a way forward to developing relevant capacities to all



Table 2.11 The NPT’s peaceful use of nuclear energy dimension. Article IV 1. Nothing in this Treaty shall be interpreted as affecting the inalienable right of all the Parties to the Treaty to develop research, production and use of nuclear energy for peaceful purposes without discrimination and in conformity with Articles I and II of this Treaty. 2. All the Parties to the Treaty undertake to facilitate, and have the right to participate in, the fullest possible exchange of equipment, materials and scientific and technological information for the peaceful uses of nuclear energy. Parties to the Treaty in a position to do so shall also co-operate in contributing alone or together with other States or international organizations to the further development of the applications of nuclear energy for peaceful purposes, especially in the territories of non-nuclear-weapon States Party to the Treaty, with due consideration for the needs of the developing areas of the world.

parties to the treaty. From the viewpoint of non-proliferation and disarmament, however, this increases the risk of non-compliance with the NPT obligations. Furthermore, the NPT contains a verification article (Article III), which stipulates safeguard mechanisms to ensure compliance with the treaty’s obligations in line with the role of the International Atomic Energy Agency (IAEA) as the international institution overseeing the legal norms of the NPT. These safeguard obligations were particularly important in the international community’s response to the Iranian nuclear programme. Iran signed the NPT in 1968 and ratified it in 1970. However, in the early 2000s, concerns about Iran’s uranium enrichment programme grew, and the IAEA concluded in 2005 that Iran failed to meet its safeguards obligations (IAEA 2005), while the UNSC passed a resolution demanding Iran end the enrichment programme in response. Although the IAEA has been able to increase its verification and inspection activities in Iran considerably since the mid-2000s (IAEA 2018), the Iran nuclear programme remains subject to international contestation. At the same time, Iran explicitly refers to Article IV of the NPT and argues it has the right to develop nuclear capabilities for peaceful use. The North Korean nuclear programme is a second case of international contestation. North Korea withdrew from the NPT in 2003 and, in contrast to Iran, openly pursues a programme of nuclear armament including nuclear tests, arguing it has the right to acquire nuclear weapons for self-defence if other countries possess these
weapons. This also means that IAEA inspectors have not been able to visit North Korea since 2009. The NPT was developed to be universal in character, like the PTBT. Initially adopted for a limited period, it was extended indefinitely in 1995 and now has 93 signatories and 191 states parties. However, the nuclear weapon countries India, Israel, and Pakistan have not adhered to the treaty, while North Korea withdrew from the treaty, as mentioned above. In terms of prohibiting proliferation, the NPT has given the IAEA the competence to refer countries that do not comply with it to the UNSC, which can in turn adopt an international legal response, such as sanctions or other punitive measures. As argued, the effectiveness of this mechanism depends not only on the political will of UNSC members to respond to breaches of the NPT – provided that the respective country is party to it – but also on the IAEA's capacity to perform the required inspections and monitoring, which is often lacking (Council on Foreign Relations 2012). Another prominent example of the failure to stop proliferation completely by establishing a universally accepted norm is the Syrian nuclear programme: Syria has been party to the NPT since 1969, but began constructing a plutonium production reactor in the mid-2000s. The reactor facility was undeclared under the NPT and built with North Korean assistance.9 Israel destroyed the facility in an air strike in 2007. Syria cooperated only reluctantly with the IAEA, and the agency's board of governors adopted a resolution declaring Syria in non-compliance with its NPT safeguards obligations in 2011. The international community has taken two further steps to establish an international legal–normative regime to regulate and prohibit nuclear weapons. These are the CTBT, as mentioned before, and the Treaty on the Prohibition of Nuclear Weapons (also Nuclear Weapons Prohibition Treaty (NWPT)). 
Discussion on a CTBT started in 1994, and the final CTBT was adopted by a General Assembly resolution only two years later, when it was made open for signature. Article XIV of the CTBT stipulates that it will enter into force after all forty-four states listed in Annex 2 to the treaty have ratified it. This group of states includes the recognised nuclear weapon states as well as those having declared or being considered as nuclear weapon states. However, several Article XIV states have yet to ratify the treaty, namely China, Egypt, India, Iran, Israel, North
Korea, Pakistan, and the United States (UNODA 2018a). This also means that key actors are not yet parties to the CTBT. The United States, for example, has signed the CTBT but has not ratified it. It is argued that the ‘Senate did not approve the treaty in 1999, mainly due to two reasons: It was unclear whether the United States could maintain a reliable nuclear arsenal without testing, and there were doubts about the ability to detect cheating’ (Pifer 2016). Domestic politics continue to prevent the US Senate from ratifying the CTBT. The second and most recent international initiative is the NWPT, the first attempt to comprehensively prohibit nuclear weapons. In 2016, the UN General Assembly passed a resolution ‘to convene in 2017 a United Nations conference to negotiate a legally binding instrument to prohibit nuclear weapons, leading towards their total elimination’ (UN General Assembly 2017a, Paragraph 8). The initiative for the NWPT is based on three international conferences that debated the humanitarian impact of nuclear weapons in 2013 and 2014. These conferences were convened by Norway, Mexico, and Austria and represented a renewed interest in the threats and risks linked to nuclear weapons. The initiative was notably inspired by the persistent lack of a substantial disarmament of nuclear weapons, which is not the focus of the existing treaties, although the number of nuclear warheads – particularly those stockpiled by the United States and Russia – has decreased significantly since the end of the Cold War due to the NPT and bilateral agreements such as the Strategic Arms Reduction Treaty (START). A large number of state representatives and NGOs such as the International Committee of the Red Cross (ICRC) participated in these conferences to promote the advancement of nuclear disarmament negotiations (UNODA 2018b). 
Setting out that the states parties to the NWPT are ‘[d]eeply concerned about the catastrophic humanitarian consequences that would result from any use of nuclear weapons, and recognizing the consequent need to completely eliminate such weapons, which remains the only way to guarantee that nuclear weapons are never used again under any circumstances’, the treaty contains twenty articles specifying the prohibition of nuclear weapons and relevant mechanisms towards this end. The NWPT opened for signature in 2017 and reached the required fifty ratifications necessary for its entry into force in October 2020, chiefly via ratifications by countries in the Global South. In fact, the negotiation process leading up to the NWPT had been heavily



Table 2.12 Treaty on the Prohibition of Nuclear Weapons, Article I. Prohibitions 1. Each State Party undertakes never under any circumstances to: (a) Develop, test, produce, manufacture, otherwise acquire, possess or stockpile nuclear weapons or other nuclear explosive devices; (b) Transfer to any recipient whatsoever nuclear weapons or other nuclear explosive devices or control over such weapons or explosive devices directly or indirectly; (c) Receive the transfer of or control over nuclear weapons or other nuclear explosive devices directly or indirectly; (d) Use or threaten to use nuclear weapons or other nuclear explosive devices; (e) Assist, encourage or induce, in any way, anyone to engage in any activity prohibited to a State Party under this Treaty; (f) Seek or receive any assistance, in any way, from anyone to engage in any activity prohibited to a State Party under this Treaty; (g) Allow any stationing, installation or deployment of any nuclear weapons or other nuclear explosive devices in its territory or at any place under its jurisdiction or control. Source: UN General Assembly (2017b).

influenced by a group of Global South states (mainly Mexico, Chile, South Africa, and Costa Rica) that worked closely with the International Campaign to Abolish Nuclear Weapons (Bode 2019; Potter 2017; Sauer and Pretorius 2014; Thakur 2017). Most countries of the Global North, however, abstained from voting on the treaty. This group includes all North Atlantic Treaty Organization (NATO) members apart from the Netherlands, which voted against the treaty. In Europe, Austria, Ireland, Sweden, and Switzerland voted in favour of adopting the treaty. Moreover, none of the nuclear weapon states participated in the negotiations and in the vote on adopting the NWPT. In a joint statement, the United Kingdom, the United States, and France emphasised that '[t]his initiative clearly disregards the realities of the international security environment. Accession to the ban treaty is incompatible with the policy of nuclear deterrence, which has been essential to keeping the peace in Europe and North Asia for over 70 years' (Permanent Mission of the United States 2017). At the same time, the three countries highlighted the importance of the NPT. The fact that so far none of the nuclear weapon states has shown an interest in becoming party to the NWPT makes its future role in establishing legal norms uncertain. Nevertheless, the NWPT can be taken as an important political signal and has created momentum
for a closer and more intensive cooperation of states and civil society towards the long-term goal of abolishing nuclear weapons and propagating an important social norm (Mian 2017). The history of developing, testing, and using nuclear weapons shows an intensive set of activities aimed at reducing the risks of nuclear warfare after the first (and only) devastating use of nuclear weapons in 1945. Due to their special status as the most powerful and deadly weapon of mass destruction, it is not far-fetched to argue for the existence of a universal nuclear weapons taboo. To what extent such a norm is constitutive is unclear, but both the political consequences of and public opinion on the use of nuclear weapons make them a weapon category of limited practical, military importance apart from their deterrence effect and status as a weapon of last resort. At the same time, the case of North Korea in particular shows that nuclear weapons remain high in political importance, and that nuclear proliferation and production remain a crucial issue for international relations.

blinding laser weapons

Establishing legal norms on blinding laser weapons (BLW) is closely linked to the 1980 adoption of the United Nations Convention on Certain Conventional Weapons (UN-CCW). The UN-CCW entered into force in 1983, and we have already become familiar with it as the international framework covering discussions on LAWS in Geneva. The prohibition of BLW was advocated as the fourth protocol of the UN-CCW in 1986, based on an initiative by Sweden and Switzerland, and was thereafter promoted by the ICRC (ICRC 1995). When this first draft resolution was introduced, 'there was little discussion because the vast majority of States were unaware of developments and thought that such weapons were science-fiction' (Doswald-Beck 1996, 273). After it became apparent in the late 1980s that BLW were not science fiction but indeed a military option, the ICRC convened an expert meeting in 1989 to assess the development and implications of BLW further. When France requested a review conference for the CCW in 1993, this provided an opportunity to convene four rounds of meetings of the Group of Governmental Experts (GGE), which prepared the review conference and considered the issue of BLW. In this context, Sweden and the ICRC made two proposals for an additional protocol to the CCW (see table 2.13).

Table 2.13 Proposals for CCW Protocol by Sweden and the ICRC.

‘It is prohibited to use laser beams as an anti-personnel method of warfare, with the intention or expected result of seriously damaging the eyesight of persons’ (Sweden) ‘1. Blinding as a method of warfare is prohibited; 2. Laser weapons may not be used against the eyesight of persons’ (ICRC) Source: Doswald-Beck (1996, 278–9).

The proposals by Sweden and the ICRC show that they aimed at a comprehensive ban of blinding lasers as a method of warfare. However, during the negotiations, the United States and the United Kingdom opposed this explicit language, arguably in order to protect their soldiers from prosecution for war crimes if a laser incidentally blinded an enemy combatant. Moreover, these two countries and Russia declared their intention, as well as the military necessity, of developing laser weapons that could take out optical devices (Peters 1996, 110–11). As a consequence, Protocol IV to the UN-CCW on Blinding Laser Weapons, which eventually entered into force on 30 July 1998, does not constitute a comprehensive, total ban of BLW, and their use is generally permitted, even if this could cause blindness as incidental or collateral damage. It contains four brief articles prohibiting the use of BLW specifically designed to cause permanent blindness (see table 2.14). During the negotiations, there were differences of opinion regarding definitions, such as how 'permanent blindness', reiterating a WHO definition, would be defined, but also what the term 'weapon' would mean in this context. Article 1 explicitly refers to 'laser weapons' and to 'specifically designed', which raised the question of whether other types of laser devices not primarily designed as a weapon would be covered by the protocol (Zöckler 1998, 335–6). The extent of linguistic ambiguity is a recurring feature of the protocol, as can be seen in table 2.14, and is also relevant for discussions on AWS, as repeatedly outlined in this book. However, the protocol does not prohibit the production or use of laser weapons that can 'incidentally' cause blindness, nor does it prohibit BLW causing temporary blindness or permanent blindness that is not considered 'seriously disabling'. 
Nevertheless, the adoption of the protocol, which represents a rare case of ‘outright prohibitions of conventional weapons’ (Peters 1996, 109), is regarded as an important step in the regulation of weapons systems,


Autonomous Weapons Systems and International Norms

Table 2.14 UN-CCW Protocol on Blinding Laser Weapons, Articles 1–4

Article 1. It is prohibited to employ laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision, that is to the naked eye or to the eye with corrective eyesight devices. The High Contracting Parties shall not transfer such weapons to any State or non-State entity.

Article 2. In the employment of laser systems, the High Contracting Parties shall take all feasible precautions to avoid the incidence of permanent blindness to unenhanced vision. Such precautions shall include training of their armed forces and other practical measures.

Article 3. Blinding as an incidental or collateral effect of the legitimate military employment of laser systems, including laser systems used against optical equipment, is not covered by the prohibition of this Protocol.

Article 4. For the purpose of this protocol ‘permanent blindness’ means irreversible and uncorrectable loss of vision which is seriously disabling with no prospect of recovery. Serious disability is equivalent to visual acuity of less than 20/200 Snellen measured using both eyes.

and potentially in the advancement of norms ostracising brutal weapons systems: ‘It is the first time since 1868 that a weapon has been prohibited before it has been used on the battlefield. It has also stigmatized deliberate blinding’ (Doswald-Beck 1996, 296). The protocol became widely accepted and has 108 states parties as of October 2020, including key military powers and weapon-producing countries, such as China, France, Germany, India, Israel, Japan, Russia, the United Kingdom, and the United States. Apart from the issue of definition, the effectiveness of the protocol is further complicated by the fact that different categories of lasers exist for tactical and non-tactical purposes (see Seet and Wong 2001, 217–18). These are Category A lasers, for example, range finders for measuring target distances; Category B lasers, which are anti-sensor lasers used to destroy electro-optical equipment; Category C lasers, which comprise two types of anti-personnel laser weapons (dazzle lasers, which only temporarily blind or confuse, for example, pilots, and lasers specifically designed to blind opponents); and Category D lasers, which are high-energy anti-material lasers that can potentially cause serious bodily harm.

New Technologies of Warfare: Emergence and Regulation


This means that the protocol in fact only prohibits a specific segment of laser weapons or laser devices that could be used as weapons. In this regard, the existing legal framework was also relevant for considering the status of BLW. Article 35 of Additional Protocol I (1977) to the Geneva Conventions, outlining basic rules, stipulates in Paragraph 2 that ‘it is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering’. The aspect of unnecessary suffering, in particular, led to discussions about the general lawfulness of BLW in the absence of a specific regulation, as in the case of chemical weapons. A 1995 Human Rights Watch report provided a detailed discussion of this aspect, highlighting that there was broad consensus that blinding causes suffering – whether this suffering is unnecessary or disproportionate compared with other weapons systems is, however, more controversial (Human Rights Watch 1995). The United States, for instance, argued for the legality of BLW. As the ICRC notes, ‘[a]n evaluation in 1988 by the Office of the Judge Advocate General concluded that such weapons would not cause unnecessary suffering and therefore would not be illegal’ (ICRC 2018b). However, the US Secretary of Defense clearly stated in 1996 in a letter on the question of BLW: ‘there is no prohibition in [the CCW] on research, development or production. Nevertheless, the Department has no intent to spend money developing weapons we are prohibited from using. We certainly would not want to encourage other countries to loosely interpret the treaty’s prohibitions, by implying that we want to develop or produce weapons we are prohibited from using’ (ICRC 2018b).
The attempt to establish a legal norm outlawing the use of BLW appears to have been successful not only regarding international law, but also in terms of anchoring a widely accepted taboo against blinding as a means of warfare – at least at a rhetorical level. For example, China, which had invested significantly in researching and developing BLW in the 1990s, stated on the occasion of the adoption of the protocol that ‘[t]he Chinese delegation positively appraises the important results achieved by this conference. We adopted a new Protocol banning the use and transfer of blinding laser weapons which are specially designed to cause permanent blindness to naked eyes. This is the first time in human history that a kind of inhumane weapon is declared illegal and prohibited before it is actually used. This is significant’ (ICRC 2018b).



While laser weapons (more specifically, dazzler weapons) had been used by the United Kingdom during the Falklands War in 1982 and deployed by the United States during the Gulf War and the intervention in Somalia in the early 1990s (Peters 1996, 109), the development and use of laser weapons meant to blind appeared to become more prominent, which contributed to the adoption of the protocol. When the negotiations intensified in the early 1990s, countries such as China, France, Germany, Israel, Russia, Serbia, Ukraine, the United Kingdom, and the United States arguably had research and development programmes on BLW in place (Human Rights Watch 1995). In the case of China and the United States, such systems were essentially operational (Henckaerts et al. 2005, 293; Doswald-Beck 1996, 283). In this regard, the prohibition of a weapons system that is cheap to manufacture, psychologically effective, and overall constitutes ‘a military capability that provides a superior technological edge against enemy troops, vehicles and aircraft’ (Seet and Wong 2001, 216) is noteworthy, because the establishment of a legal norm and ethical considerations prevailed over reasons of practical and strategic usefulness. However, research into laser technology continued after the protocol entered into force; this is unsurprising given the dual-use character of lasers and their importance in military devices that are not banned by the protocol. Indeed, China is reported to have made at least four different laser weapons operational since 2015, including the BBQ-905 Laser Dazzler Weapon, the WJG-2002 Laser Gun, the PY132A Blinding Laser Weapon, and the PY131A Blinding Laser Weapon. China was also reported to have used a dazzler weapon against US aircraft in Djibouti – the type and spread of these weapons is, however, unknown (Brown 2018; Trevithick 2018).
Also, the United States has continued its research and development programme on BLW, arguably in order to protect its own troops from becoming victims of enemy laser weapons (Drolette Jr 2014). It is noteworthy in this context that dazzler weapons can cause permanent damage to the eye when used at shorter distances, and their widespread acquisition by militaries increases the risk of ‘collateral’ blinding, which is not prohibited by the 1995 Protocol. The debate and negotiations leading to the protocol are an interesting case from the perspective of AWS because a non-state actor, the ICRC, lobbied strongly for the prohibition of blinding (in contrast to the US/UK position). The ICRC pursued this strategy in collaboration with Sweden and Switzerland, later supported by key human rights advocacy NGOs such as Human Rights Watch. Even though countries had research and development programmes in place and BLW seemed a ‘promising’ weapon from a military-tactical viewpoint at that time, the case of BLW demonstrates that the preventive prohibition of weapons systems is possible in a relatively short period of time and can be largely uncontested. However, this example also shows that it is difficult to adopt precise language in a binding treaty, and there is often considerable political interest in retaining a certain degree of ambivalence: neither blinding in all circumstances nor all types of laser devices that could cause incidental but permanent damage to the human eye were banned. This again underlines the often ambivalent or indeterminate character of norms enshrined in international law, as we have already demonstrated in the case of submarine warfare earlier in this chapter. The development of laser devices and weapons was not completely stopped by the protocol – not least due to their dual-use character in the military context and the protocol’s failure to prohibit dazzler weapons, for example. With regard to AWS, the prohibition of BLW shows some similarities in terms of process and the preventive nature of the protocol. However, the case is also different because BLW are a very specific type of weapon, banned due to the specific injuries they would cause (Rosert and Sauer 2021). AWS are ill-defined regarding their autonomous features, and their weaponry would be the same as that of other conventional weapons. In other words, the injuries caused by dropping bombs or firing bullets or missiles are not the reason why a ban of AWS is considered.

norms and practices of warfare: implications of historical cases

The preceding sections presented a historical overview of how an emerging international community has addressed novel weapons technologies since the late nineteenth century. Although the four cases diverge considerably with regard to time, circumstances, and context, as well as the technical characteristics of the weapons, they demonstrate that attempts have always been made to regulate weapons whose features noticeably changed warfare (compared with conventional means with a long-standing legacy of acceptance). The post-Second World War international order, with the UN system at its core, transformed the way disarmament and the legality of the use of force are deliberated. In the following, we highlight six aspects that are of particular importance when discussing how weapons systems and use-of-force norms relate to each other. These apply both in general and in particular to our consideration of AWS.

First, establishing international legal norms governing the use of force regarding specific weapons systems is a slow process, often more reactive than proactive, and one that is dominated by political considerations. In the cases of submarine warfare and nuclear weapons, the international community did not agree on specific norms before these weapons had been used and also failed to comprehensively regulate what a legal use could mean in practice afterwards. In the case of chemical weapons, a norm limited in scope had been in place since the early twentieth century, but it did not influence their excessive use in the First World War. Blinding laser weapons mark the only case where the international community was able to proactively agree on preventively banning a specific weapon category, although blinding as a (coincidental) act of warfare was not completely prohibited. This implies that norms defined by international law often lack impact for three main reasons: they may not even exist at the time the weapons are used; they are ambivalent or indeterminate; or they exist but are ignored, often because they arguably conflict with military strategy or ‘necessity’, as the cases of submarine and chemical warfare demonstrate. These three observations influence the way legal norms matter in governing the use of force.

Second, and more importantly, historical cases provide us with answers to the question of how and why norms matter.
In those cases where legal norms exist, there is often a wide gap between the codification of rules and the practice of warfare. In part, this is the result of ambiguous or unspecific legal norms with regard to how weapons can be used appropriately. In other cases, such as that of chemical weapons, the rules explicitly prohibit use and leave little room for interpretation. Nevertheless, practices of warfare have at times completely ignored such norms. Without going into detail about the various strategic and political reasons why legal norms are followed or ignored, the initial chemical weapons norm completely failed to develop either a regulative or a constitutive impact during the First World War. This means that it had no impact based on prohibition, nor any based on understandings of what the morally ‘right’ thing to do is in this context.

Third, the question of where norms originate should be examined when considering their impact. It is apparent that legal norms, in the sense and meaning that dominates the political and academic debate, and in particular the current considerations of AWS, are the outcome of intensive, institutionalised, and long-lasting negotiations and deliberations. With the creation of the UN in 1945 and the subsequent establishment of specialised forums such as the UN-CCW to address disarmament, the process guiding these deliberations has become more formalised and inclusive than the processes at the beginning of the twentieth century. All UN member states are invited to participate in these talks, which is certainly progress compared with treaties signed by only a very limited number of major powers.

Fourth, from a more comprehensive social science point of view, the question of what defines appropriate action, or the appropriate use of force, is not answered exclusively by international law. The cases of new weapons systems presented in this chapter point to the importance of practices and the micro level. The relevant literature for the most part focuses on the deliberation of legal norms at the macro level of international politics. The outcome of this process is international law. But we have also seen that practices of warfare are performed at the micro level by individuals, such as submarine commanders or military leaders and personnel in the First and Second World Wars. The use of chemical weapons in the First World War became, to some extent, an established practice among some belligerents, and it defined what was ‘appropriate’ use of force in certain situations.
Appropriateness should be taken here as a multifaceted concept, comprising military, moral, or legal appropriateness. The use-of-force practices described in this chapter were often illegal and widely considered morally illegitimate due to the characteristics of the weapons systems, although understandings of legitimacy in the military context seem to have been chiefly influenced by enemy action. Further, the legal norms of naval warfare defining rules for the conduct of submarine warfare were considered impractical and ambivalent, for example, with regard to the right to protect one’s own boat. Therefore, they never acquired the power of legitimacy among submarine personnel and in navies. Unrestricted submarine warfare consequently led to the emergence of a specific understanding of appropriateness located in a contested area between legality and legitimacy. Regarding legality, the norms were comparatively explicit and clear in prohibiting unrestricted attacks on merchant vessels. Nevertheless, the extensive sinking of vessels loosely linked to the enemy established a strong practical norm defining the confines of the ‘appropriate’ use of force. In the case of submarine warfare, practices even had an impact on the implementation of international law, as the Dönitz trial shows. This means that the micro level fed back into the macro level of law and policymaking. The implication is that the relationship between deliberation and implementation, between the macro and micro levels, is complex and multilayered. Conventionally, we would expect established legal norms to shape practices and to be implemented in a more or less linear way. While legal norms certainly define the range of prohibited actions, depending on their degree of specificity, practice often deviates from this range, or may even contradict or conflict with it. If these practices are repeated and patterned, an understanding of hybrid appropriateness emerges, which might or might not be in line with what has been established as legal or legitimate. In cases where specific norms exist, this can affect how legal norms are understood and respected. In cases where there are no specific norms in international law, existing practices do not necessarily mean that law will follow practice.
The use of nuclear weapons at a time when no specific regulations existed did not establish a precedent for what would be appropriate in future conflicts, nor did the use of chemical weapons prevent the emergence of strict rules (although the complete annihilation of cities like Hiroshima and Nagasaki would today in any case conflict with norms of IHL such as distinction and proportionality). However, there is a close interplay of deliberation and implementation when it comes to legal norms. Attempts to define what is appropriate in using weapons are typically more complex than is often acknowledged. If we leave behind the rather narrow perspective on legal norms, the question arises of what other norms might influence the way actors behave. There is a widespread conflation of legal norms and ethical norms in the sense that what is legal is considered the ‘right’ thing to do. But this does not need to be the case, primarily because law only defines what is prohibited and typically does not give guidance on moral questions. Certainly, legality and ethics are often closely connected: the deliberate killing of civilians in war is neither legal nor would it be regarded as a legitimate or ‘moral’ practice. However, the above examples of redefining the appropriate use of force in a military context, whether unrestricted submarine warfare or the use of chemical and nuclear weapons, show that the boundaries of what is legitimate are being pushed and that other guiding principles can play a role here. Even normative considerations might be relevant. The use of nuclear weapons against Japanese cities, in full awareness of the devastating consequences for hundreds of thousands of civilians, was justified at the time by the necessity to save lives and end the war – the lives not just of US troops, but also of Japanese soldiers and civilians who might have been affected in future combat scenarios on Japanese soil. Decision-making is influenced by an array of reasons, and ethical-normative, strategic, or legal considerations can all be important.

Fifth, at what level are norms defined, and where do they emerge? This question foregrounds the role of individuals and small groups. Enacting norms at a macro level, and their formal codification in the case of legal norms, still leaves the implementation of norms up to individuals. Practices comprise acts by individuals, and the examples of weapons regulations also show that individuals define appropriateness and substantiate indeterminate law. If actions follow a pattern, or if small groups of decision makers define appropriate actions, this can have a norm-shaping or even a norm-making effect.
IR research has taken on this challenge of diversifying perspectives on norms in the last two decades, as we will see in more detail in the following chapters. However, the individual level of norm emergence, the different types of norms, and the complex interrelatedness of macro and micro levels have not been studied comprehensively. The lesson learned from the historical cases in this chapter is that there are limits to international law in terms of specification, relevance, consideration, and implementation. It might lag behind technological developments or established practices; it might be too vague or simply ignored; or it might be non-existent because agreement has not been found.



Sixth, can weapons systems be banned preventively? This question is particularly important for current debates on AWS. The case of BLW shows that the international community can act in a relatively efficient and goal-oriented manner within the established frameworks of international cooperation, which arguably facilitate the emergence of comprehensive legal treaties far more than historical conditions did. The main characteristic of the BLW case is that a specific practice, the deliberate blinding of humans as a means of warfare, was prohibited. Laser weapons were not banned completely; there are devices that can be used legally, and cases of collateral damage or incidental blinding were not defined as violations of the legal norm. The advantage for those advocating a ban on BLW was that these weapons were comparatively easy to define as serving only a specific purpose and enabling a practice that was considered a normative violation. This is similar to chemical weapons, although the range of chemical weapons is greater. In the case of submarine warfare, it was the practice that was contentious, not the weapon. Initial norms defined prohibited ways of using submarines in warfare, which were, however, not sufficiently respected. The Treaty on the Prohibition of Nuclear Weapons refers to ‘catastrophic humanitarian consequences’ and ‘unacceptable suffering of and harm caused to the victims’, which the use of nuclear weapons had and would have caused. In the same vein, chemical weapons and blinding laser weapons have been discussed as causing unnecessary suffering and as being uncivilised and brutal. This implies that the way weapons are or could be used largely shapes whether a weapons system is subject to regulation. If we put this into the context of AWS, it is apparent that these systems are much more complex in terms of their shape, features, and methods of potential use.
A remote-controlled weapons system, such as a drone, can be used within the margins of conventional warfare with regard to ammunition and aspects such as distinction and discrimination. As technological development advances, the contested feature is technological autonomy and its implications for human decision-making, while the actual means used to kill are conventional. This implies that neither the practice of using AWS nor the weapons systems themselves are as clearly defined as in the cases discussed in this chapter (see also Rosert and Sauer 2021). Overall, the developments described in this chapter show the struggle to establish what Garcia (2015, 55) calls ‘humanitarian security regimes’, which are defined as ‘regimes driven by altruistic imperatives aiming to prohibit and restrict behaviour, impede lethal technology, or ban categories of weapons through disarmament treaties, and centrally embracing humanitarian perspectives that seek to prevent civilian casualties, precluding harmful behaviour, and protecting and guaranteeing the rights of victims and survivors of armed violence’. Garcia argues that three conditions have an impact on whether humanitarian security regimes emerge: marginalisation and delegitimisation, multilevel agency in terms of state and non-state activities, and reputational concerns. Indeed, the historical cases show that these conditions can contribute to the emergence of legal and normative regimes that make specific weapons illegal and illegitimate. However, Garcia (2015, 73) also posits that ‘[f]or the other emerging humanitarian regimes, namely those relating to nuclear weapons, to depleted uranium and other toxic remnants of war, and to the use of explosive weapons in populated areas, incendiary weapons and killer robots, there is either no recent use or no use at all. This will make the work of activists and champion states substantially harder’. This underlines the difficulties in discussing or even promoting the prohibition of AWS, weapons systems whose definition, and therefore existence, is contested. Nevertheless, it should be considered that the use or non-use of weapons is not exclusively influenced by formal considerations, as important as the treaties described in this chapter are. The perspective on norms we take throughout this book can shed more light on the question of what the inclusion of autonomy in weapons systems means for international standards influencing their development and deployment.


International Law, Norms, and Order

At this point in the book, it is important to explore in more detail how we can understand the constitution and fabric of international order. Our discussion in chapter 2 showed that the various attempts by the international community to regulate the use of force established an ever denser net of international law, constituting what is widely considered an international order. As we intend to show, norms emerging in practices can affect the international order governing the use of force, which is often considered to be the comparatively stable product of detailed, slow-paced deliberations in formalised settings. The previous chapter highlighted how new, emerging weapons technologies played a role in the conduct of warfare and the extent to which the international community has attempted to regulate or even prohibit their usage. In this regard, chapter 2 also discussed the tensions between adopted legal regulations guiding the use of force and how weapons were used in practice. It illustrated how practices that diverge from explicit but somewhat ambiguous legal rules established emerging understandings of appropriateness that were often reciprocal or relational. In other words, using a weapon in a specific, even clearly illegal way meant that others (often adversaries) followed suit to some extent. Reconsidering the way international order emerges and changes is crucial for highlighting the importance of current developments regarding AWS. Our initial argument here is that international order is more than international law, and can be shaped and changed outside of deliberative settings. Further, increasing technological autonomy via decreasing human agency raises a set of
novel questions regarding the emergence of an international security order defining the appropriate use of force. Moreover, we argue that it is essential to follow a broader understanding of what constitutes international order, moving beyond the focus on order as a legal structure. In this regard, we emphasise the importance of norms building a normative global order that often relates to, but is not identical to, international law. In this sense, chapter 3 connects our thoughts about how norms emerge in practices (chapter 4) to questions of international order and its relationship to international law. In this, it continues to provide the conceptual backdrop for our empirical analysis of how AWS may change use-of-force norms in practice (chapter 5).

Research in IR has invested much effort into proposing different models of international order from a top-down macro perspective, including how order changes. What constitutes international order is often considered only narrowly, by associating it rather vaguely with a ‘rule-based system’ (Nye 2017, 11) – a structure stabilised by institutions and rules and amended by formal, institutionalised, and deliberative acts (see Stokes 2018, 138). Since the 2010s, IR research has chiefly focused on the status and contestation of the liberal international order that arguably emerged as the dominant post-1945 model, upheld by key powers such as the United States and by ‘Western’ multilateral institutions such as the International Monetary Fund (IMF), the World Trade Organization (WTO), and the World Bank. Notably, the debate on the liberal international order often lacks a thorough conceptualisation of what order is.
While scholars discussing liberal world order, such as Duncombe and Dunne, highlight, for example, Crawford’s concept of ‘institutionalized ideas [that] become embedded through practice, and in so doing they affect “the possibility and legitimacy of later ideas”’ (Duncombe and Dunne 2018, 26; emphasis in original), an understanding of a less stringent type of order than legal structures remains underdeveloped in IR. In this sense, this debate is also over-reliant on positivist understandings of international law as the primary, relatively stable source of international norms as the fabric of order. At the same time, the focus on practices suggested in the quote above has not yet been taken up comprehensively. Given our conceptualisation of norms, understood as standards of appropriateness, as unfolding their meaning primarily in practice, we also consider such practices as
building blocks of international order and seek to contribute to the debate in this regard. With regard to AWS, Garcia suggests that there are three domains of global governance relevant for peace and security. The first domain consists of the prohibition of the use of force based on the legal norm codified in the United Nations Charter Article 2.4. The second domain is about upholding peace and security efforts on the basis of human rights law and international humanitarian law (IHL). The third domain concerns cooperation in cultural, economic, social, and environmental matters and is based on UN Charter Article 1.3, which calls on us ‘To achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character, and in promoting and encouraging respect for human rights and for fundamental freedoms’ (Garcia 2018, 338). This outlines the importance of international legal norms for establishing what is widely considered an international order that could provide ‘preventive security governance frameworks’ (Garcia 2018, 339) prohibiting lethal AI. Indeed, practices and norms pertaining to the use of force have been particularly important for order-making: ‘the management of violence is a key function of political orders as it delimits types of violence that violate or reinforce the principles of an order, that is illegitimate and legitimate forms of violence’ (Senn and Troy 2017, 176; see also Hurd 2015). Studying how these ‘principles’ change can therefore tell us something about the changing patterns of international order. For this purpose, we propose a flexible and diversified understanding of order in differentiating between a legal-regulative order, chiefly constituted by international law, and a normative order, chiefly constituted by international norms. This builds on the indeterminate and permissive nature of international law as identified by critical legal scholars. 
International law provides an indeterminate, baseline structure leaving room for interpretations in varied ways that ‘depend on what one regards as politically right, or just’ (Koskenniemi 2011, 61). We argue that this indeterminacy of international law is constitutive of a realm of normative order where interpretations are negotiated and formed via verbal and non-verbal micro-practices. Examining the dynamic relationship between these orders can tell us something about how orders change and the role that norms play therein.

International Law, Norms, and Order


In particular, conceptualising this relationship between legal-regulative and normative orders allows us to understand current dynamics in how states are forcefully targeting terrorist suspects, mainly via drone warfare. Observing these dynamics points to important precedents set for more autonomous technologies that cannot be captured in the language of international law. The use of drones and other security technologies has led states to propose novel interpretations of the law of self-defence and engage in new practices, such as those concerning the ‘unwilling or unable’ formula (Bode 2017a). However, the significance of these interpretations and practices cannot be captured using purely legal terminology or logic, such as customary international law, as they remain far from ‘a general practice accepted as law’ (ICRC 2010) or do not manifest in a consistently stated belief in the applicability of a particular rule. A rigid, positivist understanding of international law is therefore not helpful in making sense of these evolving standards of appropriateness – something that distinguishing between the realms of legal-regulative and normative orders instead promises to do. A growing literature highlights that drones operate in contested areas of international law governing the use of force, particularly concerning IHL and the law of armed conflict (see, for example, Boyle 2015; Kaag and Kreps 2014; Brunstetter and Jimenez-Bacardi 2015; Carvin 2015). We take a different route in arguing that it matters whether there is high or low congruence between the legal-regulative and the normative orders. As noted above, the legal-regulative order encompasses institutionalised standards of international law, while the normative order encompasses the full range of accepted interpretations and readings of such standards as well as other shared understandings of appropriateness that need not be attached to international law. 
If there is high congruence between the two orders, this creates a certain stability of expectations for state behaviour, demonstrating the constraining power of international law via the legal-regulative order. We can see how this stability of expectations in the UN Charter era, centring legally on the general prohibition on the use of force, was manifested in a significant reduction in the number of inter-state wars after 1945 (Human Security Centre 2005). This congruence is still imperfect in the sense that international law on the use of force has not been observed all the time. The extent to which use-of-force standards


Autonomous Weapons Systems and International Norms

are observed has been geopolitically mediated, and the provisions surrounding the use of force have generally been subjected to greater contestation than aspects of international law that, for example, regulate routine international practices such as international trade or aviation. At the same time, congruence between international legal stipulations and normative interpretations has been high overall, thereby reducing uncertainty when it comes to state behaviour (Falk 2014, 324). As states have agreed upon the international legal framework voluntarily, and if they continue to share a set of normative interpretations, this congruence offers a reliable set of rules for states to comply with. This is based on both states’ shared sense of being bound by these rules and the amount of certainty in expectations they thus provide. Congruence between the legal-regulative and the normative orders has therefore provided common standards regulating the use of force – above all emphasising that force should only ever be used as a last resort. However, international law not only restrains state practices on the use of force but also adds to them (Hurd 2015, 63). As Ian Hurd argues, by singling out individual and collective self-defence as the only legitimate unilateral recourse to force in light of the general prohibition stipulated in Article 2(4), ‘the Charter encourages states to go to war under the banner of self-defence’ (Hurd 2015, 65). The UN Charter’s Article 51 on self-defence therefore presents states with a legal language in which to state their arguments, thereby rendering support to their use of force as necessary, legal, and legitimate (Hurd 2015, 65). These dynamics also trigger contested state justifications for the use of force, which can lead to a growing incongruence between legal-regulative and normative orders. 
Verbal and non-verbal state practices surrounding drone usage have significantly increased the number of contested areas in international law on the use of force. Such contested areas point to mismatches of expectations surrounding international legal standards and their norm-based interpretation. States have introduced many competing readings of previously shared jus ad bellum standards. In line with Hurd’s arguments, these competing readings chiefly refer to self-defence by broadening the scope of, for example, attribution, imminence, and necessity. There is now considerably less agreement among



states about the precise legal content of core standards than 20 years ago. These contested areas further lower thresholds for using force and have made its use more commonplace. The lack of clarity or consensus originating in the mismatch between legal-regulative and normative orders results in a more permissive environment for using force: justifications for its use can now more ‘easily’ be found within these increasingly elastic areas of an international law that is already permissive to the application of violence – a permissiveness that ‘has expanded since 1945 under the influence of state practice’ (Hurd 2016, 1). Widening incongruence between the legal-regulative and normative orders therefore has significant effects on use-of-force standards. We develop our argument in two steps. First, we consider the concept of normative order in terms of how norms and order are thought to emerge, which is closely related to the concepts of normative structure in IR and the rule of law. We argue that normative order is linked to international law but is also shaped outside of law and goes beyond it. We conclude that there is both a legal-regulative order and a normative order structuring the use of force in IR, which interrelate and can have more or less alignment or congruence. Second, by examining state practices in relation to drones and the contested areas, we illustrate what happens when the legal-regulative and normative orders misalign. Cognisant of the dual quality of law as both restraining and permissive, we hold that the restraining aspect of law is stronger when states share significant understandings of use-of-force standards, chiefly those on self-defence. This corresponds to an alignment of the legal-regulative and normative orders. Conversely, the permissive quality of law comes to the fore when states diverge on significant understandings of self-defence standards. This corresponds to a misalignment of the legal-regulative and normative orders. 
We cover aspects of jus ad bellum, laws relevant to the resort to armed force, chiefly those relevant to self-defence, such as attribution, imminence, and targeted killing. We connect this discussion to the overview of the political debate on AWS (chapter 1) and to the previous regulation of weapons systems (chapter 2) in highlighting the potential adverse consequences of this misalignment of orders.



approaching order in ir: normative order and international law

IR are, in many regards, densely regulated. Comparable with domestic law, the existence of international law structures actions to some extent and provides a certain degree of order, which has been incorporated into IR scholarship on account of these functions (Finnemore and Toope 2001; Wiener 2008). While arguing for the existence of this structure is analytically reasonable, its implications have rarely been comprehensively or clearly explored theoretically or empirically. Yet, a critical segment of IR research has taken this up, departing from a purely legal perspective on IR by considering new perspectives on norms. With the ascendancy of constructivism, studying norms has become a vibrant area of scholarly interest in IR, as we will see throughout chapter 4 (see Checkel 1998). Paying attention to norms broadens research beyond the clearly defined yet limiting dimension of a positivist version of legality by adding considerations such as legitimacy or appropriateness to understanding human actions (Hurrell 2006; Hurd 1999; March and Olsen 1989). At the same time, perspectives on the origin or emergence of norms have long remained attached to an assumed stable structure of international law, which is therefore often conflated with a normative structure. This thinking coincides with a limited understanding of a fixed and defined order, which is likewise derived from central terms of international law regulating international relations. Research has invested much effort in conceptualising different models of international order from a top-down, macro perspective (Ikenberry 2011) that is usually thought of as a web of regulatory regimes (see Duncombe and Dunne 2018, 33). In this sense, orders are perceived as ‘relatively stable configurations of power among sovereign states’ (Lapid 2001, 8), in which law constitutes and is constituted by power. 
Other dimensions mentioned as crucial for the constitution of order apart from power are intention, authority, and legitimacy (see Duncombe and Dunne 2018, 26), which also underline the role of authoritative actors that can exert these dimensions to influence and change order. Yet, addressing the questions of how order emerges, functions, and changes can contest its stability and challenge the understandings of order inevitably linked to power in an international system. Studies have noted the importance of ‘practices of liberal ordering



– the patterns of activities, institutions, and performances that sustain world order’ (Dunne and Flockhart 2013, 8; emphasis in original). Still, perspectives on such ordering practices contributing to the important debate about the constitution and contestation of world order – especially of liberal international order – often highlight the role of powerful states such as the United States and China in a great-power-centric perspective. This goes back to scholars such as Hedley Bull, who argued in the 1970s that ‘order is a particular kind of social pattern of human activity’ particularly maintained by the institutions of ‘balance of power, international law, diplomacy, war and great powers’ (Flockhart 2016a, 12). While this definition also mirrors the great-power-centric view of the theoretical debates between the then-dominant neo-realist school and its contestants, primarily the emerging neo-liberalism in IR, contemporary discussions of world order remain attached to this great-power narrative. It is also noteworthy that the debate lacks a clear and universally shared definition of what an order – or an international order more specifically – is. In fact, contributions to this debate often seem to make use of a perceived implicit understanding of order without further discussing this important aspect. In a more comprehensive consideration of the question of what constitutes an international order, Flockhart (2016a, 14) argues that ‘[i]n its most basic form an international society – or an international order – may be understood as a cluster (or club) of sovereign states or nations with shared values, norms and interest, expressed through a number of institutions both primary ones that are informal and evolved (rather than designed) and performed through fundamental and durable shared practices and secondary ones that are formal and designed and which perform specific administrative and regulative functions’. 
Taking this conceptualisation as a starting point and combining it with a critical understanding of international law allows us to move away from considering international order based on positivist legal terms. At the same time, in considering a normative international order, we also take up the conventional argument that ‘order produced through international society is associated with the participating states having a sense of common interest and they are following established ordering practices associated with commonly held values’ (Flockhart 2016a, 17). In our view, this is an important point that is, however, partly under-conceptualised,



as what constitute ordering practices and commonly held values (or ‘norms’ in the language we use throughout this book) and how these are interrelated are not considered in any detail. Critical legal scholarship considers international law as ‘an expression of politics’ that involves choice rather than simply ‘applying a pre-existing principle’ (Koskenniemi 2011, v). International law provides a baseline structure, but this structure is substantially indeterminate, therefore ‘deferring substantial resolution elsewhere’, leaving room for interpretations in varied ways that ‘depend on what one regards as politically right, or just’ (Koskenniemi 2011, 61). We argue that this indeterminacy of international law is constitutive of a realm of normative order where interpretations are negotiated and formed via verbal and non-verbal practices. Verbal practices are the spoken (in some form) outcome of reflection and consideration, and typically are concerned not only with interpreting standards of international law but also potentially with formulating novel normative understandings that are only loosely (if at all) tied to (accepted interpretations of) international law. Non-verbal practices, in contrast, are those in which verbalisation – the adoption of formal, text-based decisions – is not central; they are conducted by a plethora of actors on different levels. In our case, these are the practices undertaken in the process of developing, testing, and deploying autonomous technologies. Non-verbal practices are not necessarily influenced by interpretations of existing law (norms) but can still contribute to constituting normative order. This builds on the important point that the emergence and promotion of order does not inevitably depend on centralised, planned initiatives or the practices of key actors in international deliberations. 
This analytical perspective challenges a traditionally dominant neo-realist perspective on a ‘hegemon [that has] the capacity to shape world order in ways that confer upon it advantages’ (Stokes 2018, 141). While we acknowledge that strategic and intentional attempts at ordering (also in the sense of spreading understandings of law) by powerful states play a role in international relations, the importance of other levels and actors often goes unconsidered. As Duncombe and Dunne (2018, 32) argue with regard to who has agency in the liberal world order, ‘[w]hat if there were no driver, no locomotive, no script for running the world according to liberal principles and goals? Integration is the term that best describes



the characteristics of liberal ordering that are non-intentional – the ordering that happens because of convergent institutional procedures, individuals playing roles, the spread of universal standards such as the scientific method, and the forging of a common sense that is somehow above politics’. Moreover, should we not also consider that intentional, non-verbal practices or the intentional absence of ordering may also lead to the emergence of a normative order that is unintended or unplanned? In other words, ordering can occur as an unintentional product of practices, but it can in a way also emerge from a (deliberate) lack of clear regulation. Conceptually, order can therefore be thought of as the outcome of regularity (the realm of normative order in our reading) and rule (the realm of institutionalised international law) (Wrong 1994, 41). Here, regularity stands for the behavioural dimension and rule represents the institutional dimension (Senn and Troy 2017, 181). This means that regularity is particularly interesting as the outcome of non-verbal practices that create shared patterns of doing things that are, however, fluid and flexible. In this dimension, there is considerable wiggle room for agency to shape and form understandings of what is appropriate action. The institutional dimension mainly represents verbal practices as formal instances of norm-setting and is more concerned with attempts to fix the meaning of legal rules and norms. There is a conceptual interrelatedness between regularity and rule – while it is widely accepted that rules in the form of international law do have an impact on behaviour and can create regularity, we seek to also highlight the effect of non-legal regularity on rule. In this sense, rules are defined by their proscriptive quality, which prohibits certain actions but does not define which lawful behaviour is appropriate or right. 
We prefer the term ‘norms’, also in legal contexts, because it is broader and underlines qualities of co-constitution (in theoretical terms, norms are constituted by actors and constitute them). Furthermore, a crucial point to study is whether and how there is a mismatch between legal rule and regular patterns of behaviour perceived as appropriate. However, it is also important to note that regularity in the form of norms can exist and have an impact as established practices without a clear link to legal rules. In other words, the lack of specificity, or ambivalence, of law can create conditions where there is neither a match nor a mismatch between



regularity/norm and rule. The gap of meaning left open by international law can be filled by practices without further attempts to codify or legally contest them. The absence of a legal, deliberative order does not denote the absence of order in general. In an ideal perspective, degrees of order range from the absence of order and stability at one pole to complete, universal, and uncontested order at the other. While we argue that neither of these poles is analytically or empirically relevant beyond its existence as an ideal type, the interesting dimension is the grey area between absent order and complete order in which interactions of ordering between regularity/norm and rule take place. We argue that normative order is therefore much more than a stable, universally applicable order usually linked to international law. Generally, normative order contains all common and acceptable interpretations and justifications used in relation to enshrined/institutionalised standards of international law. It sets ‘the conditions of possibility for a given set of practices’ (Wight 2006, 23). As Reus-Smit (2011, 344) notes, ‘[t]he idea that legal practices are embedded within, and constituted by, layers of nested social understandings is a significant step toward overcoming the limitations of existing approaches to international legal obligations’. The relationship between a legal-regulative order and a normative order signifies that the ‘line separating acceptable from unacceptable behaviour is not clear or fixed’ (Hurd 2016, 10), thereby pointing to the permissive quality of international law on the use of force. Overall, the normative order provides for a range of common or acceptable interpretations of legal standards regulating the use of force. 
As noted by the UN-organised High-level Panel on Threats, Challenges and Change: ‘The maintenance of world peace and security depends importantly on there being a common global understanding and acceptance of when the application of force is both legal and legitimate’ (High-Level Panel on Threats, Challenges and Change 2004, 62). This triggers two observations: first, that there will be areas of disagreement about when the application of force is both legal and legitimate; and second, that this may, over time, lead to changes in the overall common global understanding. At any given time, there will therefore be areas of more or less contestation within that normative order – some interpretations (norms) may be accepted by a larger number of actors than others and these constellations may change over time – through



the interaction of verbal and non-verbal practices. Depending on how significant these areas of contestation are, there is a greater or smaller match/mismatch between the legal-regulative and normative orders. The distinction between international legal-regulative and normative orders also acknowledges that a global common understanding of international use-of-force standards is, by the very nature of international law, tenuous. The indeterminate nature of international law, however, does not mean that ‘anything goes’: different justifications will carry more or less weight depending on how many actors use and adhere to them (and arguably also who uses them), thereby creating cases of stronger or less stringent common understanding. The use-of-force justifications put forward by states typically connect to existing legal standards by way of presenting novel interpretations or looser forms of attachment to what is currently accepted as appropriate. These are then assessed by legal specialists, the dynamics of which are fundamentally political (Hurd 2016, 14) and determine over time whether there will be a change in what is considered appropriate (norms). Without such an evolving agreement, we will see the appearance of mismatches between the international legal-regulative and normative orders in the form of what we refer to as grey areas. These are interpretations or justifications for the use of force that are at least partly outside the realm of accepted, widely shared understandings of appropriateness. Whether they will move into the normative order remains to be seen. In making these arguments, we depart from Hurd, who argues that looking for a shared kernel of normative understandings is futile because there is no objective standard as to what is legal. In our view, that there is no objective standard of legality due to the indeterminacy of international law does not signify that there are no shared understandings. 
Indeed, we can see areas of international law where there are a significant number of shared understandings, while there are others where understandings have become increasingly patchy. To sum up, international law is essentially indeterminate, following critical, post-positivist legal scholarship. Within the framework of international law, substantive decisions always imply political choice. This indeterminacy also opens up room to account for the novel emergence of norms as standards of appropriateness in two ways: first, in (re)interpreting aspects of international law; and



Figure 3.1 Relationship between international law and normative order: international law embedded within the broader normative order.

second, in coming up with entirely new practices with no or only highly tenuous relationship to the existing law. We therefore argue that there is a difference between legal structure and the conceptualisation of norms informed by it. A normative order emanates from how aspects and principles of international law are interpreted, applied, and elaborated upon. We suggest that a positivist legal perspective on international relations cannot capture these dynamics, while the language of norms and normative order offers a useful, alternative conceptual approach. Following critical legal scholars, we conceive of international law as a system characterised by an ‘experience of fluidity and contestability’ (Koskenniemi 2011, v). This is where the international legal-regulative order represented by international law is embedded in international normative order (see figure 3.1). As we argue in the next section, the match between international legal-regulative and normative orders crystallising around the use of force has come under strain with the emergence of threats posed by non-state actors. Directly connected to the emergence of terrorist groups operating across national borders, states have sought to apply use-of-force standards in different, often more permissive, ways. In particular, this concerns the use of drones for the targeted killing of terrorist suspects. Here, states have explicitly and implicitly pushed novel interpretations or understandings of at least three key legal standards of international law governing the use of force (jus ad bellum): attribution, imminence, and targeted killing (state-sponsored



assassination). We examine these dynamics more closely in the next section. The discussion focuses on verbal practices to shed more light on the constitution of international legal order as an integral part of a broader normative international order. It seeks to underline how the indeterminate and permissive conditions of international law enable normative order to emerge beyond and outside of the legal context in the contested areas of interpretative struggles. In chapter 5, we will explore how non-verbal practices contribute to the constitution of international normative order.

mismatches between the legal-regulative and normative orders and the use of force

International law governing the use of force centres on the general prohibition on the use of force. In terms of jus ad bellum, this proscription of the pursuit of state interests by military means under the UN Charter was a legal (and normative) breakthrough. The general prohibition states that all UN member states ‘shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations’ (United Nations 1945, Paragraph 2(4)). That general prohibition and its two exceptions – self-defence (Article 51 of the UN Charter) and UN Security Council authorisation under Chapter VII (Article 42 of the UN Charter) – have become fundamental to the international rule of law governing the use of force. Jus in bello standards, deliberated at the international conferences of the late nineteenth and early twentieth centuries, have become institutionalised in a series of conventions named after their conference locations (the Hague and Geneva Conventions). The fundamental principles of this body of international humanitarian law – distinction, discrimination, proportionality, and avoiding unnecessary suffering – have already been discussed in chapter 1 and illustrated in chapter 2. As we showed in the latter, this last principle is also at the heart of additional conventions banning certain types of weapons characterised as indiscriminate or excessively harmful, such as biological and chemical weapons, as well as cluster munitions and land mines. The following sections will take a closer look at emerging, contested interpretations of two jus ad bellum standards in the context of



self-defence against non-state actors, especially with respect to counterterrorism: attribution and imminence. We also examine debates about the extent to which ‘targeted killing’ with drones has come to be seen as increasingly ‘appropriate’ by states. In similar ways, we also see increasing discussion triggering uncertainty around jus in bello standards, such as non-combatant immunity or, more generally, the scope of distinguishing between civilians and combatants (Gregory 2017; Kinsella 2011). Here, we re-connect to the discussion of regulating particular types of weapons in chapter 2 and to the examination of how understandings of what constitutes excessive harm and unnecessary suffering have changed in the past. Our discussion summarises arguments presented by international legal scholars and international jurists working with various methodological doctrine-based approaches: it features references to rulings made by the International Court of Justice (ICJ), as well as differing readings of treaties, assessments of evolving state practice, and opinio juris. Our summary simplifies these debates, which have literally filled volumes, especially after 9/11. The purpose of our summary is to demonstrate how significant areas of uncertainty have been introduced into the law governing the use of force after 9/11. This ensuing and growing lack of shared understandings manifests in mismatches between international law and the international normative order – the contested areas. With this, we are not saying that ‘where the law stood on the morning of 11 September 2001’ (Kammerhofer 2015, 629) was clear and undisputed. Indeed, the legal status of a state practice is flexible: ‘what was legal may become illegal, and vice versa’ (Hurd 2016, 11). This effect is produced by the quality of the law as indeterminate. But 9/11 started a trend of much more clearly and frequently voiced and radically opposing understandings of it. 
Indeed, the very fact that so many commentators portrayed the law as either changing radically or unaltered underlines the presence of a majority understanding at the pre-9/11 starting point.

Attribution

In an international society that remains structured around states as its main subjects, the non-state nature of terrorists creates substantial challenges. In particular, this relates to how victim states should be able to respond to terrorist attacks in self-defence when



these attacks are invariably planned and staged on the territory of host states. Trapp (2015, 680) summarises this challenge succinctly: ‘Using defensive force against the base of operations of NSAs [nonstate actors] within a foreign host state’s territory, even if that defensive force only targets the NSAs which have launched an attack, still amounts to a violation of the host state’s territorial integrity’. To address this, attacks by non-state actors have been associated with the legal mechanism of attribution, attaching their actions to a state. Attribution originates in the inter-state nature of the UN Charter’s self-defence provisions: ‘Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations’ (United Nations 1945, Paragraph 51). Yet, the threshold of involvement for attribution needed to justify forceful action (against a state) has long been a matter of debate. It featured heavily in negotiations on the UN Definition of Aggression in the 1970s, for example, which eventually accepted that ‘acts of aggression could be carried out by NSAs, but to require their attributability’ (Trapp 2015, 683). This was defined in narrow terms as ‘sending by or on behalf of’ and ‘substantial involvement therein’ (UN General Assembly 1974, Paragraph 3(g)), while phrases indicating more distant state involvement such as ‘assistance to’ (UN General Assembly 1973, 23) or ‘knowing acquiescence in’ (UN General Assembly 1973, 22) were rejected. Gray (2018, 207) refers to this version of the attribution standard as a ‘test generally accepted by states’ before 9/11. 
The common global understanding in the mid-1990s can be summarised as follows: 'according to the International Law Commission, an armed intervention into a state in order to attack terrorists cannot be regarded as self-defence when the State itself has not been guilty of an armed attack and has not directed or controlled the terrorists in question' (Alexandrov 1996, 183). This clearly echoes the phrases as they appeared in the 1974 UN Definition of Aggression. Arend (1993) summarised different attribution thresholds (see table 3.1), arguing that only state sponsorship and, to a lesser extent, state support are potentially recognised as acceptable forms of attribution justifying the use of force. In her summary of pre-9/11 practice, Gray (2018, 202) further notes that in all instances of using force in response to terrorist attacks, 'force was used against the state allegedly harbouring the terrorist organizations responsible', signalling a clear link to the attribution standard.

Autonomous Weapons Systems and International Norms

Table 3.1 Diverging attribution thresholds

1 State sponsorship: State contributes actively to planning and directing terrorist organisations
2 State support: State provides support to terrorist organisations, e.g. in the form of logistics, intelligence, funds, and weapons
3 State toleration: State knows of usage of territory by terrorist actors but fails to act against them
4 Without state sponsorship, support, or toleration

Up until the 1990s, interpretations and justifications of the attribution standard therefore seem to have been comparatively widely shared, while recognising that their precise legal assessment has always been a complex endeavour (Moir 2015, 721). Furthermore, what is important in the few instances where victim states (mainly Israel but also the United States) used force in response to terrorist attacks on the territory of alleged host states is that these actions and the justifications provided were either expressly condemned or not accepted by many states, including via UN Security Council resolutions (Gray 2018, 204).

Since the early 2000s, the international normative order has been influenced by substantial changes to legal justification practices surrounding the attribution standard. As the following discussion shows, this has led to more uncertainty, and frequently elasticity, when it comes to what has been deemed as normatively appropriate. At the centre of this development are two United Nations Security Council Resolutions (1368 and 1373) related to Afghanistan in the immediate aftermath of 9/11 and the reaction by the international community to the US-led Operation Enduring Freedom (UNSC 2001a, b). These triggered 'radically opposing versions of the significance of 9/11' as the 'operation against Afghanistan can be interpreted as a wide or as a narrow precedent in the development of the law on the use of force' (Gray 2018, 201). One expansionist version considers that both resolutions, in referring to Article 51, implicitly accept the use of force in self-defence as permissible against a non-state actor and 'without attribution to a state' (Moir 2015, 724). A second, more restrictive
version argues that the resolutions lowered the threshold for the attribution standard to state support and state toleration as defined in table 3.1 (Schmitt 2004, 88). Security Council resolution 1368 explicitly 'stresses that those responsible for aiding, supporting or harbouring the perpetrators, organizers and sponsors of these acts will be held accountable' (UNSC 2001a, Paragraph 3; our emphasis). As Moir (2015, 728) notes, this lower attribution threshold featured in Israeli justifications for uses of force in Lebanon and Syria in the early 2000s, but has not become widely shared. However, even such a lower threshold of the attribution standard still affirmed that the use of force against terrorist suspects is only legal if there is some sort of link or complicity between the host state and the terrorist suspects (Bode 2017a). A similar line of argument vis-à-vis attribution was also put forward by the United States in the post-9/11 letter from its Permanent Representative to the UN Security Council: 'my Government has obtained clear and compelling information that the Al-Qaeda organization, which is supported by the Taliban regime in Afghanistan, had a central role in the attacks' (UNSC 2001c; our emphasis). At the same time, decisions pronounced by judges of the ICJ even after 9/11 have upheld high thresholds of the attribution mechanism, but the three pertinent cases² brought before it were only concerned with the use of force 'against the state from whose territory NSAs operate' (Trapp 2015, 689). Still, in the case of Nicaragua v. United States, the ICJ made it clear that state support as defined in table 3.1 did not meet attribution requirements, thereby upholding 'an exacting threshold for attribution' (Moir 2015, 722). But there was dissent among several judges and the ICJ effectively refused to 'address the circumstances under which a state has a right to use force in self-defence against (and only against) NSAs' (Moir 2015, 686).
This is a significant limitation in terms of assessing the emerging range of acceptable limits of the attribution standard after 9/11. Surveys of state practice demonstrate movement but also significant uncertainty on whether the use of defensive force against non-state actors is justifiable if it is unattributable to the host state. Trapp (2015, 689–94) identifies three groups of state responses to prominent cases of such uses of force: those where a majority of states came out clearly in favour of such a right, those where a majority were opposed, and a significant number of cases that split
the international community or did not trigger reactions. This leads her to argue that 'state practice suggests support for the legitimacy of such a right [to respond forcefully to unattributable attacks] in principle' (Trapp 2015, 694), but that its contours are still being worked out in practice. We can see this play out in various use-of-force practices states have employed as part of counterterrorism efforts. Forceful responses to nonattributable attacks by the Islamic State (ISIS) have triggered various interpretations and justifications 'without any possibility to establish a common understanding of what international law could mean' (Corten 2017, 17). The fact that such interpretations are far from uniform signals an increasing mismatch between the legal-regulative and normative orders. A particular practice to consider is the 'unable or unwilling' formula, which implies a further departure from even expanded understandings. This formula has been chiefly employed by the United States, which argues that the use of force is permissible if the host state is 'unwilling or unable to take measures to mitigate the threat posed by domestic non-state actors' (G.D. Williams 2012, 630). Variations of the 'unwilling or unable' formula have been invoked as the legal basis for airstrikes against ISIS in Syria by the United States, Turkey, and Australia (Lanovoy 2017, 572). Legal scholars argue that this formula is based on the necessity principle for the use of defensive force to be found in customary international law (Trapp 2015, 695; Deeks 2012, 495; G.D. Williams 2012, 630; Dinstein 2001, 275). Here, the argument goes that the use of defensive force becomes necessary against a state that is either complicit in non-state actors using its territory for their operations ('unwilling') or for some reason unable to prevent such usage. But the 'unwilling or unable' phrase is by no means consistently used (Gray 2018, 237).
More fundamentally, it points to an inherently speculative and deeply subjective mode of assessment, as it has typically been made by the intervening state (Deeks 2012; Ahmed 2013; Bode 2017a). Moreover, it is clear that the ‘unwilling or unable’ formula will only ever be used against some states, adding to its problematic nature and positioning in a contested area: ‘This is a one-sided doctrine in that it is impossible to envisage it ever being invoked against the United States, or even against European states that have shown themselves to be unable to act against terrorists operating from their territory’ (Gray 2018, 245).

International Law, Norms, and Order


Moir (2015, 736) argues that 'most claims of self-defence against terrorist targets have asserted the right only against terrorist targets', rather than against other locations on host state territory. This underlines the ambivalence of the formula as an actual guideline of using force. Following similar arguments in relation to post-9/11 practice, many state actors have characterised the threat posed by ISIS, and therefore also the response by states, as exceptional rather than as an indication of widening self-defence standards. French Legal Adviser François Alabrune, for example, cautiously justified French military action against ISIS in Syria after the 2015 Paris attacks as a 'special case' (Gray 2018, 238). UN Security Council debates, in particular the one leading up to Resolution 2249, which condemned ISIS attacks in the autumn of 2015, illuminate this aspect further. Speaking directly to the distinction we make between a realm of international law and a realm of international normative order, Russia affirms that 'the French resolution [2249] is a political appeal, rather than a change to the legal principles underlying the fight against terrorism' (UNSC 2015, 5). Yet, it is precisely within this realm of what is politically accepted as appropriate interpretations of the law of self-defence that we see growing uncertainty. Gray (2018, 242) provides a succinct summary of whether there is now a right to use force in response to a nonattributable attack conducted by terrorist actors: 'Before 9/11 there was very little support for such a right; the legal significance of 9/11 remains unclear; the post 9/11 incidents are not conclusive'.

Imminence

A second standard that has witnessed considerable movement is imminence in the context of self-defence. The UN Charter's Article 51 specifically allows states to use force in self-defence 'after an armed attack has occurred' (United Nations 1945).
Since the 1980s, states have begun to support pre-emptive self-defence in case of an imminent, that is, temporally proximate, armed attack, although state practice on this was far from clear or uniform (Ruys 2010, 324-6; Lubell 2015, 701). Understandings remain attached to the so-called Caroline formula that stipulates a restricted version of pre-emptive self-defence when ‘the necessity of self-defence was instant, overwhelming, leaving no choice of means, and no moment of deliberation’ (Webster
1841). There are arguments that these Caroline requirements have become 'too restrictive' (Deeks 2015, 666) and bear little relation (even historically) to how states have used or considered using force (M. Doyle 2008, 15). But 'a temporal description, pointing to a specific impending attack' (Lubell 2015, 699) remains the traditional descriptor of imminence. This temporal understanding of imminence has, along with necessity and proportionality, become a requirement of justifying the defensive use of force. Some states, chiefly the United States and Israel, have attempted to push the meaning of pre-emptive self-defence even further towards including more temporally distant emerging threats, especially in connection to weapons of mass destruction and in response to threats posed by non-state actors (Warren and Bode 2014, 2015; Peter 2011). After 9/11, the Bush administration thus blurred the lines between pre-emption (responding to imminent, in the sense of immediate, threats) and prevention (long-term, latent threats) in key documents such as the National Security Strategy 2002, arguably 'as a means to make its preventive strategy more acceptable to the international community' (Warren and Bode 2015, 181). But these remained singular understandings in the mid-2000s: two international reports indicative of a shared global understanding published after 9/11, the Report of the High-Level Panel on Threats, Challenges and Change, entitled 'A More Secure World: Our Shared Responsibility', and the UN Secretary-General's 'In Larger Freedom', affirmed the continued relevance of a limited, temporal reading of imminence (Ruys 2010). In fact, the Bush administration's strategy to blur distinctions between pre-emption and prevention crystallised 'a consensus surrounding the legality of self-defence against imminent threats' (Schmidt and Trenta 2018, 209).
In Larger Freedom affirms this and points to collective action by the UN Security Council as opposed to unilateral action in the case of prevention:

Imminent threats are fully covered by Article 51, which safeguards the inherent right of sovereign states to defend themselves against armed attack. Lawyers have long recognised that this covers an imminent attack as well as one that has already happened. Where threats are not imminent but latent, the UN Charter gives full authority to the Security Council to use military force, including preventively, to preserve international peace and security. (UN Secretary-General 2005, 33)



But understandings of imminence continue to diverge. In the context of using military force against terrorist suspects, under the Obama administration the United States largely separated imminence from its heretofore common temporal meaning (Peter 2011; Warren and Bode 2015; Gray 2018, 236). This connects to endeavours of reinterpreting imminence in response to the changing ‘capabilities and objectives of today’s adversaries’ (Bush 2002, 15) started by the Bush administration. What this refers to, in the context of 9/11, is the nature of terrorist attacks characterised by ‘unpredictability, stealth, and concealment’ (Deeks 2015, 672). As not only the United States but also the United Kingdom have argued, this requires a change in the understanding of imminence. In 2017, the UK Attorney General, Jeremy Wright, highlighted the law’s capacity to adapt to ‘modern developments and new realities’ and quoted the understanding of imminence put forward in the so-called Bethlehem principles: ‘[t]he absence of specific evidence of where an attack will take place or of the precise nature of an attack does not preclude a conclusion that an armed attack is imminent’ (Wright 2017). However, critical voices do not find these versions acceptable because they leave too much up to the discretion of individual states and therefore significantly lower use-of-force thresholds, much like in the application of the ‘unwilling or unable’ formula: in their efforts to ‘prevent vague and non-specific threat[s] … it challenges not so much the interpretation of imminence, but in effect calls into question the very existence of the imminence requirement’ (Lubell 2015, 707). A further understanding attached to what constitutes an imminent threat has seen it conflated with a group identity of terrorist suspects. In other words, all ‘members’ of terrorist groups count as imminent threats because the modus operandi of terrorist groups is to constantly plan attacks (Koh 2010). 
As the United States has argued, forcible self-defence actions against terrorist suspects therefore do not require 'clear evidence that a specific attack on Americans and interests will take place in the immediate future' (US Department of Justice 2013). This reading completely delinks imminence (or necessity, another vital principle regulating the use of force) from a case-by-case assessment (Brooks 2013). Even a case-by-case assessment could not, in any case, be openly contested due to secrecy and the lack of transparency surrounding current targeting practice. Following these arguments, one of the key prudential
principles governing the usage of military force, imminence, is always already fulfilled when it comes to terrorists (Warren and Bode 2015). US doctrine therefore 'offers an extremely wide discretion to the state using force' (Gray 2018, 236) as 'the notion of imminence is diluted in the all-encompassing aura of threat' (Brunstetter 2012). Of course, practices connected to a single state, even if that state is the United States, do not lead to a mismatch between the rule of international law and the regularity of normative order surrounding it. In fact, similar to the ongoing discussions around attribution, there is uncertainty as to whether and which states support a US-style, broad understanding of imminence. Some conclude that both US allies, chiefly Israel, the United Kingdom, and Australia, and strategic competitors, such as India, China, and Russia, have at least accepted a widening of the imminence standard (Schmidt and Trenta 2018; Fisk and Ramos 2014). Others, by contrast, argue that the new, expanded understanding of imminence has failed to gain majority support among the society of states, notably because it 'risk[s] ushering in a new age of widespread unwarranted force on the pretext of self-defence' (Lubell 2015, 707). Overall, these practices have triggered significant debate, including among international legal jurists.

Targeted killing

Self-defence has also figured prominently in US justifications of its targeted killing of suspected Al Qaeda terrorists and affiliates and adherents. The overall practice has been subject to considerable critique, as many commentators 'do not accept the existence of an ongoing armed conflict outside Afghanistan against a diverse range of terrorist groups, some of which took no part in the 9/11 attacks' (Gray 2018, 235).
In fact, contributions about the extent to which targeted killings, in particular via drone strikes, have altered the legal regime governing the use of force are plentiful (see, for example, Melzer 2008; Finkelstein, Ohlin, and Altman 2012; McDonald 2017; O’Connell 2010). Rather than going into the many important intricacies of this debate, our discussion focuses on how targeted killing as a practice has been evaluated by states deploying armed drones and whether it has begun to appear more ‘acceptable’. This would signal the evolution
of previous understandings of political assassination and their legality, also implied by the morally distancing language of targeted killing. There is much discussion about the terminological distinction between ‘assassination’ and ‘targeted killing’ as conceptually significant or merely mediated by scholars’ judgements on the legality or morality of such practices (Carvin 2012, 543; Senn and Troy 2017, 186). As Hurd (2017, 307) highlighted, they point to ‘a practice with a long history and some relatively stable expectations about its legitimate use’. We do not engage with this debate in detail but, instead, start our argument from the fact that they are conceptually related and that the parameters of that relationship are undergoing change, with significant consequences for the relationship between the realms of international law and international norms. According to a report by Philip Alston, former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, ‘targeted killing’ as a term was first used by Israel in referring to its policy in the Occupied Palestinian Territories (UN General Assembly 2010b, 4). Melzer (2008, 5) defines targeted killing as ‘the use of lethal force attributable to a subject of international law [for our purpose a state] with the intent, premeditation and deliberation to kill individually selected persons who are not in the physical custody of those targeting them’. We could add that, especially in its US iteration, not only ‘individually selected persons’ but also groups of people have been subject to the practice, typically via ‘signature strikes’, based on patterns of behaviour associated with terrorist suspects (Gusterson 2016, 93). Up until recently, targeted killing had been publicly condemned by most states in the international community – including by those who are now its chief performers (J.I. Walsh 2018, 144). 
But some of these same states had still resorted to targeted killing in the form of state-sponsored assassinations, such as CIA conspiracies to assassinate foreign leaders in the 1970s and US air raids intended to kill Muammar Qadhafi in the 1980s (Melzer 2008, 37). What makes these historical practices significantly different from today’s targeted killing is that they were not publicly acknowledged: US officials involved in planning the raids on Libya, for example, refused to admit that their primary purpose was Qadhafi’s assassination, ‘sometimes with considerable indignation’ (Ward 2000, 115). These attitudes are changing: ‘targeted killing is in the process of escaping the shadowy realm of half-legality and non-accountability,
and of gradually gaining legitimacy as a method of counterterrorism and "surgical" warfare' (Melzer 2008, 9). The United States, Israel, Turkey, the United Kingdom, Russia, the United Arab Emirates, and Saudi Arabia have publicly or implicitly acknowledged a policy of targeted killing of terrorist suspects as part of their counterterrorist efforts. Under the Obama administration, the United States, for example, justified its targeted killing policy publicly in a series of speeches, using legal reasoning. Scholars increasingly speak of how such state practices have led to the erosion of the (legal) norm prohibiting state-sponsored assassination in light of the simultaneous emergence of a targeted killing norm (Fisher 2007). Many works that mix international law and IR scholarship populate this field and often directly continue along the lines of established, sequential models of norm evolution in IR, such as the norm life cycle, while considering the opposite side of the spectrum in focusing on norm erosion (Großklaus 2017). Other scholars speak of targeted killing as an emerging norm (Jose 2017b; Lantis 2016; Fisher 2007). Jose (2017b), for example, argues that this emerging norm has passed through the initial stages of Finnemore and Sikkink's norm life cycle, in particular after the targeted killing of Bin Laden was received positively rather than condemned by states and representatives of international organisations. Yet, many questions remain unanswered when we think about targeted killing as a norm in the 'pre-emergence stage' (Jose 2017b, 57). First, so far, only a limited number of states have engaged in this practice. This group includes Israel, an early and significant performer whose engagement with the practice precedes that of the United States, Iran, or the United Kingdom (Fisk and Ramos 2014, 174). This finite range is not significant enough to refer to the practice as an international norm in terms of regularity.
The chief performers of the targeted killing practice continue to be the United States and Israel. Yet, some scholars argue that a powerful state such as the United States is, by consequence, a powerful maker of norms: ‘When the reigning hegemon promotes a new code of conduct, it alters the normative frame of reference for virtually everyone else’ (Kegley and Raymond 2003, 391). Jose (2017b, 54) notes that ‘the United States is … a powerful state capable of changing norms with a single act’. But these are highly challengeable assumptions given the inherently intersubjective nature of (international) norms and
the complexity of normative structure that is not as easily changed as it is made out to be here. Second, can the silence by the majority of the international community really be interpreted as a tacit acceptance of this norm? Jose (2017b, 48) affirms this, building on a rather simple legal understanding of silence, while stating that this can only characterise targeted killing as an emerging norm rather than an emerged norm (Jose 2017b, 49). This reading of silence is, again, contestable (Starski 2017). Some legal scholars do indeed interpret governmental silence vis-à-vis the practice of targeted killing as consent for widening self-defence standards (Reinold 2011; Tams 2009). We have already seen this line of argument in the context of the attribution standard with regard to the ‘unwilling or unable’ formula. Other scholars hold that silence cannot be interpreted as proof of changing international law (Corten 2010, 180). Silence is ambiguous, making its interpretation in the international law discourse ‘a political act’ (Schweiger 2015, 270). This ambiguity therefore allows two competing readings of the silence on targeted killing as a practice that are on opposite sides of the spectrum: either as tacit consent or disregarded as legally irrelevant (Schweiger 2015, 273). For our purposes, silence adds to the situation of uncertainty in the increasingly contested areas surrounding legal use-of-force standards. Rather than arguing that targeted killing is on the verge of becoming a norm or an emerging norm, we consider it as a set of practices that attempt to alter the normative content of the norm prohibiting state-sponsored assassination. But, in similar ways to those we have examined in relation to the imminence and the attribution standards, this practice has not been uniform or widespread. In this sense, we do not argue for a new regularity regarding targeted killing but argue rather for a contestation of an existing norm. 
We therefore again remain more interested in the consequences of targeted killing as a destabilising practice of the international legal order in the sense that it creates more uncertainty and a greater mismatch between legal norms and the associated acceptable standards of appropriateness (norms). The destabilising effect of this uncertainty manifests in more readily available interpretations or readings to legitimise recourse to the use of force: a normalising of the use of force as a first rather than as a last resort. Further, the norm prohibiting state-sponsored assassination and evolving state practices on targeted killing appear to differ mostly
in relation to whether they target state or non-state actors. Targeted killing appears to have become more 'appropriate' only in the context of counterterrorism rather than as applied to foreign state leaders. In this sense, the contours of the political assassination norm, which identified the killing of state leaders as a source of systemic instability within a Westphalian ordering system, are still intact (Ward 2000, 116). Further, the state-centric norm prohibiting political assassination and the simultaneous targeted killing practices to counter non-state actors safeguard the central position of state actors as arbiters of when and how using (lethal) force is most appropriate. This conduct can still have debilitating effects for the overall norm prohibiting assassination: 'even actions against nonstate, terrorist targets are, in the long run, likely to undermine the norm as a whole and erode the barriers to the use of assassination in other circumstances' (Ward 2000, 129). Many commentators, such as Philip Alston, explicitly associate the spread of the targeted killing practice among states (singling out Israel, the United States, and Russia) with the technological availability of drones (UN General Assembly 2010b, 3). Others, by contrast, contend that 'while the story of the rise of targeted killings is in large part the story of the rise of the drone, politics rather than technology drove both developments' (J.I. Walsh 2018, 150). This debate ultimately revolves around the perception of technology either as instrumental or as vested with a capacity to shape social meaning. We find that states' choices are explicitly mediated by what is technologically possible. Also, some of the features inherent to drone technology shaped new political, legal, and ethical understandings of appropriateness.
We need to appreciate, though, that the US-declared ‘war on terror’ was an important push factor for drone technology, which had been technologically available and readily developed for some time. 9/11 therefore provided the necessary context to overcome some social constraints over using unmanned aircraft. But then the use of drones took on a dynamic of its own. Alston further argued how the practice of targeted killing is tied to ‘acting under colour of law’, cautioning how ‘[i]n the legitimate struggle against terrorism, too many criminal acts have been re-characterized so as to justify addressing them within the framework of the law of armed conflict’ (UN General Assembly 2010a, 3; see also Hurd 2017, 307). We could go further and argue that the
very exercise of putting arguments on targeted killing into the language of law, whether advanced in opposition or in favour, still serves to put them onto legal–normative terrain. There is also a distinctly functional logic to targeted killing as practices: ‘being framed as a military tactic …, targeted killing rids itself of its historical association with the clandestine practice of assassinating political leaders and opponents’ (Krasmann 2012, 669). This speaks to exactly the kind of observations we have been making about the creation of contested areas due to mismatches between international law and the normative order. This reasoning also highlights the juxtaposition associated with the indeterminacy and permissiveness inherent in international law – thereby leaving ‘space for security matters to embed themselves in the law’ (Krasmann 2012, 677; see also Hurd 2016). In summary, we can make sense of these diverging interpretative practices through considering how they relate to the relationship between international law and normative order. They point to two consequences associated with an increasing lack of shared understanding surrounding the use-of-force law. First, diverging and ambiguous state practices have led to widely different doctrine-based understandings put forward by groups of legal scholars. This is especially true in the case of the attribution standard. Experts in the law of self-defence ‘drew opposing conclusions regarding the state of the law, often as the result of the review of the very same instances of state practice’ (Marxsen 2017, 91). In this essentially indeterminate situation, legal-doctrinal analysis does not and cannot offer firm answers (Marxsen 2017, 92). Yet, this situation arguably enables a second observation. As demonstrated by readings of imminence that separate it from immediacy, some states have used this wider lack of agreement to put forward highly contestable, potentially destabilising readings. 
Even though these are not widely accepted, they still point to an increasing elasticity in highly indeterminate circumstances. Marxsen (2017, 92) notes that, under these conditions of indeterminacy, 'scholarly positions taken are in fact motivated by political choices' – and the same, of course, also goes for purported choices by expressly political actors in the debate. Second, this lack of shared understanding has led to the emergence of a series of contested areas in international law in terms of the use of force. As figure 3.2 depicts, the thoughtful practices of states regarding 'imminence without immediacy' or 'unwilling or unable' are connected to some established understandings of appropriateness.

Figure 3.2 Emergence of contested areas outside of current normative order. [The figure situates 'attribution', 'imminence without immediacy', the 'unable or unwilling' formula, and 'targeted killing' as contested areas between international law and the wider normative order.]

But such practices also seek to coin new understandings that are currently outside the spectrum of the normative order, thereby creating contested areas. Practices related to targeted killing likewise try to connect to some established understandings not only to move into the realm of normative order but also to shape conceptions of normative order. These developments signify that there is now considerably less agreement among states about the precise content of, and established understandings relating to, standards such as attribution and imminence than there was ten years ago. These contested areas come with the risk of lowering thresholds towards using force, which also means that a normative order is not necessarily absent: a lack of clarity results in a highly permissive environment for using force, and justifications for its use can more 'easily' be found within these increasingly elastic, contested areas of international law. In this context, new normative standards of when and what kind of use of force is appropriate are emerging on a local, individual level but can diffuse to a macro level. The emergence of contested areas triggers another observation: the risk of potentially 'hollowing out' institutionalised standards of international law by leaving them intact in theory or in treaty form but changing their content in practice. This is not necessarily

a question of ambiguity. While legal norms themselves remain static, the range of their accepted interpretations, or of those deemed to be appropriate, changes (Kammerhofer 2015, 627). This creates the risky situation that ‘established norms and rules of international law are preserved formally, but filled with a radically different meaning’ (Krasmann 2012, 674). So far, this lowering of thresholds towards the use of force has been most clearly connected to the use of unmanned aerial vehicles or drones, particularly by the United States, while the United Kingdom has also followed the same line of argument on some occasions (Wright 2017; Turns 2017). But, interestingly, and distinct from these purposeful, deliberative movements in terms of diverging legal justifications for action provided by states, the availability of particular use-of-force technologies, such as drones, may count as another push factor for lowering the use-of-force thresholds. Following a counterfactual line of argument, had the United States not had access to drone technology, it would very likely not have engaged in the same number of military interventions over the past eighteen years. Novel understandings of when states consider the use of force appropriate may therefore also emerge through (finding new ways of) using available security technologies. The embeddedness of and relationship between international law and normative order is at the centre of our examination of these detrimental dynamics. As Brehm (2017, 71) succinctly argues, ‘evolving security practices challenge the categories and disrupt the human–machine configurations around which the legal regulation of force is articulated. This generates controversies and uncertainties about the applicability and meaning of existing norms, thus diminishing existing law’s capacity to serve as a guidepost’.


Norms in International Relations

This chapter introduces the main assumptions of norm research in IR, which form the theoretical background for the analytical approach we take in this book. It explicates how constructivist norm research tends to conceptualise the emergence of norms in rather static and sequential models, although research on norms has diversified significantly over the last decade. As a consequence, the various functions of norms, how they impact behaviour, and under what conditions state compliance is most likely are well-researched themes in IR (see, for example, Risse, Ropp, and Sikkink 1999a; Adler 2013; Albuquerque 2019; Andrews 2019; Jose 2017a; Rublee and Cohen 2018; Hofferberth and Weber 2015; Rosert 2019). At the same time, an in-depth examination of norm emergence and change after decision-making/institutionalisation remains a marginal research objective (see Huelss 2017a). While critical studies of norm contestation and localisation argue for a diversity of normative meaning, these conceptualisations arguably still refer back to the deliberative expression of a shared normative kernel, rather than its reconceptualisation in practice, as we will demonstrate below (see, for example, Garcia 2006; A. Wiener 2014; Zimmermann 2016, 2017). Research therefore predominantly employs the concept of a stable normative structure (reminiscent of a positivist understanding of international law as discussed in chapter 3) that shapes actions. We agree that norms based on international law constitute a certain structure with constitutive effects in terms of, for example, providing points of reference for understandings of a moral taboo, as our

discussion of how weapon systems such as chemical weapons have been regulated in the past demonstrates (chapter 2). But we aim to introduce a broader understanding of norms as standards of appropriateness in two ways. First, we build on our outlined critical understanding of international law as indeterminate. Second, we seek to overcome the narrow focus on international law as the sole source of norms, and conceptualise the emergence of norms in practices of a verbal and non-verbal nature. This chapter develops this core analytical argument further and provides a conceptual framework for studying what kind of norms may emerge when states develop, test, and deploy AWS. Apart from providing this analytical framework, this chapter is also a conceptual contribution to norm research in IR. The remainder of this chapter is organised in the following way: the first two sections introduce norm research as an analytical tradition in IR and provide an overview of how theoretical approaches and norm research in IR overall have developed over the last three decades. The third and fourth sections reflect on the contributions and shortcomings of existing research on norms and introduce our conceptual differentiation between contexts and types of norms relevant for our analytical purposes. They also outline our approach to the study of norms that we employ for the empirical study of weapons systems with autonomous features (chapter 5). Next, we present our broader analytical objectives.

from interests to norms: the first generation of norm research

Norms that provide actors with (specific) standards of how to behave in certain situations are the fabric of social relations. They not only define ‘appropriate’ ways to act for individuals but also offer a group or even a larger community standards organising interactions, thereby increasing behavioural and interactional certainty. In contemporary IR, the existence and role of norms in the broad sense appears to be relatively uncontroversial, and Finnemore and Sikkink (1998b, 889) argue that ‘[n]orms and normative issues have been central to the study of politics for at least two millennia’. However, norm research is still a comparatively young field of IR. Interest in the role of ideational, non-material factors, such as norms and identity, only emerged with the constructivist turn in IR


(Checkel 1998) in the early 1990s as a countermovement to the dominance of rationalist, material approaches, such as (neo)realism and neo-liberalism. The latter approaches, despite their ontological differences, argued for the importance of material structures and rational, self-interested actors. The meta-theoretical debates and diverse facets of constructivism (Adler 2013) are not central to this book’s topic and have been extensively covered in the literature, particularly in the 1990s and 2000s (Kurki and Wight 2007; Jackson 2008; Lapid 2001; Zürn and Checkel 2005; Adler 2013; Waever 1996). It suffices to note here that constructivism’s basic tenet of the social construction of intersubjective reality accommodates the constitutive representation of issues in discourses, for instance, while questioning and contesting the predominant importance of stable, externally given interests for guiding human behaviour. Analytically, this insight opens up the perspective not only of studying various political issues in terms of their constructedness, thereby contesting both what is discursively portrayed as a ‘problem’ and what is presented as a ‘solution’, for example, but also of considering the normative dimension of IR. This dimension comprises understandings of appropriateness, which can be based on legal norms (see chapter 3) or encapsulate moral–ethical norms, epitomising standards of the ‘right’ thing to do. The fundamental conceptual controversy between interests and norms, partly reflected in the debate between constructivist and rationalist approaches (Fearon and Wendt 2002), primarily concerns the extent to which actors are influenced by interests assumed to be similar for all rational actors and hence stable and exogenous, or by endogenous norms, which are considered as potentially colliding with material interests.
To illustrate this with an example from chapter 2, the actions in the Second World War of some submarine commanders who killed the survivors of torpedoed ships to conceal the presence of a submarine can be understood conceptually as a battle between interests (to protect the submarine and the secrecy of its operation) and norms (acknowledging and protecting the human rights of survivors). It is apparent that determining when interests and norms prevail is often not clear-cut. Staying with the example above, the submarine commander could also be concerned about immediate detection and attack by enemies and consider it their moral imperative to protect the crew, with whom they may have established close social bonds, and do whatever is required to ensure the crew’s safe return.

Most IR researchers would agree that both interests and norms play a role in informing human behaviour. Nevertheless, the extent of this interaction, the origin of interests and norms, the existence of objective and intersubjective aspects, and the possible co-constitution of interests and norms are widely contested. It is also questionable whether differentiating, or rather dichotomising, between interests and norms is useful in the first place: the distinction is, in fact, only meaningful if scholars do not accept human motivations as basically socially constructed and instead accept the supposed existence of exogenously given, fixed human interests as determining factors of human behaviour, or as a consequence of state behaviour (Wendt 2004). Although the existence of some basic needs/interests, such as satisfying hunger, is obvious, it is equally apparent that humans can and do use different strategies and have recourse to varying actions as a result of the basic interest to secure food. We share the view that norms play a role in informing what ‘appropriate action’ is in this context (for example, refraining from stealing or buying vegan, organic, or sustainable food to satisfy hunger). In the late 1980s, the first generation of norm researchers, considered to be the pioneers of constructivism in IR, started a novel analytical endeavour into human decision-making, investigating factors influencing human actions (Kratochwil 1984, 1989; Onuf 1989; Kratochwil and Ruggie 1986). This coincided with a growing interest in the core social relationship between social structures and agents/agency beyond purely material factors (Wendt 1987). While IR theory had considered material capabilities, geography, or population size as important aspects influencing the foreign policy of states, non-tangible, ideational elements such as norms and identity now moved into the centre of attention (Wendt 1992, 1994).
The first generation of norm research therefore argued for the relevance of norms and provided (basic) concepts of how they mattered. In conversation with rationalist approaches favouring the power of interests, constructivist research turned to transferring sociological action theory to the discipline of IR. A first step consisted of introducing into the debate the differentiation between a ‘logic of consequentialism’ and a ‘logic of appropriateness’ (March and Olsen 1989, 1998). The logic of appropriateness was particularly influential for developing concepts of how norms work in shaping actors’ behaviour. However, it was also meant to


contrast with the interest-based actions of a logic of consequences, and therefore underpinned constructivist works trying to show that ideational aspects matter more than pre-defined, material interests. Starting with initial definitions of norms as ‘standards of behavior defined in terms of rights and obligations’ (Kratochwil and Ruggie 1986, 767), for example, appropriateness is a recurring theme in norms research, reproduced in later, dominant definitions, such as ‘[n]orms are collective expectations about proper behavior for a given identity’ (Jepperson, Wendt, and Katzenstein 1996, 54). In this vein, first-generation scholars argued that ‘[t]here is general agreement on the definition of a norm as a standard of appropriate behavior for actors with a given identity’ (Finnemore and Sikkink 1998b, 891). In this book, we build on these perspectives by defining norms broadly as standards of appropriateness. As a concept, appropriateness is open to different motivations and contexts. In other words, appropriateness does not entail a normative judgement in terms of what is morally right in a universal sense; nor does it represent an ‘objective’ value, such as efficiency and effectiveness. What constitutes appropriate action can have different meanings to different actors in different situations and across contexts. The ultimate analytical task of norm research therefore consists in investigating not only the emergence and influence of norms across contexts but also how actors’ motivations are influenced by a variety of factors and aspects.

from empirical models of norm evolution and diffusion to contestation and localisation

In contrast to constructivism’s first generation of researchers, who provided basic, ontological conceptualisations of norms as important building blocks in a constructivist analytical mindset, the second generation of constructivist norm research was more interested in developing workable empirical models of norm evolution and diffusion (Keck and Sikkink 1998; Finnemore and Sikkink 1998b; Finnemore 1996a; 1996b; Risse, Ropp, and Sikkink 1999b; Björkdahl 2002). These models were largely in line with the then dominant positivist approach to the social sciences – notably to provide a counterpoint to rationalist approaches that would be taken seriously by the opposing camp. In this regard, norms were not only

primarily considered as ‘independent variables’ but also conceptualised as rather static factors with relatively fixed meaning. Beyond the influential norm life-cycle model developed by Finnemore and Sikkink (1998a) and Keck and Sikkink’s (1998) seminal work on transnational advocacy networks as norm entrepreneurs, the literature focused more on how norms work than on how they emerge and change. Second-generation norm researchers therefore developed analytical concepts for empirical research on processes such as ‘diffusion’ and ‘socialisation’ (Risse and Sikkink 1999; Gheciu 2005; Schimmelfennig 2000; Zürn and Checkel 2005; Checkel 2001; Risse-Kappen 1994), studying how actors were socialised into the norm-based settings of specific communities and how such norms diffused into other contexts. For example, the influence of the European Union in terms of Europeanisation processes was an important part of this research agenda (Börzel and Risse 2012; Featherstone and Radaelli 2003; Risse, Cowles, and Caporaso 2001; Checkel 2001; Flockhart 2010). Yet, even the influential norm emergence models mentioned above followed a linear, chronological approach to conceptualising the sources and change of normative content. All these approaches shared assumptions about the stability of normative meaning. This, arguably, contributed to the increasing acceptance of constructivism within the discipline of IR as scholars were basically ‘speaking the same language’ as rationalist approaches. It also allowed constructivists to develop straightforward approaches designed to analyse whether specific norms with a clearly defined meaning are followed or not. In the further development of theoretical perspectives in constructivism, this static image of norms became increasingly contested. In particular, the elaboration of concepts accommodating the flexible meaning of norms further diversified norm research.
For example, the ‘logic of arguing’ (Risse 2000), engaging directly with the logics of appropriateness and consequences, suggested that actors in settings of international negotiations are often unclear about the exact meaning of a norm and typically become involved in deliberations to clarify what appropriateness or appropriate action means. These insights mark the beginning of an increasing challenge to second-generation constructivists by a new group of scholars who were primarily interested in the flexibility and contestation of norms. This research can be subsumed under the label of norm contestation studies, most prominently represented by the work of Wiener


and her ‘principle of contestedness’ (A. Wiener 2004, 2007a,b; Puetter and Wiener 2007; A. Wiener 2017). Wiener basically argues that the meaning of norms is contested by default. Even if norms are shared and actors generally agree on adopting or complying with certain norms – for example, those enshrined in international treaties – different actors still hold different meanings or understandings of norms. Studies on the ‘localisation’ and ‘translation’ of international norms (Acharya 2004; Zimmermann 2016; 2017; Eimer, Lütz, and Schüren 2016; Müller and Wunderlich 2018; Draude 2017; Berger 2017; Aharoni 2014; P. Williams 2009) contribute to this perspective by pointing to normative meaning as flexible and asking to what extent international norms are transformed when they are implemented at a local level. This literature on contestation, translation, and localisation, which generally addresses similar questions from different viewpoints, signifies a broadening of the conceptual basis of norm research as developed in the 1990s, such as studies on compliance and non-compliance (see Checkel 1997, 2001; Risse, Ropp, and Sikkink 1999b; Chayes and Chayes 1993). The contribution of this more recent body of research is to develop a conceptual understanding that can grasp levels of differentiation and complexity in norms. As Acharya (2004, 241) put it, ‘Instead of just assessing the existential fit between domestic and outside identity norms and institutions, and explaining strictly dichotomous outcomes of acceptance or rejection, localisation describes a complex process and outcome by which norm-takers build congruence between transnational norms (including norms previously institutionalised in a region) and local beliefs and practices. In this process, foreign norms, which may not initially cohere with the latter, are incorporated into local norms. 
The success of norm diffusion strategies and processes depends on the extent to which they provide opportunities for localization’. The focus on localisation hence extends previous perspectives on norms in several regards. First, it broadens the set of actors crucial for studying the role of norms by emphasising the importance of the micro level versus the macro level, as well as of the domestic level versus the international level. Second, it opens up the relatively static understanding of normative impact as a binary outcome of compliance or non-compliance even though approaches are often more nuanced. Third, it contests the related concepts of norm-givers and norm-takers by showing that assumed norm-takers

have significant room for changing the content of norms in implementation processes even if they ‘officially’ adopt a norm. Fourth, it draws our attention to a closer study of processes and outcomes of apparent norm-acceptance. Here, Eimer, Lütz, and Schüren (2016, 455), for example, differentiate between four separate forms of localisation: adoption (‘an international norm is implemented in the domestic context without any significant modifications’); accentuation (‘an international norm becomes subject to reinterpretation during the course of its implementation’); addition (‘an international norm becomes subject to significant semantic shifts by the endorsement of further amendments’); and subversion (‘an international norm is implemented according to the letter, but substantial elements of the domestic legislation contravene the spirit of the international wording’). Again, it is helpful to consider the development of norm research in the context of questions about the relationship between agency and structure, which denotes one of the fundamental theoretical concepts, but also problems, in the discipline of IR. It originates in sociological structuration theory and was mainly introduced to the discipline by Alexander Wendt in the late 1980s and early 1990s (Wendt 1987, 1992, 1994). The main argument is that agency (actors) and social conditions (structures) co-constitute each other. Simply put, this means that actors are influenced by the social structures they are embedded in, which can be culture, bureaucracies, law, organisational procedures – but that agency also influences these structures. While the co-constitution of agency and structure is a widely accepted theoretical foundation of the social sciences, the empirical study of it has proven to be difficult. It is particularly challenging, for example, to exhaustively identify relevant structural conditions and points in time when these conditions influence actors.
It is equally difficult to prove how these structures are changed by actions simultaneously. In particular, ‘positivist’ research, which aims to establish evidence of causal relations between agency and structure, or between norms and actions more specifically, struggles with this issue. Researchers have partly tried to resolve, or rather circumvent, this problem by resorting to ‘bracketing’, in the sense that only the constitution of structures by actors or of actors by structures is considered. This is, however, analytically unsatisfying because it fails to accommodate the complexity of the agency and structure


relationship and co-constitution as the central argument of structuration theory. Our theoretical approach is broadly post-positivist in the sense that we do not seek to identify causality; nor do we hold that proving causal relations is even possible or meaningful in social sciences. While we refrain from a more elaborate meta-theoretical debate here, we note that our research interest is to understand how practices can influence understandings of what appropriate action is, as well as how practices are influenced by existing understandings, whether held by individuals, groups, or communities. It is important to point out that this complexity of social relations makes it impossible to represent the co-constitution of agency and structure in its entirety. But we argue that this is not necessary to gain important insights into how norms emerge, and their relationship with practices and agency. Influenced by structuration theory, the basic conceptual concern of norm research shifts from investigating the extent to which normative structure influences behaviour to investigating the extent to which actors’ agency contests normative meaning and influences normative structure, and therefore the extent to which the fundamental assumption of the co-constitution of agency and structure is accommodated in theoretical approaches. While the logic of arguing (the principle of contestedness) and research on localisation processes underline the role of agency, deliberation, and reflection in examining the overall effect of norms, the ‘logic of practicality’ (Pouliot 2008) as a contribution of the increasingly influential ‘practice turn in International Relations’ (Jackson 2017) emphasised the importance of agency and of ‘inarticulate’, ‘practical’, or ‘background knowledge’ to shaping actions (Pouliot 2008).
As Pouliot (2008, 258) argues, ‘most of what people do, in world politics as in any other social field, does not derive from conscious deliberation or thoughtful reflection—instrumental, rule-based, communicative, or otherwise. Instead, practices are the result of inarticulate, practical knowledge that makes what is to be done appear “self-evident” or commonsensical. This is the logic of practicality’. Practice theorists have not yet often sought to make conceptual contributions on a wider analytical plane than norm research. Currently, useful analytical combinations remain restricted to particular empirical domains, such as analysing United Nations (UN) peacekeeping (Paddon Rhoads 2019; Laurence 2019; Bode and Karlsrud

2019; Holmes 2019) or decision-making at the UN Security Council (Ralph and Gifkins 2017; Bode 2018). Nevertheless, the logic of practicality and the ‘practice turn’ more generally can make at least two potential contributions to the debate on norms in IR. First, it highlights the role of practices, in specific contexts, and how their performance corresponds to the varying knowledge bases of actors. Second, in focusing on how IR are effected, practice theories provide a theoretical background to understanding that standards of appropriateness in the sense of the ‘logic of appropriateness’ mentioned above are not necessarily the outcome of reflection and deliberation, but can also emerge in social contexts of practices, which lack the codification or even verbalisation that is typically central for norm research focused on law. In recent years, a third generation of norm research has begun to emerge. This further develops the perspective on flexible normative meaning and practices initially put forward by contestation and localisation (Walter 2018; Engelkamp, Glaab, and Renner 2017; Bode and Karlsrud 2019; Dixon 2017; Jose 2017a; Bloomfield 2016; A. Wiener 2018), builds and refines new models of norm emergence and diffusion (Rosert 2019; Winston 2018; Towns and Rumelili 2017), and focuses on norm emergence beyond an understanding of linear translation between the macro and micro levels (Huelss 2017; Bode and Huelss 2018). This last set of studies also examines fundamental, critical questions associated with notions such as normativity and normality (Huelss 2019, 2020; Taylor 2009). Overall, this third generation of norm research is characterised by the diversification of theory, methodology, and analytical levels, typically leaving the initial phase of a positivist-oriented social constructivism far behind.
In addition, post-positivist approaches to IR that have surged in recent years, such as discourse analysis and post-structuralist studies of identity and discursive representations, as well as practice theories (Bueger and Gadinger 2015), are often not designed as direct interventions in norm research in IR. However, as noted above, we believe that the ways in which these studies emphasise the constructedness and flexibility of relational meaning open up new pathways for beneficial knowledge exchange with norm research, encouraging scholars to zoom in on processes of norm emergence beyond the macro level of deliberation. In summary, three decades of norm research have produced important insights into the role of ideational factors more generally,


as well as the limits of rationalist-materialist approaches in understanding actions in IR. The broadening of research from compliance studies (the norm-follower model) to contestation studies (the norm-opposer model) represents an important step in investigating how the flexible meaning of norms influences the range of possible actions. Implicitly, these perspectives also contribute further to the crucial question of how normative meaning changes – a key object of critique for scholars of contestation and localisation processes.

beyond macro-level deliberation: different conceptualisations of norms

We argue that two important aspects remain to be sufficiently addressed in IR norm research: first, how actors at different levels of analysis interact in producing normative content and (potentially) change normative meaning; and second, the related, fundamental question of how norms emerge beyond settings where their normative content is deliberatively discussed. In combination, these questions ask where norms emerge, where they have an impact, and how they work. These considerations led us to develop a relational model of how norms emerge and change. The research literature on norms, and on AWS in particular (see chapter 1), privileges the macro level of deliberation and formal decision-making or norm-setting. The outcome of deliberation is generally a codification of norms, for example, in the form of international treaties. These legal norms represent the concept of a stable normative structure and are important in IR precisely because they are represented here as the dominant, main source of norms. Perspectives working exclusively with the concept of a normative structure not only often prioritise the structural (constitutive) qualities of norms while their structured (constituted) side remains unconsidered but also privilege the structure of international law while disregarding the agency of actors in establishing understandings of what relevant normative meaning is. The examples of regulating the use of force and weapons technologies in chapter 2 underlined how a normative-legal structure can be important in structuring how and whether weapons are used. But an institutionalised, normative-legal structure located at the macro level is not the only factor shaping and influencing practices. Because of this focus, the micro level, comprising individuals,

groups, and communities that have no direct influence on political and legal decision-making institutions, largely remains out of sight when studying norms. Wiener’s most recent contribution to the debate speaks to this omission in putting forward an ‘agency-centred approach to norm change’ (A. Wiener 2018, 4), identifying the ‘norm-generative power that materialises through contestation’ (A. Wiener 2018, 11). While this allows Wiener to bring in diverse agents, their scope for agency depends on whether they have access to contestation in ‘a given political and socio-cultural environment’ (A. Wiener 2018, 28). Wiener understands this to be verbal access through public, deliberative forums, as she highlights how access opportunities to ‘validation and contestation remain clearly restricted (i.e. favouring state representatives operating through international organisations)’ (A. Wiener 2018, 30). Even in this innovative model, whether agents can contribute to norm emergence or change remains strongly attached to whether they have formalised access to key public international platforms and does not go down to the micro, individual level. Yet, we have seen that the decisions and actions of individuals (can) play a role in setting precedents of what are considered standards of appropriate actions. Unrestricted submarine warfare violated legal norms as enshrined in the 1936 Second London Naval Treaty, but German, and later British and US, practices made it ‘acceptable’ to attack merchant vessels randomly. To an extent, the pre-established legal norms were no longer applied and consequently lost their meaning. Before we come to a further discussion of norm emergence via practices, it is important to closely scrutinise how norms have been conceptualised. We do this by way of discussing Wiener’s typology of norms, because it is influential for the stream of norms research interested in contestation.
Wiener’s approach also builds a bridge between the more flexible and more stable qualities of norms as both structured and structuring entities, which are of interest for our take on how norms emerge. In a series of articles, Wiener outlined her typology of norms, differentiating between three types of norms, three levels, and three degrees of moral reach and contestation (table 4.1). Wiener’s contributions draw awareness to the different levels at which norms are deliberated, for example, the macro level of


Table 4.1 Norm types

Type 1 – Fundamental norms. Examples: human rights, rule of law, democracy. Level: macro. Moral reach: wide. Contestation: more.
Type 2 – Organizing principles. Examples: Responsibility to Protect (UN); rule of law mechanism (EU); qualified majority voting (EU). Level: meso. Moral reach: medium. Contestation: medium.
Type 3 – Standardized procedures, regulations. Examples: Responsibility to Protect pillars; specific rule of law implementation; electoral rules. Level: micro. Moral reach: narrow. Contestation: less.

Source: adapted from Wiener (2017).

governmental representatives enshrining fundamental norms (type 1) in international treaties, which have a wide moral reach and universal claim. Wiener conceptualises different types of norms on the meso and micro levels (type 2 and type 3 norms). Type 2 norms 'are constituted through policy and political practice at the mesolevel', while type 3 norms 'are the least negotiable type of norm. They entail specific directives for implementation by designated norm followers' (A. Wiener 2017, 119). Following Wiener's conceptualisation, type 2 and type 3 norms have a medium or narrow moral reach and are (only) contested to a medium or to a small extent.

While this contestation model and its distinction between three types of norms contributes to a more sophisticated understanding and empirical study of what norms do, it still works with a relatively static and linear concept of normative structure. The idea of a normative structure as the outcome of formal deliberations is closely linked to the emergence of type 1 norms. Equally, type 2 and 3 norms are the products of more specific legal regulations or of the implementation of regulations formalised in rulebooks or manuals. The links that exist between the three types of norms are, however, not conceptualised explicitly enough. Further, the conceptualisation appears to capture their relationship along a linear sequence of implementation, in which the other types of norms inform, to some extent, the contents of type 3 norms. This makes it understandable how and why the contestation of norms might occur, but contestation as conceptualised here is primarily an event of deliberative norm construction, or rather the construction of divergent normative meaning in the form of objections and

Norms in International Relations


disagreements. Wiener (2017, 109) finds that normative content is (typically) contested both in the sense of ‘merely objecting to norms … by rejecting them, and as a mode of critique through critical engagement in a discourse about them’. Conceptually, contestation is closely tied to rejecting, refusing, or resisting particular norms: even the ‘critical engagement’ referred to above is cast with reference to how ‘struggling agents … question the norms that govern them’ (A. Wiener 2017, 111; see also Dixon 2017, 83). As such, contestation only becomes analytically visible if it is articulated in response to established, formal norms. In other words, contestation implies holding a constant ‘conversation’ with a comparatively stable normative structure by either accepting or opposing it. Norm research that is based on the principle of contestation therefore risks losing sight of those standards of appropriateness that emerge in practice and are not cases of contestation or compliance simply because either there is no directly relevant normative structure to contest, or existing norms are not determinate. We illustrate this with a counterfactual example: the hypothetical, repeated use of smaller-scale nuclear weapons beyond the bombings of Hiroshima and Nagasaki, or the hypothetical usage of blinding laser weapons in warfare, could have, over time, set novel standards of ‘appropriate’ use of force because explicit legal norms governing their usage beyond the generally applicable (and often quite permissive) stipulations of international humanitarian law did not exist at the time of their development. Indeed, the 2017 Treaty on the Prohibition of Nuclear Weapons was a late response to the devastating but isolated use of nuclear weapons that remained an anomaly instead of a repeated use-of-force practice. 
Exposing humans to nuclear radiation or blinding them contests other norms such as the prohibition of unnecessary suffering or the anti-torture norm, but this is less clear than it might seem. Can practices at the micro level, whose reach is initially limited, shape norms at the macro level? Are norms at the micro level necessarily subject to only limited contestation and narrow moral reach, as Wiener's model suggests? These are important questions we seek to address in the course of developing our model of norms in the remainder of this chapter.

As argued above, the conventional models of norm research work with a relatively fixed or stable normative structure and thereby leave little room for the emergence of normative meaning in less formalised or less institutionalised settings. The analytical advantages of
working with an understanding of norms as fixed concepts enshrined in treaties are obvious. But even those approaches that accept the general contestation of normative meaning in different contexts have not sufficiently addressed the micro level of individual practices and the question of whether we can study norm emergence as de-linked from an existing, formalised normative structure. Even contestation still requires an existing and relatively stable normative structure to make sense as a conceptual framework. While this explicit engagement with a (stable) normative structure is part of investigating how norms emerge and work, practices might also produce standards of ‘appropriate’ action that do not correspond to any clear normative structure. Here, we want to consider whether the pattern of actions by individuals can also construct a comparatively stable set of ‘appropriateness’ expectations over time that is, however, neither formalised nor the outcome of explicit deliberations. In our perspective, what we call deliberative norms dominate research on norms in IR and should be contrasted with procedural norms that emerge in practice (Bode and Huelss 2018) beyond the formal normative structure and beyond deliberative settings that remain central for a key segment of norms research. But what kinds of practices matter and who performs them? Addressing this point requires revisiting the connections between levels of analysis and norm emergence (in terms of macro/structure and micro/individuals), including considering in which spheres norms as standards of appropriateness can emerge. In other words, we argue that the relevance of individuals at the macro level is largely, although often only implicitly, accepted. Individual actors, such as state representatives in international negotiations, create a normative structure. 
While individual actors remain widely unstudied and unseen (see Bode 2014, 2015), the emerging structure that is a translation of their intentions has become the focus of research. At the micro level, research is unable to study practices constituted by individuals comprehensively and exhaustively. But it is possible to gain an understanding of different practice cases, of the standards of appropriateness in these cases, and of how norms may emerge in their performance. Keeping in mind that constructivist research has so far differentiated between ethical and legal norms, to which we add the category of procedural norms, it is important to consider which analytical spheres or levels of analysis are relevant when interested in norm emergence.

Deliberative norms speak to a legal–public sphere. This means that those norms that have predominantly been the focus of IR research – Wiener's 'fundamental norms' – are relevant from a legal perspective, but are also important for justifying actions in terms of public legitimacy. Arguments that allude to the protection of human rights or democracy can be especially powerful in the public domain. This legal–public sphere has both macro- and micro-dimensions, as the question of what norms mean is not only deliberated on at the level of (so-called) 'high' politics but also concerns the 'use' of norms at a local level, including their interpretation and transformation by groups and individuals. While research has accommodated these practices of localisation, as mentioned before, we argue that even these studies still tend to prioritise legally institutionalised, international norms. Zimmermann (2016), for example, accounts for how the content of international norms, such as the rule of law, can be modified through processes of translation at the local level, and also analyses how such localisation processes can contribute to changes in normative meaning at the international level via feedback loops. But institutionalised, international norms remain the starting point of her investigations. Clearly, we do not contest that these norms are highly relevant in IR: they have played a pivotal role in regulating and prohibiting weapons systems in historical cases, and they also play a key role in the current debate at the UN-CCW on whether AWS can be used in adherence with IHL. But this perspective excludes other spheres and other types of norms that are equally or even more important when trying to understand where standards of appropriateness come from and how these change over time. Our argument about procedural norms rests on this assumption. Such norms are relevant for and emerge in what we call the procedural–organisational sphere.
This sphere is often neglected in norm research because it concerns everyday, administrative, mundane, and often individualised dimensions. Current norm research only links these dimensions to processes of norm implementation (Betts and Orchard 2014; Jacob 2018; Holmes 2019). However, we argue that specific and often diversified organisational contexts, such as the military in the case of weapons technologies, develop and work through specific procedures in the form of established and improvisatory practices as ways of doing things. The historical cases of
using, regulating, and banning weapons technologies (chapter 2) showed that the military has developed standards of appropriate action from the level of high command down to that of individual actors, which can diverge from legal norms or public legitimacy expectations. These practices are not concerned with purely implementing or linearly transforming existing, clearly defined and stable norms. Particularly in cases of indeterminate regulations or in the absence of specific regulations, norms as standards of appropriateness are likely to emerge here. We now turn to conceptualising these thoughts further.

from deliberation to practice: developing norm research in ir In order to investigate the role of norms in the context of AWS, we introduce the novel category of procedural norms that emerge in practices. Procedural norms contradict the conventional perspective of norm research in IR, which largely studies what, for example, Wiener (2017, 118) identifies as ‘fundamental norms’ (such as human rights, rule of law, or democracy) as outlined above. We consider these norms as deliberative norms because they are conceptualised as being agreed upon in formal settings of international norm-making, such as negotiations on treaties that give norms a legal quality. Deliberative norms are also verbalised, which means that they are ‘fixed’ in written documents and referred to in speeches or other forms of communication. However, norms, when understood as ‘standards of perceived appropriateness,’ do not just emerge and change in open, public debate through deliberative processes in institutional forums. The development, testing, training, or deployment of weapons technologies such as AWS may also shape norms in practice. This insight builds on how the preceding chapters have outlined the ambivalence, indeterminacy, or permissiveness of conventional, institutionalised legal norms in use-of-force practices. In these everyday practices related to weapons technologies, procedural norms that define standards of perceived procedural–organisational appropriateness, such as efficiency, often tend to be privileged. Procedural norms can be fixed in terms of manuals or guidelines, but we suggest that they can also be non-verbalised. This means that standards of perceived appropriateness do not require written documentation to unfold meaning.

Procedural norms are not purely technical or functional. Rather, they accommodate different understandings of appropriateness within specific organisational and situational contexts. Procedural norms are a very broad type of norm because we refrain from proposing a narrow definition of norms that would foreclose their meaning and impact. Again, we contrast procedural norms conceptually with deliberative norms – not because they are merely reactive or intuitive, but because they are not the outcome of formalised, public, deliberative negotiations and explicit norm-setting practices in the political arena. This does not mean that norms emerging in certain ways of doing things are merely the outcome of reflexive actions instead of deliberation about options; nor does it imply that procedural norms generally lack specification or formalisation. The differentiation between deliberative and procedural norms is analytical and points to different ways of focusing empirical research. We seek to avoid oversimplifying the distinction between deliberative and procedural norms into a dichotomy; rather, we aim to reintroduce complexity into the study of norms. Examining practices around developing, testing, and deploying weapons systems with an increasing range of autonomous features can therefore illustrate how procedural norms may push forward novel standards of perceived appropriateness that might become dominant in state conduct as the 'right' ways of using force. What appears to be useful and efficient in procedural terms and in a specific organisational context could become the new standard of what is normatively legitimate, linking procedural norms to fundamental norms by setting de facto standards of appropriate action. This process transfers practical considerations of appropriateness to a more abstract, normative level.
This also points to a failure in norm research, which tends to underestimate and under-study norm emergence outside of the formal norm-setting framework. Even if the use of force in certain cases is covered by international law, the normative substance of related practices will only emerge at the micro level. The general character of international law as an indeterminate and abstract regulatory framework adds to the room that individual actors or actors in organisational spheres have in defining what appropriateness is. We therefore propose studying AWS in the context of two different but interrelated normative spheres, thereby linking our
analytical model to the critical review of norm research: the legal–public sphere, which is the primary realm of fundamental norms; and the procedural–organisational sphere, which is the primary realm of procedural norms. While these spheres are not entirely independent from each other, we use them to examine a broader notion of appropriateness that encompasses both legal–public appropriateness and procedural–organisational appropriateness (see also chapter 5). Legal–public appropriateness represents fundamental norms, including public expectations in terms of (political) accountability, lawfulness, or transparency. In contrast, procedural–organisational appropriateness represents procedural norms, that is, considerations of appropriateness in specific organisational settings, such as the different units of the armed forces, or in specific situations of warfare. Here, appropriateness is primarily concerned with functional legitimacy, specific regulations, and accountability hierarchies, such as a chain of command. In this sense, appropriateness is first manifested in following procedures regardless of their normative substance in terms of the morally 'right' thing to do. In representing different contexts of appropriateness, this model accounts for diverging, interplaying layers of appropriate action. We therefore argue that a comprehensive research framework on the normativity of AWS should consider both legal–public and procedural–organisational appropriateness.

To gain an understanding of procedural norms, we suggest focusing on practices. We assume that these practices can construct new or different standards of appropriateness that turn into procedural norms on becoming widespread. Practices are decisive for studying both how procedural norms emerge and how they relate to fundamental norms. Importantly, AWS not only refer to a specific type of weapon platform (in contrast to nuclear missiles, for example), but are a catch-all category (see Heyns 2016a, 6).
Autonomous features span from simple to very complex platforms (see chapter 1). This wide range of platforms with autonomous features for use at sea, in air, and on land greatly increases the number of development and testing practices, in the course of which diverging standards of ‘appropriate’ use may emerge. Further, certain particularities of AWS, such as their autonomous character and the diminishing role of human actors in their control functions, are distinctly relevant
in the context of examining procedural norms. Basic functionally inspired norms, defining when and how the use of force is proportional or discriminate as well as how targets are selected and engaged, may increasingly be 'made' by machines (Huelss 2020). This turn to practices as key analytical sites in norm research not only provides fruitful links to the dynamic programme of practice theories, but should also be seen in the context of two fundamental analytical considerations that the study of norms entails: responding to the agency–structure concept, and investigating normativity and normality.

Differentiating between spheres of appropriateness points to the existence of structural elements comprising norms that shape understandings of what appropriate action is. We do not contest the general logics of agency and structure, but we emphasise that the emergence of structure and of points of stability at the heart of social science research can take on a micro-level dimension, consisting of interrelated spheres, different actors, actions, and instruments, and feed back to the macro level of deliberative norm-setting (and de facto structuring). Our conceptualisation and the related analytical move towards the micro level also signify an individualisation of norms research in terms of zooming in on individual practices that, for example, constantly reproduce and construct what procedural–organisational appropriateness is. The intention of this analytical move is to accommodate more flexibility in the agency and structure model, which not only contributes to our overall approach to norm research, but also includes change as a constant feature of constructivist research. Structure-centred models often struggle to conceptualise how change can happen and influence the structure (Flockhart 2016b). The second fundamental issue relates to a discussion of normativity and normality as dual qualities of what we call a norm.
In our view, research has not sufficiently problematised the meaning of, and the relationship between, these terms. Following the conventional view represented in most constructivist studies since the 1990s, normativity is inscribed in norms by way of specifying their meaning for regulating social relations. Human rights provide a typical example here: normativity with claims to universality is inscribed in established norms, which regulate the margins of the ‘right’ thing to do vis-à-vis individual human beings. It should be noted that the link between what is practically and morally ‘right’ is apparent in the context of normativity. However, a clear differentiation between
both dimensions is usually not provided in studies on the 'normative' in IR. While we do not focus on moral–ethical questions here, we suggest that it can be useful to consider the relationship between normativity and normality. In this sense, normativity is conventionally thought of as shaping normality – it defines what constitutes 'normal' action and 'normal' conduct, for example, when governments act. Clearly and extensively defining what appropriate normative action or the appropriate use of force is in the case of AWS on a deliberative level would hence also mean that a certain understanding of normativity could be transferred via norms to practices. However, in cases where there is no specific, pre-established normativity in the sense of ethically and legally 'right' actions, such as protecting civilians, (a form of) normativity can still emerge, following our argument. Practices that define what kind of actions are practically normal (normality) can therefore also set standards of what could be perceived as appropriate. For example, intentionally killing civilians would be considered ethically and legally wrong; unintentional killings that are, for example, the outcome of complex situations involving technological failure or human error in the interplay of human operator and complex weapons systems could be assessed differently. It would still be widely considered morally wrong that civilians are killed in combat (although how states distinguish between civilians and combatants is by itself not clear-cut; see chapter 3), but this perception of normativity is relational as well as situational. In other words, functional-procedural understandings of what is acceptable constitute the normality of using a weapons system, for example. And this normality can also shape understandings of normativity.
If the overall intention for using a weapon is deemed to be ‘self-defence’ and ‘protecting lives’, the normality of using this weapon can become normatively ‘right’. We therefore argue that normativity also emerges in practices by constructing norms. But it is theoretically and analytically impossible to disentangle normativity and normality, as these concepts reflect the co-constitution of agency and structure in the sense that both normativity and normality are structuring and structured. We therefore want to encourage flexible analytical conceptualisations of normativity and normality, rather than prioritising the study of a fixed and determinate normativity that shapes the normality of international relations.

To sum up, our empirical approach to studying the role of norms in the case of AWS (chapter 5) is the outcome of the theoretical reflections in this chapter. It engages with two spheres of appropriateness, their interrelatedness, and their implications for use-of-force norms. We seek to investigate norms as standards of appropriateness in a comprehensive way. In considering two spheres of appropriateness (legal–public and procedural–organisational), we therefore also examine the role and implications of international law for the development and (potential) use of AWS. This links the book to the current academic and political debate, but also shows the extent to which an exclusive focus on international legal norms established at the macro level is empirically insufficient. Therefore, we not only extend our study to the sphere of procedural–organisational appropriateness and to the question of how norms emerge at the micro level but also remain in constant conversation with the macro level and the structures provided by international law. To do this, we investigate the concept of meaningful human control (MHC) addressed in chapter 1 more closely. While concrete legal rules accommodating this concept (arguably) do not exist yet, MHC is an interesting concept for studying the emergence of norms. Discussions at the UN-CCW clearly point to deliberative attempts to establish MHC as a novel standard of appropriate use of force at the macro level, constituted within the sphere of legal–public appropriateness. As yet, the specific normative impact and form that MHC may eventually take as a deliberative norm are unclear, but we want to investigate the extent to which MHC already plays a role and is in the process of being established as an emerging norm in the procedural–organisational sphere. Ensuring an adequate level of human control when using weapons systems is a fundamental procedural norm of warfare that is neither new nor uniquely relevant for AWS.
Establishing an appropriate level of MHC should therefore be considered a central problem associated with procedural–organisational appropriateness in the military. Everyday standards and ways of doing things are thus established in practices that are potentially detached from legal–public norms, which may or may not exist. Operationalising the study of MHC as an emerging norm in practices means developing an understanding of what MHC means for weapons systems with increasing autonomy. Drawing on qualitative research and a qualitative data catalogue developed in a research
project led by Ingvild Bode (Bode and Watts 2021), we engage with the development of autonomous features and the integration of (meaningful) human control in air defence systems that are crucial precursors for the further integration of autonomy in weapons systems (chapter 5). Hence, the book's theoretical approach and empirical analysis contribute to norm research in IR as a further development and conceptual refinement of an existing and growing interest in the role of practices for norms, as seen in the broad research literature on contestation, localisation, and operationalisation. We emphasise the importance of the micro level for studying the emergence of norms, thereby pointing to diverse sources of norms, to their flexibility, and to the interplay of actors and structures.

The theme of this book also points to the potentially game-changing, almost revolutionary effect of increasing autonomy in (weapons) technologies. Studying the role of AI as an element in complex human–machine interaction will soon be of significant analytical relevance. The growing sophistication of technological agency, however, not only poses an ontological problem in the sense of changing social relations and interactions in all dimensions of life but is also an epistemological problem for the social sciences and for IR. To engage with this development, the most fundamental and influential premises of constructivism, such as the agency–structure model, encounter the challenge of integrating elements of non-human agency into basic assumptions of structured and structuring human agency. In other words, once autonomous technological agency or elements of it become increasingly important in human–machine interaction, the clear dichotomy between human agency and material and social structures weakens. Existing theoretical models will require reconsideration in the light of technological autonomy replacing the singularity of human agency.
Our more concrete suggestion in the context of this book is to differentiate the analytical levels further and to open up the supposed linear sequences of normativity shaping normality by gaining a better understanding of the role of norms in micro-practices, influenced by the development and application of specific technologies. This book should therefore also be read as a call for taking the implications of technologising social relations in the broad sense more seriously than is currently the case. This means, primarily, amending and adapting long-standing theoretical models and
concepts to changing practice. Our work is designed to broaden the perspective without discarding the relevance of legal norms or of a normative structure established in political deliberations. Instead, we argue for the importance of bringing in another, practice-based sphere of norm emergence and, ultimately, for ensuring that, analytically, both spheres are in close and constant conversation with each other.


How Autonomous Weapons Systems Make Norms

This chapter investigates the emerging norm of meaningful human control for the practice of using weapons systems with autonomous features. The previous chapters have shown that the development and introduction of new weapons systems often happen in contested areas where specific legal norms do not exist, are ambivalent, or where their emergence lags behind associated practices of warfare. As we argue, this is also the case with autonomous weapons systems and their current consideration in the framework of the CCW. We have outlined that persistent difficulties in finding a shared definition of AWS as well as around meaningful human control (MHC) (or associated qualifying human control) present deliberations at the CCW with significant challenges in initiating a potential negotiation process for a novel regulatory framework on AWS. For example, some voices in the debate argue that any such new regulation is premature because fully autonomous weapons systems do not yet exist, and we cannot know whether new law would be needed before seeing the consequences of their application. However, we emphasise that the gradual inclusion of autonomous as well as automated features into weapons systems, notwithstanding their comparatively low level of sophistication in terms of AI, has already led norms to emerge that set standards of appropriateness with regard to the use of force. More specifically, we argue that the process of integrating automated and autonomous features into weapons systems has already shaped an emerging norm of meaningful human control. Notwithstanding whether the norm of meaningful human control will or will not be deliberatively defined in the coming years, its contours have long had practical
relevance for systems that were used in the past or are currently in use. The nature and quality of human–machine interaction has always been a central issue in the interplay of weapons technology and human control, pertaining to the question of how to exert meaningful control with regard to efficiency and effectiveness. These considerations are at the heart of our differentiation between verbal and non-verbal practices that simultaneously shape new understandings of appropriateness. Differing understandings of appropriateness can motivate different practices, with significant consequences. In the case of weapons systems with an increasing number of automated and autonomous features, non-verbal practices manifesting in the research, development, testing, and usage of such systems can contrast with verbal practices centred on attempts to formulate a (consensus) norm of meaningful human control. We argue in the following that non-verbal practices surrounding existing systems with automated and autonomous features – such as cruise missiles, air defence systems, or active protection systems – have, over decades, already created an understanding of what meaningful human control is. This understanding implicitly acknowledges the type and quality of human control that is necessary for weapons systems to be deemed acceptable or appropriate. There is, then, already an emerging norm, an emerging understanding of appropriateness, that emanates from these practices and has not been verbally enacted or reflected upon. This emerging norm reflects a shared understanding of the quality of human control in the context of human–machine interaction, a vital component of exercising meaningful human control that is considered appropriate or acceptable. As we argue, human control in practice is a complex, relational, and subtle concept. We do not have precise answers to the central question of where human control starts and ends, which is of high importance for attempts to regulate AWS.
What we want to highlight is how the emerging norm of meaningful human control resulting from existing practices has already defined unwritten ‘standards’ as to what extent ceding immediate human control over specific decisions to use force is considered appropriate. The current debates about the type and quality of human control needed to ensure compliance with international law arguably bring out this non-verbalised norm. At the same time, these debates neglect to acknowledge the norm’s origins in the non-verbal practices of human–machine interaction in existing weapons systems with automated


Autonomous Weapons Systems and International Norms

and autonomous features. We argue that this is a case of silent norm-making that takes place outside the political–public limelight and beyond the concepts of deliberative norms researched in IR theory. It points to an important but widely overlooked process, as we shall seek to establish in this chapter.

This chapter is structured as follows: in the first section, we outline the discursive emergence of MHC as a potential norm to govern weapons systems with autonomous features and establish a basic definition of its content. In the second section, we discuss how the contours of what counts as meaningful human control emerged over decades of developing, testing, and using weapons systems with automated and autonomous features. We focus on air defence systems to illustrate how this has shaped the contours of a meaningful human control norm.1 We choose these systems because they have proliferated widely: according to the Stockholm International Peace Research Institute (SIPRI), eighty-nine states possess air defence systems, while sixty-three of those operate more than one system (Boulanin and Verbruggen 2017, 37). Air defence systems also provide a useful illustration of how technological developments regarding automation and autonomy have gradually been integrated into weapons systems and how this has, over time, significantly changed the role of their human operators. After reflecting on what these emerging standards mean in the context of the discussion on autonomous weapons systems and norms research, we also identify these developments as part of a permissible policy discourse on automated and autonomous features in weapons systems. We show how this discourse builds an implicit understanding of autonomous features as ‘positive’ through casting their integration as ‘appropriate’ on either legal–public or procedural–organisational grounds.
This not only helps to create the permissible conditions for integrating more autonomous features into air defence systems but also engenders a perceived inevitability about this development that characterises the debate about AWS generally and needs to be questioned.

meaningful human control: a deliberative norm in the making?

The concept of meaningful human control has taken root alongside the general debate on defining characteristics of LAWS since the inception of (informal) discussions at the CCW in 2014. Although

How Autonomous Weapons Systems Make Norms


various, similar concepts have emerged (for example, human supervisory control, human–machine touchpoints, or human judgement), they all pertain to the central requirement of meaningful control and have significant overlaps (Ekelhof 2019, 344). The British NGO Article 36 originally coined the concept of meaningful human control in 2013 and has since deepened its conceptualisation (Article 36, 2013). Especially since discussions at the CCW have been formalised in the form of the GGE on LAWS (since 2017), a consensus has emerged among states parties that it is unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines without human control. At the September 2020 meeting of the GGE, the UK government, for example, stated that it ‘has no intention to [develop] systems that could operate without human control and we are fully committed [that] our weapons remain under human control’ (GGE on LAWS 2020). Advocates promote codifying an obligation to maintain meaningful human control as part of an international treaty that would also prohibit those systems not meeting this requirement (Rosert and Sauer 2021). Multiple understandings of what meaningful human control implies present a complication, as does the extent to which these are shared among states parties to the CCW. Considering that Article 36 proposed the concept of meaningful human control to open up ‘a space for discussion and negotiation’ in policy discourse around how human control should be understood, such differences in opinion are not surprising (Moyes 2016b, 46). At its most basic, meaningful human control signifies ‘[t]hat a machine applying force and operating without any human control whatsoever is broadly unacceptable’ (Roff and Moyes 2016, 1).
But not all possible forms of human control are normatively adequate: for example, ‘a human simply pressing a “fire” button in response to indications from a computer, without cognitive clarity or awareness, is not sufficient to be considered “human control” in a substantive sense’ (Roff and Moyes 2016, 1). Arguably, and although it was not framed in this way, the international community has been wrestling with the issue of human control since the regulatory debates on landmines. The conceptual focus of meaningful human control that comes out of the debate on AWS thereby opens up novel perspectives on existing weapons systems.2 A meaningful exercise of human control includes specific use-of-force decisions as well as recognising that human control is a process (Bode 2020). We will unpack these further.



First, to be meaningful, control requires ‘humans to deliberate about a target before initiating any and every attack’ as an integral component (Suchmann quoted in Brownlee 2018). Indeed, ‘the brunt of IHL [international humanitarian law] obligations’ apply to specific use-of-force decisions (Mission of Brazil 2019). Adhering to IHL obligations such as distinguishing between civilians and combatants in the rapidly changing context of warfare requires contextual human deliberation and judgement. As we have seen, this cannot simply be assessed algorithmically. And this is especially the case in urban warfare, where civilians and combatants share the same physical space. Second, human control does not just refer to a single targeting decision undertaken by a human operator; it includes multiple humans at multiple stages of the targeting process. This distributed understanding sees human control as present across the entire life cycle of the weapons system and across various stages of military decision-making. Exerting human control across the system’s life cycle means that it matters in (Singh Gill 2018): (1) research and development; (2) testing, evaluation, and certification; (3) deployment, training, command and control; and (4) use and abort phases. Furthermore, human control is part of various military decision-making stages across the targeting process. This starts at the strategic level, where overall political aims are formulated, and goes via the operational level to the tactical level, where specific missions are planned and formulated (UNIDIR 2020). Addressing human control from the research and development stage of the weapon system’s life cycle is indeed necessary to ensure that the technology is predictable, understandable, and reliable (Holland Michel 2020). These qualities are crucial to enable an exercise of meaningful human control by human operators in the use phase (Moyes 2016a). 
Further, recognising that the targeting process of modern militaries spans various stages where human control is exerted reflects operational reality (Ekelhof 2019). But this should not overshadow the need to acknowledge requirements for exercising meaningful human control in specific use-of-force decisions (Bode 2020). This is precisely the dimension of meaningful human



control adversely affected by current human–machine interaction practices performed in operating air defence systems. Human control of weapons systems is therefore a necessity in legal, moral, and accountability terms. In 2020, two sets of key stakeholders – SIPRI and the ICRC on the one hand, and the Campaign to Stop Killer Robots on the other – put forward further refinements of the concept of meaningful human control in order to move the debate in the UN-CCW forward. In the case of the Campaign to Stop Killer Robots, this was done chiefly to prepare the ground for new international law that prohibits ‘weapons systems that by their nature select and engage targets without meaningful human control’ (ICRC 2019b, 6). These stakeholders presented reports with operational definitions of meaningful human control composed of three different but overlapping dimensions, as summarised in table 5.1. Following this, we can infer three dimensions of meaningful human control:

• A technological dimension that enables human agents to exercise control over the use of force through the design of weapon parameters, for example, limits on target type.
• A conditional dimension that sets operational limits to the ways weapons systems are used in order to enhance human control, for example, via geographical or temporal restrictions of usage.
• A decision-making dimension that defines acceptable forms of human–machine interaction through ensuring appropriate human supervision, for example, by way of guaranteeing that the human decision maker understands how the weapons systems function.

All three stakeholders agree that retaining meaningful human control will necessitate covering multiple components across all three dimensions (Boulanin et al. 2020, 33; Campaign to Stop Killer Robots 2019b, 4). In particular, they reject considering technological components alone as a sufficient condition to ensure ‘meaningful’ human control (Boulanin et al. 2020, 9).
But, otherwise, the reports attribute roughly equal importance to the three dimensions. As the Campaign to Stop Killer Robots (2019b, 4) argues: ‘[w]hile none of these components are independently sufficient to amount to MHC [meaningful human control], all have the potential to enhance control in some way. In addition, the components often work in tandem’.



Table 5.1 Operational definitions of meaningful human control.

SIPRI/ICRC:
(1) Control over weapon parameters: ‘limits on target type; constraints on spatial and temporal scope of operation; control over weapon effects; fail-safe requirement’ (Boulanin et al. 2020, 27).
(2) Control over environment: ‘maintain situational understanding; exclude protected persons and objects; exclusion zone, physical barriers, warning’ (Boulanin et al. 2020, 27).
(3) Control through human–machine interaction: ‘ensure human supervision over system; ensure ability to intervene and deactivate; train user’ (Boulanin et al. 2020, 27).

Campaign to Stop Killer Robots:
(1) Technological components: ‘embedded features of a weapons system that can enhance meaningful human control’, e.g. ‘predictability and reliability, the ability of the system to relay relevant information to the human operators; and the ability of the human to intervene after the activation of the system’ (Campaign to Stop Killer Robots 2019, 4).
(2) Operational components: ‘make human control more meaningful by limiting when and where a weapons system can operate and what it can target’, e.g. ‘the time between a human’s legal assessment and the system’s application of force; the duration of the system’s operation; the nature and size of the geographic area of operation; and the permissible types of targets (e.g. personnel or material)’ (Campaign to Stop Killer Robots 2019, 4).
(3) Decision-making components: ‘give humans the information and ability to make decisions about whether the use of force complies with legal rules and ethical principles’, e.g. ‘an understanding of the operational environment; an understanding of how the system functions, including what it might identify as a target; and sufficient time for deliberation’ (Campaign to Stop Killer Robots 2019, 4).

This assessment has significant consequences for the kind of conclusions the reports draw with regard to human control in air defence systems. As a matter of fact, both publications capture air defence systems in their remit. The Campaign to Stop Killer Robots (2019b, 4) proposes regulating ‘weapons systems that select targets on the basis of sensor input’, which also applies to existing systems in this broad category. However, neither publication evaluates the current practices of operating air defence systems as problematic for the exercise of meaningful human control. Instead, both sets of stakeholders clearly orient their definition of meaningful human control towards future systems.



Table 5.2 Levels of human control.

(a) Humans deliberate about specific targets before initiating an attack
(b) Humans choose from a list of targets suggested by a program
(c) Programs select the calculated targets and need human approval before attack
(d) Programs select calculated targets and allocate humans a time-restricted veto before attack
(e) Programs select calculated targets and initiate attacks without human involvement

Source: N. Sharkey (2016)

This focus on the future is understandable and relevant. But, as we argue, it is also important to carefully assess and interrogate the precedents for what counts as meaningful human control set by existing technologies. As we show in the following, a closer look at air defence systems suggests that some dimensions are more important than others in defining meaningful human control in a substantive way. In particular, for human control to be considered meaningful, the conditional or the technological dimension should not outweigh the decision-making dimension. In order to capture the human–machine interaction shaping the use of air defence systems, we distinguish between five different levels of human control (based on Sharkey 2016, 34–7) that human operators may exercise (see table 5.2): (a) humans deliberating about specific targets before initiating an attack, (b) humans choosing from a list of targets suggested by a program, (c) programs selecting the targets and needing human approval before attack, (d) programs selecting targets and allocating humans a time-restricted veto, and, at the lowest level, (e) programs selecting calculated targets and initiating attacks without human involvement. The image of the control loop helps to visualise the relationship between the human and the system in specific situations when targets are selected and engaged, rather than in earlier phases of the targeting process, e.g. strategic planning (Burt 2018, 11). Levels



(a) and (b) are classified as systems with human operators ‘in the loop’ because human agents actively participate in the selection of specific targets and the decision to use force (Hawley 2017, 3). Levels (c) and (d) situate humans ‘on the loop’ as ‘the operator sets goals, monitors system actions, and intervenes [only] when necessary’, and thus reacts to specific targets suggested by the machine (Hawley 2017, 3). As documented later in this chapter, due to the time constraints often involved in using air defence systems, the distinction between (c) and (d) can in practice collapse: while human approval before attack may be needed (level (c)), this becomes a de facto time-restricted veto (level (d)) when the air defence system is operating at machine speed. This discussion highlights two aspects that are important for our understanding of meaningful human control. First, MHC pertains not only to the technological features of AWS and the ability of humans to control these features in the broad sense, but also to the situational environment in which they are used. In other words, MHC is a norm that (only) becomes accessible on the level of practices. Merely discussing technological aspects related to programming is not sufficient to ensure that human control is meaningful. Second, the observation that MHC is a procedural and therefore flexible norm speaks to a central question motivating this book: to what extent is it possible to formulate and formalise a deliberative norm, enshrined in law, which could then define what MHC is in practice? As technology changes, use-of-force practices change, which in turn are likely to influence what MHC means and what is considered appropriate in context.
Operationalisations of MHC, such as those put forward by SIPRI, the ICRC and the Campaign to Stop Killer Robots, offer useful frameworks for rethinking what it means to be in meaningful control of emerging weapons technologies, but they also underline the tremendous complexity involved in pinpointing the quality of MHC in specific practices. This complexity is not just an analytical problem in the sense that representing interrelated, multi-faceted, and multi-actor relationships in the context of use-of-force arrangements is very challenging; it also suggests that even if an ideal-typical and precise norm of MHC is designed in verbal practices, there is a risk that this may not correspond to or ‘work’ in use-of-force practices. We will return to these considerations in our conclusion.



In addition, we have seen that international norms as outcomes of multilateral, consensus-oriented deliberation, in particular at the UN, typically take the form of comparatively basic and ambiguous phrasings. This is often referred to as ‘constructive ambiguity’ in multilateral diplomacy, building on Henry Kissinger (Berridge and James 2003; Byers 2020). Constructive ambiguity has the advantage of getting many states on board in the initial law-making process, but ultimately defers the resolution of (potential) conflicts over normative meaning elsewhere. As ambiguity is embedded into the institutionalised version of a norm, diverging understandings as to what it means in operational terms and what counts as appropriate implementation practices continue to coexist. This is a common feature of international norm-making, and we can already see inclinations in this direction at the UN-GGE. As a consequence, there is no reason to believe that things will turn out differently for the case of MHC. But this ambiguity risks undercutting the value of potential new international law curbing autonomous weapons systems. Yet, we want to emphasise that even in the absence of so-called fully autonomous weapons systems, that is, those operating without a human in the loop or on the loop in specific use-of-force decisions, we have already entered an era of diminished human control. If we consider characteristics of current-level human–machine interaction, such as our chosen case of air defence systems, these provide multiple examples of situations where an ideal-typical norm of MHC has already been compromised. These observations underline a long-standing problem of MHC, as the capacity of human actors to meaningfully control the use of force diminishes with the rise of more complex forms of human–machine interaction (see the following sections).
To sum up, the emergence of MHC as a typical, deliberative norm and an outcome of international discussion, vested with a supposedly stable meaning and formalised in international law, may arguably be less relevant than its significance in the debate on AWS suggests. While debate at the UN-CCW continues, various use-of-force practices, such as those connected to widespread use of air defence systems, are already in the process of establishing a specific understanding, a procedural norm, of what it means to exert human control and the quality of human control that is appropriate. In this way, these non-verbal practices are setting new understandings of



what appropriate use of force is and to what extent (and what form of) human control needs to be involved. Interestingly, the use of weapons systems can be perfectly in line with deliberative norms of jus ad bellum and jus in bello such as IHL, while the reality of human control is neither noticed nor problematised. This suggests that norms are already in place that have stabilised certain understandings of MHC even before concrete deliberations on what this concept should mean are concluded. Over time, these norms have shaped perceptions of what constitutes ‘appropriate’ human–machine interaction – the decision-making element of meaningful human control – without this having been (publicly) discussed or deliberated upon. In fact, as we will see in the following, the level of human control in specific targeting decision-making situations is on the low end of Sharkey’s spectrum cited above. This has led to a substantially reduced role for the human operator in specific targeting decisions. These precedents may make it difficult to move outside of the normative status quo.

practices of ‘meaningful’ human control in air defence systems: human–machine interaction and automated and autonomous features

In this section, we outline that the question of meaningful human control is not a recent one but has instead become increasingly important in the context of developing weapons systems with ever-increasing numbers of automated and autonomous features. Specifically, we concentrate our discussion on air defence systems, which have long included significant automated and autonomous elements, although these have increased in technological sophistication over the decades. Following a short introduction to the role that automated and autonomous capabilities play in air defence systems, we focus our discussion on meaningful human control by singling out one particular element: human–machine interaction, that is, the decision-making dimension of human control. It is here that human control and machine agency are most closely interrelated, and it also captures specific uses of force. This discussion is prefaced by a short historical overview of how human control of weapons systems changed with their technological development.



This discussion also serves to highlight why human–machine interaction is a crucial component of meaningful human control.

Technological development, meaningful human control, and human doubt

Even without accounting for AI-driven systems, the history of using force accentuates the increasingly problematic position of human control vis-à-vis sophisticated weapons systems. Human error has always played a role in how deadly force has been used and has long been at least partly influenced by how instruments and devices represent reality. To provide an example, the image of a ship ‘translated’ by a submarine periscope during the Second World War could lead the commander to mistake an illegitimate target for a legitimate one, especially when it came to clearly distinguishing an armed merchant ship or troopship from a merchant or passenger vessel. Whether or not this was indeed a ‘technical’ problem in practice, it established a grey zone of ambiguity open to intentional or unintentional individual interpretations. The air warfare of the period is equally full of examples of misdirected bombs. However, it can be argued that these examples point to a range of error or uncertainty that was known and deliberately accepted as a consequence of warfare. In other words, the interplay of weapons systems and human actors was based on a more direct concept of meaningful human control: human actors were aware of the possibility of error, acted to the best of their knowledge, but ultimately also accepted errors or suboptimal outcomes as (common) features of waging war. Indeed, this is what has been referred to as the friction of war from the writings of von Clausewitz (1984) onwards. The technological developments that have made weapons systems increasingly complex have also introduced a different logic: the possibility of eliminating doubt.
In the context of AWS and ‘algorithmic warfare’, Louise Amoore (2019) has underlined how important it is to acknowledge mathematical concepts as the basis of calculating probabilities and providing a degree of certainty. This machine agency equals autonomy in calculating targets and leads to a basic disconnection between human and machine judgement. While mathematical calculations or equations can be ‘correct’, the outcomes they produce can still be wrong in a social sense.



Machine learning, which is at the heart of advances in narrow AI characterising applications in current and future use with militaries, represents a black box where human actors lose the capacity to assess how algorithms reach conclusions or ‘make decisions’. This means that trusting machines becomes something of a fundamental requirement for operating them. The supposed increasing elimination of doubt is at the core of the problem of MHC. However, the question of trust in machines is not novel and has grown in importance alongside the increasing technological sophistication of weapons systems. A widely known case is the 1983 incident involving Soviet officer Stanislav Petrov, who played a major role in avoiding a nuclear catastrophe by correctly doubting information, produced by the Soviet early-warning satellite network, that a US nuclear missile strike was under way. Petrov dismissed the supposed incoming nuclear missile strike as a system malfunction, deciding not to report it to superiors, who would have retaliated with a nuclear attack. In doing this, Petrov prevented a nuclear war by (partly) disobeying formal procedures and stepping outside of his defined role, thereby acting as a deliberate human decision maker. At the same time, this case shows how an electronically translated reality can influence the exercise of human agency. While Petrov lacked any technical-functional basis to doubt the information he received, he used his intuitive understanding of what seemed to be an implausible sighting based on his prior social knowledge and, one might add, a form of situational awareness. Having acted on the basis of a ‘gut feeling’, Petrov later commented: ‘I told myself I won’t be the cause of World War III. I won’t. Simple as that. … Something told me that over there, on the other side of the ocean, are people just like me. They probably don’t want war’ (Maynes 2017).
In this case, doubt expressed itself as a social feeling of ‘this cannot be right’, rather than being based on an informed, supposedly ‘superior’ form of technical-factual knowledge. The incident is a good example of the limits but also the possibilities and the importance of meaningful human control. On the one hand, a more complex system involving different interrelated components might be able to identify a similar malfunction. On the other hand, even the most advanced learning algorithms lack the capacity to use a creative, intuitive version of doubt to check on results if the situation is not part of their training data. Moreover, the complexity of systems also generally reduces the ability of humans



to doubt their data output. In particular, this is the case in less ‘spectacular’ events and when reaction time is (extremely) limited. The missile early-warning system described above gave human operators limited options as well as limited time to decide whether the output it produced was correct. While an unexpected nuclear attack constitutes a ‘worst possible case’ scenario, which could therefore allow for a higher degree of human doubt, a set of mundane pieces of electronic information in the context of a complex target identification and engagement cycle is much more difficult to doubt because there is hardly any external social basis for formulating the necessary feelings of doubt.

Air defence systems and autonomous features

Air defence systems have proliferated around the world over the past decades. State users include global military powers, such as the five permanent members of the UN Security Council (China, France, Russia, United Kingdom, United States), as well as powers with regional significance, such as Brazil, Egypt, Israel, India, Japan, and Turkey. These systems identify, track, and, if necessary, engage (for the most part) airborne threats in order to defend a platform, installation, or population from attack. The history of such systems is traceable to the interwar period (Bousquet 2018, 55–9), and their development gained particular momentum during the Cold War. Autonomy and automation have long been integrated into the critical functions of air defence systems in order to ‘detect, track, prioritise, select and potentially engage incoming air threats’ (Boulanin and Verbruggen 2017, 37). Crucially, ‘autonomy in air defence systems has no other function than supporting targeting’ (Boulanin and Verbruggen 2017, 37). Unlike in other AWS, autonomy is therefore not needed for mobility because many air defence systems are static or fixed to a movable platform, such as a ship.
In fact, close-in weapon systems (CIWS) mounted on ships are the most commonly used type of air defence system, with approximately 2,000 such systems operated by more than forty-five states (Artificial Intelligence Committee 2018, 395; Bode and Watts 2021). Generally, air defence systems operate under at least two modes of human–machine interaction: manual mode, where the human operator authorises weapons launch and manages the engagement process; or automatic mode, where, within its preprogrammed



parameters, the system ‘can automatically sense and detect targets and fire upon them’ (Roff 2016). We can connect these two modes to different levels of human control (see table 5.2). In manual mode, human operators remain ‘in the loop’ – that is, deliberating about specific targets before initiating an attack or choosing from a list of targets suggested by a program. In automatic mode, human operators are only ‘on the loop’ and their roles may range from approving preselected targets to being allocated a time-restricted veto. Following Boulanin and Verbruggen’s summary, human supervisors only have a time-restricted veto in automatic mode: ‘the system, once activated and within specific parameters, can deploy countermeasures autonomously if it detects a threat. However, the human operator supervises the system’s actions and can always abort the attack if necessary’ (Boulanin and Verbruggen 2017, 39; our emphasis). Whether a human operator remains ‘in the loop’ or ‘on the loop’, the complexities of the human–machine interaction at the heart of targeting using air defence systems need to be accounted for. Ultimately, common descriptions of the human operator make two assumptions: first, human operators can retain situational awareness; second, they have sufficient insights into the parameters under which the automated or autonomous parts of the command module select and prioritise targets to question their selection and, if necessary, abort the attack. Yet, as we will demonstrate throughout the subsequent sections, both assumptions are rarely satisfied, circumscribing meaningful human control in specific targeting situations. Further, autonomy and automation in air defence systems generally refer to the command module subsystem, but the systems’ tracking radar may also include automated or autonomous components. For a simplified overview of the three-phased sequencing of an air defence system’s operation, see table 5.3.
As air defence systems demonstrate, the principle of autonomy in warfare and the questions it raises about the role of human decision-making therein are not new (Hawley 2017, 2). Despite this, as noted by the United Nations Institute for Disarmament Research, ‘[t]he international discussion on AWS has not been about these sorts of existing, already long deployed systems’ (UNIDIR 2017, 9). Some early publications on autonomy in weapons systems, such as Human Rights Watch’s Losing Humanity report (Human Rights Watch 2012), included a brief discussion of air defence systems (see, for example, ICRC 2016). Broadly speaking, however, this type of

How Autonomous Weapons Systems Make Norms

Table 5.3 Sequencing of an air defence system operation

Stage 1 – Target detection and assessment (sub-system involved: radars). An air defence system's search and tracking radars detect a potential target. The trajectory and velocity of this target is then calculated. This is triangulated against the system's approved engagement zones – generated using data on the flight paths of the civilian aircraft and friendly military aircraft that it has been provided with – and, on some systems, an identification, friend, or foe system designed to limit friendly fire. The human operators consult the Rules of Engagement under which they are operating. An assessment is then made on whether a potential target poses a threat.

Stage 2 – Target prioritisation (sub-systems involved: command module, human operators). If multiple targets are detected, the command module will need to prioritise the order in which they are engaged. Target prioritisation is determined by the system's preprogrammed engagement parameters.

Stage 3 – Target engagement (sub-systems involved: launchers, interceptors, human operators). The system's launchers then release its interceptors, attempting to destroy the identified target. Human initiation or approval is needed depending on the system's mode of operation: when 'in the loop', the operator must approve weapons release; when 'on the loop', after having switched the air defence system on, the operator is limited to a supervisory role.

Source: Bode and Watts (2021).
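The three-stage sequence in table 5.3 can be caricatured in a few lines of code. The sketch below is purely illustrative and rests on invented, simplified assumptions (the zone labels, distance field, and 'fired'/'held' outcomes are all hypothetical, not features of any real system); its point is simply how the meaning of operator inaction flips between manual ('in the loop') and automatic ('on the loop') modes.

```python
# Illustrative sketch only: a deliberately simplified caricature of the
# three stages in table 5.3. All names, fields, and parameters are
# hypothetical assumptions, not features of any real air defence system.

def detect_and_assess(contact, approved_zones, iff_friendly):
    # Stage 1: radars detect a contact; it is triangulated against
    # approved engagement zones and identification-friend-or-foe data.
    if iff_friendly or contact["zone"] in approved_zones:
        return None  # not assessed as a threat
    return contact

def prioritise(threats):
    # Stage 2: the command module orders threats by pre-programmed
    # engagement parameters (here, simply: closest first).
    return sorted(threats, key=lambda t: t["distance_km"])

def engage(threat, mode, operator_input):
    # Stage 3: launchers release interceptors. In manual mode the human
    # must actively approve release; in automatic mode the system fires
    # unless the human actively vetoes within the reaction window.
    if mode == "manual":
        return "fired" if operator_input == "approve" else "held"
    return "held" if operator_input == "veto" else "fired"

# The crucial asymmetry: operator silence means "hold" in manual mode
# but "fire" in automatic mode.
threat = {"zone": "unknown", "distance_km": 40}
print(engage(threat, "manual", None))     # → held
print(engage(threat, "automatic", None))  # → fired
```

The final two lines condense the chapter's concern: under automation, meaningful human control depends on a timely, informed veto rather than on affirmative approval.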

analysis has been either marginalised from the relevant literature or narrowed down to the discussion of a single system such as the Patriot (Hawley 2017). If mentioned in the context of broader AWS, air defence systems are sometimes not characterised as problematic from a meaningful human control perspective. The Campaign to Stop Killer Robots, for example, states in its 2019 publication outlining a path towards a regulatory framework on AWS that 'these [air defence] systems … seem to function within the bounds of meaningful human control and the final treaty would be unlikely to restrict their use' (Campaign to Stop Killer Robots 2020a, 3). The Campaign reaches this conclusion based on assessing technological and conditional components of meaningful human control: humans can override air defence systems, and such systems 'operate within tight parameters in relatively controlled environments and target munitions rather than people' (Campaign to Stop Killer Robots 2020a, 3). However, this overlooks the precedents air defence systems have set for the usage and development of AWS. In particular, existing air defence systems demonstrate significant problems with regard to the meaningful exercise of human control along the element of human–machine interaction, as we will show in the following sections.

Autonomous Weapons Systems and International Norms

air defence systems and the role of the human operator in specific targeting decisions

Whilst air defence systems are technically capable of engaging manned fixed- or rotary-winged aircraft, they are generally 'not used against human targets' (Nath and Christie 2015). Incidents in which air defence systems have led to the loss of human life are the subject of considerable public and political scrutiny. By studying these incidents, we can deepen the discussion of the human operator's role in specific targeting decisions and highlight the significant challenges that human agents encounter in performing this role. We focus on the decision-making dimension of meaningful human control. As defined earlier in this chapter, this concerns the quality of human supervision in human–machine interaction, including whether the human decision makers understand how the air defence systems function. This section analyses five cases involving a range of air defence systems with significant automated and autonomous capabilities. The cases include three civilian airline disasters: Iran Air Flight 655, shot down by an Aegis system on USS Vincennes over the Persian Gulf in July 1988; Malaysian Airlines MH17, shot down by a Buk system over Eastern Ukraine in July 2014; and Ukrainian Airlines PS752, shot down over Tehran by a Tor-M1 in January 2020. These are the most prominent incidents involving air defence systems, and therefore also those that have received the most significant media and analytical coverage, but there are also similar, less well-known cases, such as the destruction of Siberia Airlines Flight 1812 (4 October 2001) or Korean Airlines Flight 007 (1 September 1983). We further analyse two friendly-fire incidents involving the Patriot air defence system during the 2003 Iraq War. Examining the Patriot system in detail is useful because it has been central to the US air defence strategy since the 1990s and has been used by four other states in combat operations.

Iran Air Flight 655 (IR655) – 3 July 1988

On 3 July 1988, the guided missile cruiser USS Vincennes, equipped with the Aegis air defence system, destroyed Iran Air Flight 655 (IR655) over the Strait of Hormuz, killing all 290 passengers and crew on board the Airbus A300 (Ling 2020). The Aegis is a complex air defence system that began development in the 1970s. The Aegis system is described in a US Navy Fact file as a 'centralized, automated, command-and-control (C2) and weapons control system that was designed as a total weapon system, from detection to kill' (US Navy Office of Information 2019). In 2005, an anti-ballistic missile capability was developed that was likely to require the greater integration of automation into the system's core features. As John Hawley, an expert on the Patriot air defence system, notes, '[t]he nuts and bolts of the ballistic missile engagement process are too complex and time-limited for direct, in-the-loop human participation' (Hawley 2017, 6). While the exact details of the incident remain contested, the destruction of IR655 took place in the context of the Iran–Iraq War (1980–8). Washington was concerned about the potential disruption of oil exports from the Persian Gulf. Beginning in July 1987, the United States, along with several other countries, tasked naval warships with guaranteeing the safe passage of oil tankers transiting through the Strait of Hormuz. In May 1987, the frigate USS Stark – allegedly mistaken for an Iranian tanker – had been hit by two Exocet anti-ship missiles fired from an Iraqi Mirage F-1, killing thirty-seven American sailors and wounding twenty-one others.3 The Stark had observed the Iraqi F-1 for an hour, but its captain had decided not to engage the aircraft (Maclean 2017, 23).
After this attack, the rules of engagement for US warships operating in the region were loosened, providing commanders with significantly greater latitude for exercising force (Halloran 1988). This loosening of the rules of engagement provides important context for the downing of IR655. As Admiral William J. Crowe Jr., then chairman of the Joint Chiefs of Staff, emphasised in a Congressional hearing: ‘each commanding officer’s first responsibility was to the safety of his ship and his crew. … Ship’s captains are expected to make forehanded judgments, and if they genuinely
believe to be under threat, to act aggressively’ (US House of Representatives 1992; our emphasis). IR655 had taken off from Bandar Abbas International Airport – an airport used by both civilian and military aircraft (Halloran 1988) – en route to Dubai, United Arab Emirates. The USS Vincennes and the frigate USS Montgomery were in the middle of a skirmish with Iranian gunboats as IR655 departed (US Department of Defense 1988, 2). As stated in the Pentagon’s investigation report, the Vincennes’s combat information centre (CIC) operators saw IR655 had a ‘direct relationship to the ongoing surface engagement’ (US Department of Defense 1988, 2). Its radar signature was interpreted as an Iranian F-14 ‘head[ing] directly for Vincennes on a constant bearing at high speed, approximately 450 knots’ (Halloran 1988). Thereafter, the Vincennes is claimed to have sent multiple warnings to IR655 on military and civilian channels without response, which was taken as a further indicator of its hostile intent. Later reports suggest that the pilot of IR655 was not likely to have been monitoring the channels these warnings were issued on. Events unfolded differently from the perspective of the USS Sides, a guided missile frigate, which was also operating in the Strait of Hormuz. The Sides analysed the same radar data as the Vincennes,4 but its captain did not authorise a missile strike against IR655 because it ‘simply had not behaved like a combat aircraft’ (D. Evans 1993). Unlike the Vincennes, the Sides was not equipped with the Aegis. Tragically, ‘the electronic specialists in the Sides’ combat information centre had correctly identified the aircraft’s commercial transponder code at virtually the same instant that the Vincennes fired her missiles’ (D. Evans 1993). Key here was the time available to the commander to act: as the Pentagon’s investigation noted, ‘the compression of time gave him an extremely short decision window’ (US Department of Defense 1988, 6). 
IR655 first appeared on the radar screen at 10:47am – the Vincennes made its decision to fire four minutes later, at 10:51am. In the lead-up to this decision, Captain Rogers received faulty information from the CIC on the Vincennes. The most significant misreading identified IR655 as decreasing in altitude (US Department of Defense 1988, 5). This appears to have been the result of how information was displayed to human operators, in particular a code change for IR655's radar track. Initially, IR655 had been assigned track number (TN) 4474 but this was changed automatically by Aegis
to TN4131 along with its designation on the USS Sides (D. Evans 1993).5 Not only had Captain Rogers not been aware of this change, but TN4474 had also been 're-assigned to an [sic] US-Navy A6 making a carrier landing in the Arabian Gulf' (Maclean 2017, 29; see also Dotterway 1992, 63–5). There were also many psychological factors at play: the misinterpretation that IR655 was descending was not double-checked by key personnel higher up in the decision-making hierarchy of the Vincennes, for example, the anti-air warfare tactical action officer (AAW TAO). Had the AAW TAO checked, they would have seen that the system displayed IR655 as ascending rather than descending (US Department of Defense 1988, 5). Apart from the faulty readings highlighted above, decisions made by human operators in the CIC of the Vincennes also displayed a significant lack of situational awareness, 'the perception of elements in the environment …, the comprehension of their meaning, and the projection of their status in the near future' (Endsley 1995, 36). This led the CIC operators to misinterpret information they received during the minutes that IR655 was in the air. Due to a lack of adequate foresight planning, CIC operators on the Vincennes did not have the background knowledge necessary to retain situational awareness and correctly assess incoming information. A key piece of such knowledge was, for example, that 'failure of a track to respond to warnings would have an entirely ambiguous meaning – it could be a commercial flight or a hostile military air. No store should be given to this when making a decision as to its identity or intent' (Maclean 2017, 40). The Aegis system itself was reported to have 'performed as designed', as it was not capable of determining aircraft types, a task that required 'human judgment' (US Department of Defense 1988, 7).
Notably, however, Aegis’s longer-range radar was designed to give operators more time to make such judgements – a functionality that was negated in this case given that warships were ‘operating close-in to a land-based airfield’ (US Department of Defense 1988,  7). As human operators misread or mistook some of the Aegis’s indications, the case of the Vincennes is sometimes referred to as an example of ‘under-trust’, defined as ‘… the human operator ignor[ing] relevant information provided by the system or overrid[ing] its action without justification’ (Boulanin et al. 2020, 19). However, based on the brief summary of events above, we argue that the deciding factors for the incident come down to (unidentified) complexities inherent in human–machine interaction.



The Pentagon’s investigation identified ‘combat induced stress on personnel’ as likely to be a ‘significant’ contributing factor to this failure. It consequently recommended more detailed investigations of the ‘stress factors impacting on personnel in modern warships with highly sophisticated command, control, communications and intelligence systems, such as Aegis’ (US Department of Defense 1988, 69; our emphasis). Indeed, the deployment of the USS  Vincennes marked the first combat operation of an Aegis-equipped cruiser, meaning that its CIC operators had no prior experience of operating the highly complex Aegis system under combat conditions. Reports also indicate that several senior CIC personnel were unfamiliar and uncomfortable with the computerised exercise of their roles demanded by operating the Aegis (Barry and Charles 1992). From a US Navy perspective, the mistaken identification of IR655 as an F-14 ‘was a professional disgrace’ (Barry and Charles 1992). It led to a seven-year research project that included human factor analysis and resulted in ‘a series of design changes in the Aegis user-interfaces to eliminate obvious sources of error’ in human– machine interaction (Maclean 2017, 26). However, challenges inherent in human–machine interaction are not easily solved. Human factor analysts continue to question ‘whether even the besttrained crew could handle, under stress, the torrent of data that Aegis would pour on them’ (Barry and Charles 1992). Malaysia Airlines Flight 17 (MH17) – 17 July 2014 On 17 July 2014, Malaysia Airlines Flight 17 (MH17) was destroyed over the contested Eastern Ukraine, killing all 298 people on board. A report authored by an international investigation team under the direction of the Dutch Safety Board established that a Buk air defence system (either Buk M1 or Buk M1-2) was responsible for MH17’s destruction (Dutch Safety Board 2015b). 
The investigation team also established that the Buk system entered and exited Ukraine from Russia and was fired from territory under the control of pro-Russian separatists (Harding and Luhn 2016). Whilst Russia continues to deny any involvement, a criminal trial of three Russian nationals and a Ukrainian national accused of ‘co-operat[ing] to obtain and deploy’ the Buk opened in the Netherlands in March 2020 (Holligan 2020). Coverage of the MH17 tragedy has been dominated by assignations of responsibility and the search for justice.



The Buk (Russian designation 9K37, NATO designation SA-11) is an all-weather medium-range air defence system manufactured by Almaz-Antey. Its earliest variants were first operational with the Soviet military in 1979. Thereafter, its base capabilities have been added to through multiple system upgrades: Buk M1 1983, Buk M1-2 1988, Buk M2 2008, Buk M3 2016 (Dutch Safety Board 2015a, 134; ODIN 2020). Algeria, Azerbaijan, China, Egypt, Georgia, India, Iran, Syria, Ukraine, and Venezuela are among other states that operate export variants of this system (ODIN 2020). Because of the warhead fragments found at the wreckage site, the Dutch Safety Board concluded that only the three older Buk variants (Buk, Buk M1, Buk M1-2) could have been used to down flight MH17 (Dutch Safety Board 2015a, 132). In a typical operation, ‘a Buk battery consists of three elements: an armored vehicle with a large radar device for target acquisition; the command vehicle, where there are monitors from which the battery is controlled; and finally, one or more mobile launching pads with four missiles each’ (Der Spiegel staff 2014; see also Harress 2014; Lele 2014). According to documentation provided by the Dutch Safety Board, the missiles that destroyed MH17 were fired from a missile-launching vehicle that was operating independently in a field outside Snizhne in eastern Ukraine (BBC News 2020b). This suggests that ‘someone simply started firing from a missile-launching vehicle’ (Der Spiegel staff 2014) without requisite command, control, and radar support. Speaking to a BBC journalist, Lieutenant Colonel Sergey Leshchuk of the Ukrainian Air Force presents the Buk air defence system as being capable of engaging six different targets within ninety seconds (BBC News 2016). 
In the same segment, whilst the Buk system is reported as possessing an automatic friend/foe identification system, it came down to the ‘expertise and experience’ of the operator to determine whether unidentified planes were civilian or military in nature (BBC News 2016). These assessments are confirmed by other experts on Russian military technology, such as Steve Zaloga: ‘When those guys are looking at a target they don’t have the same sort of information that the air traffic controllers have. … All they know is a target is travelling at 33,000 ft. That’s it’ (Harress 2014). As the Buk’s missile-launching vehicle was operating independently, it is probable that its operators ‘didn’t know what they were shooting at because they would not have been connected to
the civilian air traffic system that helps identify what is civilian and what is a military target' (Harress 2014). Further, older Buk variants are suspected to have track classification problems related to civilian aircraft because, given their envisaged usage in a 'hot' war against NATO, they were not originally designed with this capability (Harress 2014). Some experts indicate that the Buk system's operators most probably misidentified MH17 as a Ukrainian military transport aircraft (Lele 2014). In the four weeks preceding the downing of MH17, more than ten Ukrainian military aircraft were shot down over the region. It is therefore hard to understand why the Ukrainian authorities kept the airspace open for civilian aviation (Dutch Safety Board 2015a, 244). As the airspace remained open, aircraft operators, with a single exception, did not deviate from their routes (Dutch Safety Board 2015a, 245). The precise level of human control under which the Buk system was operated remains uncertain. Reports point to different capabilities and the inclusion of automated and autonomous features: 'On the screen there would be a target identified using a symbol and the Buk would do the rest. … This happens at such speeds that a human couldn't control it. It's all automatic after the launch starts' (Zaloga quoted in Harress 2014; our emphasis). The question of whether the human operators of the Buk received adequate training looms particularly large in this case. From the available information, it is clear that they must have received at least some rudimentary training in order to operate the system. Three operators with at least a month's training are needed to properly operate the Buk system (Der Spiegel staff 2014). According to US Secretary of State John Kerry, 'the separatists have a proficiency that they've gained from training from Russians as to how to use these sophisticated SA-11 systems' (Der Spiegel staff 2014).
It is also possible that the Buk system was operating in automatic or semi-automatic mode, compounding the operators' lack of training and operational experience (Der Spiegel staff 2014).

Ukraine International Airlines Flight 752 (PS752) – 8 January 2020

On 8 January 2020, two missiles fired from the Iranian Tor-M1 air defence system brought down Ukraine International Airlines Flight 752 (PS752), killing all 176 of the passengers and crew on board the Boeing 737-800 (BBC News 2020a).



Iran imported 29 Tor-M1 systems from Russia in 2005 as part of a US$700 million contract, and first successfully tested these systems in 2007 (Sputnik News 2007). The Tor system has been described as 'an all-weather low to medium altitude, short-range surface-to-air missile system designed for engaging airplanes, helicopters, cruise missiles, precision guided munitions, unmanned aerial vehicles, and short-range ballistic threats'.6 Having achieved initial operational capability with the Soviet Union in 1986, its baseline capabilities have been added to in order to counter cruise missiles and other forms of 'precision' munitions.7 These upgrades have in all likelihood been enabled by a greater integration of automation and autonomy into its critical features.8 Like most air defence systems, Tor-M1 can be operated in manual and automatic modes. When operating in the latter, 'the system constantly scans the operational airspace and automatically targets all objects not recognised as friendly via a "friend or foe" radar-based identification system' (Ponomarenko 2020). It reportedly takes as little as 8–12 seconds from target identification to missile launch.9 This reaction window sets clear limits on the exercise of 'meaningful' human control. The incident happened hours after Iranian attacks on two military bases housing US troops in Iraq, an action precipitated by the Trump administration's assassination of the Iranian General Soleimani. During this time, Iran's air defence systems were on high alert. According to the commander of the Aerospace Force of the Islamic Revolution Guards Corps, his forces 'were totally prepared for a full-fledged war' (IFP Editorial Staff 2020). Given this, it is likely that the Iranian air defence forces operated under loose rules of engagement, similar to those we discussed in the case of IR655 (Bogaisky 2020). The operators of the Tor-M1 were also in a situation of combat-induced stress.
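The arithmetic behind such a reaction window can be made concrete with a small, purely hypothetical sketch. The 8–12 second figure comes from the reporting cited above, but the per-task timings assigned to the operator below are invented for illustration only.

```python
# Illustrative sketch only: how quickly an 8-12 second
# identification-to-launch window is consumed. The task timings below
# are invented assumptions, not measured values for any real crew.

def remaining_veto_time(window_s, task_times_s):
    # Whatever is left after the operator's checks is the effective
    # time available to exercise a veto.
    return window_s - sum(task_times_s)

# Hypothetical operator sub-tasks: read the track symbol (3 s),
# cross-check speed/altitude against a civilian profile (4 s),
# seek authorisation up the chain of command (5 s).
checks = [3.0, 4.0, 5.0]

print(remaining_veto_time(12.0, checks))  # → 0.0: no margin at the long end
print(remaining_veto_time(8.0, checks))   # → -4.0: the window closes first
```

On these assumed timings, even a composed operator would run out of time before completing routine verification, which is the sense in which the window 'sets clear limits' on meaningful human control.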
Initially, the head of the Iranian Civil Aviation Organization commented: ‘scientifically, it is impossible that a missile hit the Ukrainian plane’ (Sabbagh and Safi 2020). The United States challenged such claims, maintaining that it had evidence that the Tor-M1 had locked onto PS752’s radar signature prior to its destruction (Sciutto et al. 2020) and had identified ‘infrared signals from two suspected missiles’ via US satellites (Sabbagh and Safi 2020). Both European and North American leaders were quick to point out that PS752 could have been shot down by mistake (i.e. human error) (Sky News 2020; Sabbagh and Safi 2020). Capitalising upon the diplomatic
wiggle-room such statements created, Iranian officials conceded their culpability for the destruction of PS752. The commander of the Islamic Revolutionary Guards Corps' airspace force, General Amir Ali Hajizadeh, admitted that '[t]he plane was flying in its normal direction without any error and everybody was doing their job correctly' (New York Times 2020). He added that 'if there was a mistake, it was made by one of our members' (New York Times 2020). Other Iranian leaders also attributed the failure to human error: on 11 January, Iranian Foreign Minister Mohammad Javad Zarif took to Twitter to attribute the incident to 'human error at time of crisis caused by US adventurism led to disaster' (Zarif 2020). But does that mean that the system was operated under meaningful human control? As discussed in relation to the previous civilian airline disasters, there are a range of human–machine interaction challenges associated with operating air defence systems in high-pressure environments including, most importantly, target identification. In the words of a former European air defence officer: 'Shooting down a hostile aircraft is easy. It's identifying the aircraft and not shooting down friendlies that are the challenges' (G. Doyle 2020). The Tor-M1 targeting software relies upon a combination of radar, visual identification, and signals from the plane's tracking transponder (New York Times 2020): 'a radar beacon that transmits flight data and an aircraft's identity back to ground controllers' (Peterson 2020). While the system is performing the latter task automatically, without human triangulation, 'everything becomes an enemy to the missile – unless you can identify it by sight and turn the missile off' (Ponomarenko 2020). Even without transponder signals, PS752's 'flight speed, altitude, and the fact that it was in a civilian corridor' should have prompted the system's operator to identify it as a civilian plane (Peterson 2020).
The Tor-M1’s operators appeared to be ‘operating without a solid picture of the known traffic in Iranian airspace as whole’ (Bronk 2020). This is typical for this type of close-in weapons system, characterised as a ‘stand-alone system, meaning it is mounted on the back of a vehicle and not typically plugged into a country’s broader air defence radar network’ (Peterson 2020). Whilst the system operator had sought authorisation for the attack higher up in the chain of command, they were unable to communicate this request due to either jamming or the high level of traffic across the system (IFP Editorial Staff 2020). As importantly, the short-range radar of the
Tor-M1 gave ‘a very short reaction time’ of about ten seconds ‘to interpret the data’ (IFP Editorial Staff 2020; Peterson 2020). Interestingly, on 11 January, Commander Ali Hajizadeh claimed that an Iranian air defence operator had misidentified flight PS752 as a cruise missile: ‘At several stages, the Alert Level 3, which is the highest level, is communicated and emphasized to the entire network. So all air defence systems were at highest alert level. For several times, these systems including the one involved in the incident were notified by the integrated network that cruise missiles have been fired at the country. For a couple of times, they receive reports that “the cruise missiles are coming, be prepared”. … So you see the systems were at the highest alert level, where you should just press a button. They had been told cruise missiles were coming, and the air defence unit engaged in this incident and fired a missile’ (IFP Editorial Staff 2020; our emphasis). Given these observations, some commentators have concluded that ‘a badly-trained or inexperienced crew …, scared of being hit as part of a retaliatory US strike following the ballistic missile attacks on bases in Iraq, made a series of tragic and incorrect assumptions when PS752 appeared on their radar screen’ (Bronk 2020). This observation suggests that the Tor-M1 crew lacked the necessary combat experience to properly operate the system: ‘taking the time to cross-reference or confirm the status of a radar contact under those circumstances takes a level of discipline uncommon to operators with no combat experience, and that no longer exists in the Iranian military’ (Peterson 2020). But it should also be acknowledged that the kind of ‘snap decision’ that led to the downing of PS752 is a typical part of how human agents operate air defence systems. 
The focus on human 'error' or human 'mistakes' distracts from how the automated and autonomous technology structures the use of force.10 Even well-trained crews are subject to the limited situational awareness and increased complexity that operating an air defence system with autonomous features brings with it (see the subsequent section on the Patriot). This means that the individual human operators at the bottom of the chain of command often bear the responsibility for structural failures in how air defence systems are designed and operated.11 It is also unclear whether the Tor-M1 was operating in manual or in automatic mode. If run in the latter, this 'could have led to
an accidental launch by an inexperienced ground crew' (Peterson 2020). In this case, inexperience does not refer to target identification specifically but rather to a broader understanding of the system's operation: 'any system that can work automatically is always a danger if the crew does not fully understand the merits and limitations of that ability' (Ponomarenko 2020). The official claim that PS752 was misidentified as a cruise missile (New York Times 2020), however, suggests that the Tor-M1 operated in manual mode: the flight behaviours of airliners and cruise missiles differ so significantly that PS752 would not have been captured within the system's algorithmic parameter classification for cruise missiles.12

The Patriot system and fratricides

In the early stages of the 2003 invasion of Iraq, the MIM-104 Patriot was involved in two fratricidal engagements that respectively destroyed an RAF Tornado fighter jet (24 March 2003) and a US Navy F-18 fighter jet (2 April 2003), killing three crewmembers in total (US Department of Defense 2005, 2). The RAF Tornado was wrongly identified as an Iraqi anti-radiation missile: 'The track [of the intended target] was interrogated for IFF [Identification Friend or Foe] but there was no response. Having met all classification criteria, the Patriot crew launched the missile' (UK Ministry of Defence 2004, 2). Two Patriots downed the US Navy F-18 about a week later.13 Compounding matters, there was a further close-call friendly fire incident on 25 March 2003, when a Patriot battery locked onto a US Air Force F-16. In this case, the pilot 'was alerted to the fact that he had been targeted by radar' and launched a counter-attack that destroyed the Patriot battery (Haines 2004). The MIM-104 Patriot 'is a long-range, all-altitude, all-weather air defence system to counter tactical ballistic missiles, cruise missiles, and advanced aircraft' (Army Technology 2019).
First fielded by the United States in the mid-1980s, the system has since been upgraded multiple times (Hawley and Mares 2012, 3; Piller 2003). Export variants of the system are operated by the Dutch, Egyptian, German, Israeli, Japanese, Jordanian, Kuwaiti, Saudi, South Korean, and the United Arab Emirates armed forces. The Patriot’s command module comprises two operators (Piller 2003) and is the only subsystem of a Patriot battery that involves direct human control. As the central subsystem, the command module can communicate and
coordinate actions with launching stations, other Patriot systems, and command headquarters. In simple terms, the Patriot's radar tracks objects in the air, and its engagement algorithm 'identifies those objects, and then displays them as symbols on a screen' (Leung 2004), e.g. as ballistic missiles. What happens then depends on whether the Patriot is in semi-automatic or automatic mode, the two modes of operation pointing to very different levels of human control (see figure 5.2). In semi-automatic mode, Patriot is a human-in-the-loop system: while the human operators receive 'more computer-based engagement support' (Hawley 2017, 4), they make all critical decisions and play essential parts in the control loop. That said, problems with validating the accuracy of the system's recommendations and its performance remain even for this human-in-the-loop setting. In automatic mode, however, Patriot becomes a human-on-the-loop system and 'is nearly autonomous, with only the final launch decision requiring human interaction' (Missile Defense Project 2018). This sets clear limits as to whether meaningful human control in specific targeting decisions is possible. According to John Hawley, an engineering psychologist at the US Army Research Laboratory with long experience of the Patriot system, 'there are few "decision leverage points" that allow the operators to influence the system's engagement logic and exercise real-time supervisory [control] over a mostly automated engagement process' (Hawley 2017, 4). Whilst human agents monitor the command module, the Patriot system is 'capable of applying lethal force with little or minimal direct human oversight' (Hawley 2017, 4). This has reduced the human agent's role to a veto power in engagement decisions (Singer 2010, 125). As with other air defence systems, Patriot operators have 'just seconds to decide whether to override the machine, or let it fire' (Leung 2004).
The Patriot was employed in automatic mode by the United States in both Gulf Wars. It was intended to shoot down tactical ballistic missiles (TBMs), a capability that manufacturer Raytheon introduced to the Patriot shortly before the First Gulf War (1990–1) (Leung 2004). Official statistics claimed that, during the conflict, the system ‘intercepted 79 percent of the Scuds launched over Saudi Arabia and 40 percent of those fired at Israel’ (Kaplan 2003). However, the veracity of these numbers has been questioned by independent reports, not least because intercepts were minimally classified as when the Patriot missile ‘got within lethal range of the Scud and


Autonomous Weapons Systems and International Norms

its fuse exploded' rather than when it actually destroyed the missile (Kaplan 2003). A US Congressional report commissioned by the House Committee on Government Operations found that the Patriot had downed less than 9 per cent of the Scuds (US Congress House Committee on Government Operations 1992). While anecdotal reports also indicated a number of close-call fratricide incidents (Hawley 2017, 6), the publicised success of the Patriot system whilst operating in automatic mode paved the way for its subsequent use in much the same way during the 2003 invasion of Iraq.

In fact, in the twelve years between the two Gulf Wars, the US Army had become so confident of the Patriot system's automatic mode that it systematically de-skilled its operators by 'reduc[ing] the experience level of their operating crews [and] the amount of training provided to individual operators and crews' (Hawley 2017, 8). The experience level of the Patriot crew involved in the Tornado fratricide underlines this: 'the person who made the call … was a twenty-two-year-old second lieutenant fresh out of training' (Scharre 2018, 166). It also illustrates the level of confidence invested in the Patriot system's capabilities. These actions are consistent with a characteristic myth associated with autonomous systems: 'the erroneous idea that once achieved, full autonomy obviates the need for human–machine collaboration' (Bradshaw et al. 2013, 58). In reality, however, operating the Patriot in automatic mode has increased the complexity of its management, demanding more, not less, 'human expertise and adaptive capacity' (Johnson, Hawley, and Bradshaw 2014, 84). A series of interrelated factors were found to have contributed to the fratricide incidents.
The British Board of Inquiry's report on the downing of the Tornado, for example, lists, among others, the following six factors (UK Ministry of Defence 2004, 2–3): (1) Patriot anti-radiation missile classification criteria; (2) Patriot firing doctrine and crew training; (3) autonomous Patriot battery operation; (4) Patriot IFF procedures; (5) the ZG710's (the Tornado's) IFF serviceability; (6) orders and instructions. Given our focus on the decision-making element of meaningful human control, we concentrate on the nature and quality of human–machine interaction.



Track classification problems in the Patriot system were a major factor in the incidents. Similarly to how other air defence systems track and classify targets, the Patriot system classifies tracks as aircraft, different kinds of missiles (ballistic, cruise, anti-radiation), or other categories based on 'flight profiles and other track characteristics such as point of origin and compliance with Airspace Control Orders' (Hawley and Mares 2012, 6–7). In the case of misclassifications, 'the system-generated category designation does not match the track's actual status' (Hawley and Mares 2012, 7). It is important to note that air defence systems like the Patriot are not programmed to defend against a specific set of target profiles, but around a wider envelope of possible target profiles.14 If the target parameters are defined too precisely, the risk of false negatives increases.15 These are situations in which the system fails to recognise a target object because one of its prerequisite target conditions is not met, e.g. a missile may be flying slightly slower or from a different angle than the defined profile (Moyes 2019, 5). The Patriot system's engagement algorithm suffered from both general and specific track classification problems. In general, the system's track classification was not completely reliable or accurate – and this had been known prior to the incidents (Hawley and Mares 2012, 7). In fact, there are limits to how reliably track classification can be programmed due to the 'brittleness' typically associated with algorithms and AI, which is characterised by their inability to contextualise (Pontin 2018) and their having 'little capacity to handle gray or ambiguous situations' (Hawley 2017, 4).
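The trade-off between a too-narrow and a too-wide target-profile envelope can be made concrete in a minimal sketch. The track attributes and thresholds below are invented for illustration; the Patriot's actual classification criteria are far more complex and not public.

```python
from dataclasses import dataclass

@dataclass
class Track:
    speed_m_s: float       # measured speed of the tracked object
    dive_angle_deg: float  # descent angle relative to the horizontal

def matches_missile_profile(track: Track,
                            min_speed: float,
                            min_dive_angle: float) -> bool:
    """Classify a track as 'ballistic missile' if it falls inside the
    defined target-profile envelope; every condition must be met."""
    return (track.speed_m_s >= min_speed
            and track.dive_angle_deg >= min_dive_angle)

# A genuine missile flying slightly slower than the defined profile expects:
slow_missile = Track(speed_m_s=950.0, dive_angle_deg=70.0)

# A tightly drawn envelope misses it: a false negative.
print(matches_missile_profile(slow_missile, min_speed=1000.0, min_dive_angle=60.0))  # False

# Widening the envelope catches the missile ...
print(matches_missile_profile(slow_missile, min_speed=900.0, min_dive_angle=60.0))   # True

# ... but a fast jet in a steep descent may now match as well:
# a misclassification of the kind involved in the fratricides.
diving_jet = Track(speed_m_s=920.0, dive_angle_deg=65.0)
print(matches_missile_profile(diving_jet, min_speed=900.0, min_dive_angle=60.0))     # True
```

The point of the sketch is that the envelope cannot be tightened against false negatives without simultaneously loosening it toward false positives: the classification criteria embody a risk distribution, not a neutral technical fact.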
Rather than addressing these deficiencies upfront by, for example, communicating them to the system's operators, the US Army framed these as a software problem: 'the claim was repeatedly made that a "technical fix" … was just around the corner' (Hawley and Mares 2012, 7). More specifically, the algorithm governing the Patriot system's target selection was trained on a data set that was not specific enough to rule out false identifications. The target selection criteria 'were based on the many different Anti-Radiation Missiles available worldwide', rather than 'on the known threat from Iraq' (UK Ministry of Defence 2004, 4). Informed by his personal experience with developing the system, Hawley concludes that the Patriot's engagement algorithms were consequently not specific enough to reliably 'handle unusual or ambiguous tactical situations' that invariably



presented themselves in the context of countering air threats (Hawley 2017, 4). As a result of these inaccuracies, the system mistook the data points of the oncoming friendly jet for an imminent missile attack. In the words of Robert Riggs, a journalist who was embedded with Patriot batteries in the Second Gulf War: 'This was like a bad science fiction movie in which the computer starts creating false targets. And you have the operators of the system wondering is this a figment of a computer's imagination or is this real. They were seeing what were called spurious targets that were identified as incoming tactical ballistic missiles. Sometimes, they didn't exist at all in time and space. Other times, they were identifying friendly US aircraft as incoming TBMs' (Leung 2004).

Compounding these track classification problems, it also appears as if the identification friend or foe (IFF) system did not perform as expected. This was a known problem, as indicated by earlier near-miss fratricidal engagements involving Patriot batteries in the First Gulf War and in training (Scharre 2016, 30). As the Defense Science Board Task Force's report stated: 'This is not exactly a surprise; this poor performance has been seen in many training exercises. The Task Force remains puzzled as to why this deficiency never garners enough resolve and support to result in a robust fix' (US Department of Defense 2005, 2). As a consequence, the Patriot crew lacked the time and, crucially, also the necessary information and expertise to overrule the targeting outputs made by the system. The latter included knowledge of the Patriot system's known limitations and the potential conditions under which it might fail, such as the IFF and track classification problems. As the British Ministry of Defence assessed: 'Patriot crews are trained to react quickly, engage early and to trust the Patriot system. … The crew had about one minute to decide whether to engage.
The crew were fully trained, but their training had focused on recognising generic threats rather than on those that were specific to Iraq or on identifying false alarms’ (UK Ministry of Defence 2004, 3). These aspects of training and trust are characteristic of a wider problem with existing patterns of human–machine interaction across a range of air defence systems. The concept of over-trust (Boulanin et al. 2020, 19), also referred to as automation bias, or ‘automation complacency’ (Parasuraman and Manzey 2010), refers to human operators being overly confident in the reliability of automated and autonomous systems and



the veracity of their outputs. This manifests in a 'psychological state characterized by a low level of suspicion' (E. L. Wiener 1981, 117). Automation complacency and bias have been a frequent feature of passenger airline crashes (Parasuraman and Manzey 2010), such as the prominent crash of Air France 447 en route to Paris from Rio de Janeiro as a result of the pilots placing more trust in the information provided by their on-board computer than in their own reflective faculties: 'It [automation bias] creeps in when people give undue weight to the information coming through their monitors. Even when the information is wrong or misleading, they believe it. Their trust in the software becomes so strong that they ignore or discount other sources of information' (Carr 2016). More fundamentally, the technologicalisation of weapons systems started a process challenging the human ability to doubt (see Suchman and Weber 2016, 102). Trusting the system was deeply ingrained into the Patriot's software, as the Defense Science Board's review of the Patriot noted: 'The operating protocol was largely automatic, and the operators were trained to trust the system's software; a design that would be needed for heavy missile attacks' (US Department of Defense 2005, 2).

These observations point to increasing challenges for the human operators tasked with remaining on-the-loop for the Patriot system, as well as other systems with automation and autonomy in target engagement. Hawley (2017, 2) refers to this as the 'humans' residual role in system control, and how difficult that role can be to prepare and perform'. The human operator has to step in where the system fails – this requires 'sustained operator vigilance, … broad-based situation awareness' (Hawley 2017, 8), and adequate expertise and experience in hands-on battle management so that they can question the system's outputs.
In other words, human operators must know when to trust the system and when to question its outputs (Hawley and Mares 2012, 7). This is a case of human judgement that operators have to get exactly right: as already discussed, either too much or too little trust in the system can create problems. This means that operators must have a full understanding of how the system works and its weaknesses. But these requirements for meaningful human control are typically lacking, if not impossible to meet, in specific targeting situations such as the ones discussed here (Hawley 2017, 4). The friendly fire incidents demonstrate the extent to which the inclusion of automated and autonomous functions renders



human–machine interaction incredibly complex – and in the process sets de facto circumscribed standards for a key component of exercising meaningful human control. To illustrate the system's technical and tactical complexity, 'Patriot currently employs more than 3.5 million lines of software code in air battle management operations' (Hawley and Mares 2012, 4). Further, systems such as the Patriot operate not in isolation but rather as part of an integrated air and missile defence system. This means that the Patriot works in close association with other, equally complex air defence systems that also include autonomous features, such as the Aegis or the Terminal High Altitude Area Defense (THAAD) (Hawley and Mares 2012, 4). In order to comprehend such complex 'system[s] and [their] operating environment at any point in space and time' (Hawley and Mares 2012, 5), their human operators need an extraordinary amount of knowledge and information.

This has significant repercussions for operators' situational awareness – a key issue in the diminished capacity for exercising meaningful human control through human–machine interaction. Following an established definition in human factor analysis, situational awareness means 'the perception of elements in the environment …, the comprehension of their meaning, and the projection of their status in the near future' (Endsley 1995, 36). To retain situational awareness and 'behave appropriately … operators must keep track of considerable information from a variety of sources over time and organize and interpret this information' (Hawley, Mares, and Giammanco 2005, 5). But human crews of air defence systems with autonomous features, such as the Patriot, are ill-equipped to retain situational awareness because they have been transformed from direct and active controllers of a weapons system (in-the-loop) to system monitors (on-the-loop) (Hawley 2017, 10).
This observation is fully in line with what human factor researchers have long argued: automation (and autonomy) ‘change the nature of the work that humans do, often in ways unintended and unanticipated’ (Parasuraman and Riley 1997, 231). The human agent’s modified role from active controller to system monitor implies the delegation of cognitive skills, rather than only motor and sensory tasks, to machines (Hawley, Mares, and Giammanco 2005, 3). This produces two distinct but interrelated problems regarding the retention of situational awareness. First, in their role as monitors, human agents are either overloaded or underloaded with tasks vis-à-vis those delegated to the



system (Kantowitz and Sorkin 1987). This means that either humans are not capable of competently performing the tasks allocated to them (overload) or that these tasks are so menial that retaining appropriate vigilance increases in difficulty over the required period (underload) (Hawley, Mares, and Giammanco 2005, 8). The underload problem manifesting in a lack of vigilance was identified as one of the key factors contributing to the Patriot fratricide incidents (Hawley 2007). Second, and more fundamentally, human operators may not have a workable model of how the machine makes decisions or the control process that informs this operation. Thus, they lack an understanding of the logic underpinning the tasks they are expected to perform (Hawley, Mares, and Giammanco 2005, 8). With many of their decision-making tasks now being performed by the system, humans are left to monitor and potentially second-guess a ‘decision-making’ process that they are no longer familiar with: ‘there is considerable evidence that when an abnormal situation occurs, operators will be slower to detect it and it will take longer time to jump back into the control loop and make the appropriate control actions’ (Hawley 2017, 10). As the operator does not have ‘something reasonable to do when the system is operating normally, it is unlikely that he or she can function effectively [either] in manual backup mode’ (Hawley, Mares, and Giammanco 2005, 8) or when something out of the ordinary occurs. Given these restrictions to the situational awareness of human operators, ‘calling for reliable supervisory control over a complex automated system is an unreasonable performance expectation’ (Hawley 2017, 8). 
These aspects were confirmed in an internal review of the Patriot system conducted at the US Army Research Laboratory in the aftermath of the fratricide incidents (summer/autumn 2004), which reached two conclusions: that functions had been automated in the Patriot system both at the design and the implementation stages ‘without due regard for the consequences for human performance’; and that the Patriot system was fielded and operated with significant automation bias, evident in a ‘blind faith in technology’ (Hawley and Mares 2012, 6). What is more, in addition to hindering human comprehension, the complexity of systems with automated and autonomous features, such as the Patriot, makes them susceptible to failure. This is because it is not possible to ascertain and test how the system and its



subsystems will behave across all possible conditions and situations – 'the number of potential interactions within the system and with its environment is simply too large' (Scharre 2016, 5). Consequently, operating weapons systems with automated and autonomous features comes with significant risks of failure – and that 'risk can be reduced but never entirely eliminated' (Scharre 2016, 25). Apart from these concerns, which relate specifically to human–machine interaction, news sources also point to significant deficiencies of the Patriot system that had become known in prior testing rounds between the First and Second Gulf Wars: 'on the test range, [the Patriot system] kept targeting friendly planes … in exercises in 1997, 2000, and 2002' (Leung 2004). As former Assistant Secretary of Defense Phillip Coyle noted: 'The focus was on hitting a target. Other issues, such as friendly fire, didn't get the same – either spending, or priority, as the first priority of hitting a target' (quoted in Leung 2004). The issue of training can be neatly summed up in a statement included in the report authored by the US Army's Board of Investigation following the fratricide incidents: 'the system (Patriot) is too lethal to be placed in the hands of a crew trained to such limited standard' (Hawley 2007, 4).

The case of the Patriot demonstrates how air defence systems have established de facto standards of appropriateness regarding the use of force, setting emerging norms of what the component of human–machine interaction looks like and where its acceptable limits are. In most cases, these evolving norms go unnoticed as they only become subject to scrutiny in the case of failure. The Patriot fratricides led to a significant review exercise conducted by experts at the US Army Research Laboratory. These experts were tasked, inter alia, with securing human control for the Patriot system by conducting a series of training tests.
However, the training tests were not conducted as planned and still featured many of the issues that had previously been identified as problematic. As a consequence, the tests never led to significant changes in Patriot system training overall (Hawley 2007, 7). Even if such procedures are assessed, it is still difficult to decide what appropriate and meaningful human control consists of, given that unpredictable scenarios might occur for which humans are untrained. Interestingly, since 1987, US law has specified that ballistic missile systems, such as the Patriot, cannot use 'lethal fire except by affirmative human decision at an appropriate level of authority' (Office



of Law Revision Counsel of the House of Representatives 2013, 618). Yet, what 'affirmative human decision' means has never been defined and it 'has had little impact on air and missile defence system development or operations' (Hawley 2017, 9). Instead, as our analysis demonstrates, the standard of 'affirmative human decision' has been filled with content through operational practices, and in a minimal way: 'the requirement for positive human control is met even if that means not much more than having a warm body at the system's control station' (Hawley 2017, 9), while on-the-loop, there is no 'substantive situational understanding: the operator has to respond to the situation as filtered and presented by the system at very high speed' (IPRAW 2018, 17). This is tantamount to meaningless human control, making the Patriot system the de facto 'ultimate decision-maker in engagement decisions' (Hawley and Mares 2012, 10).

The Patriot system's complexity and its significant challenges to exercising meaningful human control via human–machine interaction are representative of a wider range of air defence systems as well as other weapons systems with automated and autonomous features that the US Army may field in the future (Hawley and Mares 2012, 4–5). Moreover, the Patriot case study highlights the incremental way in which meaningful human decision-making via human–machine interaction has diminished over time (Hawley 2007, 5). System developers and users edged into the friendly-fire incidents by degrees. Rather than delegating decision-making tasks to machines in one sweep, this was an incremental process of progressive software updates that culminated in human operators being asked to fulfil minimal, but impossibly complex, roles.
Summary: Challenges inherent to human–machine interaction

The detailed analysis of severe incidents involving air defence systems reveals a long list of overlapping challenges to human–machine interaction, which are characteristic of operating complex systems with automated or autonomous features:

• Automation bias/over-trust. This leads human operators to uncritically trust system outputs without subjecting them to deliberative or critical reasoning. It makes them more likely, for example, not to question algorithmic targeting parameters, despite the potential existence of track classification problems.



• Lack of system understanding. Human operators do not understand the precise functioning of automated and autonomous features in air defence systems, including their target profiles and how they calculate target assessments. This is partly due to the system's complexity creating a barrier to understanding. But incidents have also shown that operators were not aware of known system weaknesses, e.g. IFF performance in the case of the Patriot.
• Lack of situational understanding. In the move from active controllers (being 'in the loop') to supervisory controllers ('on the loop'), human operators lose situational understanding. This makes it near-impossible to question system outputs and to make reasoned deliberations about selecting and engaging specific targets.
• Lack of time for deliberation. The current engagement window of air defence systems provides human operators with only a few seconds to make decisions. This places impossible demands on any potential critical deliberation.
• Lack of expertise. Operating complex systems such as air defence systems competently requires extensive training and experience. Examples from the Patriot system suggest that human operators lack this expertise due, in part, to a misguided faith in the capabilities of weapons technologies and management policies.
• Inadequate training. Training of human operators for air defence systems, as evidenced by the Patriot example, appears to follow an inappropriately mechanistic approach as opposed to simulating scenarios key to retaining meaningful human control in targeting decision-making.
• High-pressure combat situations. Operating under high-pressure combat conditions exacerbates the challenges inherent to human–machine interaction. All the incidents discussed happened in periods of international tension when militaries were on high alert, and the pressure on the individual human operators involved was a factor in each of them.
It is important to remember that such high-pressure combat situations are the default situation in times of war, which is when weapons systems can be expected to be used. These challenges are well known to scholars of human factor analysis. The fact that they severely compromise the exercise of



meaningful human control in specific targeting situations involving air defence systems is therefore an expected outcome. However, despite this knowledge, there has been no fundamental reflection on the appropriateness or the unintended consequences of the continued integration of automation and autonomy in air defence systems. In fact, continued upgrades of automated and autonomous features in air defence systems have made the human–machine interaction challenges more acute. States using air defence systems have therefore implicitly accepted the compromised role of human agents in modern air defence systems. On our reading, these challenges to human–machine interaction have made human control meaningless. These circumscriptions of meaningful human control have generally not been openly acknowledged or discussed. Nevertheless, they are characteristic of how modern air defence systems are operated. This means that meaningless human control has become accepted by a group of states as an emerging, 'appropriate' understanding of how force can be used in individual targeting situations.

the silent making of a norm: how meaningful is human control?

Practices associated with developing and using air defence systems with automated and autonomous qualities and the historical practices relating to new weapons systems discussed in chapter 2 both underline an incremental erosion of meaningful human control, whether understood as meaningful, deliberative regulation or meaningful control in non-verbal practices. In military practice, human control is, in fact, often less meaningful than ideal-typically expected or presupposed. While it is important to keep the historical trajectory of this development in mind, the emergence of weapons systems with automated and autonomous features has had a significant impact on the largely unvoiced status quo. This comes out particularly clearly in relation to the human–machine interaction component of meaningful human control. Here, we situate autonomy as a relational concept that captures not only the increasing level of machine 'agency' but also, importantly, the degree to which the creation of an electronically translated and transformed reality has weakened the capacity



of humans to retain control. The examples provided by air defence systems show that the reliance of human operators on the electronic representation of reality has become crucial in their operation. At the same time, the fact that operating these systems led to unintended, fatal consequences, which were the outcome of complex situations, proves that control was not meaningful in these scenarios. Nevertheless, their operation in combat theatres remains entirely uncontroversial. Prominent failures associated with air defence systems emphasise the extent to which error is not only possible but actually amplified by the specific characteristics and role of technologies – or may even be caused completely by technical malfunctions. These cases, in which something failed and which were subject to subsequent scrutiny, constitute practices establishing what is de facto appropriate when it comes to the use of force.

International law in terms of jus in bello and jus ad bellum represents the general normative framework in which any use of force takes place. As we have argued in detail, the normative meaning of law is, however, not fixed and often disconnected from what happens 'on the ground'. The more fundamental problem therefore pertains to the overall relation between the legal tradition of establishing norms, which is the dominant perspective in the debate on AWS and also important in IR theory, and the emergence of norms in practices. Here, we want to draw attention to the silent making of norms, as illustrated by our analysis of air defence systems and meaningful human control. To say that norms are made silently describes a process whereby standards of appropriateness (norms) emerge in patterns of ways of doing things (practices) or from the practical reality of operating weapons systems, for instance. We call this a process of silent norm-making because the non-verbal practices that inform it are typically not verbalised and problematised explicitly.
Determining what is acceptable or not acceptable or what ‘works’ and is functional often does not trigger deeper levels of assessment or reflection. At the same time, practices of using force become patterned, established, and inform actions in other contexts and by other actors. The norms that are the outcome of such practices are not the products of deliberation or legal codification. Nevertheless, they can constitute international norms in the sense of setting standards



for the appropriate use of force that go beyond multilateral discussion or negotiation. With regard to AWS, such ‘silent norms’ therefore risk emerging before and outside the deliberation on a formal agreement regulating their usage. This is significant for how we should assess AWS: it makes the inclusion of autonomous features in the critical functions of weapons systems less of a revolutionary paradigm shift and more of a symptom of the continuously growing importance of technological solutions meant to ‘support’ human operators. The slow but steady loss of human control is therefore indeed a process of silent norm-emergence, where losing human control is incremental and becomes ‘acceptable’ over time – although its implications have never been deliberated on or discussed. While the technological effects of autonomous features in terms of AI and machine-learning are hence less of a rupture in weapons technology than the political debate might suggest, their normative effect constitutes an important break with the legal portrayal of norm emergence that has become dominant in the IR literature. The most fundamental principle of law is the requirement that norms are specific, clear, and fixed – and that states follow, implement, and translate this fixed meaning into practices. This also points to the importance of time and space in the legal logic. Deliberations, decisions, and actions are conceptualised as separate, sequential stages. While distributed understandings of human control also capture this for the case of weapons systems with autonomous features in their critical functions, in practice, such systems effectively dissolve this separation. In particular, it is the testing of machine-learning algorithms that is bound to collapse the sequential differentiation between deliberation, decision, and action. 
Looking at a particular US Department of Defense project – project SKYNET, a computer program, revealed in 2015, that has tested machine learning in US target identification for Pakistan – can illustrate this point further. The investigative website The Intercept has published documents that provide insights into testing the use of SKYNET to identify terrorist courier networks. The program scanned the movement patterns of 55 million mobile phone users in Pakistan for anomalies (Aradau and Blanke 2018). As used here, an anomaly is a certain degree of deviation from the average, normal movement.



The normal is the 'noise' that the SKYNET program tries to filter out using machine learning, thereby allowing the 'analysts' to focus their attention only on a limited range of 'anomalous' cases. While the program's data output is combined with other data sets (for example, if only the movement to specific geographical regions is of interest), this is an important development from the viewpoint of research on norms. Political sociology, drawing on the work of Michel Foucault, has highlighted the importance of measurement and statistical calculation in defining what is normal and abnormal. Here, the perspective on norms is reversed: the basis of a government's actions is not a fixed and stable norm but a flexible or 'mobile norm' (Amoore 2019). Statistically calculating the average normal as the basis for a norm means that the norm changes along with the measured phenomena (see Huelss 2017). In technical terms, the example above shows how understandings of anomaly are produced and transferred into a differentiation between normal and abnormal behaviour, which is, however, a political act. Foucault's concepts underline that the interplay between political rationality and technologies of governing that produce normativity is a complex arrangement. With the emergence of AI in the form of learning algorithms, impenetrable technical processes have become increasingly important in defining not only what is anomalous, but also what constitutes the normal and the abnormal. In the case of SKYNET, one of the 'best' targets as identified by the program was Ahmad Muaffaq Zaidan, Al-Jazeera's Islamabad bureau chief, who, for his investigative work, regularly travelled between the capital and regions linked by the US to terrorist activities. Meaningful human control played a key role here, as additional human scrutiny revealed the identity of the mistakenly identified 'target', allowing common sense to intervene.
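The logic of statistically produced normality can be sketched in a few lines. This is a deliberately simple z-score filter with invented numbers; SKYNET's actual classifier is reported to have used more sophisticated machine-learning techniques.

```python
import statistics

def anomalies(values: list[float], threshold: float = 2.0) -> list[float]:
    """Flag values more than `threshold` standard deviations from the mean.
    The 'normal' is not fixed in advance: it is recomputed from the data,
    so the norm moves whenever the measured population changes."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [v for v in values if abs(v - mean) / sd > threshold]

# Hypothetical weekly trips to a given region for a small group of phone users:
trips = [1, 0, 2, 1, 1, 0, 2, 1, 12]   # one frequent traveller
print(anomalies(trips))                 # → [12]

# If frequent travel becomes common, the same behaviour stops being
# anomalous: the statistical norm has shifted with the data.
trips_later = [10, 11, 12, 9, 13, 11, 12, 10, 12]
print(anomalies(trips_later))           # → []
```

The two runs make the 'mobile norm' point concrete: identical behaviour (twelve trips) is abnormal in the first data set and normal in the second, because the standard of normality is an artefact of whatever population happens to be measured.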
Yet, in different settings of human–machine interaction, the extent of meaningful control may be weaker. Again, what is at risk is the elimination of doubt. Constructing norms on the basis of electronically calculated normality means that any external basis for considering what is appropriate simply disappears. This contradicts the conventional understanding of what a deliberative, political norm defining agreed-upon standards of appropriateness is (see Huelss 2020). Turning again to the legal tradition, the algorithmically informed norm that is largely the outcome of machine agency is not translated
into practice, nor does it shape actions according to what is a priori defined as appropriate – instead, the norm is shaped by practices. If elements relating to the appropriate use of force (such as the targeting process), which are largely dominated by machine learning, only develop a concept of anomaly in practice, and if this concept turns into an understanding of normal and abnormal human behaviour by default, then the dominant legal-political approach to norms is deeply compromised. This suggests that both the expectations and the very logic of international law are, in fact, structurally unable to cope with the emergence of machine-learning algorithms governing the use of force, because international law cannot accommodate the compression of time and space in norm-making. In other words, machine learning could mean that norms are both created and implemented within the black box of weapons systems. Surveillance, target identification, and target engagement, defining what the appropriate use of force is, could be conducted by weapons systems without human intervention. This idea of a technologically generated norm involving machine agency is fundamentally opposed to the centuries-old legal tradition.

We argue that the increasing loss of human control is accepted as a side effect of the increasing integration of automated and autonomous features in weapons systems. This process is aided by the fact that there already appear to be ‘accepted’ yet non-verbalised standards of diminished human control present in contemporary warfare, as the study of air defence systems illustrates. What we call the silent making of norms is a twofold onslaught on conventional understandings of norm-making. First, the procedural norms of what meaningful human control consists of when it comes to the use of force emerge in practices that are de-linked from formal, institutionalised law-making.
Second, the complex and largely unnoticed processes of how technologies influence important aspects of operating weapons systems, creating those practices and thereby setting standards of appropriateness, signal a further, significant deviation from conventional norm-making. We will next demonstrate how such non-verbal practices and their associated standards of appropriate behaviour (can) spread through a permissible discourse for integrating autonomous features into the critical functions of weapons systems.



‘there’s no other answer than to leverage artificial intelligence’: permissible discourse of autonomous features in air defence and other weapons systems

Apart from shaping understandings of what counts as ‘appropriate’ human–machine interaction in the context of meaningful human control, weapons systems with automated and autonomous features, such as air defence systems, have also set in motion a wider, permissive policy discourse for (further) integrating such features into weapons systems. This policy discourse, which casts autonomy as ‘positive’, ‘necessary’, and ‘inevitable’, includes not only state representatives but also policy analysts and researchers at think tanks. We now examine some of the component arguments of this discourse that provide a backdrop to understanding the dynamics of how integrating automated and autonomous features into weapons systems is constructed as appropriate. For this, we return to our distinction between two perspectives on appropriateness: legal–public and procedural–organisational (table 5.4; see also Bode and Huelss 2018, 409). In the case of legal–public appropriateness, arguments in favour of automated and autonomous features in weapons systems are made based on their compliance with international law, in particular international humanitarian law. Arguments based on procedural–organisational appropriateness highlight that integrating autonomous features into weapons systems is functionally legitimate and effective in fulfilling their purposes, e.g. tactical or operational efficiency and cost-cutting. In practice, legal–public and procedural–organisational understandings of appropriateness often interact as interconnected layers of how actors consider what, when, and against whom it is appropriate to use force. Arguments based on procedural–organisational understandings of appropriateness tend to prevail in the policy discourse on AWS in general, and on existing weapons systems, such as air defence systems, in particular.
This shows that the debate has moved away from a legal–public understanding connected with public deliberations in forums such as the CCW. By contrast, the prevailing understanding of appropriateness appears to be shaped by largely functional demands that cast autonomy as positive or inevitable, thereby removing its discussion from deliberative, public scrutiny. This has the effect of making the trajectory towards integrating more and more autonomy in weapons systems harder to question.

How Autonomous Weapons Systems Make Norms

Table 5.4 Two perspectives of appropriateness

Legal–public appropriateness: Legal–normative restrictions to the use of force in the forms of international humanitarian and human rights law; also: (democratic) principles, e.g. transparency or accountability.

Procedural–organisational appropriateness: Highlights functional legitimacy, efficiency, and effectiveness within particular organisations (such as the armed forces) in particular contexts (such as situations of warfare).

Table 5.5 summarises the arguments found in the policy discourse, associating them with legal–public or procedural–organisational appropriateness. The arguments are compiled on the basis of published sources, as well as insights gained through observing three GGE meetings (November 2017, August 2018, September 2020) as academic participants.

Air defence systems and legal–public appropriateness

Arguments based on legal–public appropriateness in support of integrating automated and autonomous features into air defence systems come in at least two forms: those highlighting their defensive purpose and tightly constrained area of operation, characterising them as life-saving technologies; and those presenting air defence systems as legacy systems that are therefore already accepted.

1. purely defensive purpose

The perceived ‘appropriateness’ of air defence systems can be traced to what is described as their defensive purpose and limited spatio-temporal area of operation (Scharre and Horowitz 2015, 12). Air defence ‘tend[s] to be portrayed by proponents of AWS as a defensive and limited and therefore acceptable application’ (Brehm 2017, 12; our emphasis). Policy discourse on air defence systems in the context of AWS is therefore full of references to their defensive posture (see, for example, UN-CCW 2015, 15). For example: ‘The near-autonomous defensive systems adopted by several countries are primarily used in a protective role to intercept incoming attacks. They defend a specific object or area; they do not actively seek out targets but instead respond to predetermined threats’ (Reddy 2016, 3–4; our emphasis). This or similar reasoning appears in many statements delivered by state representatives to the CCW; see, for example,



Table 5.5 Arguments made to support the appropriateness of integrating automated and autonomous features into air defence systems

Legal–public appropriateness:
1. Defensive purpose
2. Legacy weapons systems

Procedural–organisational appropriateness (‘there are no alternatives’):
1. Reaction time
2. Great-power competition
3. ‘Normal accidents’ / ‘problems that can be managed’
4. ‘Worst case scenario’ logic

this excerpt from Greece’s statement in 2018: ‘current weapon systems with high degree of autonomy for defensive purposes … have not raised the question of non-compliance with the principles of International Law, although sometimes they act fully autonomously due to the limited time to eliminate the threat’ (Permanent Mission of Greece 2018, 1; our emphasis). Comparable reasoning can also be found in one of the CCW summary documents. Here, it is highlighted that whether or not autonomous weapons conform to IHL ‘will depend on … its use for offensive versus defensive purposes’ (UN-CCW 2015, 15; our emphasis). In these framings, the ‘appropriateness’ of automation and autonomy stems from their application in the context of international law governing the right to self-defence. The United States has taken this argument a step further, arguing that air defence systems constitute ‘defensive autonomous weapons’ that can ‘provide additional time to develop a considered response to an enemy threat’, thereby helping to ‘reduc[e] civilian casualties from the immediate use of force in self-defense’ (Mission of the United States 2018a, 5). But differentiating between defensive and offensive types of weapons systems is not as clear-cut as this argument suggests. For example, air defence systems can be used for the defensive purpose of protecting armed forces that are undertaking offensive actions. In essence, air defence systems may be tactically defensive but necessarily strategically offensive.16 Also, from the standpoint of international humanitarian law, which is the main reference point at the CCW, the very distinction between offensive and defensive systems is curious: an attack in war, whether used in offence or in defence, is subject to the same rules (e.g. UN-CCW 2017b, 2). What matters instead is whether the weapons system and its usage can comply with IHL principles.



2. legacy systems

As we argue throughout this chapter, most operational air defence systems incorporate automated or autonomous features and can therefore provide important insights into the challenges associated with human–machine interaction in specific use-of-force situations. As Hawley (2017, 2) reminds us, ‘[t]here are lessons and pitfalls associated with the use of automation in older systems that apply directly to what can be expected with newer applications’. Despite the value of studying precursor technologies, states parties have by and large excluded their discussion from the debate on LAWS unfolding at the CCW. States parties refer to existing weapons systems with automated and autonomous features as legacy weapons systems, ‘those already deemed lawful and fielded’ (Schuller 2017, 398). In the context of which systems with autonomous features should fall under the remit of the CCW’s discussion, for example, the 2017 Report of the GGE on LAWS noted: ‘reaching too far back into legacy technologies … is problematic’ (UN-CCW 2017a, 11). With this, states appear to bracket off scrutiny of their consequences, deeming their usage already ‘appropriate’ in legal–public terms. Following this view, legacy systems would and should not be subject to any potential future legal regulation on meaningful human control negotiated at the CCW. However, ways of using legacy weapons systems, such as air defence systems, may only have been ‘deemed lawful’ by virtue of the fact that most states have used them for decades with minimal (public) scrutiny or criticism. Revisiting the legality of legacy systems in the light of dedicated discussion surrounding advances in autonomous features in weapons systems could therefore also lead to new, critical perspectives on their legality and (assumed) legal–public appropriateness (Schuller 2017, 398) – something that many states parties to the CCW seem keen to avoid.
Air defence systems and procedural–organisational appropriateness

A considerable amount of the rhetorical justification for the development of increasingly automated and autonomous air defence systems centres on their presentation as the only viable response to the complexity and speed of modern warfare. This ‘there are no alternatives’ narrative, as we coin it, underpins and informs other arguments used to justify and explain the continued integration of the automated and autonomous features summarised below into air defence systems. In many policymaker circles, there appears
to be a fundamental unwillingness to question the general trajectory towards greater autonomy in weapons systems. The arguments fielded serve to make this trajectory appear inevitable, thereby making a deliberative discussion of whether they are appropriate in the legal–public sense redundant in the light of technological (and security) demands. Further, visions around different emerging weapons, such as autonomous weapons systems and swarms of these systems, feed off each other: ‘both are justified by reference to the increasing speed of warfare, whilst driving that acceleration’ (Brehm 2019, 16). As we will show, arguments in support of greater autonomy in weapons systems are also rooted in and amplify technological determinism and solutionism, often by portraying emerging technology in an overwhelmingly positive light. At issue here is procedural–organisational appropriateness, i.e. using air defence systems constitutes an appropriate use of force because they are functionally efficient and effective for defence institutions and purposes. There are at least four arguments made for appropriateness: highlighting the demands of reaction time; the (imagined) context of great-power competition; presenting failures in relation to the integration of automated and autonomous features as acceptably ‘normal’ and/or ‘problems that can be managed’; and remnants of a ‘worst case scenario’ Cold War logic.

1. reaction time

Reduced reaction time has been put forward as a key ‘push’ factor animating the integration of more automated and autonomous features into air defence systems (and other types of weapons systems). This argumentative logic clearly punctuates recent efforts to counter the next generation of anti-ship and surface-to-ground missiles.
Speaking at a policy event, Vice Admiral Jon Hill, director of the US Missile Defense Agency, argues that ‘[w]ith the kind of speeds that we’re dealing with today, that kind of reaction time that we have to have today, there’s no other answer other than to leverage artificial intelligence’ (Harper 2019). Policy analysts support this assessment, highlighting that ‘speed, accuracy, and the element of surprise in respect of any incoming ballistic missile threat provides a very limited time window for any human intervention’ (Lele 2019, 66). Further, ‘the effectiveness of any missile
defence system will depend on split-second decision making that is only possible if the defence system is fully autonomous’ (Lele 2019, 66). As the failures involving air defence systems discussed above have highlighted, human operators had to ‘decide’ whether to use force in response to indications by the system in a matter of seconds. What wider implications does such thinking on response time have? The focus on reaction time is contingent on potential adversaries developing increasingly advanced, and also increasingly autonomous, weaponry – all the while leading development of those same technologies. In this way, the development of autonomy becomes a self-fulfilling prophecy – even if only portrayed as a reaction to what the other side is doing. The logic can also play out with a purely imaginary ‘other’. Presenting a line of thinking as the only viable option effectively shields the development of autonomous features from potential debate about its appropriateness. The latest generation of emerging hypersonic missiles17 will push this argument even further, as this technology compresses, if not effectively eliminates, the time operators will have to process and respond to incoming threats following radar detection (Eshel 2015; Smith 2019). Their development is therefore likely to exacerbate the dynamics mentioned here.

2. great-power competition

Another push factor shaping (even greater) integration of automated and autonomous features into weapons systems is the return of talk about great-power competition. In 2017, Vladimir Putin declared to much public fanfare that ‘whoever becomes the leader in [AI] will become the ruler of the world’ (Vincent 2017). China has likewise declared its ambition to ‘lead the world’ in AI (Kania 2017). US policymakers situate themselves in a strategic reorientation towards great-power competition with China and, to a lesser extent, Russia.
The logic here is that, to preserve the United States’ military dominance, AWS and AI must be leveraged to fend off the security and geopolitical challenge of near-peer competitors (Harper 2019; Wyatt 2020, 16–17). Prior to his resignation, US Defense Secretary Jim Mattis called on President Trump to create a national strategy for AI in order to combat China’s recent advances in these areas (Metz 2018b; Kania 2020). This action overlapped with the reorientation of US
defence planning priorities away from the counterinsurgency style campaigns fought in Afghanistan and Iraq towards high-end conventional warfare capabilities including AI, hypersonic weapons and directed-energy weapons (Congressional Research Service 2020, 9–10). Whilst this shift intensified during Trump’s presidency, its essential logic pre-dates the Trump administration. It was integral to the Third Offset Strategy announced by US Defense Secretary Chuck Hagel in November 2014 and pursued during the Obama administration (Hagel 2014; Ellman, Samp, and Coll 2017). In functional terms, this logic creates a self-fulfilling prophecy. Because China and Russia are developing autonomous systems, so the argument goes, the United States (and its allies) must as well. As noted in the Summary of the 2018 National Defense Strategy, for example (US Department of Defense 2018, 3): ‘[t]he drive to develop new technologies is relentless, expanding to more actors with lower barriers of entry, and moving at accelerating speed. New technologies include advanced computing, “big data” analytics, artificial intelligence, autonomy, robotics, directed energy, hypersonics, and biotechnology—the very technologies that ensure we will be able to fight and win the wars of the future’. Thus, alongside pursuing other lines of effort, the Pentagon ‘will invest broadly in military application of autonomy, artificial intelligence, and machine learning, including rapid application of commercial breakthroughs, to gain competitive military advantages’ (US Department of Defense 2018, 7). At the core of this thinking is the old idea that security can be achieved by dominating others through competitive advantages in technological development (Brehm 2019, 15–16). 
This zero-sum thinking has long fuelled adverse positionings at the multilateral level and hampered disarmament efforts by ‘uncritically reproduc[ing], rather than challeng[ing], insecurities, which are generated by state-level competition dynamics’ (Edwards 2019, 27).

3. normal accidents/problems that can be managed

The integration of autonomous features into air defence systems has not been questioned despite unintended outcomes associated with the complex human–machine interaction they create in specific use-of-force situations. We identified two arguments justifying the procedural–organisational appropriateness of automation and autonomy: commentators portray failures as so-called ‘normal
accidents’, an argument associated with organisational sociologist Charles Perrow, or as ‘problems that can be managed’ in the future as technology advances. Both arguments strengthen the perceived inevitability of AWS development. The concept of normal accidents considers ‘major accidents [as] inevitable in certain high-risk systems’ due to their ‘multiple and unexpected interactions’ (Hawley 2017, 11). Perrow (1999) coined the term ‘normal accidents’ with civilian high-risk technologies, specifically nuclear power plants, in mind, but it has also been applied to air defence systems or AWS (see, for example, Crootof 2016, 1373). For example, Paul Scharre, a leading contributor to the discourse on AWS, concludes his discussion of the Patriot fratricides with this statement: ‘Viewed from the perspective of normal accident theory, the Patriot fratricides were not surprising – they were inevitable’ (Scharre 2018, 154). This argument suggests that unintended outcomes, such as the Patriot fratricides and the destruction of civilian airliners we analysed, are an unfortunate part of the ‘normal’ operation of these complex systems due to their high-risk technology. On the part of the users of these systems – states – this suggests that dealing with (the potential fallout from) failures and tolerating this is part and parcel of using modern air defence systems. Defence policymakers are likely to evaluate such unintended outcomes, failures, or errors simply as part of the friction of war (Von Clausewitz 1984, chapter 7). Human soldiers, after all, can also be expected to (and do) make deadly mistakes. But let us keep in mind that what we are supposed to accept in this case is the delegation of human decision-making steps to machines likely to fail in fulfilling them. The fact that a machine, rather than a human, fails matters.
It matters because handing over life and death decisions to machines violates human dignity by reducing human beings to objects: ‘Distinguishing a “target” in a field of data is not recognising a human person as someone with rights’ (Asaro 2020b, 16; see also Rosert and Sauer 2021, 13). Further, through this line of reasoning and comparable ones, more and more failures and more and more risks become acceptable and normal in warfare. As war is already an exceptionally permissive context, this accentuates the potential to create a slippery slope of enhanced risk-taking in using automated and autonomous technology. A contrasting argument that produces a similar outcome characterises such failures as ‘problems that can be managed in the future’.
Here, the management of unintended outcomes of the kind committed by air defence systems refers to the even greater delegation to machines and the eventual availability of a ‘technological fix’. To give an example from the wider debate on AWS: machines cannot comply with the principle of distinction, that is, distinguish between civilians and combatants, and it is therefore impossible to use AWS in adherence with fundamental international humanitarian law. Yet, proponents of AWS argue that not only will future machines be able to do this, but they will also outperform humans, which is predicated on the assumed existence of ‘opportunities for using AI and autonomy to improve the conduct of war’ (L. Lewis 2018, 35). As the US submission to the CCW conference highlighted in 2018: ‘The United States … continues to believe that advances in autonomy and machine learning can facilitate and enhance the implementation of IHL, including the principles of distinction and proportionality’ (Mission of the United States 2018b). This argument is contingent on the development of technologies that are still hypothetical or even speculative, making it ‘impossible to evaluate their actual performance, or compare it to human performance’ (Asaro 2020, 11). Such technological solutionism is, however, flawed. The obstacles that led to the unintended outcomes described in relation to civilian airliner disasters involving air defence systems are not easily fixed, and are not resolvable by simply ‘plugging in’ the right software update (Hawley 2007, 4). They are instead the product of complex human–machine interaction. To date, the pursuit of a ‘technological fix’ that would resolve the enduring problems associated with air defence systems has proven to be, and is likely to continue to be, ineffectual.
This attitude also implies over-trusting technology: belief in technological ‘silver bullets’ ‘discourage[s] the adequate assessment of side-effects – both technical and social – and close examination of political and ethical implications of engineering solutions’ (Johnston 2018, 48). Just because the machine after some advanced update can do something, this does not mean that it should. In light of this, it is useful to recall that Alvin Weinberg, the nuclear physicist who is credited with popularising the notion of the ‘technological fix’, viewed the Manhattan Project as ‘the paradigm technological fix, in which a powerful technology neutralized enemy aggression and bypassed diplomatic negotiation and political alliances’ (Johnston 2018, 51). Given the wealth of evidence collected to demonstrate
the vast humanitarian consequences of nuclear weapons in the seventy-five years since the horrific devastation and suffering of Hiroshima and Nagasaki, these words only serve to underline the deeply problematic, and in this case illegitimate, nature of the ‘technological fix’. The portrayal of the technological challenges inherent in automation and autonomy in the context of ‘normal accidents’ or ‘problems that can be managed’ reproduces an increased acceptance of autonomous features in air defence and other weapons systems, as well as a weakening of meaningful human control – even if this means accepting civilian airliners being wrongfully targeted or friendly-fire incidents.

4. ‘worst case scenario’ logic of missile defence

Interestingly, some arguments that justify the further integration of automated and autonomous features in air defence systems originate in the Cold War era logic of mutually assured destruction. Some key software components of existing air defence systems were developed and tested in this period. In its automatic mode, the Patriot system, for example, works with engagement-control algorithms originally used in the Safeguard, the United States’ first operational anti-ballistic missile system, deployed in the 1970s (Hawley 2017, 4). After having been switched on by a human operator, Safeguard’s computer module ‘fought the air battle’ on its own – a level of automation that was deemed appropriate for its ‘mission and operational context: Fight the first salvo of the Battle of Armageddon at the edge of space’ (Hawley 2017, 4). This engagement algorithm was therefore designed for the apocalyptic scenario of a nuclear war ‘in which all bets are off … and risk tolerance is very high’ (Hawley 2017, 4). As Hawley highlights, ‘this level of automation’ and the associated lack of human supervisory control was not ‘an appropriate operating mode for Patriot’s mission and operating environment’ (Hawley 2017, 4).
It was included ‘because it was available’ (Hawley 2017, 4) and remains used in the system’s control module. Generally, many air defence systems that are still in use today, such as the Aegis, have their origins in the Cold War. This means that their settings and design follow a usage scenario that is very different to their current ways of operating. As Singer (2010, 125) highlights in the context of the IR655 and the US Vincennes, the Aegis ‘had been designed for managing battles against attacking Soviet bombers in
the open North Atlantic, not for dealing with civilian-filled skies in the crowded Gulf’. This suggests that, in scenarios such as great-power confrontation with the Soviet Union, including the spectre of nuclear war, high levels of automation and autonomy with a low level of overall human control were deemed appropriate. But this ‘worst case scenario’ logic that makes extremes appear acceptable is not representative; nor should it be the deciding factor determining the appropriateness of autonomy in weapons systems.

In summary, this chapter has demonstrated the consequences that integrating automated and autonomous features into weapons systems, such as air defence systems, over decades has already had on the quality of meaningful human control. In particular, such integration has diminished the role of the human operator as a meaningful decision maker in specific targeting situations. Interestingly, air defence systems have often been cast as ‘unproblematic’ from the perspective of meaningful human control. This is because they are operated under three measures that can (supposedly) increase human control: they include controls through the weapons systems’ parameters by, for example, only targeting munitions or hostile aircraft; they are used in controlled environments; and ‘there is an opportunity for a human to override’ (Campaign to Stop Killer Robots 2020a, 3). But our detailed study of human–machine interaction in air defence systems (a central component of operationalising meaningful human control) reveals something quite different. By progressively transferring an increasing number of cognitive functions previously associated with the human operator to the control and command modules of air defence systems, the possibilities for meaningful human control through human–machine interaction have been incrementally and silently reduced.
As greater control has been delegated to machines, the human operators’ role in air defence systems has changed from active control to passive supervision. While, formally, the human operator typically retains the final say in specific targeting decisions made via air defence systems, this ‘decision’ is by and large meaningless. The position of the human operator within the weapons system does not enable them to remain situationally aware, it does not provide them with the knowledge or expertise necessary to understand and question the system’s logic, and it does not give them the time to engage in meaningful deliberation. In current air defence systems, humans are
currently (unintentionally) set up to fail as meaningful operators. As more and more air defence systems have integrated automated and autonomous functions in response to evolving threats, this has begun to set an emerging norm – a standard of appropriateness that accords humans a diminished role in specific targeting decisions, which are instead essentially made by machines. The process of advancing autonomous features in air defence systems is also part of a policy discourse that casts integrating autonomy in weapons systems as positive, appropriate, and ‘something that needs to be done’. This narrative creates acceptable conditions for the further integration of automated and autonomous features into the critical functions of weapons systems. By drawing attention to this, we want to demonstrate that the integration of further autonomous features into weapons systems is neither as desirable as it is made out to be, nor inevitable. Rather, it is the current outcome of a mutually reinforcing cycle. This cycle has gained its own momentum: the discourse around the supposed benefits of autonomous features creates the policy space for the further integration of such features into the critical functions of air defence systems which, in turn, strengthens the discourse around the benefits of AWS, beginning the cycle anew.18 Over time, the speed at which these processes reinforce themselves increases, as there is greater investment in autonomous features and thus a need to justify them to key stakeholders.

There is then, already, an operational understanding of the control through human–machine interaction dimension of meaningful human control – arguably the most important dimension determining the quality of meaningful human control. This understanding is deeply problematic because it asks human operators to fulfil a minimal but impossibly complex role.
This silent process of norm emergence has by and large escaped public attention – and stands in contrast to how we expect norms to emerge according to IR scholarship and in international law, i.e. in deliberative, public forums and representing a sequence of deliberation, decision, and action. As an empirical observation, it therefore sets us as scholars a completely new set of analytical tasks that we will revisit in the conclusion. Further, as this understanding of what constitutes meaningful human control has not been publicly acknowledged, it has not been subject to scrutiny. If this emerging standard of 'meaningful' human control is not engaged with and critically discussed, it risks undermining deliberative efforts to codify a vital component of meaningful human control as part of an international treaty on AWS. We may therefore end up in a situation where human control is deliberatively codified in the form of international law – but in an ambiguous wording that allows existing practices that diminish human control to continue, thereby restricting the legal hold of such an instrument.


Throughout this book, we have developed an in-depth perspective on how developing and using AWS, taken to be synonymous with the increasing integration of autonomous features and AI into weapons systems, can change both the use of force in general and use-of-force norms in particular. We started by outlining the growing international attention paid to the issue of emerging AWS in the academic, political, and public domains. Academically, the discipline of IR was comparatively slow in taking up this issue but increasingly shares a common research interest in the role of AI and related technologies with other social and cultural sciences. At the same time, we showed that both the academic and the political debate remain largely preoccupied with a positivist legal perspective on AWS. This perspective often involves considering what AWS (could) mean for jus in bello and jus ad bellum, as well as for international human rights law, and whether and what new international laws are needed to regulate these new weapons systems. While the question of law is naturally at the heart of the important political debate within the framework of the UN-CCW in Geneva, we argue that considering the potential political and normative impacts of AWS requires moving beyond and outside their legal problematique. In particular, it requires moving outside of a positivist legal framing of that debate towards one that draws on post-positivist, critical traditions of law, as these highlight the political dimension of its use. Our presentation of historical cases in chapter 2 showed that various attempts at regulating or even banning novel weapons systems by establishing international law have had a significant impact on the development of international use-of-force norms. The case of blinding laser weapons is a prime example of how a preventive international agreement can limit the development and use of a weapon deemed not in line with IHL – even before it has been used in warfare. But other cases, e.g. the extensive use of submarines in the two world wars, show that even the existence of specific legal rules does not necessarily override understandings of what is 'appropriate' conduct emanating from practices. In other words, actors using weapons systems based on emerging technology often operate in contested areas, or even in explicit disregard of clear rules, because their understandings of appropriateness differ. The practices related to unrestricted submarine warfare, attacking merchant vessels, or the use of chemical weapons are examples that had an impact on what was accepted as 'appropriate'. Situating ourselves in a critical, post-positivist understanding, our discussion of international law in chapter 3 underpinned this observation by emphasising that international law is often ambiguous and indeterminate. This characteristic, indeterminate quality of international law means that diverging or conflicting interpretations and the existence of contested areas are a typical and expected reality with respect to international legal norms. This combines with a permissive quality of international law, based on offering states the possibility to make flexible, law-based arguments legitimising the use of force in self-defence (jus ad bellum) or situating applications of force in the context of what is militarily necessary (jus in bello). These dynamics give actors significant room to voice and perform new and diverging understandings of appropriateness – even with respect to institutionalised, legal norms.
We also, more generally, consider understandings of appropriateness as basic norms and highlight that norms are different from law and do not necessarily originate in codified rules. These understandings have significant consequences for how we think about international order. Conventionally, there is a dominant perspective on international order as constituted by international legal rules. In chapter 3, we take issue with this perspective and introduce our understanding of normative order, which is constituted around a regularity of norms rather than a fixed set of clear rules (in that international law is part of an international normative order) and which clearly goes beyond legal definitions. In our view, this concept of normative order introduces the flexibility required to analyse the impact practices can have on what is considered 'appropriate' use of force in the absence (or even despite the existence) of legal regulations. We employ the examples of attribution, imminence, and targeted killings to show what the indeterminacy of international law means for use-of-force practices and how these practices might contribute to emerging norms (re)defining appropriateness. In the context of AWS, the chief question to address is not if or when 'fully' autonomous weapons systems (will) exist, but to what extent human control is being compromised by systems with an increasing range of autonomous features in their critical functions. As we argue, practices of human–machine interaction are embedded in this regard and produce understandings of appropriate operation by default. These questions lead us towards a closer consideration of norms. In chapter 4, we discuss the state of the art of norm research in IR in more detail and outline our conceptual contribution to this body of research. Keeping our discussion of critical international law in mind, we emphasise that norm research typically operates with a relatively stable, fixed concept of normative structure that is often equated with law. While we do not contest that law is one important source of norms, we suggest broadening this perspective towards conceptualising norms that emerge in practices as patterned ways of doing things. Since the 2000s, research on norms has diversified in this regard and has directed significant attention towards the ambivalence of meaning (or 'meaning-in-use', following the phrasing in Wiener (2009)), in that norms are thought of as being localised or contested by default. However, these concepts of localisation and contestation often also presuppose a top-down normative structure that is, in fact, the object of contestation or localisation processes.
Law can provide such a structure, and we emphasise the importance of exploring standards of 'appropriate' use of force by careful consideration of law and potential law-making activities such as the GGE rounds at the CCW. But we also add a further level to the critical understanding of norms provided by scholars interested in contestation or localisation by introducing two different types of emerging norms: legal–public and procedural–organisational norms. These build on our differentiation between verbal and non-verbal practices, which refer, respectively, to settings of international, formal deliberation as common in the making of international law (verbal practices) and to the shaping of normative meaning in a non-formal, silent, or non-codified way (non-verbal practices). Legal–public norms speak to the common understanding of law as the primary source of norms that are supposed to influence public policymaking. Procedural–organisational norms, by contrast, highlight the relevance of operational practices and of specific, contextual, and organisational settings as the sites where norms emerge. This latter type of norm is particularly important for our study of how the ways of doing things in organisations such as the military can shape norms. A central norm in the debate on AWS is meaningful human control, originally introduced to provide a novel conceptual basis for regulating AWS (and prohibiting fully autonomous weapons systems outside of human control) in the context of the CCW. While some commentators refer to meaningful human control as an emerging norm, it has not been formally codified or institutionalised and, moreover, its substantive content remains very much up for discussion at the CCW. The discourse of states parties continues to demonstrate significant differences in what they understand to be 'meaningful' when it comes to human control. But, as our discussion in chapter 5 shows, the problem of (meaningful) human control is not new; rather, it has been one of the persistent challenges of human–machine interaction: weapons systems with autonomous features have long been in use, and we show how practices of developing, testing, and operating such systems have shaped an emerging, unspoken norm of what counts as 'meaningful' human control. Our case study of air defence systems with automated and autonomous features that have been in use for decades enables us to discuss in greater detail how the complexity of an electronically translated and mediated reality renders human control in specific use-of-force situations essentially meaningless.
Air defence systems with automated and autonomous features, such as the US Patriot system, represent long-established forms of using force that, through their operation, have set standards of appropriateness. Even though the complex human–machine interaction inherent in specific uses of air defence systems has resulted in deadly failures, these have been considered within the range of acceptable flaws and are, moreover, often referred to as cases of human error. Neither manufacturers nor operators guarantee that these systems can operate failure-free (not only in the technical sense but also regarding the overall practice), and no such guarantee is expected. Through the use of air defence systems, a deeply compromised quality of human–machine interaction, and therefore of human control, has incrementally become 'acceptable' and 'appropriate' in specific use-of-force situations. Building on this finding, we argue that the quest towards more or 'full' autonomy that goes hand in hand with increasing the number of autonomous features in weapons systems is likely to compromise human control further and incrementally. Future technological innovations will continue to have an impact on how the use of force is practised. This refers not only to the development of new weapons systems, which is a constant of warfare, but also to potentially profound changes in how technological autonomy becomes further integrated into political (and social) applications. However, we are unlikely to reach a specific point where human control is completely lost, chiefly because humans will continue to be involved throughout the life cycle of weapons systems with autonomous features in one form or another. Rather than looking for such a 'big bang' development, we should be wary of incremental, small changes to the human–machine interaction at the heart of operating systems with autonomous features. These are quiet changes that risk passing unnoticed. They situate us on a slippery slope towards losing meaningful human control in specific use-of-force decisions – a slope we stepped onto decades ago. The danger of such a process of losing human control is all the more insidious because it proceeds silently, slowly, and outside of the public eye. Stopping it requires raising awareness of the fact that it is happening in the first place – and being mindful of its continued impact in what is shaping up to be an AI arms race.
As Haner and Garcia (2019, 331) put it, '[a]s United Nations member states have made little progress in discussions of lethal autonomous weapons systems (AWS) over the last 5 years, the nature of warfare is transforming before our eyes. This change is occurring without proper accountability or public scrutiny as a handful of countries continue to make massive investments in increasing the autonomy of their weapons systems paired with advanced artificial intelligence (AI)'. What does this mean for the current political debate on AWS? Questions of how to define autonomy, AWS, or meaningful human control lag behind practices of use. While close, critical, and public consideration and debate are important parts of a process that could result in a form of regulation, there is a risk that the political debate is decoupled from practice. Whether or not attempts to create legal norms in deliberative settings are successful, norms have already emerged and will continue to do so in practice, shaping what is considered to be 'appropriate' conduct in warfare. We close our analysis by looking towards the future – not only in terms of changing academic conceptualisations of norms and norm emergence, but also by identifying practical insights that this scholarly analysis can contribute to the political debate on AWS, norms, and meaningful human control.

Shaping the academic debate on norms

Through our work, we have identified three open questions related to novel, flexible conceptualisations of norms and how norms emerge: the quality of norms emerging from non-verbal practices; the precise conditions for norm emergence; and understandings of normativity.

1. quality of norms emerging from non-verbal practices. There are potentially many different, complex levels of norm emergence that analytical models of the process need to capture. While we argue that norms can emerge in bottom-up settings via the practices of groups, organisations, and communities, their broader, shared impact on state behaviour may not necessarily be the same as that of deliberative, legal norms. While the impact of legal norms is not a given either, largely due to inherent challenges such as indeterminacy and contestation, their spread and scope are generally wider than those of norms emerging in non-verbal practice settings. Further, the spread of legal norms based on verbal practices is typically more straightforward to capture, as we can trace their discursive journey.
The extent to which standards of appropriateness emanating from the non-verbal practices of a relatively small number of actors are shared among states is harder to capture. How does one capture the spread of norms that are not spoken about, that evolve silently? Our case study has shown how certain practices of human–machine interaction have become typical of operating weapons systems with autonomous features across a growing number of states. In some cases, such as the Patriot system, there is also a form of technological determinism, leading to human commanders and human operators at the end of the command and control chain encountering the same challenges of human–machine interaction regardless of context. What we see here, in other words, is how norms originating in non-verbal practices are linked to specific technological features in their interaction with humans and travel with these features to other settings. Regardless of whether many actors overtly or consciously share such understandings of appropriateness, they may be 'pushed' in certain directions when using similar systems. In this way, norms that result from non-verbal practices are still powerful even if they lack conventional codification at the level of political deliberation, in that the impact and influence of norms do not necessarily depend on the extent to which they are officially 'shared' and documented. Yet the extent to which emerging normative orders that surround the established legal international order influence codification activities in the latter has yet to be studied.

2. precise conditions of norm emergence. Our conceptual model does not yet specify at which point practices become norms. Our definition of norms as standards of appropriateness is deliberately very broad, and we maintain that this broad definition is required to accommodate the nuances of norm emergence at a practical level. But it also raises the justifiable question of when understandings of appropriateness that are held by individuals acquire 'norm' status. We agree that this is an incremental process that requires a form of spread and some form of settlement to come into effect. While our conceptualisation does not define the specific conditions or points in time when norms come into existence, we argue that this is not necessarily important for the point we want to make.
Standards of appropriateness can emerge in small settings, comprising only a small group of actors involved in the practice of developing, testing, or operating weapons systems with autonomous features. The different elements that define what is accepted as 'appropriate' emerge by default in the interaction between humans and technologies and become perceivable in concrete practices. Further, it may be that norms and practices are not neatly separable, because what is considered appropriate only exists in the practices performing this normative content. To explore this relationship between norms and practices further, more empirical work is clearly needed. We have not yet delivered a more detailed account of practices beyond the cases mentioned in this book, especially in chapter 5. However, these underline how appropriateness is defined via practices. To understand the meaning and role of norms such as meaningful human control with regard to AWS, we require more detailed analyses of other weapons systems with autonomous features in practice. Chief candidates for such an analysis are active protection systems, counter-drone systems, guided missiles, and loitering munitions. We therefore consider our model a useful conceptual starting point for more in-depth studies of how AWS influence use-of-force norms.

3. understandings of normativity. We have largely refrained from delving into extensive and abstract discussions about the character of norms in this book. However, we have remarked on the dual quality of norms as normal and normative. At this point, it is worthwhile taking up this relationship again, especially regarding how it relates to the dimension of ethics. The two types of norms (legal–public and procedural–organisational) that we have differentiated in this book make very distinct claims about normativity. In fact, procedural–organisational norms define what is considered normal in IR, in the sense of what is permissible, rather than what is normative, in the sense of appealing to ideas of justice and 'oughtness'. The use-of-force practices that we have discussed in relation to operating weapons systems with autonomous features appear to widen the gap between what is normal and what is normative in the wider ethical, just, and moral senses.
In fact, practices of human–machine interaction in air defence systems are undesirable in a wider normative sense because they undermine the spirit of meaningful human control. We may therefore see practices and procedural–organisational norms as primarily shaping the normal as opposed to the normative. But there is also an analytically interesting relationship between normality and normativity, based on the question of the extent to which understandings of what is normal influence understandings of what is normative, and vice versa. This is highly relevant for our argument insofar as ethical considerations are a chief topic of contestation and argument about AWS. It would be worth studying in more detail how understandings of ethical technological autonomy in the context of the use of force might change due to practices, but could also influence what is deemed appropriate in certain contexts beyond a matter of practicality. For example, the killing of innocent people by mistake is widely regarded as morally wrong, but this seems to have had no clear effect on understandings of appropriateness when it comes to operating air defence systems with compromised standards of meaningful human control in specific use-of-force situations. Further, we may want to capture how normativity refers to a sense of oughtness and justice that is particular in character. Rather than being built around a 'universalist' or wider understanding of what is normative, certain norms are normative in the sense of suggesting particularist ideas of justice or oughtness. Finally, these three open questions should be examined in the broader context of AI beginning to occupy an increasingly important place within our social and political lives, a topic that we touched upon in the introduction. For example, the ethical dilemmas of accountability and responsibility outlined there with regard to autonomous driving are interesting cases in point when we consider the broader implications of our study of norms. We can draw significant comparisons between the deadly accidents involving autonomous vehicles and the acceptance of failures in using air defence systems. While technical flaws are at the heart of accidents involving autonomous cars, it is unlikely that a manufacturer will at some point give a 100% safety guarantee, thereby accepting liability whenever things go wrong.
To what extent autonomous vehicles will populate our roads in the future remains to be seen, but there is reason to believe that autonomous features in driving are likely to play a more important role soon. We may therefore find ourselves in a scenario where standards of appropriateness regarding autonomous driving have emerged that consider the occasional accident as acceptable (of course, this is also the case with human drivers). In such cases, legal liability will be a crucial test of regulatory control. But new social norms are likely to emerge in these contexts in practice, just like in other contexts such as privacy and social media or data collection, framing what is acceptable to, or accepted by, the public.



Shaping the political debate on norms, AWS, and meaningful human control

The transnational, political debate on AWS is bound to continue, whether in the UN-CCW framework or outside of it. As of January 2021, the positions of key actors such as the United States and Russia have made the near-term prospects for adopting a legally binding agreement that comprehensively affirms standards for meaningful human control and prohibits AWS that do not meet this standard seem remote. The substance of debates over the past six years suggests that the focus crystallises around practices of using weapons systems with autonomous features, rather than their technical capabilities. This level of practice is precisely what a regulatory norm on meaningful human control seeks to target. But even if such a norm were deliberatively set and codified, it would be likely to remain abstract and ambiguous in character. The dynamics of how practices can shape norms that we have identified throughout the book could therefore lead to a sobering conclusion: rather than the deliberative norm of meaningful human control significantly shaping use-of-force practices, we could see a continuation of practices shaping the substance of the 'meaningful human control' norm – as well as potentially other pivotal use-of-force norms. Despite the risks inherent in such dynamics, we still consider deliberatively codifying an operationalised version of the meaningful human control norm to be the most promising foundation for creating a regulatory framework on the development and usage of AWS. We acknowledge that the exclusive development of norms governing AWS in non-verbal practices is not inevitable – but this also depends on how it is responded to in the form of verbal practices. To foster this aim, and thereby consciously counteract some of the consequences of norm emergence via non-deliberation, we suggest a range of steps (see Bode and Watts 2021).
(1) Current practices in how states operate weapons systems with automated and autonomous features in specific use-of-force situations should be brought into the open and scrutinised. As we have demonstrated in the case of air defence systems, such non-verbal practices shape what constitutes 'meaningful' human control, especially the quality and type of human–machine interaction. A further analysis of other existing weapons systems can help make the regulatory dialogue on LAWS less abstract and nudge the debate towards considering the problematic consequences of meaningless human control. Similarly, this could include critical interrogation of how assumptions regarding the appropriateness of integrating automation and autonomy into the critical functions of weapons systems of all types have become internalised across significant parts of the policy and practitioner communities in the field.

(2) More in-depth studies are required of the emerging standards for meaningful human control produced by the use of existing weapons systems with automated and autonomous features beyond air defence systems. Such analyses can provide practical insights into existing and emerging challenges to human–machine interaction created by autonomy that, if not explicitly addressed, may shape tacit understandings of appropriateness. All 'weapons systems that select targets on the basis of sensor input' (Campaign to Stop Killer Robots 2020a, 4) should be assessed for whether they allow for meaningful human control. Such systems include, but are not restricted to, active protection systems, counter-drone systems, and 'fire-and-forget' missiles. These and other case studies would help bridge the gap between deliberation and operational practices that otherwise risks undermining efforts to establish a more robust regulatory framework.

(3) Our study of air defence systems highlights that while all three dimensions of meaningful human control (technological, conditional, and human–machine interaction) are important, control through human–machine interaction is the decisive element in ensuring that control remains meaningful, not least because it captures meaningful human control at the specific point of using a weapons system, rather than the exercise of human control at earlier stages, such as in research and development.
Air defence systems are often not deemed problematic from a meaningful human control perspective because states can theoretically limit where, how, and when they are deployed by setting a weapons system's parameters of use and controls according to the use environment. But our close study of control through human–machine interaction in chapter 5 has demonstrated how this can render the human operator's role in specific targeting decisions essentially meaningless, and how restrictions on deployment and use intended to achieve human control can fail.

(4) Control through human–machine interaction should be integral to any codification of meaningful human control in disarmament debates. We identify three prerequisite conditions needed for human agents to exercise meaningful human control: (a) a functional understanding of how the targeting system operates and makes targeting decisions, as well as of its known weaknesses (e.g. track classification issues); (b) sufficient situational understanding; and (c) the capacity to scrutinise machine targeting decision-making rather than over-trusting the system. Of course, human operators should also have the possibility to abort the use of force. In short: human operators must be able to build a mental model of the system's decision-making process and the logic informing it. This includes, for example, access to additional (intelligence) sources beyond the system's output, allowing operators to triangulate the system's targeting recommendations.

(5) These three prerequisite conditions for ensuring meaningful human control in specific targeting situations – functional understanding, situational understanding, and the capacity to scrutinise machine targeting decision-making – set hard boundaries for AWS development that should be codified in international law. In our assessment, they represent a technological Rubicon that should not be crossed, as going beyond these limits makes human control meaningless. Adhering to these conditions not only ensures that human control remains meaningful, but also has the potential to ease the pressure put on the human operators of air defence systems, who are currently, unintentionally, set up to fail. At the moment, individual human operators at the bottom of the chain of command are often held accountable for structural failures in how automation and autonomy in air defence systems are designed and operated.
Human operators have been put into an impossible situation – can they reasonably be held responsible for the use of a system if they do not understand how it functions in reality, have no situational understanding, and no space for critical thinking? Here, regulating meaningful human control can make a constructive contribution for states operating weapons systems with automated or autonomous features. Normative constraints on using weapons systems do not have to stand against the state 'interests' served by using such systems. Militaries want to enhance control, and the failures that we described in relation to current standards of operating air defence systems are something states would want to avoid. Positively codified standards of how to retain meaningful human control could therefore be helpful for states.

(6) The complexity inherent in human–machine interaction means that there will be limits to exercising meaningful human control in specific targeting decisions. Ensuring the stringent training of human operators is necessary but not a panacea. This inconvenient truth should be clear to all relevant stakeholders.

Finally, given the nature of international law and the normative dynamics inherent in practices, any deliberatively agreed standard of meaningful human control is bound to be just a broad source of pressure in a particular direction rather than a clear demarcation. For any such legal standard to be relevant, it needs ongoing and constant practices that enact it and remain in line with its 'spirit'. Closing on a hopeful note, a legal instrument on AWS could become something of a campsite1 in the middle of a normative landscape from which actors can draw directions.


acknowledgements

1 For more information about the work of the project, see

introduction

1 However, drones still provide a relevant example of unmanned systems that play a major role in contemporary use-of-force practices. As AWS may follow a trajectory similar to that of drones, drones will offer useful examples and comparisons at times.
2 The full title of the CCW is the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. It was concluded in 1980 and entered into force in 1983. Currently, it has five protocols annexed to the Convention that outline 'prohibitions or restrictions on the use of specific weapons or weapon systems' (UNOG 2018). As of January 2021, the CCW had 125 states parties and four signatories.
3 Chapter five draws on a qualitative analysis of air defence systems and human control authored by Ingvild Bode and Tom Watts and published in collaboration with Drone Wars UK (Bode and Watts 2021).

chapter one

Notes to pages 20–40

1 Lele (2019, 56), for example, refers to the 1970s Aegis system, representative of a range of close-in weapons systems (CIWS) currently operating on the naval ships of more than thirty countries, as a comparatively ‘dumb’ system.
2 Expert interview #1 on AI and machine learning (2018). All interviews were conducted in confidentiality and according to the Chatham House Rule. The names and professional affiliations of interviewees are therefore withheld by mutual agreement.
3 Ibid.
4 The OODA loop was developed by US Air Force Lieutenant General John Boyd (see Anderson, Husain, and Rosner 2017; Pearson 2017).
5 We understand formal contributions to consist of either delivering statements to the general debate or contributing working papers ahead of the meetings. Many meetings provide states parties with the opportunity to contribute formally multiple times.
6 We have classified states parties as belonging to the Global South if they appear on the OECD’s Development Assistance Committee’s list of official ODA recipients (OECD 2018).
7 The GGE meetings had initially been delayed: the September 2020 meetings took place in a hybrid format, allowing one representative per delegation in the room and others connected virtually. This format posed challenges, particularly to full participation by delegations from the Global South, chiefly due to time zone differences between their capitals and Geneva as well as a lack of reliable high-speed Internet access (Ferl 2020). Notably, South Africa was the only African country to participate in the 2020 GGE meetings. A second meeting planned for November 2020 was cancelled as further Covid-19 restrictions were introduced in Geneva.
8 As of January 2021, these 30 were Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, El Salvador, Ghana, Guatemala, Holy See, Iraq, Jordan, Mexico, Morocco, Namibia, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, and Zimbabwe (Campaign to Stop Killer Robots 2020b).
9 At the UN-CCW, unlike other UN forums, NGO representatives and other experts can take the floor for speeches.
10 Expert interview #2 (2017). All interviews were conducted in confidentiality and according to the Chatham House Rule. The names and professional affiliations of interviewees are therefore withheld by mutual agreement.
11 There are some notable exceptions (see Altmann and Sauer 2017; Garcia 2016; Haas and Fischer 2017; Shaw 2017).

Notes to pages 65–87


chapter two

1 A poster by the British Parliamentary Recruiting Committee stated, in capital letters: ‘Cold-blooded murder! Remember Germany’s crowning infamy the sinking of the Lusitania, with hundreds of women and children’ and ‘These crimes against god and man are committed to try and make you afraid of these German barbarians’ (Imperial War Museum, object/30601, accessed 27 February 2020).
2 Participating nations were Belgium, China, France, Italy, Japan, Netherlands, the United States, and Portugal, while the emerging Soviet Union was not invited.
3 Notably, the US only ratified the 1925 Geneva Protocol in 1975.
4 However, the following reservation was made on accession in 1931: ‘Subject to the reservations that the Government of Iraq is bound by the said Protocol only towards those Powers and States which have both signed and ratified the Protocol or have acceded thereto, and that the Government of Iraq shall cease to be bound by the Protocol towards any Power at enmity with him whose armed forces, or the armed forces of whose allies, do not respect the Protocol’ (ICRC 2019).
5 See ‘The Fact-Finding Mission’.
6 ‘Recognised nuclear weapon state’ refers to their status with regard to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT): these states detonated a nuclear explosive before 1 January 1967.
7 In 1938, German scientist Otto Hahn and his assistant Fritz Strassmann had discovered nuclear fission in research cooperation with Lise Meitner and Otto Frisch; this was the theoretical basis for developing a nuclear weapon.
8 The eighteen nations that came together in the new forum were the initial ten nations, i.e. representatives of a Western bloc (Canada, France, Great Britain, Italy, United States) and of an Eastern bloc (Bulgaria, Czechoslovakia, Poland, Romania, Soviet Union). In addition, eight further nations participated in the talks: Brazil, Burma, Ethiopia, India, Mexico, Nigeria, Sweden, and United Arab Republic.
9 See ‘Syria’, Nuclear Threat Initiative (last updated April 2018).


Notes to pages 106–179

chapter three

1 A second, broader exception to the general prohibition consists of the UN Security Council authorising the use of force in response to any event that it deems a threat to international peace and security.
2 These are Nicaragua v. United States (1986), DRC v. Uganda (2005), and the Wall advisory opinion (2004) addressing Israel’s construction of a wall in the West Bank.

chapter five

1 This chapter draws on and builds on empirical research conducted for the report Meaning-less Human Control authored by Ingvild Bode and Tom Watts, published collaboratively by Drone Wars UK and the Centre for War Studies at the University of Southern Denmark (forthcoming 2021).
2 We want to thank Richard Moyes for drawing our attention to these two points.
3 ‘Operation Earnest Will’. GlobalSecurity.Org. See https://www.
4 A data link ‘enabled the Sides and Vincennes computers to exchange tactical information in real time’, allowing the Sides officers to have access to the same information that was displayed in the Vincennes (D. Evans 1993).
5 News reports published immediately after the downing of IR655 also include this piece of information (Halloran 1988).
6 ‘SAM-15 (SA-15 Gauntlet) Iranian Short-Range Surface-to-Air Missile (SAM) System’. OE Data Integration Network (ODIN). See https:// Gauntlet)_Iranian_Short-Range_Surface-to-Air_Missile_(SAM)_System.
7 ‘9K331 Tor SA-15 GAUNTLET SA-N-9 HQ-17’. GlobalSecurity.Org. See
8 ‘SAM-15 (SA-15 Gauntlet) Iranian Short-Range Surface-to-Air Missile (SAM) System’, OE Data Integration Network (ODIN). See https:// Gauntlet)_Iranian_Short-Range_Surface-to-Air_Missile_(SAM)_System.
9 ‘Tor: Short-Range Air Defense System’. Military Today. http://www.

Notes to pages 181–223


10 The authors would like to thank Maya Brehm for this point.
11 The authors would like to thank Maya Brehm for this point.
12 The authors thank Justin Bronk for drawing our attention to this point.
13 These friendly fire incidents led to the generation of new rules of engagement for the Patriot system, which now had (and continues to have) to receive specific authorisation by the Air Force controlling authority before engagement – a ‘decision [that] took Patriot out of the fight’ due to the short engagement timelines for tactical ballistic missiles. Patriot crew members now also receive slightly longer but not substantially different forms of training before deployment (Hawley 2017, 6, 8).
14 The authors want to thank Peter Burt for drawing our attention to this point.
15 Personal communication with Richard Moyes, November 2020.
16 The authors want to thank Tom Watts for drawing our attention to this point.
17 If developed, hypersonic missiles – currently being tested by China, Russia, the United States, and other states – would have significantly greater speed and manoeuvrability than existing ballistic and cruise missiles. They could conceivably evade many, if not all, of the air defence systems examined in this report.
18 The authors would like to thank Tom Watts for drawing our attention to this point.

conclusion

1 The authors would like to thank Richard Moyes for this analogy.


Acharya, A. 2004. ‘How Ideas Spread: Whose Norms Matter? Norm Localization and Institutional Change in Asian Regionalism’. International Organization 58 (2): 239–75.
Adams, T.K. 2001. ‘Future Warfare and the Decline of Human Decisionmaking’. Parameters 31 (4): 1–15.
Adler, E. 2013. ‘Constructivism and International Relations: Sources, Contributions and Debates’. In Handbook of International Relations, edited by W. Carlsnaes, T. Risse, and B.A. Simmons, 2nd ed., 112–43. London: Sage.
Aharoni, S.B. 2014. ‘Internal Variation in Norm Localization: Implementing Security Council Resolution 1325 in Israel’. Social Politics: International Studies in Gender, State & Society 21 (1): 1–25.
Ahmed, D.I. 2013. ‘Defending Weak States Against the “Unwilling or Unable” Doctrine of Self-Defense’. Journal of International Law and International Relations 9 (1): 1–37.
Albuquerque, F.L. 2019. ‘Coalition Making and Norm Shaping in Brazil’s Foreign Policy in the Climate Change Regime’. Global Society 33 (2): 243–61.
Alexandrov, S.A. 1996. Self-Defense against the Use of Force in International Law. Developments in International Law. Boston, MA: Kluwer Law International.
Altmann, J. 2013. ‘Arms Control for Armed Uninhabited Vehicles: An Ethical Issue’. Ethics and Information Technology 15 (2): 137–52.
Altmann, J., and F. Sauer. 2017. ‘Autonomous Weapon Systems and Strategic Stability’. Survival 59 (5): 117–42.
Amoore, L. 2019. ‘Doubtful Algorithms: Of Machine Learning Truths and Partial Accounts’. Theory, Culture & Society 36 (6): 147–69.
Anderson, K., and M. Waxman. 2013. ‘Law and Ethics of Autonomous Weapon Systems. Why a Ban Won’t Work and How the Laws of War Can’. Hoover Institution, Stanford University. http://media.


References

Anderson, W.R., A. Husain, and M. Rosner. 2017. ‘The OODA Loop: Why Timing Is Everything’. Cognitive Times, December, 28–9.
Andrews, N. 2019. ‘Normative Spaces and the UN Global Compact for Transnational Corporations: The Norm Diffusion Paradox’. Journal of International Relations and Development 22 (1): 77–106.
Aradau, C., and T. Blanke. 2018. ‘Governing Others: Anomaly and the Algorithmic Subject of Security’. European Journal of International Security 3 (1): 1–21.
Arend, A.C. 1993. International Law and the Use of Force: Beyond the UN Charter Paradigm. London: Routledge.
Arkin, R.C. 2009. Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.
– 2010. ‘The Case for Ethical Autonomy in Unmanned Systems’. Journal of Military Ethics 9 (4): 332–41.
Arms Control Association. 2017. ‘The Nuclear Testing Tally’. Factsheet.
– 2018. ‘Timeline of Syrian Chemical Weapons Activity, 2012–2018’. Factsheet. Timeline-of-Syrian-Chemical-Weapons-Activity.
– 2019a. ‘Timeline of Syrian Chemical Weapons Activity, 2012–2019’. Factsheet. Timeline-of-Syrian-Chemical-Weapons-Activity.
– 2019b. ‘What You Need to Know About Chemical Weapons Use in Syria’ (blog). what-you-need-know-about-chemical-weapons-use-syria.
Army Technology. 2019. ‘Patriot Missile Long-Range Air-Defence System US Army’. Army Technology (blog). projects/patriot/.
Article 36. 2013. ‘Killer Robots: UK Government Policy on Fully Autonomous Weapons’. uploads/2013/04/Policy_Paper1.pdf.
– 2016. ‘The United Kingdom and Lethal Autonomous Weapons Systems’. Background Paper, Article 36, London.
– 2018. ‘Shifting Definitions: The UK and Autonomous Weapons Systems’. Concept Note, Article 36. uploads/2018/07/Shifting-definitions-UK-and-autonomous-weapons-July-2018.pdf.



Artificial Intelligence Committee. 2018. ‘AI in the UK: Ready, Willing and Able? Report of the Session 2017–19’. House of Lords AI Committee. ldai/100/10002.htm.
Asaro, P. 2009. ‘Modeling the Moral User’. IEEE Technology and Society Magazine, 20–4.
– 2012. ‘On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making’. International Review of the Red Cross 94 (886): 687–709.
– 2019. ‘Algorithms of Violence: Critical Social Perspectives on Autonomous Weapons’. Social Research: An International Quarterly 86 (2): 537–55.
– 2020. ‘Autonomous Weapons and the Ethics of Artificial Intelligence’. In The Ethics of Artificial Intelligence, edited by S.M. Liao, 1–20. Oxford: Oxford University Press. Oxford%20AI%20Ethics%20AWS.pdf.
Bahcecik, S.O. 2019. ‘Civil Society Responds to the AWS: Growing Activist Networks and Shifting Frames’. Global Policy 10 (3): 365–9.
Barry, J., and R. Charles. 1992. ‘Sea of Lies’. Newsweek, 7 December.
BBC News. 2016. ‘How Does a BUK Missile System Work?’ Video. https://
– 2018. ‘Syria “Chemical Attack”: What We Know’. BBC News, 10 July.
– 2020a. ‘Iran Plane Crash: Why Were so Many Canadians on Board’. BBC News, 11 January. world-us-canada-51053220.
– 2020b. ‘MH 17 Plane Crash: What We Know’. BBC News, 26 February.
Ben-Yehuda, N. 2013. Atrocity, Deviance, and Submarine Warfare: Norms and Practices during the World Wars. Ann Arbor, MI: The University of Michigan Press.
Berger, T. 2017. ‘Linked in Translation: International Donors and Local Fieldworkers as Translators of Global Norms’. Third World Thematics: A TWQ Journal 2 (5): 606–20.
Bergkvist, N.-O., and R. Ferm. 2000. ‘Nuclear Explosions 1945–1998’. Defence Research Establishment/Peace Research Institute, Stockholm.
Berridge, G.R., and A. James. 2003. A Dictionary of Diplomacy. 2nd ed. Basingstoke: Palgrave Macmillan.



Betts, A., and P. Orchard, eds. 2014. Implementation and World Politics: How International Norms Change Practice. Oxford: Oxford University Press.
Bhargava, V., and T.W. Kim. 2017. ‘Autonomous Vehicles and Moral Uncertainty’. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by P. Lin, R. Jenkins, and K. Abney, 5–19. Oxford: Oxford University Press.
Birn, D.S. 1970. ‘Open Diplomacy at the Washington Conference of 1921–2: The British and French Experience’. Comparative Studies in Society and History 12 (3): 297–319.
Björkdahl, A. 2002. ‘From Idea to Norm: Promoting Conflict Resolution’. Working Paper, University of Lund.
Bloomfield, A. 2016. ‘Norm Antipreneurs and Theorising Resistance to Normative Change’. Review of International Studies 42 (2): 310–33.
Bode, I. 2014. ‘Francis Deng and the Concern for Internally Displaced Persons: Intellectual Leadership in the United Nations’. Global Governance 20 (2): 277–95.
– 2015. Individual Agency and Policy Change at the United Nations: The People of the United Nations. London: Routledge.
– 2016. ‘How the World’s Interventions in Syria Have Normalised the Use of Force’. The Conversation, 17 February. http://theconversation.com/how-the-worlds-interventions-in-syria-have-normalised-the-use-of-force-54505.
– 2017a. ‘Manifestly Failing and Unable or Unwilling as Intervention Formulas’. In Rethinking Humanitarian Intervention in the 21st Century, edited by A. Warren and D. Grenfell, 164–91. Edinburgh: Edinburgh University Press.
– 2017b. ‘Verhandlungen über Killerroboter in Genf’. Heise Online.
– 2018. ‘Reflective Practices at the Security Council: Children and Armed Conflict and the Three United Nations’. European Journal of International Relations 24 (2): 293–318.
– 2019. ‘Norm-Making and the Global South: Attempts to Regulate Lethal Autonomous Weapons Systems’. Global Policy 10 (3): 359–64.
– 2020. ‘The Threat of “Killer Robots” Is Real and Closer than You Might Think’. The Conversation, 15 October. the-threat-of-killer-robots-is-real-and-closer-than-you-might-think-147210.
Bode, I., and H. Huelss. 2017. ‘Why “Stupid” Machines Matter: Autonomous Weapons and Shifting Norms’. Bulletin of the Atomic Scientists, 12 October. why-stupid-machines-matter-autonomous-weapons-and-shifting-norms/.



– 2018. ‘Autonomous Weapons Systems and Changing Norms in International Relations’. Review of International Studies 44 (3): 393–413.
– 2019. ‘Introduction to the Special Section: The Autonomisation of Weapons Systems: Challenges to International Relations’. Global Policy 10 (3): 327–30.
Bode, I., and J. Karlsrud. 2019. ‘Implementation in Practice: The Use of Force to Protect Civilians in United Nations Peacekeeping’. European Journal of International Relations 25 (2): 458–85.
Bode, I., and T. Watts. 2021. ‘Meaning-Less Human Control: The Consequences of Automation and Autonomy in Air Defence Systems’. Joint Report, Drone Wars UK, Oxford, and Centre for War Studies, Odense.
Boden, M. 2017. ‘Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) Meeting: Presentation to Panel 1 – Technological Dimension’. Presentation, 17 November, United Nations Office, Geneva.
Bogaisky, J. 2020. ‘If Iranian Troops Really Thought Ukraine Flight 752 Was a Cruise Missile, They Made a “Hail Mary” Shot’. Forbes, 15 January. if-iranian-troops-really-thought-ukraine-flight-752-was-a-cruise-missile-they-made-a-hail-mary-shot/.
Bonds, E. 2013. ‘Hegemony and Humanitarian Norms: The US Legitimation of Toxic Violence’. Journal of World-Systems Research 19 (1): 82–107.
Börzel, T., and T. Risse. 2012. ‘From Europeanisation to Diffusion. Introduction’. West European Politics 35 (1): 1–19.
Bostrom, N. 2016. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Boulanin, V. 2016. ‘Mapping the Development of Autonomy in Weapon Systems’. Report, Stockholm International Peace Research Institute, Stockholm. other-publications/mapping-development-autonomy-weapon-systems.
Boulanin, V., N. Davison, N. Goussac, and M. Peldán Carlsson. 2020. ‘Limits of Autonomy in Weapon Systems: Identifying Practical Elements of Human Control’. Report, Stockholm International Peace Research Institute and International Committee of the Red Cross.
Boulanin, V., and M. Verbruggen. 2017. ‘Mapping the Development of Autonomy in Weapons Systems’. Report, Stockholm International Peace Research Institute, Stockholm. default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf.



Bousquet, A.J. 2018. The Eye of War: Military Perception from the Telescope to the Drone. Minneapolis, MN: University of Minnesota Press.
Boyle, M.J. 2015. ‘The Legal and Ethical Implications of Drone Warfare’. The International Journal of Human Rights 19 (2): 105–26.
Bradshaw, J.M., R.R. Hoffman, M. Johnson, and D.D. Woods. 2013. ‘The Seven Deadly Myths of “Autonomous Systems”’. IEEE Intelligent Systems 28 (3): 54–61.
Branka, M. (@brankamarijan). 2021. ‘A Key Question for #UNCCW: Is It Still a Relevant Forum or Is It a Place Where “Good Ideas Go to Die”?’ Twitter, 14 January, 1:20 p.m. https://twitter.com/brankamarijan/status/1349707930195664898.
Brehm, M. 2017. ‘Defending the Boundary: Constraints and Requirements on the Use of Autonomous Weapons Systems under International Humanitarian and Human Rights Law’. Academy Briefing No. 9, Geneva Academy.
– 2019. ‘Envisioning Sustainable Security: The Evolving Story of Science and Technology in the Context of Disarmament’. Article 36 (blog).
Bronk, J. 2020. ‘What Happened to Flight PS752?’ The Telegraph, 11 January. badly-trained-iranian-defence-team-could-have-made-mistakes/.
Brooks, R. 2013. ‘Testimony of Rosa Brooks, US Senate Judiciary Committee, Subcommittee on the Constitution, Civil Rights and Human Rights, Hearing on Drone Wars: The Constitutional and Counterterrorism Implications of Targeted Killing’. US Senate Judiciary Committee.
Brown, D. 2018. ‘These Are China’s Laser Weapons That Have Reportedly Been Targeting US Planes in “An Act Just Short of War”’. Business Insider. these-are-the-4-blinding-laser-weapons-that-china-has-developed-2018-5.
Brownlee, J. 2016. ‘Supervised and Unsupervised Machine Learning Algorithms’. Machine Learning Mastery, 16 March. supervised-and-unsupervised-machine-learning-algorithms/.
Brownlee, V. 2018. ‘Retaining Meaningful Human Control of Weapons Systems’. UN Office for Disarmament Affairs (blog), 18 October. ing-meaningful-human-control-of-weapons-systems/.



Brunstetter, D.R. 2012. ‘Can We Wage a Just Drone War?’ The Atlantic, 19 July. can-we-wage-a-just-drone-war/260055/.
Brunstetter, D.R., and A. Jimenez-Bacardi. 2015. ‘Clashing over Drones: The Legal and Normative Gap between the United States and the Human Rights Community’. The International Journal of Human Rights 19 (2): 176–98.
Buchanan, B., and T. Miller. 2017. ‘Machine Learning for Policymakers: What It Is and Why It Matters’. The Cyber Security Project, Belfer Center for Science and International Affairs, Harvard Kennedy School, Cambridge, MA.
Bueger, C., and F. Gadinger. 2015. ‘The Play of International Practice’. International Studies Quarterly 59: 449–60.
Bureau of Investigative Journalism. 2021. ‘Drone Warfare’. Online Database. The Bureau of Investigative Journalism, London. https://
Burns, R.D. 1971. ‘Regulating Submarine Warfare, 1921–41: A Case Study in Arms Control and Limited War’. Military Affairs 35 (2): 65–73.
Burt, P. 2018. ‘Off the Leash: The Development of Autonomous Military Drones in the UK’. Drone Wars UK (blog). wp-content/uploads/2018/11/dw-leash-web.pdf.
Busby, M. 2018. ‘Killer Robots: Pressure Builds for Ban as Governments Meet’. The Guardian, 9 April. https:// killer-robots-pressure-builds-for-ban-as-governments-meet.
Bush, G.W. (President). 2002. ‘The National Security Strategy of the United States of America 2002’. The White House Archives. http://
Byers, M. 2020. ‘Still Agreeing to Disagree: International Security and Constructive Ambiguity’. Journal on the Use of Force and International Law 8 (1): 91–114.
Campaign to Stop Killer Robots. 2019a. ‘Global Poll Shows 61% Oppose Killer Robots’. Campaign to Stop Killer Robots (blog), 22 January. global-poll-61-oppose-killer-robots/.
– 2019b. ‘Key Elements of a Treaty on Fully Autonomous Weapons’. Working Paper, Campaign to Stop Killer Robots. https://www.



– 2020a. ‘Key Elements of a Treaty on Fully Autonomous Weapons: Frequently Asked Questions’. Working Paper, Campaign to Stop Killer Robots. uploads/2020/04/FAQ-Treaty-ElementsvAccessible.pdf.
– 2020b. ‘Country Views on Killer Robots’. Working Paper, Campaign to Stop Killer Robots. wp-content/uploads/2020/03/KRC_CountryViews_11Mar2020.pdf.
– 2020c. ‘Diplomatic Talks Re-Convene’. Campaign to Stop Killer Robots (blog), 25 September. diplomatic2020/.
Carr, N. 2016. The Glass Cage: Who Needs Humans Anyway. London: Vintage.
Carvin, S. 2012. ‘The Trouble with Targeted Killing’. Security Studies 21 (3): 529–55.
– 2015. ‘Getting Drones Wrong’. The International Journal of Human Rights 19 (2): 127–41.
– 2017. ‘Conventional Thinking? The 1980 Convention on Certain Conventional Weapons and the Politics of Legal Restraints on Weapons during the Cold War’. Journal of Cold War Studies 19 (1): 38–69.
Chayes, A., and A. Handler Chayes. 1993. ‘On Compliance’. International Organization 47 (2): 175–205.
Checkel, J.T. 1997. ‘International Norms and Domestic Politics’. European Journal of International Relations 3 (4): 473–95.
– 1998. ‘The Constructivist Turn in International Relations Theory’ (Review Article). World Politics 50 (2): 324–48.
– 2001. ‘Why Comply? Social Learning and European Identity Change’. International Organization 55 (3): 553–88.
Chollet, D. 2016. ‘Obama’s Red Line, Revisited’. POLITICO Magazine, 16 July.
Coleman, K. 2005. A History of Chemical Warfare. New York: Palgrave Macmillan.
Collina, T.Z., and D.G. Kimball. 2012. ‘No Going Back: 20 Years Since the Last US Nuclear Test’. Arms Control Association Issue Briefs, Volume 3, Issue 14, 20 September. No-Going-Back-20-Years-Since-the-Last-US-Nuclear-Test%20.
Congressional Research Service. 2020. ‘Renewed Great Power Competition: Implications for Defense – Issues for Congress’. Document No. R43838. R43838.pdf.
Corten, O. 2010. The Law Against War: The Prohibition on the Use of Force in Contemporary International Law. Portland, OR: Hart.



– 2017. ‘Has Practice Led to an “Agreement Between the Parties” Regarding the Interpretation of Article 51 of the UN Charter?’ Heidelberg Journal of International Law 77: 15–17.
Council on Foreign Relations. 2012. ‘The Global Nuclear Nonproliferation Regime’. Report, CFR, New York. report/global-nuclear-nonproliferation-regime.
Crawford, N.C. 2016. ‘US Budgetary Costs of Wars through 2016: $4.79 Trillion and Counting. Summary of Costs of the US Wars in Iraq, Syria, Afghanistan and Pakistan and Homeland Security’. Costs of War Project, Watson Institute of International & Public Affairs, Brown University. papers/2016/Costs%20of%20War%20through%202016%20FINAL%20final%20v2.pdf.
Crootof, R. 2015. ‘The Killer Robots Are Here: Legal and Policy Implications’. Cardozo Law Review 36: 1837–1915.
– 2016. ‘War Torts: Accountability for Autonomous Weapons’. University of Pennsylvania Law Review 164 (6): 1347–1402.
– 2018. ‘Autonomous Weapons and the Limits of Analogy’. Harvard National Security Journal 9 (2): 51–83.
Cummings, M.L. 2018. ‘Artificial Intelligence and the Future of Warfare’. In Artificial Intelligence and International Affairs: Disruption Anticipated, edited by Chatham House, 7–18. London: Chatham House.
Davison, N. 2017. ‘A Legal Perspective: Autonomous Weapons Systems under International Humanitarian Law’. In Perspectives on Lethal Autonomous Weapon Systems, edited by United Nations Office for Disarmament Affairs, UNODA Occasional Papers 30, 5–18.
Deeks, A. 2012. ‘“Unwilling or Unable”: Toward a Normative Framework for Extra-Territorial Self-Defense’. Virginia Journal of International Law 52 (3): 481–550.
– 2015. ‘Taming the Doctrine of Pre-Emption’. In The Oxford Handbook of the Use of Force in International Law, edited by M. Weller, 661–78. Oxford: Oxford University Press.
Delgado, J.P. 2011. Silent Killers: Submarines and Underwater Warfare. New York: Osprey.
Der Spiegel staff. 2014. ‘The Tragedy of MH17: Attack Could Mark Turning Point in Ukraine Conflict’. Spiegel International, 21 July. https://www.
Dickson, B. 2017. ‘What Is Narrow, General and Super Artificial Intelligence’. TechTalks (blog), 12 May. 2017/05/12/what-is-narrow-general-and-super-artificial-intelligence/.



Dinstein, Y. 2001. War, Aggression, and Self-Defense. 3rd ed. New York: Cambridge University Press.
Dixon, J.M. 2017. ‘Rhetorical Adaptation and Resistance to International Norms’. Perspectives on Politics 15 (1): 83–99.
Dolan, T.M. 2013. ‘Unthinkable and Tragic: The Psychology of Weapons Taboos in War’. International Organization 67 (1): 37–63.
Doswald-Beck, L. 1996. ‘New Protocol on Blinding Laser Weapons’. International Review of the Red Cross 36 (312): 272–99.
Dotterway, K.A. 1992. Systematic Analysis of Complex Dynamic Systems: The Case of the USS Vincennes. Monterey, CA: Naval Postgraduate School.
Doyle, G. 2020. ‘Explainer: Missile System Suspected of Bringing down Airliner – Short Range, Fast and Deadly’. Reuters, 10 January. https://
Doyle, M. 2008. Striking First: Preemption and Prevention in International Conflict. Princeton, NJ: Princeton University Press.
Draude, A. 2017. ‘Translation in Motion: A Concept’s Journey towards Norm Diffusion Studies’. Third World Thematics: A TWQ Journal 2 (5): 588–605.
Drolette Jr, D. 2014. ‘Blinding Them with Science: Is Development of a Banned Laser Weapon Continuing?’ Bulletin of the Atomic Scientists (blog), 14 September.
Duncombe, C., and T. Dunne. 2018. ‘After Liberal World Order’. International Affairs 94 (1): 25–42.
Dunne, T., and T. Flockhart, eds. 2013. Liberal World Orders. Oxford: Oxford University Press.
Dutch Safety Board. 2015a. ‘Crash of Malaysia Airlines Flight MH17, Hrabove, Ukraine, 17 July 2014’. The Hague: Dutch Safety Board.
– 2015b. ‘Buk Surface-to-Air Missile System Caused MH17 Crash: Crash MH17, 17 July 2014’. Dutch Safety Board (blog), 13 October. https://
Eckstein, M. 2020. ‘Sea Hunter USV Will Operate With Carrier Strike Group, As SURFDEVRON Plans Hefty Testing Schedule’. USNI News, 21 January. sea-hunter-usv-will-operate-with-carrier-strike-group-as-surfdevron-plans-hefty-testing-schedule.



Edwards, B. 2019. Insecurity and Emerging Biotechnology: Governing Misuse Potential. Basingstoke: Palgrave.
Eimer, T.R., S. Lütz, and V. Schüren. 2016. ‘Varieties of Localization: International Norms and the Commodification of Knowledge in India and Brazil’. Review of International Political Economy 23 (3): 450–79.
Ekelhof, M.A.C. 2017. ‘Complications of a Common Language: Why It Is so Hard to Talk about Autonomous Weapons’. Journal of Conflict and Security Law 22 (2): 311–31.
– 2018. ‘Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting’. Naval War College Review 71 (3): 1–34.
– 2019. ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation’. Global Policy 10 (3): 343–8.
Elish, M.C. 2019. ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’. Engaging Science, Technology, and Society 5: 40–60.
Ellman, J., L. Samp, and G. Coll. 2017. ‘Assessing the Third Offset Strategy’. Report, Center for Strategic & International Studies. publication/170302_Ellman_ThirdOffsetStrategySummary_Web.pdf.
Endsley, M.R. 1995. ‘Toward a Theory of Situation Awareness in Dynamic Systems’. Human Factors 37 (1): 32–64.
Engelkamp, S., K. Glaab, and J. Renner. 2017. ‘Normalising Knowledge? Constructivist Norm Research as Political Practice’. European Review of International Studies 3 (3): 52–62.
Eshel, T. 2015. ‘US Considers Extended Range THAAD, Enhanced BMS to Defend against Attacking Hypersonic Gliders’. Defense Update, 9 January.
Evans, H. 2018. ‘Too Early for a Ban: The US and UK Positions on Lethal Autonomous Weapons Systems’. Lawfare (blog), 13 April. too-early-ban-us-and-uk-positions-lethal-autonomous-weapons-systems.
Evans, D. (Lieutenant Colonel, Ret.). 1993. ‘USS Vincennes Case Study’. Naval Institute Proceedings 119 (8/1086). persons/huckle/Naval_Science.htm.
Fairleigh Dickinson University. 2013. ‘Public Says It’s Illegal to Target Americans Abroad as Some Question CIA Drone Attacks’. Press Release, 7 February, Fairleigh Dickinson University. http://www.publicmind.fdu.edu/2013/drone/.
Falk, R. 2014. ‘Global Security and International Law’. In The Handbook of Global Security Policy, edited by M. Kaldor and I. Rangelov, 320–37. Walden, MA: Wiley.
Fearon, J., and A. Wendt. 2002. ‘Rationalism v. Constructivism: A Skeptical View’. In Handbook of International Relations, edited by W. Carlsnaes, T. Risse, and B.A. Simmons, 1st ed., 52–72. London: Sage.
Featherstone, K., and C. Radaelli, eds. 2003. The Politics of Europeanization. Oxford: Oxford University Press.
Ferl, A.-K. 2020. ‘Digital Diplomacy: The Debate on Lethal Autonomous Weapons Systems in Geneva Continues under Unprecedented Circumstances’. PRIF Blog (blog), 29 September. https://blog.prif.org/2020/09/29/digital-diplomacy-the-debate-on-lethal-autonomous-weapons-systems-in-geneva-continues-under-unprecedented-circumstances/.
Finkelstein, C.O., J.D. Ohlin, and A. Altman, eds. 2012. Targeted Killings: Law and Morality in an Asymmetrical World. 1st ed. Oxford: Oxford University Press.
Finnemore, M. 1996a. National Interests in International Society. Ithaca, NY: Cornell University Press.
– 1996b. ‘Norms, Culture, and World Politics: Insights from Sociology’s Institutionalism’. International Organization 50 (2): 325–47.
Finnemore, M., and K. Sikkink. 1998a. ‘International Norm Dynamics and Political Change’. International Organization 52 (4): 887–917.
– 1998b. ‘International Norm Dynamics and Political Change’. International Organization 52 (4): 887–917.
Finnemore, M., and S.J. Toope. 2001. ‘Alternatives to “Legalization”: Richer Views of Law and Politics’. International Organization 55 (3): 743–58.
Fisher, J.W. 2007. ‘Targeted Killing, Norms, and International Law’. Columbia Journal of Transnational Law 45 (3): 711–58.
Fisk, K., and J.M. Ramos. 2014. ‘Actions Speak Louder Than Words: Preventive Self-Defense as a Cascading Norm’. International Studies Perspectives 15 (2): 163–85.
Fleischman, W.M. 2015. ‘Just Say “No!” to Lethal Autonomous Robotic Weapons’. Journal of Information, Communication and Ethics in Society 13 (3/4): 299–313.
Flockhart, T. 2010. ‘Europeanization or EU-ization? The Transfer of European Norms across Time and Space’. Journal of Common Market Studies 48 (4): 787–810.



– 2016a. ‘The Coming Multi-Order World’. Contemporary Security Policy 37 (1): 3–30. – 2016b. ‘The Problem of Change in Constructivist Theory: Ontological Security Seeking and Agent Motivation’. Review of International Studies 42 (5): 799–820. Fredman, Z. 2012. ‘Shoring Up Iraq, 1983 to 1990: Washington and the Chemical Weapons Controversy’. Diplomacy & Statecraft 23 (3): 533–54. Galliott, J., and J. Scholz. 2018. ‘Artificial Intelligence in Weapons: The Moral Imperative for Minimally-Just Autonomy’. US Air Force Journal of Indo-Pacific Affairs 1(2): 57–67. Garcia, D. 2006. Small Arms and Security: New Emerging International Norms. Abingdon: Routledge. – 2015. ‘Humanitarian Security Regimes’. International Affairs 91 (1): 55–75. – 2016. ‘Future Arms, Technologies, and International Law: Preventive Security Governance’. European Journal of International Security 1 (1): 94–111. – 2018. ‘Lethal Artificial Intelligence and Change: The Future of International Peace and Security’. International Studies Review 20 (2): 334–41. Gettinger, D. 2020. ‘Drone Databook Update’. The Center for the Study of the Drone, Bard College, Anandale-on-Hudson, NY. GGE on LAWS. 2020. ‘2nd Meeting – 1st Session Group of Governmental Experts on Lethal Autonomous Weapons Systems 2020’. Video of Session, 21 September, United Nations, Geneva. asset/k13/k13mbuikiv. Gheciu, A. 2005. ‘Security Institutions as Agents of Socialization? NATO and the “New Europe”’. International Organization 59(4): 973–1012. Gilliland, J. 1985. ‘Submarines and Targets: Suggestions for New Codified Rules of Submarine Warfare,’. The Georgetown Law Journal 73: 975–1005. Goertzel, B. 2014. ‘Artificial General Intelligence: Concept, State of the Art, and Future Prospects’. Journal of Artificial General Intelligence 5 (1): 1–48. Gray, C. 2018. International Law and the Use of Force. 4th ed. Oxford: Oxford University Press. Gregory, T. 2017. ‘Targeted Killings: Drones, Noncombatant Immunity, and the Politics of Killing’. 
Contemporary Security Policy 38 (2): 212–36. Großklaus, M. 2017. ‘Friction, Not Erosion: Assassination Norms at the Fault Line between Sovereignty and Liberal Values’. Contemporary Security Policy 38 (2): 260–80.



Grut, C. 2013. ‘The Challenge of Autonomous Lethal Robotics to International Humanitarian Law’. Journal of Conflict and Security Law 18 (1): 5–23. Gubrud, M. 2013. ‘US Killer Robot Policy: Full Speed Ahead’. Bulletin of the Atomic Scientists (blog), 20 September. us-killer-robot-policy-full-speed-ahead. – 2015. ‘Semi-Autonomous and on Their Own: Killer Robots in Plato’s Cave’. Bulletin of the Atomic Scientists (blog), 12 April. http://thebulletin. org/semi-autonomous-and-their-own-killer-robots-plato’s-cave8199. Gusterson, H. 2016. Drone: Remote Control Warfare. Cambridge, MA: MIT Press. Haas, M.C., and S.-C. Fischer. 2017. ‘The Evolution of Targeted Killing Practices: Autonomous Weapons, Future Conflict, and the International Order’. Contemporary Security Policy 38 (2): 281–306. Hagel, C. 2014. ‘Secretary of Defense Speech: Reagan National Defense Forum Keynote’. Speech, US Department of Defense, 15 November. https://www.defense. gov/Newsroom/Speeches/Speech/Article/606635/. Hagström, M. 2016. ‘Characteristics of Autonomous Weapon Systems’. In Expert Meeting: Autonomous Weapon Systems. Implications of Increasing Autonomy in the Critical Functions of Weapons, edited by ICRC, 23–25. Geneva: International Committee of the Red Cross. Haines, L. 2004. ‘Patriot Missile: Friend or Foe?’ The Register, 20 May. Halloran, R. 1988. ‘The Downing of Flight 655: US Downs Iran Airliner Mistaken for F-14; 290 Reported Dead; A Tragedy, Reagan Says; Action Is Defended’. New York Times, 4 July. https://www.nytimes. com/1988/07/04/world/downing-flight-655-us-downs-iran-airlinermistaken-for-f-14-290-reported-dead.html. Hammond, D.N. 2015. ‘Autonomous Weapons and the Problem of State Accountability’. Chicago Journal of International Law 15 (2): 652–87. Haner, J., and D. Garcia. 2019. ‘The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development’. Global Policy 10 (3): 331–7. Harding, L., and A. Luhn. 2016. 
‘MH17: Buk Missile Finding Sets Russia and West at Loggerheads’. The Guardian, 28 September. https://www. Harper, J. 2019. ‘Just in: Pentagon Contemplating Role of AI in Missile Defence’. National Defense, 10 July. https://


245 pentagon-contemplating-role-of-ai-in-missile-defense. Harress, C. 2014. ‘Blowing MH17 Out the Sky Was No Easy Task’. International Business Times News, 18 July. hottopics/lnacademic. Hawley, J.K. 2007. ‘Looking Back at 20 Years on MANPRINT on Patriot: Observations and Lessons’. Report ARL-SR-0158. US Army Research Laboratory, Adelphi, MD. – 2017. Patriot Wars: Automation and the Patriot Air and Missile Defense System. Voices from the Field. Washington, DC: Center for a New American Security. Hawley, J.K., and A.L. Mares. 2012. ‘Human Performance Challenges for the Future Force: Lessons from Patriot after the Second Gulf War’. In Designing Soldier Systems: Current Issues in Human Factors, edited by P. Savage-Knepshield, J. Martin, J. Locket III, and L. Allender, 3–34. Burlington, VT: Ashgate. Hawley, J.K., A.L. Mares, and C.A. Giammanco. 2005. ‘The Human Side of Automation: Lessons for Air Defense Command and Control’. Report ARL-TR-3468. US Army Research Laboratory, Adelphi, MD. Hays Parks, W. 2000. ‘Making Law of War Treaties: Lessons from Submarine Warfare Regulation’. International Law Studies 75(1): 339–85. Heath, N. 2017. ‘AI: How Big a Deal Is Google’s Latest AlphaGo Breakthrough?’ TechRepublic. 3 November. https://www.techrepublic. com/article/ai-how-big-a-deal-is-googles-latest-alphago-breakthrough/. Helmore, E. 2009. ‘US Air Force Prepares Drones to End Era of Fighter Pilots’. The Guardian. 23 August. world/2009/aug/22/us-air-force-drones-pilots-afghanistan. Henckaerts, J.-M., L. Doswald-Beck, C. Alvermann, K Dormann, and B Rolle. 2005. Customary International Humanitarian Law. Cambridge: Cambridge University Press. Heyns, C. 2013. ‘Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns. Lethal Autonomous Robotics’. UN Document A/HRC/23/47. UN Human Rights Council. – 2016a. ‘Autonomous Weapons Systems: Living a Dignified Life and Dying a Dignified Death’. 
In Autonomous Weapons Systems: Law, Ethics, Policy, edited by N. Bhuta, S. Beck, R Geiss, H.-Y. Liu, and C. Kress, 3–20. Cambridge: Cambridge University Press. – 2016b. ‘Human Rights and the Use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement’. Human Rights Quarterly 38 (2): 350–78.



Hiltermann, J.R. 2007. A Poisonous Affair: America, Iraq, and the Gassing of Halabja. New York: Cambridge University Press. Himmelreich, J. 2018. ‘The Everyday Ethical Challenges of Self-Driving Cars’. The Conversation. 27 March. the-everyday-ethical-challenges-of-self-driving-cars-92710. Hoerber, T. 2009. ‘Psychology and Reasoning in the Anglo-German Naval Agreement, 1935–1939’. The Historical Journal 52 (1): 153–74. Hofferberth, M., and C. Weber. 2015. ‘Lost in Translation: A Critique of Constructivist Norm Research’. Journal of International Relations and Development 18 (1): 75–103. Holland M.A. 2019. Counter-Drone Systems. 2nd ed. Anandale-on-Hudson, NY: Centre for the Study of the Drone, Bard College. https:// Holland M.A. 2020. ‘The Black Box, Unlocked. Predictability and Understandability in Military AI’. UNIDIR, Geneva. publication/black-box-unlocked. Holligan, A. 2020. ‘Flight MH17: Trial Opens of Four Accused of Murdering 298 over Ukraine’. BBC News, 9 March. Holmes, G. 2019. ‘Situating Agency, Embodied Practices and Norm Implementation in Peacekeeping Training’. International Peacekeeping 26 (1): 55–84. Holwitt, J.I. 2013. Execute against Japan: The US Decision to Conduct Unrestricted Submarine Warfare. College Station, TX: Texas A&M University Press. Horowitz, M.C., S.E. Kreps, and M. Fuhrmann. 2016. ‘Separating Fact from Fiction in the Debate over Drone Proliferation’. International Security 41 (2): 7–42. Horowitz, M.C., and P. Scharre. 2014. ‘Do Killer Robots Save Lives?’ POLITICO Magazine. 19 November. story/2014/11/killer-robots-save-lives-113010.html. House of Lords Select Committee on Artificial Intelligence. 2018. ‘AI in the UK: Ready, Willing and Able?’ Report of Session 2017-19. HL Paper 100. House of Lords, London. ld201719/ldselect/ldai/100/100.pdf. Huelss, H. 2017. ‘After Decision-Making: The Operationalisation of Norms in International Relations’. International Theory 9 (3): 381–409. – 2019. ‘Be Free? 
The European Union’s Post-Arab Spring Women’s Empowerment as Neoliberal Governmentality’. Journal of International Relations and Development 22 (1): 136–58.



– 2020. ‘Norms Are What Machines Make of Them: Autonomous Weapons Systems and the Normative Implications of Human-Machine Interactions’. International Political Sociology 14 (2): 111–28. Human Rights Watch. 1995. ‘Blinding Laser Weapons. The Need to Ban a Cruel and Inhumane Weapon’. Report, Human Rights Watch, September, Vol. 7, No. 1. – 2012. ‘Losing Humanity: The Case against Killer Robots’. Report, Human Rights Watch. losing-humanity/case-against-killer-robots. Human Security Centre. 2005. Human Security Report 2005: War and Peace in the 21st Century. New York: Oxford University Press. Hurd, I. 1999. ‘Legitimacy and Authority in International Politics’. International Organization 53 (2): 379–408. – 2015. ‘Permissive Law on the International Use of Force’. American Society of International Law Proceedings 109: 63–7. – 2016. ‘The Permissive Power of the Ban on War’. European Journal of International Security 2 (1): 1–18. – 2017. ‘Targeted Killing in International Relations Theory: Recursive Politics of Technology, Law, and Practice’. Contemporary Security Policy 38 (2): 307–19. Hurrell, A. 2006. ‘Hegemony, Liberalism and Global Order: What Space for Would-be Great Powers?’ International Affairs 82 (1): 1–19. Husby, C. 2015. ‘Offensive Autonomous Weapons: Should We Be Worried?’ Michigan Journal of International Law, Volume 37. IAEA. 2005. ‘Implementation of the NPT Safeguards Agreement in the Islamic Republic of Iran: Report by the Director General’. Report GOV/2005/67, International Atomic Energy Agency, Vienna. – 2018. ‘Verification and Monitoring in Iran’. Focus, International Atomic Energy Agency, Vienna. ICJ. 1996. ‘Legality of the Threat or Use of Nuclear Weapons. Advisory Opinion of 8 July 1996’. In International Court of Justice Reports 1996. Article 103. ICRC. 1977. ‘Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I)’. 
International Committee of the Red Cross, Geneva. 8 August. – 1995. ‘Vienna Diplomatic Conference Achieves New Prohibition on Blinding Laser Weapons and Deadlock on Landmines’. Press Release, 13 October, International Committee of the Red Cross. https://www.



– 2010. ‘Customary International Humanitarian Law’. Article, 29 October, International Committee of the Red Cross. document/customary-international-humanitarian-law-0. – 2014. Expert Meeting: Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects. Geneva: International Committee of the Red Cross. – 2016. Expert Meeting: Autonomous Weapon Systems. Implications of Increasing Autonomy in the Critical Functions of Weapons. Geneva: International Committee of the Red Cross. – 2018a. ‘British Military Court at Hamburg, The Peleus Trial’. Case Study. How Does Law Protect in War? International Committee of the Red Cross, Geneva. british-military-court-hamburg-peleus-trial. – 2018b. ‘Customary IHL – Rule 86. Blinding Laser Weapons’. IHL Database, International Committee of the Red Cross, Geneva. https:// – 2018c. ‘Hague Declaration (IV,2) Concerning Asphyxiating Gases, 1899’. Treaties, States Parties and Commentaries. IHL Database, International Committee of the Red Cross, Geneva. https://ihl-databases.icrc. org/applic/ihl/ihl.nsf/0/2531e92d282b5436c12563cd00516149. – 2018d. ‘Declaration on the Laws of Naval War, London 1909’. Treaties, States Parties and Commentaries. IHL Database, International Committee of the Red Cross, Geneva. – 2019. Protocol for the Prohibition of the Use of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare. Geneva, 17 June 1925’.Treaties, States Parties and Commentaries. IHL Database, International Committee of the Red Cross, Geneva. https://bit. ly/3AFLG6y. IFP Editorial Staff. 2020. ‘IRGC Releases Details of Accidental Downing of Ukrainian Plane’. Iran Front Page, 11 January. irgc-releases-details-of-accidental-downing-of-ukrainian-plane. Ikenberry, G.J. 2011. Liberal Leviathan: The Origins, Crisis, and Transformation of the American World Order. Princeton, N.J: Princeton University Press. IPRAW. 2017a. ‘Focus on Technology and Application of Autonomous Weapons’. ‘Focus On’ Report 1. 
International Panel on the Regulation of Autonomous Weapons, Berlin. – 2017b. ‘Focus on Computational Methods in the Context of LAWS’. ‘Focus On’ Report 2. International Panel on the Regulation of Autonomous Weapons, Berlin.



– 2018. ‘Focus on the Human-Machine Relation in LAWS’. ‘Focus On’ Report 3. International Panel on the Regulation of Autonomous Weapons, Berlin. Jackson, P.T. 2008. ‘Foregrounding Ontology : Dualism, Monism, and IR Theory’ Review of International Studies 34 (1): 129–53. Jackson, P.T. 2017. ‘Symposium: The Practice Turn in International Relations’. Harvard Dataverse, V1. NXX3JJ. Jacob, C. 2018. ‘From Norm Contestation to Norm Implementation: Recursivity and the Responsibility to Protect’. Global Governance 24 (3): 391–409. Jain, N. 2016. ‘Autonomous Weapons Systems: New Frameworks for Individual Responsibility’. In Autonomous Weapons Systems: Law, Ethics, Policy, edited by N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, and C. Kress, 303–24. Cambridge: Cambridge University Press. Jasanoff, S. 2015. ‘Future Imperfect: Science, Technology, and the Imaginations of Modernity’. In Dreamscapes of Modernity. Sociotechnical Imaginaries and the Fabrication of Power, edited by S. Jasanoff and S.-H. Kim, 1–33. Chicago, IL: The University of Chicago Press. Jefferson, C. 2014. ‘Origins of the Norm against Chemical Weapons’. International Affairs 90 (3): 647–61. Jepperson, R.W., A. Wendt, and P.J. Katzenstein. 1996. ‘Norms, Identity and Culture in National Security’. In The Culture of National Security: Norms and Identity in World Politics, edited by P.J. Katzenstein, 33–75. New York: Columbia University Press. Johnson, A.M., and S. Axinn. 2013. ‘The Morality of Autonomous Robots’. Journal of Military Ethics 12 (2): 129–41. Johnson, M, J.K. Hawley, and J.M. Bradshaw. 2014. ‘Myths of Automation Part 2: Some Very Human Consequences’. IEEE Intelligent Systems 29 (2): 82–5. Johnston, S.F. 2018. ‘The Technological Fix as Social Cure-All. Origins and Implications’. IEEE Technology and Society Magazine 37 (1): 47–54. Jones, E. 2014. ‘Terror Weapons: The British Experience of Gas and Its Treatment in the First World War’. War in History 21 (3): 355–75. Jose, B. 2017a. 
Norm Contestation: Insights into Non-Conformity with Armed Conflict Norms. New York: Springer. – 2017b. ‘Bin Laden’s Targeted Killing and Emerging Norms’. Critical Studies on Terrorism 10 (1): 44–66. Kaag, J, and S. Kreps. 2014. Drone Warfare. Cambridge: Polity.



Kaempf, S. 2018. Saving Soldiers or Civilians? Casualty-Aversion versus Civilian Protection in Asymmetric Conflicts. Cambridge: Cambridge University Press. Kahn, P.W. 2002. ‘The Paradox of Riskless Warfare’. Philosophy and Public Policy Quarterly 22 (3): 2–8. Kammerhofer, J. 2015. ‘The Resilience of the Restrictive Rules on SelfDefence’. In The Oxford Handbook of the Use of Force in International Law, edited by M. Weller, 627–48. Oxford: Oxford University Press. Kania, E. 2017. ‘Great Power Competition and the AI Revolution: A Range of Risks to Military and Strategic Stability’. Lawfare (blog). 19 September. – 2018a. ‘China’s AI Agenda Advances’. The Diplomat. February 14, 2018. – 2018b. ‘China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems’. Lawfare (blog), 17 April. – 2020. ‘”AI Weapons” in China’s Military Innovation’. Global China Research Paper, Brookings Institution, Washington, DC. FP_20200427_ai_weapons_kania_v2.pdf. Kantowitz, B.H., and R.D. Sorkin. 1987. ‘Allocation of Functions’. In Handbook of Human Factors, edited by G. Salvendy, 355–69. New York: Wiley. Kaplan, F. 2003. ‘Patriot Games’. Slate, 24 March. news-and-politics/2003/03/how-good-are-those-patriot-missiles.html. Kasapoğlu, C. 2019. ‘Beyond Obama’s Red Lines: The Syrian Arab Army and Chemical Warfare’. SWP Comment 2019/C27. Kastan, B. 2013. ‘Autonomous Weapons Systems: A Coming Legal Singularity’. Journal of Law, Technology and Policy 45: 45–82. Keck, M.E, and K. Sikkink. 1998. Activists Beyond Borders: Advocacy Networks in International Politics. Ithaca, NY: Cornell University Press. Kegley, C.W., and G.A. Raymond,. 2003. ‘Preventive War and Permissive Normative Order’. International Studies Perspectives 4 (4): 385–94. Kennedy, D. 2006. Of War and Law. Princeton, NJ: Princeton University Press.



Kinsella, H.M. 2011. The Image Before the Weapon: A Critical History of the Distinction between Combatant and Civilian. Ithaca, NY: Cornell University Press. Koh, H. 2010. ‘The Obama Administration and International Law’. Speech, 25 March, US Department of State. releases/remarks/139119.htm. Koskenniemi, M. 2011. The Politics of International Law. Oxford: Hart. Krasmann, S. 2012. ‘Targeted Killing and Its Law: On a Mutually Constitutive Relationship’. Leiden Journal of International Law 25 (3): 665–82. Kratochwil, F. 1984. ‘The Force of Prescriptions’. International Organization 38 (4): 685–708. Kratochwil, F.V. 1989. Rules, Norms, and Decisions. Cambridge: Cambridge University Press. Kratochwil, F., and J.G. Ruggie. 1986. ‘International Organization: A State of the Art on an Art of the State’. International Organization 40 (4): 753–75. Kurki, M., and C. Wight. 2007. ‘International Relations and Social Sciences’. In International Relations Theories: Discipline and Diversity, edited by T. Dunne, M. Kurki, and S. Smith, 13–33. Oxford: Oxford University Press. Kurzweil, R. 2006. The Singularity Is Near: When Humans Transcend Biology. London: Duckworth. Lanovoy, V. 2017. ‘The Use of Force by Non-State Actors and the Limits of Attribution of Conduct’. European Journal of International Law 28 (2): 563–85. Lantis, J.S. 2016. Arms and Influence: US Technology Innovations and the Evolution of International Security Norms. Stanford, CA: Stanford Security Studies. Lapid, Y. 2001. ‘Identities, Borders, Orders: Nudging International Relations Theory in a New Direction’. In Identities, Borders, Orders: Rethinking International Relations Theory, edited by M. Albert, D. Jacobson, and Y. Lapid, 1–20. Minneapolis, MN: University of Minnesota Press. Laufer, H.M. 2017. ‘War, Weapons and Watchdogs: An Assessment of the Legality of New Weapons under International Human Rights Law’. Cambridge International Law Journal 6 (1): 62–74. Laurence, M. 2019. ‘An ‘Impartial’ Force? 
Normative Ambiguity and Practice Change in UN Peace Operations’. International Peacekeeping 26(3): 256–80.



Legro, J.W. 1997. ‘Which Norms Matter? Revisiting the “Failure” of Internationalism’. International Organization 51 (1): 31–63. Lele, A. 2014. ‘MH17 and Its Aftermath’. MINT, 28 July. https://bit. ly/3qSsn59. – 2019. ‘Debating Lethal Autonomous Weapon Systems’. Journal of Defence Studies 13 (1): 51–70. Leung, R. 2004. ‘The Patriot Flawed?’ CBS News. 19 February. https:// Leveringhaus, A. 2016. Ethics and Autonomous Weapons. London: Palgrave Macmillan. LeVine, S. 2017. ‘Artificial Intelligence Pioneer Says We Need to Start Over’. Axios. 15 September. artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524f619efbd-9db0-4947-a9b2-7a4c310a28fe.html. Lewis, L. 2018. ‘AI and Autonomy in War: Understanding and Mitigating Risks’. Center for Naval Analyses, Arlington, VA. sti/citations/AD1060954. Lewis, L. 2019a. ‘AI Safety: An Action Plan for the Navy’. DOP-2019-U-0219 57-1Rev. CNA Occasional Paper. Center for Naval Analyses, Arlington, VA. – 2019b. ‘AI Safety: Charting out the High Road’. War on the Rocks (blog), 9 December. ai-safety-charting-out-the-high-road/. Lewis, M.W. 2013. ‘Drones: Actually the Most Humane Form of Warfare Ever’. The Atlantic, 21 August. https:// drones-actually-the-most-humane-form-of-warfare-ever/278746/. Lin, J., and P.W. Singer. 2014. ‘Chinese Autonomous Tanks: Driving Themselves to a Battlefield Near You?’ Popular Science (blog), 7 October. chinese-autonomous-tanks-driving-themselves-battlefield-near-you. Ling, J. 2020. ‘Canada’s Path to Justice from Iran over Shot-Down Flight Will Be Hard’. Foreign Policy (blog), 10 January. ukraine-korea-plane-canada-path-justice-iran-shot-down-flight-hard/. Liu, H.-Y. 2016. ‘Refining Responsibility: Differentiating Two Types of Responsibility Issues Raised by Autonomous Weapons Systems’. In Autonomous Weapons Systems: Law, Ethics, Policy, edited by N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, and C. Kress, 325–44. Cambridge: Cambridge University Press.



Loh, W., and J. Loh. 2017. ‘Autonomy and Responsibility in Hybrid Systems’. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by P. Lin, R. Jenkins, and K. Abney, 35–50. Oxford: Oxford University Press. London Naval Treaty. 1930. ‘Limitation and Reduction of Naval Armament’. League of Nations Treaty Series. us-treaties/bevans/m-ust000002-1055.pdf. Lubell, N. 2015. ‘The Problem of Imminence in an Uncertain World’. In The Oxford Handbook of the Use of Force in International Law, edited by M. Weller, 697–719. Oxford: Oxford University Press. Maclean, D. 2017. Shoot, Don’t Shoot: Minimising Risk of Catastrophic Error through High Consequence Decision-Making. Canberra: Air Power Development Centre. Makdisi, K., and C. Pison Hindawi. 2017. ‘The Syrian Chemical Weapons Disarmament Process in Context: Narratives of Coercion, Consent and Everything in Between’. Third World Quarterly 38 (8): 1691–709. March, J.G, and J.P. Olsen. 1989. Rediscovering Institutions: The Organizational Basis of Politics. New York: Free Press. March, J.G., and J.P. Olsen. 1998. ‘The Institutional Dynamics of International Political Orders’. International Organization 52 (4): 943–69. Martin, S.B. 2016. ‘Norms, Military Utility, and the Use/Non-Use of Weapons: The Case of Anti-Plant and Irritant Agents in the Vietnam War’. Journal of Strategic Studies 39 (3): 321–64. Marxsen, C. 2017. ‘A Note on Indeterminacy of the Law on Self-Defence Against Non-State Actors’. Heidelberg Journal of International Law 77: 91–4. Maynes, C. 2017. ‘The Unsung Soviet Officer Who Averted Nuclear War’. The World (blog), 21 September. stories/2017-09-21/soviet-officer-who-averted-nuclear-war. McCaig, R.J. 2013. The Legality of Unrestricted Submarine Warfare in the First World War. Cambridge: University of Cambridge. McDonald, J. 2017. Enemies Known and Unknown: Targeted Killings in America’s Transnational War. London: Hurst. McLean, W. 2014. 
‘Drones Are Cheap, Soldiers Are Not: A Cost– Benefit Analysis of War’. The Conversation, 26 June. https:// -analysis-of-war-27924. Melzer, N. 2008. Targeted Killing in International Law, Oxford Monographs in International Law. Oxford: Oxford University Press.



– 2009. ‘Interpretive Guidance on the Notion of Direct Participation in Hostilities under International Humanitarian Law’. International Committee of the Red Cross, Geneva. files/other/icrc-002-0990.pdf. Metz, C. 2018a. ‘As China Marches Forward on AI, the White House Is Silent’. New York Times, 12 February. https://www.nytimes. com/2018/02/12/technology/china-trump-artificial-intelligence.html. – 2018b. ‘Artificial Intelligence Is Now a Pentagon Priority. Will Silicon Valley Help?’ New York Times, 26 August. https://www.nytimes. com/2018/08/26/technology/pentagon-artificial-intelligence.html. Mian, Z. 2017. ‘After the Nuclear Weapons Ban Treaty: A New Disarmament Politics’. Bulletin of the Atomic Scientists (blog), 7 July. after-the-nuclear-weapons-ban-treaty-a-new-disarmament-politics/. Millar, J. 2017. ‘Ethics Settings for Autonomous Vehicles’. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by P. Lin, R. Jenkins, and K. Abney, 20–34. Oxford: Oxford University Press. Miller, W.O. 1980. ‘Law of Naval Warfare’. International Law Studies Series 62: 263–70. Missile Defense Project. 2018. ‘Patriot’. Missile Threat (blog), Center for Strategic and International Studies, 14 June. https://missilethreat.csis. org/system/patriot/. Moir, L. 2015. ‘Action Against Host States of Terrorist Groups’. In The Oxford Handbook of the Use of Force in International Law, edited by M. Weller, 720–36. Oxford: Oxford University Press. Moravec, H. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press. Morozov, E. 2014. To Save Everything, Click Here: Technology, Solutionism and the Urge to Fix Problems That Don’t Exist. London: Penguin Books. Moyes, R. 2016a. ‘Key Elements of Meaningful Human Control’. Background Paper, Article 36, London. wp-content/uploads/2016/04/MHC-2016-FINAL.pdf. – 2016b. ‘Meaningful Human Control over Individual Attacks’. In Expert Meeting: Autonomous Weapon Systems. 
Implications of Increasing Autonomy in the Critical Functions of Weapons, edited by ICRC, 46–52. Geneva: International Committee of the Red Cross. – 2019. ‘Target Profiles’. Discussion Paper, Article 36, London. http:// Müller, Harald, and Carmen Wunderlich. 2018. ‘Not Lost in Contestation: How Norm Entrepreneurs Frame Norm Development in the Nuclear Nonproliferation Regime’. Contemporary Security Policy 39 (3): 341–66.



NATO. 2016. ‘NATO Standard AJP-3.9. Allied Joint Doctrine for Joint Targeting’. uploads/system/uploads/attachment_data/file/628215/20160505nato_targeting_ajp_3_9.pdf. Nath, C., and L. Christie. 2015. ‘Automation in Military Operations’. Research Briefing, 22 October, Parliamentary Office of Science and Technology, UK Parliament. https://post.parliament. uk/research-briefings/post-pn-0511/. New York Times. 2020. ‘Plane Shot Down Because of Human Error, Iran Says’. New York Times, 11 January. https://www.nytimes. com/2020/01/11/world/middleeast/plane-crash.html. Noll, G. 2019. ‘War by Algorithm’. Paper Presented at the Law and the Human Seminar Series, 30 January, University of Kent, Canterbury. Noone, G.P., and D.C. Noone. 2015. ‘The Debate Over Autonomous Weapons Systems’. Case Western Reserve Journal of International Law 47: 25–35. Nye, J.S. 2017. ‘Will the Liberal Order Survive?’ Foreign Affairs 96 (1): 10–16. O’Connell, M.E. 2010. ‘Drones and International Law’. International Debate Series. Washington University Law, Whitney R. Harris World Law Institute. http://law.wustl. edu/harris/documents/OConnellFullRemarksNov23.pdf. Odell, S, and N. McCarthy. 2017. ‘The Stories We Tell about Technology: AI Narratives’. In Verba (Royal Society of London blog), 7 December. the-stories-we-tell-about-technology-ai-narratives/. ODIN. 2020. ‘9K317M Buk-M3 (9K37M3) Russian Medium-Range Air Defense Missile System’. Worldwide Equipment Guide V.2.9.2, OE Data Integration Network. Asset/9K317M_Buk-M3_(9K37M3)_Russian_Medium-Range_Air_ Defense_Missile_System. OECD. 2018. ‘DAC List of ODA Recipients’. OECD Development Co-Operation Directorate. DAC_List_ODA_Recipients2018to2020_flows_En.pdf. Office of Law Revision Counsel of the House of Representatives. 2013. United States Code. 2012 Edition. Volume 5, Title 10 Armed Forces §§14317921. Washington, DC: Government Printing Office. Onuf, N. 1989. World of Our Making: Rules and Rule in Social Theory and International Relations. 
Columbia, SC: University of South Carolina Press.



OPCW. 2018a. ‘Article I. General Obligations’. Chemical Weapons Convention. Organisation for the Prohibition of Chemical Weapons. article-i-general-obligations/. – 2018b. ‘Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare’ (The Geneva Protocol). Organisation for the Prohibition of Chemical Weapons. bio/1925-geneva-protocol. Oudes, C., and W. Zwijnenburg. 2011. Does Unmanned Make Unacceptable? Exploring the Debate on Using Drones and Robots in Warfare. Utrecht: IKV Pax Christi. Paddon Rhoads, E. 2019. ‘Putting Human Rights up Front: Implications for Impartiality and the Politics of UN Peacekeeping’. International Peacekeeping 26 (3): 281–301. Panke, D., and U. Petersohn. 2012. ‘Why International Norms Disappear Sometimes’. European Journal of International Relations 18 (4): 719–42. Parasuraman, R., and D.H. Manzey. 2010. ‘Complacency and Bias in Human Use of Automation: An Attentional Integration’. Human Factors: The Journal of the Human Factors and Ergonomics Society 52 (3): 381–410. Parasuraman, R., and V. Riley. 1997. ‘Humans and Automation: Use, Misuse, Disuse, Abuse’. Human Factors 39 (2): 230–53. Pearce Higgins, A. 1909. The Hague Peace Conferences and Other International Conferences Concerning the Laws and Usages of War. Cambridge: Cambridge University Press. Pearson, T. 2017. ‘The Ultimate Guide to the OODA Loop’. Taylor Pearson (blog). 2017. Permanent Mission of Brazil. 2019. ‘Intervention by Brazil. Group of Governmental Experts on LAWS – 2019. “Human Element” – Agenda item 5(d)’. Disarmament-fora/ccw/2019/gge/statements/26March_Brazil5c.pdf Permanent Mission of Greece. 2018. ‘Statement. Group of Governmental Experts on Lethal Autonomous Weapon Systems. Geneva, 9-13 April 2018’. Permanent Mission of the United States. 2017. 
‘Joint Press Statement from the Permanent Representatives to the United Nations of the United States, United Kingdom, and France Following the Adoption of a Treaty Banning Nuclear Weapons’, 7 July. joint-press-statement-from-the-permanent-representatives-to-the-unitednations-of-the-united-states-united-kingdom-and-france-following-theadoption/.



– 2018a. ‘Humanitarian Benefits of Emerging Technologies in the Area of Lethal Autonomous Weapon Systems’. Submitted by the United States. UN Document No. CCW/GGE.1/2018/WP.4.
– 2018b. ‘CCW: US Opening Statement at the Group of Governmental Experts Meeting on Lethal Autonomous Weapons Systems’. US Mission to International Organizations in Geneva, 9 April.
– 2018c. ‘Human-Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: Submitted by the United States’. UN Document CCW/GGE.2/2018/WP.4.
Perrow, C. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.
Peter, M. 2011. ‘The Politics of Self-Defense: Beyond a Legal Understanding of International Norms’. Cambridge Review of International Affairs 24 (2): 245–64.
Peters, A. 1996. ‘Blinding Laser Weapons’. Medicine and War 12 (2): 107–13.
Peterson, N. 2020. ‘Iran Admits to Shooting Down Ukrainian Airliner’. The Daily Signal, 10 January. why-it-looks-like-iran-shot-down-ukrainian-airliner/.
Pifer, S. 2016. ‘What’s the Deal with Senate Republicans and the Test Ban Treaty?’ Brookings Institution (blog), 26 September. whats-the-deal-with-senate-republicans-and-the-test-ban-treaty/.
Piller, C. 2003. ‘Vaunted Patriot Missile Has a “Friendly Fire” Failing’. Los Angeles Times, 21 April.
Ponomarenko, I. 2020. ‘Explainer: How Could an Iranian Tor-M1 Missile System Down Flight PS752?’. Kyiv Post, 10 January. https://www.kyivpost.com/ukraine-politics/explainer-how-could-an-iranian-tor-m1-missile-system-down-flight-ps752.html.
Pontin, J. 2018. ‘Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning’. Wired, 2 February. greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning/.
Potter, W.C. 2017. ‘Disarmament Diplomacy and the Nuclear Ban Treaty’. Survival: Global Politics and Strategy 59 (4): 75–108.
Pouliot, V. 2008. ‘The Logic of Practicality: A Theory of Practice of Security Communities’. International Organization 62 (2): 257–88.



Price, R. 1995. ‘A Genealogy of the Chemical Weapons Taboo’. International Organization 49 (1): 73–103.
Price, R., and N. Tannenwald. 1996. ‘Norms and Deterrence: The Nuclear and Chemical Weapons Taboos’. In The Culture of National Security: Norms and Identity in World Politics, edited by P.J. Katzenstein. New Directions in World Politics. New York: Columbia University Press.
Puetter, U., and A. Wiener. 2007. ‘Accommodating Normative Divergence in European Foreign Policy Co-Ordination: The Example of the Iraq Crisis’. Journal of Common Market Studies 45 (5): 1065–88.
Ralph, J., and J. Gifkins. 2017. ‘The Purpose of United Nations Security Council Practice: Contesting Competence Claims in the Normative Context Created by the Responsibility to Protect’. European Journal of International Relations 23 (3): 630–53.
Rayroux, An. 2014. ‘Speaking EU Defence at Home: Contentious Discourses and Constructive Ambiguity’. Cooperation and Conflict 49 (3): 386–405.
Reddy, R.S. 2016. ‘India and the Challenge of Autonomous Weapons’. Report, Carnegie India. CP275_Reddy_final.pdf.
Reinold, T. 2011. ‘State Weakness, Irregular Warfare, and the Right to Self-Defense Post-9/11’. American Journal of International Law 105 (2): 244–86.
Reus-Smit, C. 2011. ‘Obligation through Practice’. International Theory 3 (2): 339–47.
Risse, T. 2000. ‘“Let’s Argue!”: Communicative Action in World Politics’. International Organization 54 (1): 1–39.
Risse, T., M. Green Cowles, and J.A. Caporaso, eds. 2001. Transforming Europe: Europeanization and Domestic Change. Ithaca, NY: Cornell University Press.
Risse, T., S.C. Ropp, and K. Sikkink, eds. 1999a. The Power of Human Rights: International Norms and Domestic Change. Cambridge: Cambridge University Press.
Risse, T., and K. Sikkink. 1999b. ‘The Socialisation of Human Rights Norms into Domestic Practices: Introduction’. In The Power of Human Rights: International Norms and Domestic Change, edited by T. Risse, S. Ropp, and K. Sikkink, 1–38. Cambridge: Cambridge University Press.
Risse-Kappen, T. 1994. ‘Ideas Do Not Float Freely: Transnational Coalitions, Domestic Structures, and the End of the Cold War’. International Organization 48 (2): 185–214.



Roberts, J. 2016. ‘Thinking Machines: The Search for Artificial Intelligence’. Distillations (blog). Summer. K4XU-REX6.
Roff, H. 2014. ‘The Strategic Robot Problem: Lethal Autonomous Weapons in War’. Journal of Military Ethics 13 (3): 211–27.
– 2015a. ‘Autonomous or “Semi” Autonomous Weapons? A Distinction Without Difference’. Huffington Post (blog), 16 January.
– 2015b. ‘Lethal Autonomous Weapons and Jus Ad Bellum Proportionality’. Case Western Reserve Journal of International Law 47 (1): 37–52.
– 2016. ‘Weapons Autonomy Is Rocketing’. Foreign Policy, 28 September. weapons-autonomy-is-rocketing/.
Roff, H.M., and R. Moyes. 2016. ‘Meaningful Human Control, Artificial Intelligence and Autonomous Weapons’. Briefing Paper, Article 36, London. MHC-AI-and-AWS-FINAL.pdf.
Roland, A. 2016. War and Technology: A Very Short Introduction. New York, NY: Oxford University Press.
Rosert, E. 2019. ‘Norm Emergence as Agenda Diffusion: Failure and Success in the Regulation of Cluster Munitions’. European Journal of International Relations 25 (4): 1103–31.
Rosert, E., U. Becker-Jakob, G. Franceschini, and A. Schaper. 2013. ‘Arms Control Norms and Technology’. In Norm Dynamics in Multilateral Arms Control: Interests, Conflicts, and Justice, edited by H. Müller and C. Wunderlich, 109–41. Studies in Security and International Affairs. Athens, GA: University of Georgia Press.
Rosert, E., and F. Sauer. 2019. ‘Prohibiting Autonomous Weapons: Put Human Dignity First’. Global Policy 10 (3): 370–5.
– 2021. ‘How (Not) to Stop the Killer Robots: A Comparative Analysis of Humanitarian Disarmament Campaign Strategies’. Contemporary Security Policy 42 (1): 4–29.
Rublee, M.R., and A. Cohen. 2018. ‘Nuclear Norms in Global Governance: A Progressive Research Agenda’. Contemporary Security Policy 39 (3): 317–40.
Russell, R.L. 2005. ‘Iraq’s Chemical Weapons Legacy: What Others Might Learn from Saddam’. The Middle East Journal 59 (2): 187–208.
Ruys, T. 2010. ‘Armed Attack’ and Article 51 of the UN Charter: Evolutions in Customary Law and Practice. Cambridge: Cambridge University Press.



Sabbagh, D., and M. Safi. 2020. ‘Iran Crash: Plane Shot down by Accident, Western Officials Believe’. The Guardian, 9 January. tehran-crash-plane-downed-by-iranian-missile-western-officials-believe.
Sample, I. 2018. ‘Thousands of Leading AI Researchers Sign Pledge against Killer Robots’. The Guardian, 18 July. thousands-of-scientists-pledge-not-to-help-build-killer-ai-robots.
Sartor, G., and A. Omicini. 2016. ‘The Autonomy of Technological Systems and Responsibilities for Their Use’. In Autonomous Weapons Systems: Law, Ethics, Policy, edited by N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, and C. Kress, 39–74. Cambridge: Cambridge University Press.
Sauer, F., and N. Schörnig. 2012. ‘Killer Drones: The “Silver Bullet” of Democratic Warfare?’ Security Dialogue 43 (4): 363–80.
Sauer, T., and J. Pretorius. 2014. ‘Nuclear Weapons and the Humanitarian Approach’. Global Change, Peace and Security 26 (3): 233–50.
Scharre, P. 2016. ‘Autonomous Weapons and Operational Risk’. Ethical Autonomy Project. Washington, DC: Center for a New American Security.
– 2018. Army of None: Autonomous Weapons and the Future of War. New York; London: W.W. Norton.
Scharre, P., and M.C. Horowitz. 2015. ‘An Introduction to Autonomy in Weapon Systems’. Working Paper, Center for a New American Security.
Schimmelfennig, F. 2000. ‘International Socialization in the New Europe: Rational Action in an Institutional Environment’. European Journal of International Relations 6 (1): 109–39.
Schippers, B. 2020. ‘Autonomous Weapons Systems and International Ethics’. In The Routledge Handbook to Rethinking Ethics in International Relations, 312–25. London: Routledge.
Schmidt, D.R., and L. Trenta. 2018. ‘Changes in the Law of Self-Defence? Drones, Imminence, and International Norm Dynamics’. Journal on the Use of Force and International Law 5 (2): 201–45.
Schmitt, M.N., and J.S. Thurnher. 2013. ‘“Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict’. Harvard National Security Journal 4: 231–81.
Schuller, A.L. 2017. ‘At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law’. Harvard National Security Journal 8: 379–425.



Schwarz, E. 2018a. Death Machines: The Ethics of Violent Technologies. Kindle ed. Manchester: Manchester University Press.
– 2018b. ‘Technology and Moral Vacuums in Just War Theorising’. Journal of International Political Theory 14 (3): 280–98.
Schweiger, E. 2015. ‘The Risks of Remaining Silent: International Law Formation and the EU Silence on Drone Killings’. Global Affairs 1 (3): 269–75.
Sciutto, J., P. Brown, B. Starr, Z. Cohen, and P.P. Murphy. 2020. ‘Video Appears to Show Missile Strike as Canada and UK Say They Have Intel Iran Shot down Ukrainian Plane’. CNN, 10 January. https://edition.cnn.com/2020/01/09/politics/is-iran-ukraine-plane/index.html.
Seet, B., and T.Y. Wong. 2001. ‘Military Laser Weapons: Current Controversies’. Ophthalmic Epidemiology 8 (4): 215–26.
Sehrawat, V. 2017. ‘Autonomous Weapon System: Law of Armed Conflict (LOAC) and Other Legal Challenges’. Computer Law and Security Review 33 (1): 38–56.
Senn, M., and J. Troy. 2017. ‘The Transformation of Targeted Killing and International Order’. Contemporary Security Policy 38 (2): 175–211.
Sharkey, A. 2019. ‘Autonomous Weapons Systems, Killer Robots and Human Dignity’. Ethics and Information Technology 21 (2): 75–87.
Sharkey, N. 2010. ‘Saying “No!” to Lethal Autonomous Targeting’. Journal of Military Ethics 9 (4): 369–83.
– 2012. ‘The Evitability of Autonomous Robot Warfare’. International Review of the Red Cross 94 (886): 787–99.
– 2016. ‘Staying in the Loop: Human Supervisory Control of Weapons’. In Autonomous Weapons Systems: Law, Ethics, Policy, edited by N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, and C. Kress, 23–38. Cambridge: Cambridge University Press.
Shaw, I.G.R. 2017. ‘Robot Wars: US Empire and Geopolitics in the Robotic Age’. Security Dialogue 48 (5): 451–70.
Singer, P.W. 2010. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York: Penguin.
Singh Gill, A. (Ambassador). 2018. ‘Chair’s Summary of the Discussion on Agenda Item 6 (a), 9 and 10 April 2018; Agenda Item 6 (b), 11 April 2018 and 12 April 2018; Agenda Item 6 (c), 12 April 2018; Agenda Item 6 (d), 13 April 2018’. UN-CCW. documents/Disarmament-fora/ccw/2018/gge/documents/Chairs-summary-of-the-discussions-.pdf.
Sislin, J.D. 2018. ‘Chemical Warfare in the Interwar Period: Insights for the Present?’ The Nonproliferation Review 25 (3–4): 185–202.



Sky News. 2020. ‘Iran Plane Crash: Canada Says Evidence Shows Jet Was Shot down by Iranian Missile – but Iran Denies It’. Sky News, 10 January.
Smith, R.J. 2019. ‘Hypersonic Missiles Are Unstoppable. And They’re Starting a New Global Arms Race’. The New York Times Magazine, 19 June.
Sparrow, R. 2007. ‘Killer Robots’. Journal of Applied Philosophy 24 (1): 62–77.
– 2009. ‘Predators or Plowshares? Arms Control of Robotic Weapons’. IEEE Technology and Society Magazine 28 (1): 25–9.
– 2016. ‘Robots and Respect: Assessing the Case Against Autonomous Weapon Systems’. Ethics and International Affairs 30 (1): 93–116.
Sputnik News. 2007. ‘Iran Successfully Tests Russian TOR-M1 Missiles’. Sputnik News, 7 February. russia/2007020760358702/.
Starski, P. 2017. ‘Silence within the Process of Normative Change and Evolution of the Prohibition on the Use of Force: Normative Volatility and Legislative Responsibility’. Journal on the Use of Force and International Law 4 (1): 14–65.
Steffen, D. 2004. ‘The Holtzendorff Memorandum of 22 December 1916 and Germany’s Declaration of Unrestricted U-Boat Warfare’. The Journal of Military History 68 (1): 215–24.
Stokes, D. 2018. ‘Trump, American Hegemony and the Future of the Liberal International Order’. International Affairs 94 (1): 133–50.
Strawser, B.J. 2010. ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles’. Journal of Military Ethics 9 (4): 342–68.
Sturma, M. 2009. ‘Atrocities, Conscience, and Unrestricted Warfare: US Submarines during the Second World War’. War in History 16 (4): 447–68.
Suchman, L. 2018. ‘Unpriming the Pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons’. Robot Futures (blog), 7 April. https://robotfutures.wordpress.com/2018/04/07/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/.
Suchman, L., and J. Weber. 2016. ‘Human–Machine Autonomies’. In Autonomous Weapons Systems: Law, Ethics, Policy, edited by N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, and C. Kress, 75–102. Cambridge: Cambridge University Press.



Talbot, B., R. Jenkins, and D. Purves. 2017. ‘When Robots Should Do the Wrong Thing’. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by P. Lin, R. Jenkins, and K. Abney, 258–73. Oxford: Oxford University Press.
Talbot Jensen, E. 2018. ‘The Human Nature of International Humanitarian Law’. ICRC Humanitarian Law and Policy (blog), 23 August. human-nature-international-humanitarian-law/.
Tams, C.J. 2009. ‘The Use of Force against Terrorists’. European Journal of International Law 20 (2): 359–97.
Tannenwald, N. 1999. ‘The Nuclear Taboo: The United States and the Normative Basis of Nuclear Non-Use’. International Organization 53 (3): 433–68.
Taylor, D. 2009. ‘Normativity and Normalization’. Foucault Studies 7: 45–63.
Thakur, R. 2017. ‘The Nuclear Ban Treaty: Recasting a Normative Framework for Disarmament’. Washington Quarterly 40 (4): 71–95.
Thiel, C. 2016. Der deutsche U-Bootkrieg im 2. Weltkrieg. Berlin: epubli.
Thurnher, J.S. 2013. ‘The Law That Applies to Autonomous Weapon Systems’. ASIL Insights 17 (4).
Towns, A.E., and B. Rumelili. 2017. ‘Taking the Pressure: Unpacking the Relation between Norms, Social Hierarchies, and Social Pressures on States’. European Journal of International Relations 23 (4): 756–79.
Trapp, K. 2015. ‘Can Non-State Actors Mount an Armed Attack?’ In The Oxford Handbook of the Use of Force in International Law, edited by M. Weller, 679–97. Oxford: Oxford University Press.
Trevithick, J. 2018. ‘US Military Says Chinese Lasers Injured Pilots Flying a C-130 Near Its Base in Djibouti’. The Drive (blog), 3 May.
Tucker, J.B. 2007. War of Nerves: Chemical Warfare from World War I to al-Qaeda. 1st Anchor Books ed. New York, NY: Anchor Books.
UK Ministry of Defence. 2004. ‘Aircraft Accident to Royal Air Force Tornado GR MK4A ZG710’. Military Aircraft Accident Summary. Directorate of Air Staff, London. https://assets.publishing.service. file/82817/maas03_02_tornado_zg710_22mar03.pdf.
– 2017. ‘Joint Doctrine Publication 0-30.2. Unmanned Aircraft Systems’. Development, Concepts and Doctrine Centre, Ministry of Defence, Shrivenham. uploads/attachment_data/file/640299/20170706_JDP_0-30.2_final_CM_web.pdf.
UN-CCW. 2015. ‘Report of the 2015 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS)’. UN Document No. CCW/MSP/2015/3.
– 2017a. ‘Advanced Version. Report of the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)’. UN Document No. CCW/GGE.1/2017/CRP.1.
– 2017b. ‘A “Compliance-Based” Approach to Autonomous Weapon Systems’. Working Paper Submitted by Switzerland to the GGE. UN Document No. CCW/GGE.1/2017/WP.9.
– 2019. ‘Draft Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’. UN Document No. CCW/GGE.1/2019/CRP.1/Rev.2.
UN General Assembly. 1973. ‘Report of the Special Committee on the Question of Defining Aggression’. UN Document No. A/9019.
– 1974. ‘Definition of Aggression’. UN Document No. A/RES/3314 (XXIX).
– 2010a. ‘Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Philip Alston. Addendum: Study on Targeted Killings’. UN Document No. A/HRC/14/24/Add.6.
– 2010b. ‘Interim Report of the Special Rapporteur (Philip Alston) on Extrajudicial, Summary or Arbitrary Executions’. UN Document A/65/321.
– 2017a. ‘Taking Forward Multilateral Nuclear Disarmament Negotiations. Resolution Adopted by the General Assembly on 23 December 2016’. UN Document A/RES/71/258.
– 2017b. Treaty on the Prohibition of Nuclear Weapons. UN Document A/CONF.229/2017/8.
UNIDIR. 2017. ‘The Weaponization of Increasingly Autonomous Technologies: Concerns, Characteristics and Definitional Approaches. A Primer’. UNIDIR Resources, No. 6. United Nations Institute for Disarmament Research, Geneva.
– 2020. ‘The Human Element in Decisions about the Use of Force’. United Nations Institute for Disarmament Research, Geneva.
United Nations. 1945. ‘Charter of the United Nations’.
UN News. 2019. ‘Autonomous Weapons That Kill Must Be Banned, Insists UN Chief’. 25 March. story/2019/03/1035381.



UNODA. 2018a. ‘Comprehensive Nuclear-Test-Ban Treaty (CTBT)’. United Nations Office for Disarmament Affairs. disarmament/wmd/nuclear/ctbt/.
– 2018b. ‘Treaty on the Prohibition of Nuclear Weapons’.
UNOG. 2018. ‘The Convention on Certain Conventional Weapons’. United Nations Office at Geneva. the-convention-on-certain-conventional-weapons/.
UNSC. 2001a. ‘Threats to International Peace and Security Caused by Terrorist Acts’. Resolution 1368, United Nations Security Council.
– 2001b. ‘Threats to International Peace and Security Caused by Terrorist Acts’. Resolution 1373, United Nations Security Council.
– 2001c. ‘Letter Dated 7 October 2001 from the Permanent Representative of the United States of America to the United Nations Addressed to the President of the Security Council’. UN Document S/2001/946.
– 2013. ‘United Nations Mission to Investigate Allegations of the Use of Chemical Weapons in the Syrian Arab Republic: Final Report’, A/68/663. UN Document S/2013/735.
– 2015. ‘Provisional Verbatim Record of the 7565th Meeting’. UN Document S/PV.7565.
UN Secretary-General. 2005. ‘In Larger Freedom: Towards Development, Security and Human Rights for All. Report of the Secretary-General’. UN Document A/59/2005/Add.3. Publications/A.59.2005.Add.3.pdf.
US Congress, House Committee on Government Operations. 1992. ‘Activities of the House Committee on Government Operations. 102nd Congress, First and Second Sessions, 1991–1992. Report 102-1086. Performance of the Patriot Missile in the Gulf War’.
US Department of Defense. 1988. ‘Investigation Report. Formal Investigation into the Circumstances Surrounding the Downing of Iran Air Flight 655 on 3 July 1988’.
– 2005. ‘Report of the Defense Science Board Task Force on Patriot System Performance. Report Summary’. Office of the Under-Secretary of Defense for Acquisition, Technology, and Logistics, Washington, DC.
– 2012. ‘Directive 3000.09 on Autonomy in Weapons Systems’, 21 November.
– 2013. ‘Unmanned Systems Integrated Roadmap: FY2013-2038’. Report.



– 2018. ‘Summary of the 2018 National Defense Strategy of the United States of America: Sharpening the American Military’s Competitive Edge’.
US Department of Justice. 2013. ‘Department of Justice White Paper. Lawfulness of a Lethal Operation Directed Against a US Citizen Who Is a Senior Operational Leader of Al-Qa’ida or An Associated Force’.
US Department of State. 2018. ‘Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water’. Current Treaties and Agreements.
US House of Representatives. 1992. ‘The July 3rd, 1988 Attack by the Vincennes on an Iranian Aircraft. Hearing before the Investigations Subcommittee and the Defense Policy Panel of the Committee on Armed Services’.
US Library of Congress. 2018. ‘Treaty of Peace with Germany (Treaty of Versailles, 1919)’.
US Navy Office of Information. 2019. ‘Aegis Weapon System’. America’s Navy, 10 January. Display-FactFiles/Article/2166739/aegis-weapon-system/.
US Office of the Press Secretary. 2012. ‘Remarks by the President to the White House Press Corps’. White House Briefing, Office of the Press Secretary, 20 August. remarks-president-white-house-press-corps.
Vallor, S. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York, NY: Oxford University Press.
Vallor, S., and G.A. Bekey. 2017. ‘Artificial Intelligence and the Ethics of Self-Learning Robots’. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by P. Lin, R. Jenkins, and K. Abney, 338–53. New York: Oxford University Press.
van Courtland Moon, J.E. 1996. ‘United States Chemical Warfare Policy in World War II: A Captive of Coalition Policy?’ The Journal of Military History 60 (3): 495–511.
– 2008. ‘The Development of the Norm against the Use of Poison: What Literature Tells Us’. Politics and the Life Sciences 27 (1): 55–77.
Verbruggen, M. 2019. ‘The Role of Civilian Innovation in the Development of Lethal Autonomous Weapon Systems’. Global Policy 10: 338–42.



Vilches, D., G. Alburquerque, and R. Ramirez-Tagle. 2016. ‘One Hundred and One Years after a Milestone: Modern Chemical Weapons and World War I’. Educación Química 27 (3): 233–6.
Vincent, J. 2017. ‘Putin Says the Nation That Leads in AI “Will Be the Ruler of the World”’. The Verge, 4 September. https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world.
Von Clausewitz, C. 1984. On War. Princeton, NJ: Princeton University Press.
Waever, O. 1996. ‘The Rise and Fall of the Inter-Paradigm Debate’. In International Theory: Positivism and Beyond, edited by S. Smith, K. Booth, and M. Zalewski, 149–85. Cambridge: Cambridge University Press.
Walker, D.M. 2017. ‘“An Agonizing Death”: 1980s US Policy on Iraqi Chemical Weapons during the Iran–Iraq War’. The Journal of the Middle East and Africa 8 (2): 175–96.
Walsh, J.I. 2015. ‘Political Accountability and Autonomous Weapons’. Research and Politics 2 (4): 1–6.
– 2018. ‘The Rise of Targeted Killing’. Journal of Strategic Studies 41 (1–2): 143–59.
Walsh, T. 2017a. Android Dreams: The Past, Present and Future of Artificial Intelligence. London: Hurst.
– 2017b. ‘Elon Musk Is Wrong. The AI Singularity Won’t Kill Us All’. Wired, 20 September. elon-musk-artificial-intelligence-scaremongering.
– 2018. Machines That Think: The Future of Artificial Intelligence. Amherst, NY: Prometheus Books.
Walter, T. 2018. ‘The Road (Not) Taken? How the Indexicality of Practice Could Make or Break the “New Constructivism”’. European Journal of International Relations 25 (2): 538–61.
Ward, T. 2000. ‘Norms and Security: The Case of International Assassination’. International Security 25 (1): 105–33.
Warren, A., and I. Bode. 2014. Governing the Use-of-Force in International Relations: The Post-9/11 US Challenge on International Law. Basingstoke: Palgrave Macmillan.
– 2015. ‘Altering the Playing Field: The US Redefinition of the Use of Force’. Contemporary Security Policy 36 (2): 174–99.
Warren, C. 2012. ‘Gas, Gas, Gas! The Debate over Chemical Warfare between the World Wars’. Federal History 4: 43–60.
Washington Post. 2013. ‘Full Transcript: President Obama’s Press Conference with Swedish Prime Minister Fredrik Reinfeldt in Stockholm’. Washington Post, 4 September.



Washington Treaty. 2018. ‘Treaty Relating to the Use of Submarines and Noxious Gases in Warfare, Washington, 6 February 1922’. Human Rights Library, University of Minnesota, Minneapolis, MN.
Watts, B.D. 2007. ‘Six Decades of Guided Munitions and Battle Networks: Progress and Prospects’. Center for Strategic and Budgetary Assessments, Washington, DC. documents/2007.03.01-Six-Decades-Of-Guided-Weapons.pdf.
Webster, D. 1841. ‘Extract from Note of April 24, 1841 (from Daniel Webster to Henry Fox)’. The Avalon Project at Yale Law School.
Welsh, S. 2015. ‘Machines with Guns: Debating the Future of Autonomous Weapons Systems’. The Conversation, 12 April. https://theconversation.com/machines-with-guns-debating-the-future-of-autonomous-weapons-systems-39795.
Wendt, A. 1987. ‘The Agent-Structure Problem in International Relations Theory’. International Organization 41 (3): 335–70.
– 1992. ‘Anarchy Is What States Make of It: The Social Construction of Power Politics’. International Organization 46 (2): 391–425.
– 1994. ‘Collective Identity Formation and the International State’. The American Political Science Review 88 (2): 384–96.
– 2004. ‘The State as Person in International Theory’. Review of International Studies 30 (2): 289–316.
Wiener, A. 2004. ‘Contested Compliance: Interventions on the Normative Structure of World Politics’. European Journal of International Relations 10 (2): 189–234.
– 2007a. ‘Contested Meanings of Norms: A Research Framework’. Comparative European Politics 5: 1–17.
– 2007b. ‘The Dual Quality of Norms and Governance Beyond the State: Sociological and Normative Approaches to “Interaction”’. Critical Review of International Social and Political Philosophy 10 (1): 47–69.
– 2009. ‘Enacting Meaning-in-Use: Qualitative Research on Norms and International Relations’. Review of International Studies 35 (1): 175–93.
– 2014. A Theory of Contestation. Berlin: Springer.
– 2017. ‘A Theory of Contestation: A Concise Summary of Its Arguments and Concepts’. Polity 49 (1): 109–25.
– 2018. Contestation and Constitution of Norms in Global International Relations. Cambridge: Cambridge University Press.



Wiener, E.L. 1981. ‘Complacency: Is the Term Useful for Air Safety?’ In Proceedings of the 26th Corporate Aviation Safety Seminar. Denver, CO: Flight Safety Foundation, Inc.
Wight, C. 2006. Agents, Structures and International Relations: Politics as Ontology. Cambridge Studies in International Relations. Cambridge: Cambridge University Press.
Williams, G.D. 2012. ‘Piercing the Shield of Sovereignty: An Assessment of the Legal Status of the “Unwilling or Unable” Test’. University of New South Wales Law Journal 36 (2): 619–41.
Williams, J. 2015. ‘Democracy and Regulating Autonomous Weapons: Biting the Bullet While Missing the Point?’ Global Policy 6 (3): 179–89.
Williams, P. 2009. ‘The “Responsibility to Protect”, Norm Localisation, and African International Society’. Global Responsibility to Protect 1 (3): 392–416.
Winfield, A.F.T. 2012. Robotics: A Very Short Introduction. Very Short Introductions, Volume 330. Oxford: Oxford University Press.
Winston, C. 2018. ‘Norm Structure, Diffusion, and Evolution: A Conceptual Approach’. European Journal of International Relations 24 (3): 638–61.
Wirtz, J.J. 2019. ‘Nuclear Disarmament and the End of the Chemical Weapons “System of Restraint”’. International Affairs 95 (4): 785–99.
Wright, J. 2017. ‘The Modern Law of Self-Defence’. Speech, EJIL: Talk! (blog), 11 January. the-modern-law-of-self-defence/.
Wrong, D.H. 1994. The Problem of Order: What Unites and Divides Society. New York: The Free Press.
Wyatt, A. 2020. ‘Charting Great Power Progress toward a Lethal Autonomous Weapon System Demonstration Point’. Defence Studies 20 (1): 1–20.
Yale Law School. 2008. ‘The Trial of German Major War Criminals: Proceedings of the International Military Tribunal Sitting at Nuremberg, Germany’. Text, The Avalon Project. Documents in Law, History and Diplomacy. juddoeni.asp.
Yeung, K. 2019. ‘Why Worry about Decision-Making by Machine?’ In Algorithmic Regulation, edited by K. Yeung and M. Lodge, 21–48. Oxford: Oxford University Press.
Yeung, K., and M. Lodge, eds. 2019. Algorithmic Regulation. 1st ed. New York, NY: Oxford University Press.



Zarif, J. (@JZarif). 2020. ‘A sad day. Preliminary conclusions of internal investigation by Armed Forces’. Twitter, 11 January, 4:05 a.m.
Zimmermann, L. 2016. ‘Same Same or Different? Norm Diffusion Between Resistance, Compliance, and Localization in Post-Conflict States’. International Studies Perspectives 17 (1): 98–115.
– 2017. Global Norms with a Local Face: Rule-of-Law Promotion and Norm Translation. Cambridge: Cambridge University Press.
Zöckler, M.C. 1998. ‘Commentary on Protocol IV on Blinding Laser Weapons’. Yearbook of International Humanitarian Law 1: 333–40.
Zürn, M., and J.T. Checkel. 2005. ‘Getting Socialized to Build Bridges: Constructivism and Rationalism, Europe and the Nation-State’. International Organization 59 (4): 1045–79.


Entries in italics denote figures; entries in bold denote tables. accountability in AWS deployment, 13, 150, 161, 199, 215; behavioural uncertainty and, 111, 133; overview of, 45; responsibility and, 54–5, 219; transparency of AWS development, 13, 123, 150, 199. See also responsibility Aegis air defence system, 17, 188, 207; Iranian airline disaster and, 172–6. See also air defence systems Afghanistan war, 118–19, 124, 204 agency of AI, 4, 14, 29, 37, 40, 166–8, 193, 196–7. See artificial intelligence of AWS air defence systems, 14–16, 18, 46, 51, 154, 157–8, 161–72, 179–223; automated and autonomous features in, 166, 192–3, 207; definition of, 169; and fratricides, 182–91; and legal–public appropriateness, 13, 147, 150, 153, 158, 198–202; Patriot system, 18, 171–3, 181–92, 205, 207, 214, 217; and procedural– organisational appropriateness, 13, 148, 150–1, 153, 198–202, 213;

sub-system of, 171; and worst case scenario, 200, 202, 207–8 algorithms, 23–4, 30, 45, 55, 185, 195–7, 207; black box, 24, 168, 197; data hungry, 24; and human doubt, 167–9; international humanitarian law and, 198, 200, 206; machine learning and, 21, 23, 24, 55, 168; self-learning, 55; targeting, 42, 53 AlphaGo Zero, 25 anti-personnel, 18, 91, 92 appropriate, 11; action, 11, 33, 97, 111, 135–7, 140, 148–9, 151; changing perspective of what is, 61, 63; human–machine interaction, 166, 198; use of force, 4, 97–9, 103, 152–3, 166, 195–7, 202. See also norms appropriateness, 97; evolving standards of, 57–9, 61, 105, 113, 141, 150, 194, 216–19; legal, 97; legal–public, 13, 147, 150, 153, 158, 198–202; logic of, 135; procedural–organisational, 13, 148, 150–1, 153, 198–202, 213;



standards of, 12–15, 63, 103, 133–6, 145–7, 153, 190, 196–7. See also norms Arkin, Ronald, 43, 48 armed drones, 11, 36, 52, 124. See also drones arms control, 63, 77–9, 82, 85, 215 arms race, 38, 85, 48; AI, 218 Article 36, 27, 34–5, 40–1, 57, 159 artificial general intelligence, 21 artificial intelligence (AI), 4, 17–18, 21, 22, 198, 202, 204, 215; advanced, 21; autonomy and responsibility in, 54; basic understanding of, 21; dual-use nature of, 25; international humanitarian law obligations and, 160; international human rights law obligations and, 47, 211; Moravec Paradox and, 22; narrow, 21, 23, 168; singularity and, 22; strong versus weak, 21, 25; weaponisation of, 4, 63 Asaro, Peter, 17, 42, 49–51, 205–6 Assad, Bashar al-, 77, 80 atomic bomb, 81. See also nuclear weapons attack and autonomous weapons, 17, 26–8, 30, 42, 44, 163, 164, 170 attribution, 11, 106–7, 119, 129, 130; after 9/11, 118–19; expansionist understanding of, 118-19; indeterminacy of international law and, 213; narrow understanding of, 117; thresholds, 117, 118, 119; UN Charter and, 117; unwilling and unable and, 120–1, 127 automated, 3; and autonomous features, 5, 16, 18, 26, 57, 156–8; features and human error, 46; features in air defence systems, 14, 166, 170, 172, 214–5, 220–3; highly, 19; systems, 33, 49

automatic, 33, 177, 178, 179, 182, 187; mode, 169, 181, 183–4, 207 automation, 173, 207; as ‘appropriate’, 200–3; autonomy and, 19, 158, 169, 201, 208, 222; bias and trust, 186–7, 189, 192; complacency, 186–7; in air defence systems, 170–1, 180, 188, 193 autonomous, 19–20; capabilities, 166, 172; cars, 24, 219; driving, 54, 55, 219; functions, 4, 187, 209; ground vehicles, 5; technology, 181, 205; systems, 19–20, 26, 44, 51, 184, 186, 204 autonomous features, 3–5, 14–15, 18, 58, 60, 95, 169; in air defence systems, 43, 154, 169, 178; and complexity, 181, 188; in existing weapons, 36, 133, 195, 201–4, 214, 223; discourse on, 57, 198–9, 200, 207–9; and failures, 190–3; forms of, 16–17, 25–6, 149–50; human control and, 26, 46, 51, 156–8, 215; increasing integration of, 7–9, 211, 213, 217–21 autonomous weapons systems (AWS), 3; accountability issues and, 13, 44–5, 49, 55, 150, 199, 215; bans on, 9, 30–2, 35, 37–8, 40, 48, 62, 100; components of, 21, 168; development of, 4, 14– 15, 18–19, 33–4, 102, 150, 156–8, 172; great-power competition and, 203–5; human control of, 14–15, 26–9, 34–5, 44–5, 49–51, 153–4, 158–66, 207–23; human dignity and, 39, 47–51, 53–4, 205; increasing number of, 7–12, 201; international humanitarian law and, 6, 36, 40, 44, 160, 198, 200, 206; international human rights law and, 40, 47, 211; international law and, 153; meaningful human

Index control and, 8, 17, 27–9, 34–5, 44–5, 49–51, 156–8, 187–8 (see also under on meaningful human control); operational advantages/ disadvantages of, 14, 29; political and moral debate over, 6, 21, 27, 37, 49; proportionality principle and, 6, 41–4, 47, 206; rationale for development of, 205–6; regulatory framework for, 220, 222; semi-autonomous systems vs, 33, 36; state of debate at the CCW, 30–4; ‘stupid’, 44; types of, 4, 200; variations in terminology and definition of, 18–20; virtue ethics and, 53 autonomy, 19; automation and, 158, 169–70, 188, 193, 200–1, 204, 208, 222; in critical functions of weapons systems, 4, 7, 17–19, 26–9, 169, 221; defining, 25–7; meaning of, 18; Directive 3000.09 focus on levels of, 33; in military robotics, 30; spectrum of, 26; uncertainty in AWS and debate over, 54, 56 banning of autonomous weapons systems, 9, 30–2, 35, 37–8, 40, 48, 62, 100; overview of reasons, 32–3, 48 black box, 24, 168, 197 blinding laser weapons, 31, 38, 90–6, 100, 145, 212 Bostrom, Nick, 22 Brimstone ‘fire and forget’ missiles, 18 Buk system, 172, 176–78 Bureau of Investigative Journalism, 52 Campaign to Stop Killer Robots, 5–6, 9, 30–1, 34, 37, 57, 171–2; debate on LAWS and, 34–9, 161;
and perspective on human control, 162, 164 chemical weapons, 8, 10, 38, 61, 63; and warfare, 61–3; and poisonous gas, 71–80 Chemical Weapons Convention, 76. See also Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction (CWC) China, 109; air defence systems and, 169, 177; blinding laser weapons and, 92–4; chemical weapons and, 74; development of LAWS in, 5, 8, 23, 38–9, 204; great-power competition with, 203–4; hypersonic missiles and, 229; nuclear weapons and, 81, 83, 87; position in the GGE of, 9, 32, 226; understanding of the imminence standard in, 124 civilians, 75, 78, 90; autonomous weapons technology and, 52, 160; combatant/non-combatant distinctions, 41–3, 160, 206; humanitarian concerns in protection of, 152; military necessity vs, 44, 46 civil society, 9, 57; debate about LAWS and, 31, 34, 37–8, 161 cluster bombs, 8 Cold War, 16, 75, 80–1, 83, 88, 169, 202, 207 Comprehensive Nuclear-Test-Ban Treaty (CTBT), 84 Convention on Certain Conventional Weapons (CCW), 5, 15, 25, 90, 93, 97, 225; blinding laser weapons and the, 90, 91, 92; existing AWS and debate at, 198–201, 206; focus of debate on LAWS at the, 6, 8–9, 17–19, 30–1, 32, 33–9, 220; international
humanitarian law and the, 31, 47, 147, 211; meaningful human control at the, 153, 156, 158–9, 161, 213–14; openness of debate at the, 226 Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction (CWC), 76 cruise missile, 181–2
data, 176, 186, 205; big, 23, 204; catalogue of air defence systems, 153, 171; collection, 20, 219; link, 228; and machine learning, 24; meta, 28; personal, 47; radar, 174, 180–1; sensor, 4; set, 24, 43–4, 185, 196; training, 23–4, 25, 168–9 dazzler weapons, 94–5 decision-making, 37, 99; in air defence systems, 175, 184; and autonomy, 21, 26, 43–4, 53–4, 205, 222; human, 4, 7, 17, 29, 41, 100, 135, 205; human control and, 161, 162, 163, 166, 170, 172, 189, 191–2; military, 160; norm emergence and, 132, 141–3; targeting, 191 dehumanization, in autonomous weapons systems, 49–51 deliberation, 50, 97, 165; and decision-making, 97, 192, 208–9; human, 49–50, 160, 162; and international law, 121, 125, 213, 217; and norms, 97–8, 142–3, 148–9, 194–5, 220–1; and practices, 140–1, 194–5 digitalisation, 3 distinction, 113, 121, 125, 144, 164, 200; autonomous weapons systems and, 42, 49, 51, 56; between autonomous and semi-autonomous, 33; between deliberative and procedural norms, 149, 198; between interests and norms, 135; principle of, 6, 41–4, 98, 100, 115, 206 drones, 4, 6, 8, 11, 14; self-defence and, 105–7, 116, 124; targeted killing and, 6, 11, 107, 114, 116, 124, 128 dual-use, 7, 24, 38, 56, 94–5
ethics, 218; and autonomous weapons systems, 40, 48, 51, 52, 54, 56, 218; and human dignity, 48–9, 51, 53–4; and international law, 40, 48–9, 52, 56, 59, 99; virtue, 52–3 First World War, 64; and chemical weapons, 10, 62, 71–4, 82, 96–7; and submarine warfare, 10, 64–9, 71, 73, 96–7 Flockhart, Trine, 109, 137, 151 Foucault, Michel, 196 France: air defence systems and, 169; blinding laser weapons and, 92, 94; chemical weapons and, 72, 75; nuclear weapons and, 81, 83, 89–90; position in the GGE on LAWS, 35; submarine warfare and, 63–4, 66–8 Garcia, Denise, 12, 33, 57, 100–1, 104, 132, 215 Geneva Convention, 93 Germany, 18, 35, 63–6, 68, 72–5, 92, 94 Global North, 31, 32, 89 Global South, 88–9; and the CCW, 31, 32 great-power competition, 202–4 Gulf War, 16, 94, 183, 186 Guterres, Antonio, 6
Hague Peace Conference, 63, 72 Heyns, Christof, 4, 8, 16–17, 20, 30, 41, 47–50, 150 human approval, 163–4 human control: and air defence systems, 170–2, 179–80; challenges to, 191–3; and debate on LAWS, 8, 15, 17, 34–5, 158–66, 200, 214; as an emerging norm, 14–15, 153–4, 156–8, 188, 193–7, 218; and ethical questions, 49–51; five levels of, 27, 163; human–machine interaction and, 50–1, 166–7, 184, 198; immediate, 17, 26, 29, 35, 63, 157; loss of, 17, 63, 172, 196–7; meaningful, 27–9, 191, 207–10, 214–23; meaningless, 14, 191, 193, 208, 214, 221–2; operational definitions of, 162; quality of, 15, 157, 165; situational awareness and, 188–90; specific targeting situations and, 14, 166, 171, 183, 188, 214; training and, 190; without, 44–5 human dignity, 39, 47–51, 53–4, 205 human doubt, 167, 169 human–machine interaction, 162; complexity and, 15, 164, 168, 188–9, 191–2, 201, 214, 223; human control and, 162 human operators: accountability and, 46, 181–2; Aegis and, 17, 174–5; air defence systems and, 16–17, 46, 158, 169–71, 178–83, 188, 215, 223; automation bias of, 186–7, 191, 194–5; changing role of, 158, 191, 208–9; and human–machine interaction, 160, 162, 222–3; and levels of human control, 163–4; in the loop, 170, 183; on-the-loop, 170, 187; reaction time and, 170, 192, 203;
training of, 178, 189, 192, 223 Human Rights Watch, 25, 30, 77, 93–5, 170 imaginary, 203 imminence, 11, 114, 127, 130, 213; broadening the scope of, 106–7, 124; counterterrorism and, 116, 123–4; and pre-emptive self-defence, 121–2, 129 interests, 19, 223; and norms, 115, 123, 133–6 International Atomic Energy Agency (IAEA), 86–7 International Committee for Robot Arms Control (ICRAC), 9; debate on LAWS and, 30, 57 International Committee of the Red Cross (ICRC), 34, 88, 90–1, 93; debate on LAWS and, 41–2, 52, 57, 94–5, 161, 162, 164, 170 International Conference on Naval Limitation, 66. See also Washington Naval Conference international humanitarian law, 6, 36, 38, 40, 104, 206; and human control, 31, 160; and legal–public appropriateness, 198, 200; and nuclear weapons, 145; and the principle of distinction, 115; and the principle of necessity, 44 international law, 5, 10; constructive ambiguity and, 165; contested areas in, 6, 56, 115–16, 127, 129, 156, 212; contested areas on the use of force in, 11, 105–7, 130; critical approaches to, 63, 112, 116, 133, 141–2; customary, 6, 41, 57, 105, 120; human judgment in, 40, 47, 67; indeterminacy of, 104, 110, 113, 129, 213; international order and, 10, 102–4, 108–9, 115, 212; interpretation of, 11, 96, 104–6,
112–15, 118, 120–1, 123, 212; jus ad bellum and, 11, 40, 47, 106–7, 115, 166, 194, 211–12; jus in bello and, 42, 47, 115–16, 166, 194, 211–12; legal justification practices, 118; practices and, 6, 10–13, 56–7, 74–5, 80, 95–9, 102–15, 129–31, 223; practices and formal, 11, 71, 99, 197, 214; practices of warfare and, 73, 96–7; silence in, 127; structure of, 108; targeted killing and, 125–8; use-of-force practices and, 9, 60–1, 65, 70, 120, 149, 194 international order, 5; definition of, 109, 212; international norms and, 10, 102–4, 108–9, 115, 212, 217. See also order Iran, 75, 81, 86–7, 126, 172–3, 174, 177–81 Iraq: chemical weapons and, 75, 77, 80; debate on LAWS and, 9, 226; 2003 invasion of, 172, 182, 184–6, 204; missiles in, 185–6; US bases in, 179, 181–2; War with Iran, 173 Israel, 183; air defence systems and, 169; and the attribution standard, 118–19; blinding laser weapons and, 92, 94; and broadening self-defence, 122, 124–6; chemical weapons and, 76–7; debate on LAWS and, 35; development of LAWS and, 18; nuclear weapons and, 81, 84, 87; and targeted killing, 125–6, 128 Italy, 64, 66, 67, 72, 74 Japan: air defence systems and, 169, 182; blinding laser weapons and, 92; chemical weapons and, 72, 74–5, 99; nuclear weapons and, 81; submarine warfare and, 64, 66–70
killer robots, 5, 30, 101; image of, 57, 58 Koskenniemi, Martti, 12, 104, 110, 114 landmines, 31, 159 legacy systems, 201 lethal autonomous weapons systems (LAWS), 30–1, 32, 33–9, 52, 62; characteristics of, 158; debate at the CCW and, 17, 27, 90, 159, 201, 221; regulation of, 5–6 London Declaration, 63, 64 London Naval Conference, 66–7, 68 London Naval Treaty, 66–7, 68, 143. See also Treaty for the Limitation and Reduction of Naval Armament loop, 27, 147, 164–5, 170–1, 173, 187–9, 191–2; control, 26, 163, 183, 189 Lusitania, 65 machine learning, 5, 17, 21, 55, 168, 195–7, 204, 206; and deep learning neural networks, 23; and supervised/unsupervised learning, 23–4 Malaysia Airlines Flight 17, 172, 176, 240, 244 Martens Clause, 39 meaningful human control, 14–15, 27–9, 191, 207–10, 214–23; and air defence systems, 157–8, 161–6, 170–2, 183, 188, 197–8, 214–15, 218–23; and Article 36, 27; challenges to, 191–4, 197–8, 214–15, 218–23; definitions of, 162; and legacy systems, 201, 207–9; levels of, 163; and the targeting process, 17, 27–9, 35, 160, 163, 183, 187, 191, 209, 222. See also human control
military chain of command, 181 Moyes, Richard, 19, 159–60, 185, 228, 229 Non-Aligned Movement, 6 Non-Proliferation Treaty, 84, 85, 86. See also Treaty on the Non-Proliferation of Nuclear Weapons (NPT) non-verbal practices, 113, 133; definition of, 11, 110, 213–14; and indeterminacy of international law, 104, 165; and the international normative order, 111, 115; and meaningful human control, 164; and the silent making of norms, 194, 216–17; and understandings of appropriateness, 157 normal accidents, 200, 204–5, 207 norm emergence, 12, 61–2, 99, 132–55, 216–17, 220; in international relations, 132–55; silent, 195, 209 norm research, 132–42, 145, 147–51, 154, 213; conventional constructivist models of, 12, 132–3, 135–6; critical constructivist models of, 132; practices and, 140, 151 norms, 1, 4, 6–15, 18, 20, 22, 24, 26–30; ambiguity of, 11, 127, 131, 165, 167; contestation of, 12, 18, 30, 86, 103, 127, 132, 136–7, 141–6, 154, 213, 216, 219; definition of, 136; deliberative, 63, 144, 151, 153, 158, 164, 165, 220; dual quality of, 107, 218; fundamental, 13, 14, 62, 144, 147, 148, 150; legal, 72, 74, 76, 80, 93–4, 100, 104; life cycle, 126; localisation, 132, 136, 138–42, 147, 154, 213; and logic of appropriateness, 135, 141; and
logic of arguing, 137, 140; and logic of consequentialism, 135; and logic of practicality, 140; and normality, 13, 141, 151–2, 154, 196, 218; and normativity, 13, 150–2, 154, 196, 216, 218–19; procedural, 153, 165; types of, 59, 99, 133, 143, 144, 147, 213, 218 North Atlantic Treaty Organisation (NATO), 28, 89, 177–8 North Korea, 76, 81, 84, 86–7, 90 nuclear weapons, 71; development of, 81–3; and the Hiroshima and Nagasaki bombings, 145, 207; norm emergence and, 10, 61–3, 96, 98–101; and the NPT, 84, 85–8; and the nuclear taboo, 82, 90; and the prohibition treaty, 87–90; testing, 82–3 Obama, Barack, 77–80, 123, 126, 204 order, 108, 111; concepts of, 111; constitution of the international, 10, 102–4, 109; contestation of the liberal international, 104–8, 109, 110–16; international, 96, 108, 118, 120–1, 124, 129–31, 212–13; international normative, 10, 114–16, 118, 121, 212; legal international, 217 over-trust, 186, 191, 206, 222 Pakistan, 9, 28, 35, 81, 84, 87–8, 195 Partial Test Ban Treaty (PTBT), 82, 83, 84, 87. See also Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water Patriot air defence system, 18; automatic mode and, 184; different modes of operation in the, 183; and emerging norms,
214; friendly fire incidents and, 172, 182, 205; over-trust and the, 187, 216; performance in the First Gulf War and, 183–4; situational awareness of operators in the, 188–9; technical specification of, 182, 207; track classification problems and, 185–6. See also air defence systems positivist: legal, 36, 103, 105, 108–9, 114, 132, 211; social sciences, 132, 136, 139, 141 post-positivist, 113, 140–1, 211–12 practices: and ‘appropriate’ use of force, 9, 36, 42, 47, 57, 60–1, 80, 212; and customary international law, 105; definition of, 7, 99; of developing, testing and deploying AWS, 13–15, 56, 150, 214; eroding norms, 78–9; and ethics, 51, 53, 125; formal law and, 71, 75, 102, 106, 111–12, 197; human–machine interaction, 4, 15, 131, 161, 216; of implementation, 98, 195; interaction of verbal and non-verbal, 113; and the international normative order, 10, 110–11, 114–15; legal justification, 118; of localisation, 138, 147; micro-, 61, 104, 146, 151, 154; and normativity, 152–3; norm emergence and, 7, 10, 12, 61–2, 103, 133, 143–6, 217–18; norm making, 6–7, 149, 193–4; of operating air defence systems, 14, 162, 191, 193–4; ordering, 108–10; and practice theories, 140–1; and procedural norms, 13, 148–50; in relation to drones, 106–7; of self-defence, 105, 124, 129–30; targeted killing, 125, 127–8; use-of-force, 63, 65, 70, 120, 148, 213, 218–19; of using chemical weapons, 80; of
warfare, 72, 74, 96–7, 156. See also non-verbal practices and verbal practices proportionality, 6, 41–4, 98, 122; autonomous weapons systems and, 47, 49; principle of, 42–3, 47, 49, 56, 115, 206 Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare, 74 reaction time, 169, 181, 200, 202–3 regulation of LAWS, 5–6, 30, 32, 34–9; China and, 38–9, 62, 221; Global South and, 32; UK and, 35; US and, 35, 38–9 remote-controlled systems, 3, 8, 29, 52, 56, 100 responsibility, 37, 75, 78, 181, 219; distributed, 54; ethical, 51, 54–5, 219; gap, 46; of the international community, 78, 122; military, 173, 176, 181; to protect, 144; when using AWS, 44–6, 55 Russia: and air defence systems, 169, 176–9; and blinding laser weapons, 91–2, 94; and broadening imminence, 121, 124; and chemical weapons, 72, 80; and debate on LAWS, 35, 38–9, 203–4, 220; and development of AWS, 5, 18; and nuclear weapons, 81, 84, 88; and submarine warfare, 63–4; and targeted killing, 126, 128 Schippers, Birgit, 50–1, 55 Schwarz, Elke, 48, 51–2, 55 Science and technology studies, 7 Second World War: air campaigns during the, 16; international order after the, 96; nuclear
weapons and the, 62, 81–2; regulation of chemical weapons after the, 80; regulation of chemical weapons before the, 73–4; submarine warfare during the, 66, 68–71, 98, 134, 167; trials after the, 71; use of chemical weapons during the, 62, 74–5 Sharkey, Amanda, 48–50 Sharkey, Noel: and debate on AWS, 37; and human control, 8, 27, 43–4, 163; and human dignity, 48; and ‘stupid’ AI, 20; and UK definition of LAWS, 34 Singer, Peter, 4–5, 16, 183, 207 Singh Gill, Amandeep, 34, 160 situational awareness, 43, 168, 170, 175, 181, 188–9; autonomous targeting and, 170, 181, 188; definition of, 188 Skynet, 28, 195–6 solutionism, 55, 202, 206 Soviet Union, 68, 75, 81–4, 179, 208 Stockholm International Peace Research Institute, 158 submarines, 167; autonomous, 29; commanders of, 97, 134; and norms, 96–8; practice of, 98, 100; regulation of warfare by, 63–4, 66, 67, 68, 70–1; unrestricted warfare by, 64–5, 68–70, 98–100, 143, 212; warfare during the Second World War, 68–72, 95–100, 212 Syria, 43, 77–80, 87, 119–21, 177 Taliban, 119 targeted killing, 6, 11, 107, 114–16, 124–30, 213; drones and, 114–16, 124–5, 128; as an emerging norm, 116, 126–7, 130; international law and, 114–15, 124–5, 128–9, 213; norm of political assassination and, 114–15, 125, 127–9
targeting, 66, 105, 125, 190; in air defence systems, 170, 172; algorithms, 42–3, 55, 180, 185, 191; autonomous, 16, 26, 41, 169; cycle, 28, 29; decisions, 41, 47, 57, 160, 166, 172, 183, 192; functions, 8, 25; outputs, 186; practice, 123; precision, 35; principles, 42; process, 3, 17, 27–9, 49, 55, 160, 163, 199; selection, 185; situations, 26, 166, 170, 172, 187, 193, 208–9, 221–3 technology, 37, 49, 128, 160, 164, 189, 206; autonomous, 181, 205; demonstrator, 18; drone, 6, 38, 128, 131; dual-use, 38; emerging, 62, 202, 212; human interaction with, 3, 22, 48; military, 16, 38, 177; new, 12; remote-controlled, 52; security, 51; warfare and, 3, 81, 85, 94; weapon, 71, 101, 157, 195, 203 Terminator T-900, 5, 20, 28, 57; science fiction framing of LAWS and the, 20, 59 terrorism, 59, 121, 128 Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water, 82 Treaty for the Limitation and Reduction of Naval Armament, 68 Treaty on the Non-Proliferation of Nuclear Weapons (NPT), 84, 85, 86, 87–9 Treaty on the Prohibition of Nuclear Weapons (NWPT), 87–8, 89, 145 Treaty Relating to the Use of Submarines and Noxious Gases in Warfare, 67, 73 Turkey: and air defence systems, 169; and developing LAWS, 18; and self-defence, 120; and targeted killing, 126
UK: and air defence systems, 169; and blinding laser weapons, 91–2, 94; and broadening imminence, 123–4; and chemical weapons, 72, 75, 79; and debate on LAWS, 35–6, 95, 159; and developing AWS, 18; and nuclear weapons, 81–4, 89; reaction to Patriot fratricides, 184–6; and submarine warfare, 63–6, 68–9; and targeted killing, 126, 131; understanding of autonomy, 33 Ukraine, 94, 172, 176–8 Ukraine International Airlines Flight 752, 178 United Nations (UN), 77, 78, 115; Charter, 8, 75, 104–6, 115, 117, 121–2; Convention on Certain Conventional Weapons, 5, 17, 90; and debate on LAWS, 19, 30, 215; General Assembly, 43, 85, 87–8, 89, 117, 125, 128; and nuclear weapons, 88; peacekeeping, 140; Security Council, 78, 85–7, 115, 118–19, 121–2, 141, 169 United Nations Institute for Disarmament Research, 170 unmanned systems, 4, 29, 33, 128, 131, 179. See also drones US, 189, 196, 203; aircraft, 94, 186; and air defence systems, 17, 169, 172, 179, 202, 207, 214; Air Force, 182; Army, 184–5, 191; Army Research Laboratory, 183, 189–90; and attribution, 118–20; and blinding laser weapons, 91–4; and broadening imminence, 124; casualties, 65; and chemical weapons, 72, 74–5, 77–8; Congress, 79, 184; and debate on LAWS, 35, 38, 95, 200, 203–4, 206, 220; Defense Secretary, 93, 203–4; and development of AWS, 5, 18,
28–9; and drone technology, 52, 56–7, 128, 131; and international order, 103, 109; and Iran Air Flight 655, 173–6; law, 190; military, 30; Navy, 176, 181; and nuclear weapons, 81–4, 88–9, 168; Operation Enduring Freedom, 118; and the Patriot system, 182–4; practice, 44; satellites, 179; Secretary of State, 178; and self-defence, 122–4, 126; Senate, 88; strikes, 181; and submarine warfare, 63–6, 68–70, 143; and targeted killing, 124–5; troops, 59, 99, 179; understanding of LAWS, 33; Vincennes, 207 US Department of Defense, 5, 30, 174–6, 182, 187, 195, 204; Directive 3000.09, 33 use of force, 149; ‘acceptable’, 6; ‘appropriate’, 4, 9, 51, 54, 66, 98–9, 130; and AWS, 8, 15, 39, 45, 152, 181; decision-making on the, 4, 48, 159–60, 165; during the Second World War, 69; German, 65; human agency in the, 36, 41, 161, 162, 190, 222; and international human rights law, 47; international law governing the, 6, 8, 10–11, 75, 96–7, 106, 116, 199; international order governing the, 5, 10, 102–3, 124; kinetic, 28; lowering thresholds of the, 59, 107, 123, 131; norms, 6, 14, 79, 153, 156, 194–5, 197, 211–13; permissive standards on the, 46, 112–14, 117; practices, 10, 60–3, 80, 97, 120, 145, 148, 164, 220; precision and the, 52; principles, 42, 127; prohibition of the, 104–5, 115; proportional, 44, 151; self-defence and, 118–20,
122, 200; situations, 14, 201, 204, 214–15, 219; US, 57, 70 Vallor, Shannon, 52–3, 55 verbal practices, 113, 133, 220; definition of, 110, 213–14; and deliberative norm-setting, 111; and the international legal order, 115; and norm emergence, 164, 216 Walsh, Toby, 20–5, 59 Washington Naval Conference, 66–7, 73
Washington Treaty, 67, 73. See also Treaty Relating to the Use of Submarines and Noxious Gases in Warfare Wiener, Antje: and international law, 108; and norm contestation, 137–8, 141; and typology of norms, 144, 145; and understanding of norms, 12, 132, 148, 213 Winfield, Alan, 19, 20 X-47B, 18, 58