Digital Influence Warfare in the Age of Social Media

JAMES J. F. FOREST

Praeger Security International

Copyright © 2021 by James J. F. Forest

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except for the inclusion of brief quotations in a review, without prior permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data
Names: Forest, James J. F., author.
Title: Digital influence warfare in the age of social media / James J. F. Forest.
Description: Santa Barbara, California : Praeger, An Imprint of ABC-CLIO, LLC, [2021] | Series: Praeger security international | Includes bibliographical references and index.
Identifiers: LCCN 2020054801 (print) | LCCN 2020054802 (ebook) | ISBN 9781440870095 (hardcover) | ISBN 9781440870101 (ebook)
Subjects: LCSH: Information warfare. | Social media—Political aspects. | Disinformation. | Mass media and propaganda. | Mass media and public opinion.
Classification: LCC UB275 .F67 2021 (print) | LCC UB275 (ebook) | DDC 355.3/437—dc23
LC record available at https://lccn.loc.gov/2020054801
LC ebook record available at https://lccn.loc.gov/2020054802

ISBN: 978-1-4408-7009-5 (print) 978-1-4408-7010-1 (ebook)
25 24 23 22 21   1 2 3 4 5
This book is also available as an eBook.

Praeger
An Imprint of ABC-CLIO, LLC
ABC-CLIO, LLC
147 Castilian Drive
Santa Barbara, California 93117
www.abc-clio.com

This book is printed on acid-free paper
Manufactured in the United States of America

Contents

Preface vii
Acknowledgments xiii
1. An Introduction to Digital Influence Warfare 1
2. Goals and Strategies: Influencing with Purpose 29
3. Tactics and Tools: Technical Dimensions of Digital Influence 67
4. Psychologies of Persuasion: Human Dimensions of Digital Influence 111
5. Exploiting the Digital Influence Silos in America 153
6. Information Dominance and Attention Dominance 189
7. Concluding Thoughts and Concerns for the Future 221
Notes 245
Index 297

Preface

Let me begin with an honest self-reflection. I have published more than 20 books, and this has been among the most difficult of all, in part because of the tumultuous events swirling around us during the time this enters the publisher’s review and production process (e.g., the COVID-19 pandemic and related economic turmoil, the nationwide protests against police brutality, the ongoing threat of foreign and domestic terrorism, and a highly polarizing presidential election). This has also been an unusually difficult topic to write about because of the emotions it provokes, such as dismay, frustration, anger, powerlessness, and even hopelessness—all in response to the fact that we have been (and continue to be) attacked on a daily basis by malicious actors, both foreign and domestic, who want to use our online information access and sharing activities as weapons against us. The research and writing of this book required an extensive journey of discovery, and when I began the journey in late 2017, one of my goals was to find some answers to a puzzling question. I had recently seen an increasing number of people I know—people whom I consider reasonable and intelligent—expressing opinions and beliefs that I knew to be untrue, things that could not be supported by any factual evidence. This was occurring sometimes in face-to-face conversations, but much more so in discussions online, and particularly on social media. Why would these people be so convinced of something that is proven completely false by all factual evidence? Further, when factual evidence was presented to them clearly proving that they were incorrect, these people would just turn away and continue repeating their support of the falsehoods to anyone

who would listen. Or, in several instances, they would try to argue that their beliefs were more valid than the facts. What was going on? These were not stupid people, and they did not exhibit the signs of someone who had been brainwashed (whatever that word really means) by a cult or terrorist group. Yet they had come to embrace particular narratives about a range of issues and people that the rest of the world rejected. Having studied terrorism and terrorists for nearly 20 years, I thought I had a fairly good handle on things like extremism and radicalization. One of my books—Influence Warfare: How Terrorists and Governments Fight to Shape Perceptions in a War of Ideas (Praeger, 2009)—had even examined various aspects of propaganda, psychological operations, and disinformation, with particular focus on how websites, blogs, email, online videos, digital magazines, and other such things were used to shape beliefs, attitudes, and behaviors. My primary research question at that time was how governments were competing with terrorists for influence and support in the public domain, and particularly on the Internet. But a decade later, I have now come to realize that the scope of this earlier research was far too limited: what we see today is a much broader and complex terrain, in which the rapid advancement and global usage of social media has introduced new concepts, strategies, and tactics for influence warfare that did not exist just a decade ago, and a much broader range of actors are using these strategies and tactics than ever before. So, for the past few years I have been studying this broader phenome­ non of what I now call digital influence warfare—reading an ever-growing stack of books, academic research articles, reports by government agencies and think tanks, and much more. Many of these publications have focused on Russia’s massive disinformation efforts aimed at the populations of countries like Estonia, Ukraine, the United Kingdom, and the United States. But an increasing number of other countries are also engaged in similar activities, including China, Iran, Saudi Arabia, and Turkey. As discussed in the introductory chapter, one report from the Oxford Internet Institute found that in 2018 there were disinformation efforts of one kind or another in 70 countries around the world. But at the same time, extremists and terrorists have also taken advantage of new opportunities for provoking fear—even livestreaming videos of attacks in Kenya and New Zealand—with dramatic results. And a profit-generating business model has shifted the entire landscape of influence warfare in a new—and decidedly more dangerous—direction, especially during the worldwide COVID-19 pandemic of 2020. In today’s attention economy, the ability to shape perceptions and influence behavior through social media is a major source of power and profit. After finding many examples of state and non-state actors using the many new tools of digital influence, I also began to appreciate the strategic mechanics of it—the psychology of persuasion or social influence

applied in online environments. From my previous work, I could understand several of the relevant concepts already, like exploiting a person’s “fear of missing out” (FOMO), extremist ideologies, dehumanization, ingroup indoctrination, out-group “othering,” provocation, propaganda, psychological operations, political warfare, website defacement, and tools for manipulating photos and videos. I knew that repetition and framing were important elements of effective communication, and I appreciated the dangers of conspiracy theories. I also knew something about data mining, social network analysis, and algorithms, having co-taught a course at West Point on information warfare many years ago. However, there were other terms that I was learning for the first time, like trolling, doxxing, gaslighting, hashtag flooding, deepfakes, astroturfing, ragebait, digital information silo, and so forth. What I found in my research were many studies that said basically the same thing: there are clear and recognizable strategies and tactics being used by certain people to manipulate the perceptions and behaviors of others. Generally speaking, three kinds of people are involved: influencers, enablers, and targets. Some of the publications I encountered used terms like “influence aggressor” to describe the individuals whose actions are described in this book. They may be state sponsored or driven by ideological beliefs, profits, and many other kinds of motives. Their ability to identify advantageous targets for influence efforts has become easier based on all the information that is available about us. As we’ll examine in several chapters of this book, billions of people worldwide are providing free and unfiltered access to themselves by posting photos, personal revelations, telling people where they are at a given moment, and showcasing who their friends and family are. Further, because of the profit models that pervade the attention economy, Internet firms track a user’s identity and patterns of behavior so they can formulate the right kinds of advertising campaigns. Just as every click and keystroke can be monitored, recorded, and used for analysis that generates advertising profits for the Internet companies, the same data can inform a digital influence strategy. The targets of digital influence efforts have become increasingly accessible as well, particularly those who engage more frequently on social media and other online information resources on a daily basis. Influencers can now use Facebook or other social media platforms to pinpoint much more precisely the types of individuals who might be receptive to the information (or misinformation) they want to disseminate. The targets could be virtually anyone, but influencers quickly find that they’ll have more success by choosing targets whose beliefs and values indicate certain predispositions and biases. Further, changing a target’s mind about something may be an objective, but this is much more difficult than finding targets whom you only need to nudge a little in a certain direction

or simply confirm for them that their biases and prejudices about others are justified. Effective influencers have learned how to capitalize on the fact that the Internet provides the means to shape a reality that caters to the disposition of its users. And while the target is often described as an unwitting participant (or even victim) in digital influence warfare, this is not always the case. As we’ll see reflected in several chapters of this book, many individuals are actively seeking out disinformation and fake sources of information online solely for the purpose of providing confirmation for what they want to believe.

For their part, the digital influencer could pursue any number of goals and objectives. Some may want to deceive, disinform, and provoke emotional responses (including outrage) in order to influence certain people’s voting behavior. Other influencers may want to strengthen the commitment of the target’s beliefs, reinforcing their certainty and conviction in something; this may include attacking scientific evidence that supports inconvenient truths. The goals of digital influence also include various forms of online recruitment efforts by global jihadists and other terrorist networks, as well as the nurturing of online communities whose members embrace extremist ideologies (e.g., white nationalism, neo-Nazism, sovereign citizens, ANTIFA, or the incel movement). Sometimes, the strategy can involve convincing the targets that what they believe or think they know is based on false information. Or the strategy could be to convince the target that what “other people” believe or think they know is based on false information, leading to a sense of superiority over those naive “others.” A particularly powerful form of digital influence involves convincing targets that the beliefs and convictions they are particularly passionate about are severely threatened by other members of society and must be defended. Similarly, a goal of a digital influence effort could be to encourage broader patterns of questioning and uncertainty, leading the targets to believe that nothing is true and anything may be possible. This in turn creates opportunities for the spread of disinformation and conspiracy theories. And other online influencers may simply want to market and sell products, services, and ideas.

There are also a variety of tactics involved in digital influence warfare, from deception (including information deception, identity deception, and engagement deception) to emotional provocation and outright attacking the target (including bullying, hacking, exposing embarrassing information online, etc.). We’ll examine these and much more in chapter 3. But across this diversity of goals and tactics, what most of them have in common is that they are intended to shape the perceptions and behaviors of targets in ways that will benefit the influencers more than the targets. In other words, the influencer rarely has the best interests of the target in mind. This seems to hold true regardless of whether the goals of the influencer are political, economic, social, religious, or other categories of belief and behavior.

And finally, in addition to the relative ease of identifying and accessing viable targets, the influencer can also monitor and assess the impact of their influence effort by gathering and analyzing data on the target’s reception and reaction to the information they were exposed to. Success in digital influence warfare can be measured by the target’s behavior. Did they do something that the influencer wanted them to—for example, vote, buy, protest, join, reject, or some other behavioral response? Did they express some kind of emotional response (outrage, anger, sympathy, encouragement, etc.)? With this assessment in hand, the influencer can then refine their efforts to maximize effectiveness. With all these developments in mind, I thought a book focused on digital influence warfare would be useful for academics, policymakers, and the general public. The chapters of the book are organized around a series of questions I sought to answer during my intellectual journey through the research on this topic. My search for answers led me through a ton of published research on the psychology of persuasion, in which experts have identified a wide variety of ways in which ordinary individuals can be persuaded—and in some cases, even to do some terrible things to other people. I also revisited the history of influence warfare, with particular focus on Russia and its Active Measures program. Along the way, I found an emerging body of research about what I now call digital influence mercenaries, and I found many examples of non-state actors who are profiting by deceiving and provoking people on social media. A separate book on that topic is now in the works. My journey also led me to the research on technological tools used by state and non-state actors in their digital influence efforts. As a result, I know more now about deepfake images and videos than a person of my technical incompetence should know. There also seems to be widespread agreement in the published materials on this matter that something ought to be done to curb malicious uses of social media (and other forms of online information and interaction). Social media platforms are certainly doing more today than they were in 2016 to curb the malicious kinds of digital influence efforts described in this book. But I’ve come to the conclusion that each of us as individual citizens has a responsibility as well. When we stop and think about the influencers behind the information we see and hear, we tend not to be as open to exploitation. Further, these influence attempts—both foreign and domestic—should make us angry: for the most part, there is no informed consent; nobody asked us for our permission to deceive or manipulate us. So, we should get angry enough to do something about it. We should also expect greater commitment from our government for policies and public education to confront these issues. Digital influence warfare represents a form of cyberattack that requires more than network systems firewall and security. Confronting and deflecting these digital influence efforts require a kind of societal firewall, a psychological barrier

of shared resistance and resilience that rejects and defeats these attempts. Only when a society proves completely invulnerable to digital influence attacks will there be a true deterrent. In the absence of that, our enemies will continue trying. While this book was being written, our nation endured acrimonious political campaigns, a rising tide of right-wing anti-government extremism, and the deadly COVID-19 virus spreading to countries around the world. Various forms of social mediated disinformation, disorientation, and conspiracy theories have accompanied these and many other major events. Reflecting on this now, it becomes clear to me that unfortunately I chose to research and write a book about a topic where things have been very fast moving and ever-changing. By the time this volume hits the shelves in 2021, some of the analyses and recommendations contained within may be overtaken by events. I ask your indulgence and understanding for this. As I mentioned at the outset, the research and writing of this book required an extensive journey of discovery, and to be honest, much of what I discovered was rather unpleasant. I have learned more about the darker elements of psychology and human nature—and about technology, social media algorithms, deviant mercenaries, and much more—than I had originally thought possible. I have written and rewritten several chapters multiple times, reorganized the entire volume at least a dozen times, and even scrapped entire chapters (some of which may appear someday as articles or essays in different publications). I have had to go outside my own fields of education, counterterrorism, and international security studies for material used in this book, including such disciplines as psychology, sociology, information technology, criminal justice, communication, political science, and many others. In the course of integrating various information from these disciplines, it was of course necessary to summarize research findings and concepts, so to the experts in those fields who may feel slighted that I overlooked their important contributions, I apologize. In embracing the ethos of the curious mind, I have encountered numerous things about our modern world in recent years that have proved deeply disturbing to me. My academic training prompted me to document these things over the course of several years and eventually (with the prompting of a publisher) put pen to paper in an effort to make sense of it all. Thus, this book represents the product of an intellectual adventure, an account of where I looked for answers and what I learned along the way. I should conclude here with a warning that readers may experience mild whiplash between research-based theories on political, psychological, and influence warfare and my personal observations or whimsical attempts at humor. I hope you enjoy the roller-coaster ride and find the book worthwhile.

Acknowledgments

I owe considerable gratitude to literally thousands of people who have significantly influenced my intellectual journey over the past two decades. Some of them I consider friends and colleagues, while others I have never even met. Some have been coworkers or guests who lectured in my courses, and even coauthored publications with me, while others have only communicated with me briefly online. But they have also helped me answer questions and find new perspectives. The abbreviated list I’d like to especially thank includes: Alex Schmid, Andrew Silke, Annette Idler, Arie Perliger, Assaf Moghadam, Bill Braniff, Brian Fishman, Brian Jenkins, Bruce Hoffman, Colin Clarke, Clint Watts, Daniel Byman, David Kilcullen, David Ronfeldt, Dorothy Denning, Emerson Brooking, Eric Schmitt, Erica Chenoweth, Gabi Weimann, Gary LaFree, GEN Wayne Downing, Greg Miller, Henry Crumpton, J.M. Berger, Jacob Shapiro, Jade Parker, Jarret Brachman, Jennifer Giroux, Jessica Stern, Jim Duggan, Joe Felter, John Arquilla, John Horgan, Joshua Geltzer, Juan Merizalde, Kurt Braddock, Martha Crenshaw, Matthew Levitt, Maura Conway, Max Abrahms, Michael Hayden, BG (ret.) Michael Meese, Michael Sheehan, Nada Bakos, Neil Shortland, Paul Cruickshank, Peter Neumann, Peter W. Singer, Richard Shultz, Robert Cialdini, Rolf Mowatt-Larssen, BG (ret.) Russell Howard, Ryan Evans, Sheldon Zhang, Thom Shanker, Thomas Fingar, Tom Nichols, Walter Laqueur, William McCants, and ADM (ret.) William McRaven. There are also a growing number of experts and organizations in this emerging field of what I am loosely calling digital influence studies, and I have benefitted enormously from many of them in researching and writing this book. If you are interested in the contents of this book, you will find

the works of these people most enlightening, particularly the hardworking folks at the Oxford Internet Institute’s Computational Propaganda Project, Graphika, the Global Network on Extremism and Technology, the Centre for the Analysis of Social Media, the Rand Corporation’s Truth Decay project, and the Stanford Internet Observatory. Weekly publications like First Draft, Popular Information, and The Source (published by the Atlantic Council’s Digital Forensic Research Lab) are strongly recommended. I also recommend following the online commentary on these and other topics by Barb McQuade, Cass Sunstein, Carl Miller, Caroline Orr, Cindy Otis, Claire Wardle, Emma Barrett, Emma Briant, Erin Gallagher, Jay Rosen, Joan Donavan, Judd Legum, Kate Starbird, Marc Owen Jones, Natalia Antonova, Nathaniel Gleicher, Nick Carmody, Olga Belogolova, Peter Pomerantsev, Phil Howard, Samantha Bradshaw, Yael Eisenstat, and others followed by the Twitter account @DIWbook. And I’m grateful to Naomi Shiffman and her colleagues at CrowdTangle for showing me how to analyze data on the spread of disinformation via social media platforms. I also want to include here a special shout-out to professors and mentors in my graduate school many years ago who took me under their wing and showed me how I could make potentially worthwhile contributions to the academic profession, especially Patricia Gumport (at Stanford University) and Philip Altbach (at Boston College). I also greatly appreciate my former colleagues at the U.S. Military Academy. I learned so much during my nine years there, particularly from my friends and colleagues in the Department of Social Sciences and the Combating Terrorism Center, as well as from the faculty in the Department of Electrical Engineering, with whom I collaborated on teaching an information warfare course for several years. I thank the publisher, Praeger/ABC-CLIO, and particularly the editorial staff and proofreaders who helped ensure this was not a complete literary disaster. And finally, I express my appreciation to my family members: Alicia, Chloe, Jack, John, Jason, Jeremy, Jody, Jesse, Jael, and Mary. They are all positive sources of influence in my life, and I am forever grateful.

CHAPTER 1

An Introduction to Digital Influence Warfare

During the process of researching and writing this book, various friends and colleagues would ask me to explain what the term “digital influence warfare” really means and why I chose this term for the book’s title. Admittedly, I haven’t always had the most articulate way of responding to this question, so let’s begin this introductory chapter by providing my best effort to define and explain the term. First, let’s consider what each word means:

Digital: Anything online, anything you see on a computer, smartphone, etc. is inherently digital—in other words, composed of digits (1s and 0s) that form text, pixels, sound, etc. We are surrounded by an online ecosystem of digital information providers and tools, from websites, blogs, discussion forums, and targeted email campaigns to social media, video streaming, and virtual reality. While various strategies and tactics of influence warfare have existed for centuries, this book focuses on new and emerging digital forms of it, and the technological environments that enable unique tactical innovations in manipulative behavior.

Influence: An ability to convince others to think or do something. Drawing from decades of research in psychology, marketing, education, sociology, and other disciplines, we have learned the most effective ways an information provider can persuade other people, to shape their beliefs in ways that lead them to embrace one perspective and reject others, and to adopt behaviors in alignment with that perspective. Some influence efforts are intended to strengthen existing beliefs, while others may try to challenge and change those beliefs, but in general the underlying goal of most influence efforts is to impact the behaviors that are driven by what people believe. Influence operations exploit information systems (like social media platforms) to manipulate audiences for the purpose of achieving strategic goals (including political, social, and economic). Sometimes the influencer wants to change people’s views and behaviors, while in other instances, they want to strengthen existing beliefs rather than change them (e.g., amplifying existing levels of distrust and divisions within a society). Throughout the many examples provided in this book, one entity is using information (or in many cases disinformation) in order to gain the power to influence another.

Warfare: A type of human behavior that involves winning and losing, in which there are attackers and targets, offensive and defensive strategies, tactics and weapons. There are also frequently innocent victims, and sometimes third party allies and mercenaries are involved. Warfare is a means to an end, usually some sort of political objectives pursued by the aggressor. It can be a way of gaining power and/or diminishing the power of others—for example, a goal could be to degrade the functional integrity of a democratic society that is considered an adversary or peer competitor.

So, the combination of these three concepts gives us the term “digital influence warfare.” In short, it refers to the landscape of online psychological operations, information operations, and political warfare through which a malicious actor (state or non-state) achieves its goals by manipulating the beliefs and behaviors of others. It involves the use of persuasion tactics, information and disinformation, provocation, identity deception, computer network hacking, altered videos and images, cyberbullying, and many other types of activity explored in this book. Examples of digital influence warfare range from using armies of trolls to flood a social media platform with a narrative or view on a specific (often social or political) issue to using thousands of computer-generated accounts (“bots”) to manufacture the perception of massive support for (or opposition to) something or someone. And while there has been much attention in the media about Russia and China engaging in these activities, there are both foreign and domestic examples of influence warfare. The central goal of influence warfare is—and has always been—fairly straightforward: the attacker wants to shape or reshape the reality in which the target believes in order to achieve some sort of strategic objective.1 However, the context in which this “weaponization of information” takes place has changed significantly over the past two decades. The rise of the Internet and social media companies, whose profit model is based on an “attention economy,” has been a game changer. Within the attention economy, the most valued content is that which is most likely to attract attention, with no regard to whether it is beneficial or harmful, true or untrue. New tools have emerged for creating and spreading information (and disinformation) on a global scale. Connectivity in the digital realm is now much easier, and yet—as we’ll examine later in this book—ironically the emergence of hyperpartisan influence silos has sequestered many online users into separate communities who reject the credibility and merits of each other’s ideas, beliefs, and narratives. This is why fake information can be so readily believed—as long as it is tailored to support what you want to believe, it will be believed. And it

has never been easier to tailor information of all kinds for a specific audience online. In later chapters of this book, we’ll examine the role of deepfake images and videos, memes, fake websites, and many other tools used in digital influence operations. But in this introductory discussion, let’s review some important points about terms and terminology and look at a small handful of examples that illustrate what this book is about. Then the latter part of this chapter will provide a brief overview of what readers will find in the rest of the book. COMPARING INFLUENCE WARFARE WITH INFORMATION OPERATIONS How does the term digital influence warfare relate to other similar terms like “information operations” or “information warfare”? Indeed, there are reports published on these topics every year, some of which do address the issue of influencing targets. However, those terms have also been increasingly used to describe computer network attacks (often by highly trained military units) like hacking into databases to observe or steal information, pervert information, or replace some kinds of information with other information, and so forth. Traditional military uses of the term “information warfare” have also focused on protecting our own data from those kinds of attacks by adversaries. Of course, computer network attacks like these can certainly be used to send a message (e.g., about a target’s vulnerabilities and the attacker’s capabilities), and in that way, they could be a means of influencing others. States may want “information dominance” over the populations of other states. This would include computer network operations, deception, public affairs, public diplomacy, perception management, psychological operations, electronic countermeasures, jamming, and defense suppression.2 Similar terms in this broad landscape include public diplomacy and strategic communications. Cyber operations and cybersecurity have also been intertwined with discussions about information operations. I prefer to use the term “influence warfare” to describe the kinds of activities in which the focus is not on the information but on the purposes of that information, that is, propaganda, misinformation, disinformation, and other kinds of efforts in which the implicit goal of the information is to shape perceptions and influence behavior. Further, influence warfare strategies and tactics—particularly as we have seen online—also involve more than just manipulation of information; they can include behavior signaling (swarming and bandwagoning), trolling, gaslighting, and other means by which the target is provoked into having an emotional response that typically overpowers any rational thought or behavior. Clickbait, memes, and ragebait (for example) are not really seen as a form of information

operations as normally conceived, but it is certainly a means of influencing others via the Internet. Similarly, other terms addressing the overall concept of cybersecurity can be somewhat confusing. Many of you are familiar with the concept of computer hacking, but this is different. While there is some conceptual overlap, the term “digital influence” warfare should not be confused with terms like “cyberwar,” in which the attacker seeks to damage the functionality of information technology, computer systems, and communication networks. Other terms traditionally associated with using computers to attack others including cybersecurity, cyberterrorism, and information warfare—and even digital warfare—are not really what this book is about. Those terms usually apply to attacking other countries’ critical infrastructure and military computer systems using tools for hacking into—and degrading or even destroying the functional integrity of—those systems. But unlike conventional cyberattacks, the goal of a digital influence warfare campaign is not about degrading the functional integrity of a computer system. Rather, it is to use those computer systems against the target in whatever ways might benefit that attacker’s objectives. Often, those objectives include a basic “divide and conquer” strategy—a society that is disunited will fight among themselves over lots of things instead of coming together in the face of a threat that only some of them believe is there. The emphasis is thus on the middle word “influence,” where a broad diversity of activities are meant to shape the perceptions, choices, and behaviors of a society—and in some cases, the goal may in fact be to make the target dysfunctional as a society. This is not simply propaganda, fake news, or perception manipulation. It is a battle over what people believe is reality and the decisions that each individual makes. The victors in this battle are the attackers who have convinced scores of victims to make decisions that directly benefit the attackers. As Michael Erbschloe explains, a difference between cyberwarfare and what he describes as “social media warfare” is that “cyber warfare requires a far higher level of technical knowledge and skill. Social media warfare is easier to learn and faster to deploy; but effective social media warfare, like cyber warfare, requires discipline and long-term dedication for successful deployment or defense.”3 Competency in digital influence warfare can be measured by one’s ability to successfully influence perceptions and behaviors through information provided by digital means. In 1997, Charles Swett—the Acting Deputy Director for Low Intensity Conflict Policy in the U.S. Department of Defense—offered a warning about how the future would include uses of the Internet “for spreading propaganda by extremist groups and disinformation about U.S. activities.”4 Unfortunately, we have seen much more than those kinds of influence efforts. Nation-states have attempted to impact democratic elections in other countries, as well

as manipulate the perceptions of their own populations. Attempts to provoke outrage and sow discord among a society have increased dramatically, especially with the rise of social media. Today, the Internet offers a unique information environment that brings many advantages to influence warfare campaigns, or what the Atlantic Council’s Digital Forensics Lab5 refers to as “cyber-enabled influence operations.” A recent Soufan Center analysis observed how “all modern conflict now features a significant and growing social media component, an extension of the propaganda that has accompanied war for ages.”6 Meanwhile, the Oxford Internet Institute refers to contemporary forms of influence warfare as “computational propaganda,”7 while other researchers have examined the rising threat of “information aggressors” and “information wars—sometimes aimed at persuasion, often morphing into vicious cyberbullying.”8 Books, research articles, and reports have been published over the last decade describing the “age of weaponized narrative,”9 “media manipulation,”10 or “information disorder.”11 One report portrays the threat as “an increasingly hostile series of aggressive actions between opposing groups . . . [while] wars—though among virtual communities—pit states against states, states against non-state actors, and networks of non-state groups against similar networks.”12 Throughout these digital influence wars, the attacker wins (and the target loses) by successfully influencing the target to think and do things that benefit the attacker’s political, social, or other goals. We will examine these and other goals at length in chapter 2, along with a number of prominent examples of digital influence efforts—some of which you have likely seen in your own social media account at some point. THE “WARFARE” PERSPECTIVE Using the terminology of warfare when describing information operations and digital influence efforts can be confusing, but there are many precedents to consider. Richard Stengel (a former Undersecretary of State and editor of Time magazine) chose the term “information wars” for the title of his recent book,13 and Danah Boyd (a Principal Researcher at Microsoft and the founder/president of Data & Society) used the same term to describe a variety of influence efforts in 2017.14 This term has also been used at the very top of the Kremlin: Vladimir Putin’s spokesperson, Dmitry Peskov, openly says that Russia is in a state of “information war,”15 and Vyascheslav Volodin, the deputy head of Putin’s administration, views social media as a battlefield.16 In 2014, NATO’s top military commander Philip Breedlove called the disinformation campaign around Russia’s annexation of Crimea “the most amazing information warfare blitzkrieg we have ever seen in the history of information warfare.”17 A Senate Armed Services Committee hearing in March 2017 addressed “Russian influence

and unconventional warfare operations.”18 And a 2018 UNESCO report describes how “the 21st century has seen the weaponization of information on an unprecedented scale. Powerful new technology makes the manipulation and fabrication of content simple, and social networks dramatically amplify falsehoods peddled by States, populist politicians, and dishonest corporate entities, as they are shared by uncritical publics”19 (my emphasis added). In a 2019 report by the Rand Corporation, the authors “propose the term virtual societal warfare to capture the emerging reality . . . [involving] informational mechanisms of coercion and manipulation.”20 They explain how: This warfare involves the use of largely nonkinetic, information-based aggression to attack the social stability of rival nations. It is virtual because, for the most part, these strategies do not employ direct physical violence or destruction. (This concept, therefore, excludes both direct military attack as well as large-scale cyberattacks designed to wreak havoc on a nation’s physical infrastructure and cause actual damage.) It is societal because both the targets and the participants in such campaigns stretch across society, and because the goal is to undermine the efficient functioning, levels of trust, and ultimately the very stability of the target society. And it is warfare because, in its potentially more elaborate forms, it represents an activity designed to achieve supremacy over rival nations, not merely to gain relative advantage in an ongoing competition but to gain decisive victory in ways that leave the target nation subject to the attacker’s will.21

In choosing to use the terminology of warfare for this book, part of my reasoning was the recognition of a particular aggressiveness in the use of social media, and the Internet more generally, to attack targets on behalf of political goals. If war is the continuation of politics by other means— “a real political instrument, a continuation of political commerce,” as Carl von Clausewitz suggested22—then it would seem appropriate to view the tactics and strategies described here as a form of warfare. Further, war is never an isolated act, but rather is a means to achieve specific goals and objectives over time. Wars require some sort of defensive measures taken by those being targeted, and inevitably, there are casualties of war. Failure to adopt the most effective measures in response to these adversaries could be disastrous for the future of truthful discourse and civil democracy, as Nina Jankowicz explains in her book How to Lose the Information War.23 Other relevant literature published in recent years also incorporate the language of warfare, including War in 140 Characters: How Social Media is Reshaping Conflict in the Twenty-First Century, Social Media Warfare, and LikeWar: The Weaponization of Social Media.24 In LikeWar, authors Singer and Brooking describe how “the Internet is a battlefield . . . a platform for achieving the goals of whichever actor manipulates it most effectively. Its

weaponization, and the conflicts that then erupt on it, define both what happens on the Internet and what we take away from it. Battle on the Internet is continuous, the battlefield is contiguous, and the information it produces is contagious. The best and worst aspects of human nature duel over what truly matters most online: our attention and engagement.”25 Similarly, other publications have referred to “Cyber troops”26 engaged in various kinds of activities and the need to fight against these influence efforts in the “digital trenches.”27 Additional terms closely associated with influence warfare include “political warfare,” which was used by the legendary diplomat George Kennan in 1948 to describe “the employment of all the means at a nation’s command, short of war, to achieve its national objectives. Such operations are both overt and covert . . .” and can include various kinds of “propaganda” as well as covert operations that provide clandestine support to underground resistance in hostile states.28 Paul Smith describes political warfare as “the use of political means to compel an opponent to do one’s will,” and “its chief aspect is the use of words, images, and ideas, commonly known, according to context, as propaganda and psychological warfare.”29 Carnes Lord notes a “tendency to use the terms psychological warfare and political warfare interchangeably” along with “a variety of similar terms—ideological warfare, the war of ideas, political communication and more.”30 And if you are interested in the topic of this book, you will surely enjoy Thomas Rid’s excellent book Active Measures: The Secret History of Disinformation and Political Warfare.31 Altogether, there are political, psychological, and informational dimensions to what Brad Ward refers to as “strategic influence.”32 A recent report by the Rand Corporation explains how “information warfare . . . works in various ways by amplifying, obfuscating, and, at times, persuading” and observes that “political warfare often exploits shared ethnic or religious bonds or other internal seams.”33 Another term frequently encountered in this realm is “strategic communications,” which U.S. military reports have defined as “focused efforts to understand and engage key audiences in order to create, strengthen, or preserve conditions favorable for the advancement of interests, policies, and objectives through the use of coordinated programs, plans, themes, messages, and products synchronized with the actions of all instruments of national power.”34 According to a 2004 Defense Science Board report: Strategic communication requires a sophisticated method that maps perceptions and influence networks, identifies policy priorities, formulates objectives, focuses on “doable tasks,” develops themes and messages, employs relevant channels, leverages new strategic and tactical dynamics, and monitors success. This approach will build on in-depth knowledge of other cultures and factors that motivate human behavior. It will adapt techniques of skillful political campaigning,

even as it avoids slogans, quick fixes, and mind sets of winners and losers. It will search out credible messengers and create message authority . . . It will engage in a respectful dialogue of ideas that begins with listening and assumes decades of sustained effort.35

My 2009 book Influence Warfare referred frequently to a “strategic communications battlespace” as “the contested terrain upon which all types of information from competing sources seeks to influence our thoughts and actions for or against a particular set of objectives.”36 Terms like “battlespace” and “warfare” may seem odd when the discussion centers on information and influence. However, as we’ll see in later chapters of this book, what I found in the course of my research indicates that the weaponization of information—particularly of an emotionally provocative nature—has become a major problem worldwide. In his national best-selling book Influence: The Psychology of Persuasion, author Robert Cialdini chose to title the very first chapter as “Weapons of Influence.” This was no coincidence—the same terminology of weapons or weaponization has been used on countless occasions to describe things used to attack our thoughts and beliefs. For example, in their book Age of Propaganda: The Everyday Use and Abuse of Persuasion, Anthony Pratkanis and Elliot Aronson urge their readers to consider the metaphor of “propaganda is invasion” (i.e., an attacker is trying to conquer a target audience’s minds and beliefs).37 As a recent European Commission report observes, “Disinformation strategies have evolved from ‘hack and dump’ cyber-attacks, and randomly sharing conspiracy or made-up stories, into a more complex ecosystem where narratives are used to feed people with emotionally charged true and false information, ready to be ‘weaponized’ when necessary. Manipulated information . . . [enables] rewriting reality, where the narration of facts (true, partial or false) counts more than the facts themselves.”38 In May 2019, Brian Jenkins—an internationally respected expert in national security—chaired a workshop of Cold War-era subject matter experts on Russian information warfare, veterans who had served in the White House, the State Department, the United States Information Agency, the Pentagon, the FBI, and the intelligence community. The title of his 47-page report from this event? Russia’s Weapons of Mass Deception.39 And in a September 2019 Time magazine article, former U.S. State Department senior official Richard Stengel argued that “we are all actors in a global information war that is ubiquitous, difficult to comprehend and taking place at the speed of light. . . . Governments, non-state actors and terrorists are creating their own narratives that have nothing to do with reality. These false narratives undermine our democracy and the ability of free people to make intelligent choices.”40 From this perspective, one might assume the U.S. government has developed some sort of “influence warfare strategy” to defend our nation

from such attempts. To date, that does not exist, although the U.S. Department of Defense has specific definitions for several of the terms associated with influence warfare, including the following: • Information Operations (IO): “The integrated employment of electronic warfare, computer network operations, psychological operations, military deception, and operations security, in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making while protecting our own.”41 Information operations can also help enable a commander to interrupt or stop the flow of information to their adversaries.42 • Psychological Operations (PSYOP): Efforts to convey selected truthful information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately, the behavior of their governments, organizations, groups, and individuals. The purpose of PSYOP is to induce or reinforce foreign attitudes and behavior favorable to the originator’s objectives.43 PSYOP employs various media such as magazines, radio, newspapers, television, email, dropping leaflets on adversarial territory, and so forth.44 • Information Warfare: The offensive and defensive use of information and information systems to deny, exploit, corrupt, or destroy an adversary’s information, information-based processes, information systems, and computer-based networks while protecting one’s own. Such actions are designed to achieve advantages over military, political, or business adversaries.45 A recent report by the think tank Demos provides a different working definition of “information operations” as “a non-kinetic, coordinated attempt to inauthentically manipulate an information environment in a systemic/strategic way, using means which are coordinated, covert and inauthentic in order to achieve political or social objectives.”46 The term “propaganda”—described by Jason Stanley as “a means to strengthen and spread the acceptance of an ideology”47—is also used frequently to describe influence warfare efforts. According to Philip Zimbardo et al., the main purposes of propaganda have included attempts to weaken or change the emotional, ideological, or behavioral allegiance of individuals to their group (army, unit, village, nation, etc.); to split apart component subgroups of the enemy to reduce their combined effectiveness; to ensure compliance of civilian populations in occupied zones; and to refute an effective theme in the propaganda of the enemy.48 Of course, the use of influence operations to achieve the goals of political and psychological warfare is well known among those who study military history, foreign affairs, and international security. As we’ll briefly review in chapter 2, influence warfare was prominent in both world wars

and throughout the Cold War. And while Russia continues to invest heavily in its Active Measures program of Dezinformatsiya,49 it is far from being the only state investing in these kinds of activities. Further, it is important to note that this is not only a state-based phenomenon. Influence warfare is also seen in many contemporary insurgencies—in fact, as Thomas Hammes noted, “Insurgent campaigns have shifted from military campaigns supported by information operations to strategic communications campaigns supported by guerilla and terrorist operations.”50 While combating the Malaya insurgency in November 1952, British Field Marshal Sir Gerald Templer observed that “the shooting side of the business is only 25% of the trouble and the other 75% lies in getting the people of this country behind us.”51 This is why modern terrorist groups and extremist movements have also used the spread of propaganda and disinformation for their own purposes, as described most recently in Kurt Braddock’s book Weaponized Words.52 Examples range from al-Qaeda’s online magazine Inspire and the Islamic State’s version Dabiq to Hezbollah managing its own satellite television network Al-Manar and Hamas flooding the Internet with images and videos during their 2012 conflict with Israel.53 We’ll review several specific instances of this in chapter 2.

To sum up, we will explore together throughout this book many different aspects of a modern technology-based form of influence strategies and tactics that have been used by states and non-state actors for centuries. Manipulating the perceptions and behaviors of large populations is made more efficient and effective by a wide range of Internet and social media platforms. It is clear now why Steve Bannon (head of Breitbart at the time) told his staff that the Internet was not just a communications medium; it was a “powerful weapon of war.”54 It is also clear why Facebook recently launched a team responsible for detecting and disrupting the kinds of disinformation campaigns that we have all seen in the news lately, and according to one account, the members of this team view their social media platform as “terrain” where war is waged. Members of this team view themselves as “defenders” against malicious “attackers” whom they must force “downhill” to a position of weakness.55 For all these reasons, the term “digital influence warfare” is used in this book to describe a variety of strategies (e.g., to confuse, disorient, destabilize, and increase doubt and uncertainty), tactics (e.g., the promulgation of disinformation and fake news, spoofing, spamming, and hashtag flooding), and tools (e.g., automated troll farms, “sock-puppet” networks, hijacked accounts, and deepfake images and video). It is a term that draws from the kinds of activities that are often described as political warfare, information warfare, asymmetric warfare, and cyberwarfare as well as strategic communications, information operations, psychological operations, public relations, marketing, and behavioral manipulation. Throughout all of these kinds of activities, we find a common goal: to influence

the thoughts, actions, and reactions of human targets. The fact that an aggressor is seeking to influence the target against their wishes, in order to achieve certain strategic goals, leads us to consider this a form of warfare. EXAMPLES OF DIGITAL INFLUENCE WARFARE To illustrate what digital influence warfare is, let’s review some specific examples. The next several pages of this chapter will provide just a small handful, and while other chapters of this book contain dozens more, these are all just a small representative sampling of a much larger and diverse landscape of both foreign and domestic influence efforts. Many of the examples provided in this book are linked directly to Russia—and most observers will agree that this country has been the most active and aggressive state sponsor of these types of activities in recent years. In fact, as we’ll examine in chapter 2, Russia has a long history of disinformation operations, from the early days of the Soviet Union and throughout the Cold War. According to Mark Galeotti, “In the immediate aftermath of the Crimean seizure, the notion of a radically new style of hybrid war fighting took the West by storm, and led to both insightful analysis and panicked caricatures. This has been called ‘new generation warfare,’ ‘ambiguous warfare,’ ‘full-spectrum warfare’ or even ‘non-linear war.’”56 Galeotti takes issue with the overuse of the term “hybrid warfare,” preferring instead to focus on “political warfare” as the overall framework that best describes Russia’s engagement with its perceived adversaries. Throughout all the books, government reports, and scholarly journal articles on what Russia does (and why) in pursuit of its foreign policy objectives, the term “warfare” is used quite frequently, and in my view quite logically. The government leaders of Russia are clearly engaged in a war to influence perceptions, beliefs, and behaviors of others. It’s a war without bullets or tanks, but instead different kinds of weapons are used—especially weaponized information—and there are clear examples of aggression, targets, defenders, tactics, strategies, goals, winners, losers, and innocent victims. The main thrust of its Active Measures program today takes place online, largely (but not exclusively) via social media platforms, using the tactics and tools of digital influence (described in chapter 3) to sow confusion and spread disinformation about its invasion of Ukraine and annexation of the Crimean peninsula, the shooting down of Flight MH17, and many other issues. Many of Russia’s digital influence campaigns have directly targeted the United States. For example, on June 8, 2016, a Facebook user calling himself Melvin Redick, a family man from Harrisburg, Pennsylvania, posted a link to D ­ CLeaks​.­com and wrote that users should check out “the hidden truth about Hillary Clinton, George Soros and other leaders of the US.” The profile photograph of “Redick” showed a middle-aged man in

a baseball cap alongside his young daughter—but Pennsylvania records showed no evidence of Redick's existence, and the photograph matched an image of an unsuspecting man in Brazil. U.S. intelligence experts later announced, "with high confidence," that DCLeaks was a fake news website created by Russia's military-intelligence agency.57 Whoever was posing as Redick was likely a Russian operative. On August 2, 2017, then-National Security Adviser H.R. McMaster fired Ezra Cohen-Watnick from his position as a top intelligence official on the National Security Council (NSC). Cohen-Watnick was an extremely vocal supporter of Trump, and his dismissal followed the departure of other Trump advocates from the NSC in previous weeks.58 Later that evening, at least 11 different Twitter accounts posing as Americans—but operated by Russians working for the Internet Research Agency (IRA) in St. Petersburg—tweeted (and retweeted) a message urging that Trump fire McMaster. Among them was the Twitter account @TEN_GOP, which claimed to be the "unofficial Twitter of Tennessee Republicans." This account encouraged its followers to retweet "if you think McMaster needs to go," and many of @TEN_GOP's 140,000 followers were automated "bot" accounts that then automatically retweeted the message. The intended result of this effort was to flood the social media platform with the perception that a groundswell of support was building for the firing of the U.S. National Security Advisor, a former U.S. Army general who was (and remains) highly respected by both Republicans and Democrats. In August 2018, Microsoft disabled six phony websites targeting conservative think tanks and U.S. Senate staff.59 The sites were apparently designed for a spear-phishing campaign. Another Russian digital influence campaign—dubbed "Operation Secondary Infektion"—used fabricated or altered documents to try to spread false narratives across at least 30 online platforms. According to a report by the Atlantic Council's Digital Forensic Research Lab, and a team of analysts at Facebook who uncovered the operation in June 2019, the network of social media accounts involved "originated in Russia."60 Similarly, in October 2019 documents from the British government were posted online in an apparent effort to either undermine the ruling Conservative Party or sow confusion. According to Ben Nimmo, head of investigations at the social media analytics firm Graphika, this was "either a Russian operation or someone trying hard to look like it."61 As detailed in the investigation report by former FBI Director Robert S. Mueller III, online Russian operatives were increasingly active during the 2016 U.S. presidential election,62 and they have continued to try to influence political issues and debates in America since then. According to a Republican-led Senate Intelligence Committee report released in October 2019, "Russia's targeting of the 2016 U.S. presidential election was part of a broader, sophisticated and ongoing information warfare campaign" using

Facebook, Instagram, YouTube, Twitter, Google, and other major Internet platforms. The report called upon the White House, various government agencies, and the private sector to ramp up efforts to counter this threat in the future. One member of the Committee warned that "Russia is waging an information warfare campaign against the U.S. that didn't start and didn't end with the 2016 election." Another cautioned: "With the 2020 elections on the horizon, there's no doubt that bad actors will continue to try to weaponize the scale and reach of social media platforms to erode public confidence and foster chaos."63 In addition to its efforts to influence public perceptions of democratic elections, Russia's information operations also seek to undermine faith and confidence in the independence and legitimacy of other democratic institutions, including courts and the criminal justice system. And by some measures they have succeeded—those who violently attacked the U.S. Capitol Building on January 6, 2021, clearly rejected the legitimacy of over 60 court decisions that upheld electoral results that they did not like. But beyond Russia, we see other states and non-state actors also using the tools and global connectivity of the Internet—and particularly social media—to launch massive information and disinformation campaigns in order to influence people about politics, science, social norms, and many other (often controversial) topics. For example, in August 2019, Twitter and Facebook revealed a Chinese state-backed information operation launched globally to delegitimize the pro-democracy movement in Hong Kong.64 Twitter said it had taken down 936 accounts that were "deliberately and specifically attempting to sow political discord in Hong Kong." Facebook said it had found a similar Chinese government-backed operation and deleted fake accounts.65 Meanwhile, Google shut down 210 channels on YouTube that it said were part of "coordinated influence operations" to post material about the ongoing protests in Hong Kong.66 Earlier in 2019, Chinese authorities had openly instructed and encouraged hackers to deface websites and attack Telegram accounts of political protestors in Hong Kong.67 A website hosted on Russian servers, "HK Leaks," posted personal details—names, home addresses, personal telephone numbers—of hundreds of pro-democracy protestors.68 In their report Tweeting through the Great Firewall, the Australian Strategic Policy Institute describes how Chinese-language accounts, "leveraging an influence-for-hire network," were used to target Hong Kong citizens and the global Chinese diaspora in a massive effort to discredit the pro-democracy protests.69 China has also been using tools of digital influence warfare, like bot and troll accounts, to spread disinformation about—and manipulate public debates within—Taiwan.70 Specific examples have included exposing dissidents' activities, exacerbating political tensions (including a contentious debate over pension payments), and raising suspicions against leading military and political figures.71 The overall goals of these efforts appear to be to

discredit the secessionist movement, which advocates formal separation from mainland China, and to encourage unity with the People’s Republic of China.72 In addition, China has been aggressively trying to change how the popular information source Wikipedia depicts topics they find politically sensitive.73 For example, a visitor to Wikipedia would normally see that Taiwan is described as “a state in East Asia.” However, anyone can edit Wikipedia entries, and in September 2019, someone (presumably acting on behalf of the Chinese regime) had changed the entry to describe Taiwan as a “province in the People’s Republic of China.” In the English version of Wikipedia, the Dalai Lama is described as a Tibetan refugee, while the Mandarin version of Wikipedia describes him as a Chinese exile. Similarly, the English entry for the Senkaku Islands said they were “islands in East Asia,” but in 2019, the Mandarin equivalent had been changed to add “China’s inherent territory.” The Chinese Wikipedia describes the 1989 Tiananmen Square protests as “the June 4th incident” to “quell the counterrevolutionary riots.”74 Chapters 2 and 6 of this book will examine China’s digital influence efforts in greater detail. Meanwhile, we also see other authoritarian regimes adopting various strategies and tactics of digital influence warfare. On August 21, 2018, the cybersecurity firm FireEye released a report describing “a suspected influence operation that appears to originate from Iran aimed at audiences in the U.S., U.K., Latin America, and the Middle East. This operation is leveraging a network of inauthentic news sites and clusters of associated accounts across multiple social media platforms to promote political narratives in line with Iranian interests. These narratives include anti-Saudi, anti-Israeli, and pro-Palestinian themes, as well as support for specific U.S. policies favorable to Iran, such as the U.S.-Iran nuclear deal (JCPOA).”75 Shortly after the report was made public, Facebook announced the removal of 652 users. According to Nathaniel Gleicher, head of Facebook’s Cybersecurity Policy, their investigation into a network calling itself “Liberty Front Press” found a direct link to the Iranian government. “For example, one part of the network, ‘Quest 4 Truth,’ claims to be an independent Iranian media organization, but is in fact linked to Press TV, an English-language news network affiliated with Iranian state media.”76 A year later, in October 2019, Facebook announced the deletion of 93 Facebook accounts, 17 Pages, and 4 Instagram accounts “for violating our policy against coordinated inauthentic behavior. This activity originated in Iran and focused primarily on the US, and some on French-speaking audiences in North Africa.”77 According to the announcement, “The individuals behind this activity used compromised and fake accounts— some of which had already been disabled by our automated systems—to masquerade as locals, manage their Pages, join Groups and drive people to off-platform domains connected to our previous investigation into the Iran-linked ‘Liberty Front Press’ and its removal in August 2018.”78
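To make the mechanics behind such takedowns a bit more concrete, the short sketch below illustrates one of the simplest signals investigators commonly describe when discussing coordinated inauthentic behavior: clusters of distinct accounts posting identical (or nearly identical) messages within a narrow time window. This is a toy example only; the sample posts, field layout, and thresholds are invented for illustration and are not drawn from any platform's actual detection system.

```python
# Toy illustration only: flag clusters of accounts that post the same text
# within a short time window -- one simple signal (among many) that analysts
# associate with coordinated inauthentic behavior. The data and thresholds
# below are invented for demonstration purposes.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    # (account, text, timestamp) -- hypothetical sample data
    ("@acct_01", "Retweet if you think McMaster needs to go", datetime(2017, 8, 2, 21, 0)),
    ("@acct_02", "Retweet if you think McMaster needs to go", datetime(2017, 8, 2, 21, 2)),
    ("@acct_03", "Retweet if you think McMaster needs to go", datetime(2017, 8, 2, 21, 3)),
    ("@acct_04", "Lovely weather in Nashville today", datetime(2017, 8, 2, 21, 4)),
]

WINDOW = timedelta(minutes=10)   # how close in time posts must be
MIN_CLUSTER = 3                  # how many distinct accounts triggers a flag

def flag_coordinated_clusters(posts):
    """Group posts by normalized text, then flag groups in which several
    distinct accounts posted within the same short time window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[" ".join(text.lower().split())].append((account, ts))

    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda pair: pair[1])          # order by timestamp
        first_ts = items[0][1]
        accounts = {acct for acct, ts in items if ts - first_ts <= WINDOW}
        if len(accounts) >= MIN_CLUSTER:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in flag_coordinated_clusters(posts):
    print(f"Possible coordination: {len(accounts)} accounts posted '{text}'")
```

Real investigations rely on many additional signals (shared infrastructure, account creation dates, posting rhythms, advertising payments), but even this crude heuristic shows why the kind of automated, repetitive amplification described above tends to leave a detectable statistical footprint.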

Facebook also removed 38 Facebook accounts, 6 Pages, 4 Groups, and 10 Instagram accounts that originated in Iran and focused on countries in Latin America, including Venezuela, Brazil, Argentina, Bolivia, Peru, Ecuador, and Mexico. The page administrators and account owners typically represented themselves as locals and used fake accounts to post in groups and manage pages posing as news organizations, as well as directed traffic to other websites.79 And that same month, Microsoft announced that hackers linked to the Iranian government had targeted an undisclosed U.S. presidential campaign, as well as government officials, media outlets, and prominent expatriate Iranians.80 While the above examples are mainly foreign influence efforts—that is, attempts by one country to influence the citizens of other countries—a different sort of case was reported in a recent New York Times exposé. Days after Sudanese soldiers massacred pro-democracy demonstrators in Khartoum in June 2019, a digital marketing company in Cairo began deploying what I refer to as “digital influence mercenaries”81 in a covert operation to praise Sudan’s military on social media. The Egyptian company New Waves—run by Amr Hussein, who retired from the Egyptian military in 2001 and describes himself on his Facebook page as a “researcher on Internet wars”—paid new recruits $180 a month to write pro-military messages using fake accounts on Facebook, Twitter, Instagram, and Telegram. On August 1, 2019, Facebook announced that it had shut down hundreds of accounts run by New Waves and an Emirati company with a near-identical name. Facebook said the Egyptian and Emirati companies worked together to manage 361 compromised accounts and pages with a reach of 13.7 million people.82 According to the report, they spent $167,000 on advertising and used false identities to disguise their role in the operation.83 Several other countries are also focused on fabricating narratives that they force-feed to their own citizens through state-owned media and government-controlled bot networks on social media (which we’ll examine in chapters 2 and 6). A 2019 report by Oxford University’s Computational Propaganda Project found evidence of disinformation and propaganda attempts to manipulate voters and others online in 70 countries.84 A majority of these attempts were by domestic actors trying to influence domestic targets. For example, Rodrigo Duterte (President of the Philippines) encourages “patriotic trolling” to undermine his critics.85 A 2017 Oxford Internet Institute report describes how “many of the so-called ‘keyboard trolls’ hired to spread propaganda for presidential candidate Duterte during the election continue to spread and amplify messages of his policies now that he’s in power.”86 Few states are more committed to spreading disinformation among their own people than Russia. Michiko Kakutani, the cultural critic and author of The Death of Truth, has observed how Russia uses propaganda “to distract and exhaust its own people (and increasingly, citizens of foreign

countries), to wear them down through such a profusion of lies that they cease to resist and retreat back into their private lives."87 A Rand Corporation report called this "the firehose of falsehood"—"an unremitting, high-intensity stream of lies, partial truths, and complete fictions spewed forth with tireless aggression to obfuscate the truth and overwhelm and confuse anyone trying to pay attention."88 According to the report, "Russian propaganda makes no commitment to objective reality," instead relying on "manufactured sources" and "manufactured evidence (faked photographs, faked on-scene news reporting, staged footage with actors playing victims of manufactured atrocities or crimes). . . . Russian news channels, such as RT and Sputnik News, are more like a blend of infotainment and disinformation than fact-checked journalism, though their formats intentionally take the appearance of proper news programs."89 In fact, as Kakutani notes, "The sheer volume of dezinformatziya [see chapters 2 and 6] unleashed by the Russian firehose system . . . tends to overwhelm and numb people while simultaneously defining deviancy down and normalizing the unacceptable. Outrage gives way to outrage fatigue, which gives way to the sort of cynicism and weariness that empowers those disseminating the lies."90 Further, monitoring the online activities of its citizens and controlling all forms of access to information online have become hallmarks of authoritarian regimes. For example, Russia has forced search engines to delete certain search results, required messaging services to share encryption keys with security services, and made social network companies store their user data on servers inside the country (to which the government presumably has full access). Beginning in July 2020, Russia will also require all smartphones, computers, and smart TV sets sold in the country to come preinstalled with Russian software.91 As we'll discuss in chapter 6, sometimes a government will simply shut off the country's Internet access altogether in order to ensure control over what its citizens can say or do online. In November 2019, Iran did this for nearly an entire week. While businesses, universities, government agencies, and other institutions may have suffered from such an act, the underlying logic appeared to be, "Why bother to compete for influence online when you have the power to completely shut off the competition's voices?" Meanwhile, India's shutdown of access to the Internet in Kashmir is the longest ever imposed in a democracy. It began on August 5, 2019, and by mid-December, the province had been without Internet access for 134 days.92 But thus far, no country has done more than China in using the Internet to influence and control the political and social behavior of its own citizens. With its social credit program and its filtering of Internet search results, China has essentially created its own nationwide digital influence silo (see chapter 5).
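As a purely hypothetical illustration of how little machinery this kind of control actually requires, consider the sketch below: a trivial keyword blocklist applied to search results before they ever reach the user. Nothing here is drawn from any real censorship system; the blocked terms, sample results, and function names are invented. But it shows how a gatekeeper sitting between users and information can silently narrow what appears to exist.

```python
# Toy illustration only: a trivial keyword blocklist applied to search results,
# showing how easily a gatekeeper can shape what users are allowed to see.
# The blocklist and result data are invented for demonstration purposes.
BLOCKED_TERMS = {"protest", "opposition rally", "leaked report"}

def filter_results(results, blocked_terms=BLOCKED_TERMS):
    """Return only the results whose title and snippet contain none of the
    blocked terms -- the user never learns that anything was removed."""
    visible = []
    for title, snippet in results:
        text = f"{title} {snippet}".lower()
        if not any(term in text for term in blocked_terms):
            visible.append((title, snippet))
    return visible

search_results = [
    ("Official statement on economic growth", "Ministry reports strong results."),
    ("Thousands join opposition rally downtown", "Crowds gathered despite warnings."),
    ("Weather forecast", "Sunny skies expected this weekend."),
]

for title, _ in filter_results(search_results):
    print(title)   # the rally story silently disappears from view
```

Real systems are vastly more elaborate, and they combine filtering with surveillance and legal pressure, but the underlying logic of a nationwide influence silo is essentially this: whoever controls the filter controls the visible world.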

Here in the United States, we also see a range of domestic-oriented influence efforts. For example, a number of reports have emerged detailing the microtargeting of veterans in the United States. For years, online scams and fake accounts that exploit or target American veterans have proliferated throughout the Internet, including on Twitter, Facebook, and Instagram.93 Images of deceased veterans are used as bait in romance scams, memes are spread about desecrated graves in order to provoke anger, and misleading articles about the possible loss of health benefits worry veterans and their families who rely on them. On November 13, 2019, the House Committee on Veterans’ Affairs convened a hearing titled “Hijacking Our Heroes: Exploiting Veterans Through Disinformation on Social Media,” in which veterans testified about many instances of these things. An extensive report was also published by the Vietnam Veterans of America, describing the “persistent, pervasive, and coordinated online targeting of American service members, veterans, and their families by foreign entities who seek to disrupt American democracy. American veterans and the social-media followers of several congressionally chartered veterans service organizations were specifically targeted.”94 And on May 22, 2019, an online video of a speech by then-House Speaker Nancy Pelosi was altered to make it seem that her speech was slurred and incoherent and then posted and forwarded by a flurry of Twitter, YouTube, and Facebook accounts.95 One version of the video, posted by the conservative Facebook page Politics WatchDog, had been viewed more than 2 million times within the first 24 hours of being online and had also been shared more than 45,000 times, garnering over 23,000 comments with users calling her “drunk” and “a babbling mess.”96 The video, as numerous experts in computer science and information technology verified, had been slowed to about 75 percent of its original speed, and the pitch of the speaker’s voice had been further modified. A separate video of Pelosi speaking at a news conference was similarly altered (to make her seem like she was stumbling, slurring her words as if she were highly intoxicated)—and then posted to Twitter by then-president of the United States, Donald Trump (see Figure 1.1), amplifying its perceived legitimacy among millions of viewers. In less than 24 hours, the altered video had been viewed more than 3.5 million times on Twitter, earning 70,000 likes and 22,000 retweets.97 Of course, modifying the audio and video recordings of politicians in ways that are meant to disparage and embarrass them is nothing new; we’ve seen that for over half a century. In fact, when a deceptively edited video clip of Democratic presidential candidate Joe Biden circulated on social media in August 2020—which cuts an hour-long speech to less than one minute, retaining only parts of statements and his pauses between words—some observers considered this to be the new norm.98 Today’s

Figure 1.1  Example of then-President Trump using his Twitter account to impugn a political opponent by distributing a doctored video to his followers. (https://twitter.com/realdonaldtrump/status/1131728912835383300)

technology offers cheap and easy ways to do this, along with the ability to distribute the manipulated video as part of a misinformation campaign of unprecedented scale and speed. In May 2019, YouTube took down the altered Pelosi video fairly quickly, since it has long prohibited videos altered with the purpose of deceiving the public. (YouTube has, however, allowed other hoaxes on the platform so long as they don't promote violence or alter a video clip.) Twitter, meanwhile, kept the video up on its platform. Millions of Trump supporters (many of whom apparently despise anyone who disagrees at all with Trump) tried to influence others' views about Pelosi by distributing copies of this altered video clip and defending its "authenticity" even after many news reports and experts quickly revealed it to be fake. And unfortunately, as examined in later chapters of this book, there are millions of people in the United States (and billions more worldwide) who believe what they see online regardless of overwhelming evidence proving it's entirely untrue. If nothing else, as Samantha Cole put it, this video "proves that rudimentary editing and willingness to prey on people's hate for public figures is all they need in order to successfully spread misinformation across the Internet."99 These are just a few examples of digital influence warfare, provided for purposes of illustration; many more appear throughout this book. They reflect how the strategies, tactics, and tools described in chapters 2

and 3 have been used by a variety of states and non-state actors to achieve political power, security, and other kinds of goals and objectives—like social activism, science denial, cyberbullying, economic warfare, and much more. Thus, my approach in this book is to look beyond "political warfare" or information operations and focus instead on the core goal of influencing a target, how it is done, and what enables the influencer to be successful when doing so.

ORGANIZATION OF THE BOOK

Okay, if you've made it this far into the introduction to the book, I'm going to take a leap of faith and assume you want to know more, so here is what the rest of the chapters will cover. First, chapter 2 will focus on the various strategic goals and objectives pursued by these kinds of operations. After briefly reviewing some pre-Internet-era examples of influence warfare, we'll discuss a broad range of goals and objectives being pursued through digital influence warfare. For the sake of simplicity, much of this discussion is framed in terms of influencers (or information aggressors) and targets. The means of influencing can vary widely, from spreading blatant lies and disinformation to emotionally provocative (but factually accurate) videos and images, as described in chapter 3. But before choosing which tactics and tools to deploy against the target, the influencer must have a clear sense of what goals they want to achieve. We will also examine specific examples like China (with its "Three Warfares" doctrine)100 and Russia (with its Information Security Doctrine of 2000).101 Both countries have what a Stanford Internet Observatory report calls "full-spectrum propaganda capabilities," and each has amassed prominent Facebook pages and YouTube channels targeting regionalized audiences, though the use of those pages differs according to the kinds of goals and objectives they want to achieve.102 And this chapter will also review some examples of non-state actors and the strategic goals they pursue using the tactics and tools of digital influence warfare. Those tactics and tools are the main focus of chapter 3. After a brief explanation of the similarities and differences among the major social media platforms, and the importance of gathering and analyzing quality data on potential targets, the discussion proceeds through three categories of digital influence tactics: deception, provocation, and direct attacks. Within these categories, we find a broad range of specific tactics and terms that may or may not be familiar to most readers. Some tactics are used to discredit institutions that are dedicated to distinguishing between true and false information, while others seek to amplify social grievances, polarization, personal frustration, and anxiety.103 As we'll see in later chapters of the book, tactics within the categories of deception and provocation are particularly effective for spreading disinformation and for

Figure 1.2  An incomplete but representative list of terms used to describe tactics and tools used by digital influence attackers against their targets. For abbreviated definitions of these and many other terms used throughout this book, a "Glossary of Terms" is available online at http://www.DIWbook.com.

exploiting uncertainty and confirmation biases. For the purposes of illustration, Figure 1.2 provides an incomplete but representative list of these tools and tactics. Some of the terms listed here have been fairly well understood and used for many years to describe various kinds of traditional influence operations—like misinformation (untruths or partial truths mistakenly shared), disinformation (intentional spread of falsehoods), malinformation (leaks and harassment strategies),104 and propaganda—while other terms (like doxxing, clickbait, hashtag flooding, and search engine optimization) are relatively new and specifically related to the online technologies being used by attackers in various kinds of influence operations. Each of these terms will be explained in this book. Attempts by influencers to deceive a target are particularly common. As Martin and Shapiro describe in a 2019 report, an influencer can easily produce and disseminate “content that is meant to appear as being produced organically in the target state (e.g., fake accounts, fake press organizations, trolls posting as citizens of the target state)” and may “involve promoting true content as well as false or misleading information.”105 The discussion

in this chapter will travel far beyond the contemporary debates about fake news websites, hijacked social media accounts, and trolls. For example, massive amounts of highly plausible fabricated video and audio material are being disseminated in order to reduce the target society’s confidence in a shared reality and intensify their loss of faith in institutions (and each other). In addition to popular social media platforms like Facebook and Twitter, there are many online community sites where users have been prolific in the creation and sharing of deepfake images and videos, “memes,” or other viral content. These include Reddit, 4chan, MetaFilter, and Tumblr. Further, recent years have seen the rapid transmission of deepfake technologies from the research lab to user-friendly applications. What started in 2017 with the FakeApp software has now evolved to an open-source version called Faceswap, and the code for accomplishing face swaps is now openly available as a software package called DeepFaceLab.106 Anyone with even a modest degree of computer literacy can now engage in forms of digital influence warfare. And both chapters 2 and 3 discuss the importance of assessing the impacts of digital influence efforts and then using this information to refine an influence campaign in order to produce greater levels of effectiveness. This assessment and refinement process reflects the kind of organizational learning we have witnessed thus far among the more sophisticated influence warfare attackers. And the ability to gather and analyze data on the target’s reception and reaction to the influence efforts has never been easier, thanks to the Internet. In fact, perhaps the most important contributor to the success of digital influence warfare is the extensive amount of information we provide about ourselves. The advances of social media platforms and search engines have provided a tremendous windfall for the influencers seeking information about you. By monitoring your use of their platforms, companies like Facebook, Twitter, and Google are able to use algorithms to carefully tailor (or “personalize”) your online experience in a way they believe you will like, increasing your commitment to using their platforms more frequently. With a few exceptions, everything you do online can be tracked and recorded, information that can then be used to create a digital profile about your preferences, shopping habits, social and political views, organizational affiliations, religious beliefs, friends, families, and much more. For the influencer, this amount of information about the target is a goldmine, allowing them to fine-tune their strategies and tactics in ways that will be increasingly effective. The websites you visit and the social media posts that you “like” are revealing, but there is much more. The ability to identify whom you “follow” on Twitter or Facebook allows the influencer to gain insight into whom you trust, the people you want to hear from or keep in touch with. Tracking and analyzing the preferences, habits, and so forth of those individuals provides an even greater amount of information

about you that can then further increase the pinpoint accuracy of an influence effort. And from this information, the influencer can then refine and recalibrate their tactics and tools as needed, to include trying different message formats and contents or choosing new targets. Monitoring the success of an influence campaign is made especially easy by social media platforms that capture data on their user’s preferences and activities, as described later in chapter 3. Following these explorations into the strategies and technical mechanics of digital influence warfare, the book turns to examine the psychology of persuasion and influencing. Drawing from research literature on the psychology of persuasion, chapter 4 identifies the main components of influence attempts including the attributes of the influencer, the attributes of the target to be influenced, the content and format of the messages used in the influence attempt, and the context in which those messages will be considered salient and relevant by the target (including how the media often provide that contextual relevance). This discussion is informed by the work of several scholars in the field, including Robert Cialdini’s research on the principles of social influence and key concepts like reciprocity, social proof, commitment and consistency, liking, authority, and scarcity. Chapter 4 also addresses the importance of contextual relevance in effective strategies of persuasion. Because each of us has our own unique collection of beliefs, values, and perceptions about the world and our place within it, the influencer must gather intelligence about their target— including what they like and dislike, what they want to see or don’t want to see in their world—and then incorporate that into the strategy and tactics accordingly. You (the target) may not initially care about Topic X, but if the influencer can find ways to directly connect a certain view about Topic X with something you do care about, the odds of you paying attention will naturally increase. So, this chapter explores how contextual relevance can be identified and manipulated by an information aggressor in order to increase the target’s likelihood of being influenced. Information that is conveyed via a trusted source, or that is novel, sensational, or emotional (for example), can attract the attention of the target more than other kinds of information. Here’s an example of why this matters: in order to effectively utilize the tactic of “ragebait” to provoke outrage among the target audience, it is first necessary to understand the values and beliefs that frame what that audience views as something to get worked up about. Finally, the chapter briefly reviews how uncertainty, fear, conformity, and confirmation bias play significant roles in how people respond to these influence efforts. There is a considerable body of research in social psychology that identifies how group identity impacts an individual’s information processing choices. For example, consider the following two statements: “I hate X, and something bad is being reported

about X; therefore, it must be true.” And “I like A, and something bad is being reported about A; therefore, it must be lies.” This is essentially how in-group identity and “othering” can shape perceptions of a target audience. As a recent Rand Corporation report notes, influence operations “are likely to take the form of targeting subsections of the population to intensify divisions and polarization rather than attempting to shift or create new beliefs wholesale in a population.”107 These subsections are manifest in the ways that a majority of Americans now access their news primarily via social media.108 In many cases, the target’s values and beliefs are shared among groups of people, allowing for various forms of collective behavior in areas of politics, religion, social norms, and so forth. This is particularly the case inside what I call digital influence silos, the topic of chapter 5. Essentially, these are virtual bubbles of information in which we surround ourselves with factoids and narratives that confirm what we want to believe and effectively block out any kinds of information that questions or contradicts those beliefs. Many readers will have seen these referred to by other terms as well—like filter bubbles, influence bubbles, and echo chambers109—but the overall concept is that we live in a world in which we are able to surround ourselves extensively and exclusively with information that confirms what we want to believe. Our ability to avoid (or even ignore the existence of) information that questions or contradicts what we want to believe has never been greater. This gives a huge advantage to the malicious actor using digital influence strategies and tactics to achieve their goals. All that is now required is to tailor their message in a way that conforms to what we want to believe. For example, if the goal is to sow discord and animosity among members of a community, the data and tools are now available for identifying disagreements and seams of latent distrust. Within any community, there are always in-groups and out-groups, and it is increasingly easy to identify the members of each. The next step is to simply provide the kinds of information (or disinformation) within each influence silo that will exacerbate distrust toward those “others” outside the silo, add more fuel of hostility to the disagreements, and then sit back and watch them go at each other’s throats. If the goal is to encourage a “rally around me and my cause” effect, first convince the members of a particular influence silo that your issue is directly tied to whatever they care most about. Then create the illusion that what they care most about is threatened by those outside their influence silo and that what you want is also threatened by the same out-group. You can also deploy tactics of identity politics to convince some members of the influence silo that they are not fully committed or being true to the in-group unless they act in some way (usually in defense of what you want them to believe is threatened). As Oxford University’s Philip Howard notes, digital influence efforts are most effective when the

messages are "delivered by a relatively enclosed network of other accounts and other content that affirms and reinforces what people are seeing."110 As we'll address in chapter 5, modern social media platforms have helped create a variety of competing influence silos in at least two ways.111 First, we have a tremendous amount of freedom to actively seek out information that we believe will provide us with what we want to know, and quite often, we prefer sources of information that confirm what we already tend to believe. Further, we can ignore (or even block out) other information sources that question or contradict what we want to believe. Meanwhile, influence silos are also nurtured by an automated personalization of information involving algorithms over which we have virtually no control, yet which show us what the computer believes we will want to see. As a result, the computer becomes a channel through which we can be effectively influenced, and an increasing number of malicious actors are hacking into the channel to manipulate what we see and hear. This is why the term "digital" is used throughout the book to mean something that relates directly to the online information ecosystem. In chapter 6, we examine the ultimate manifestation of digital influence silos, in two separate forms. In authoritarian countries, as described in the first part of the chapter, we find governments seeking (and sometimes successfully achieving) a form of information dominance in which all the information available to the country's citizens is highly controlled. In one sense, they are able to establish a nationwide influence silo, within which individuals only see and hear information that has been preselected for their consumption. When this is possible, your target audience has no choice but to hear your narrative, and yours alone. Authoritarian regimes are perennially manipulating public opinion and perceptions by curating and controlling the information their population is allowed to see, and the Internet provides the means to do this in more ways than we have ever known before. Further, by increasing uncertainty among members of the target audience, within an environment where your ability to influence the target is unchallenged and unconstrained, you can convince them of virtually anything. Authoritarian regimes also seek to establish and maintain information dominance as a means of reducing uncertainty among their citizens about what to believe. Essentially, they replace one choice with another: trust what we tell you and do what we tell you, or face dire consequences. Countries like China, Russia, Turkey, North Korea, and Iran respond to questions of uncertainty by simply imposing their own information dominance. Disinformation is crafted by government leaders and fed to the population through government-controlled media and multiple online forms of communication. Competing sources of information, particularly if they question or contradict official narratives, are simply banned; uncontrollable journalists are jailed (and in some instances killed); search

engine results are limited to only acceptable sources of information; and many other tactics are used to create informational barriers and to shape perceptions of the population. However, this kind of information dominance is not readily available in truly democratic countries, where freedom of speech and expression is protected and where citizens can access a broad range of information sources. So instead, influencers wishing to achieve a similar level of power will often pursue what I call "attention dominance," something that is made increasingly possible through the algorithms of social media platforms, search engines, and website trackers. We'll discuss examples of this in the second part of chapter 6. Replicating information dominance may be more difficult in an open democratic society, but the Internet has now provided the means to do so in unprecedented ways. By crafting and co-opting influence silos that reinforce the cognitive biases and preferences described in the previous chapter, the tools and tactics of digital influence warfare can be used to shape reality in a way that makes it increasingly easy to block out any dissenting types of information and their sources. Platforms like Facebook and Twitter use a variety of algorithms to filter the information we see in our daily "news" feed based on what we most likely "want" to read. The business model of social media companies relies on clicks and preferences, not telling people what they "should" know. This, in turn, leads to a fragmented digital information environment that can virtually isolate people within ideologically partisan communities that have no access to (or interest in) any type of information that does not conform with their preferences, prejudices, and beliefs. Because of this kind of environment, the behaviors of the target can be manipulated, particularly by influencers who appear to be aligned with the target's previously established preferences. As a result, we are now rapidly hurtling toward a future of influence attacks using increasingly realistic deepfake videos, audio clips, and images—many of which are intentionally trying to ruin trust in specific individuals and institutions.112 Together, these factors explain how blatant lies and disinformation can incentivize behavior like clinging to beliefs even in the face of overwhelming evidence that proves those beliefs to be in error. Given the choice between an inconvenient truth and an enjoyably confirming falsehood, many of us will choose the latter. Finally, the book concludes with a brief look at what we should anticipate for the future of digital influence warfare. For example, advances in artificial intelligence will make fake photos and videos increasingly difficult to detect. Russia will increasingly rely on its complex international network of hackers, activists, and informal propagandists to further pursue its strategic and foreign policy objectives, while China will expand its use of Chinese citizens and ethnic Chinese abroad to further its control over key narratives. These and other countries will find new ways to mask their involvement in digital influence efforts against domestic and foreign

targets.113 Meanwhile—as explained by a recently published Rand Corporation report—"the conflicts for ideological supremacy emerging between influence silos are encouraging new forms of widespread cyberharassment, and in time this will result in the Internet becoming a notably crueler and more intimidating space."114 The future will likely bring an increase in various forms of cyberharassment attacks, such as creating false websites with allegedly compromising information; generating fake videos using high-grade digital mimicry programs that allegedly show the targets stealing, killing, or in intimate contexts; hacking official databases to corrupt the targets' tax or police records; sending critical, crude, and self-incriminating emails to dozens of friends and colleagues, seemingly from the target, using spoofing techniques to conceal the origins of the messages; and hacking targets' social media accounts in order to post offensive material supposedly in their name.115

CONCLUSION

As Carl Miller explains, "With states, political parties and individuals jockeying for ever-greater influence online, you and your clicks are now the front line in the information war."116 Whether you are using your smartphone, desktop, laptop, tablet, Internet-connected television, or any other means to go online, anything you see on that screen is inherently digital—the words, images, and sounds are all based on various compositions of digits (1s and 0s). This technological environment (and the ways in which we interact with it) offers new tools and tactics for influence warfare. The revolution in communications technology driven by the Internet has created a new, more expansive market of ideas. Individuals are now empowered to reach massive audiences with unfiltered messages in increasingly compelling and provocative packaging, rendering the competition for mass influence more complex. The emergence of new means of communication and new styles of virtual social interaction has transformed the context for mass persuasion and has expanded opportunities for anyone to disseminate their message.117 Social media is particularly central to digital influence warfare. Not only can we easily become overwhelmed by the volume and diversity of information in our social media feeds, but far more of that information is trivial, one-sided, or outright fake than anything we have encountered before. This makes it increasingly difficult to distinguish fact from fiction, or evidence-based arguments from biased opinion, and the result is greater uncertainty and misperceptions about what is true and what is not. This book is not just about the technology—automated fake accounts, data algorithms, deepfake videos, and so forth—that underpins digital influence warfare. The book is essentially about real people doing real things with

real consequences. It’s about the intersection of human behavior, beliefs, technology, and power. Whether they’re trolls paid by Russian government agencies, politicians who lie, Chinese censors, or violent extremists, the Internet is simply the means by which they are trying to achieve certain influence objectives. Further, many forms of digital influence increasingly focus on getting real people—our family, friends, colleagues—to share and retweet lies and disinformation, something that has become all too easy today given the rise of digital influence silos and our own reliance on cognitive biases to sort through a confusing avalanche of information. Unfortunately, democratic societies are considerably vulnerable to disinformation, resulting in distorted public perceptions fueled by algorithms that were originally built for viral advertising and user engagement on Internet platforms. Further, as Thomas Rid observes, “Disinformation corrodes the foundations of liberal democracy, our ability to assess facts on their merits and to self-correct accordingly.”118 Today’s disinformation can include a wide variety of digital items—from images and videos to official documents—that can all be fabricated or altered in ways that manipulate our perceptions and beliefs about something. And disinformation is just one of several variants of digital influence warfare. Leaking confidential documents and correspondence to the public for malicious effect isn’t considered disinformation (the spread of falsehoods), but it is an act driven by similar kinds of influence strategies and goals. Similarly, factually accurate information can be used to provoke certain kinds of behaviors among the target audience of an influence operation. To sum up, a wide array of strategies, tactics, and tools of digital influence warfare will increasingly be used by foreign and domestic actors to manipulate our perceptions in ways that will negatively affect us. According to a UNESCO report, the danger we face in the future is “the development of an ‘arms race’ of national and international disinformation spread through partisan ‘news’ organizations and social media channels, polluting the information environment for all sides”119 Tomorrow’s disinformation and perceptions manipulation will be much worse than what we are dealing with now. The future also promises to bring darker silos of deeper animosity toward specifically defined “others” who will be deemed at fault for the grievances of the silo’s members. With this will come a higher likelihood of violence, fueled by emotionally provoking fake images and disinformation (from internal and external sources) targeting the beliefs of the silo’s members. This is the future that the enemies of America’s peace and prosperity want to engineer. We must find ways to prevent them from succeeding. At the end of the day, one of my goals for writing this book has been to encourage each of us to look more closely at how our own decision-making is being influenced each day, by whom, and what their goals might be. When we stop and think about the influences

we are experiencing, we tend to be less open to digital influence and exploitation. But if we do not treat this battlefield with greater attention and urgency, identifying and confronting the various forms of digital influence warfare used on it, we will succumb to whatever strategic goals our adversaries pursue. We must confront this threat, collectively and urgently.

CHAPTER 2

Goals and Strategies: Influencing with Purpose

In their groundbreaking 2001 book Networks and Netwars, John Arquilla and David Ronfeldt described how "the conduct and outcome of conflicts increasingly depend on information and communications. More than ever before, conflicts revolve around 'knowledge' and the use of 'soft power.' Adversaries are learning to emphasize 'information operations' and 'perception management'—that is, media-oriented measures that aim to attract or disorient rather than coerce, and that affect how secure a society, a military, or other actor feels about its knowledge of itself and of its adversaries. Psychological disruption may become as important a goal as physical destruction."1 How prescient their observation seems today, particularly given the proliferation of digital influence efforts described in the previous chapter. As Carl Miller observes, "Digital warfare has broken out between states struggling for control over what people see and believe."2 And it's not just states engaged in these struggles—we see politicians, companies, terrorists, and many others using the same strategies, tactics, and tools in order to influence the beliefs and behaviors of their targets. Modern influence warfare can be viewed as encompassing a combination of political warfare, psychological operations, and information operations (including propaganda). The principles of influence warfare are based on an ancient and much-repeated maxim, attributed to the Chinese general and military theorist Sun Tzu, paraphrased as: "To win one hundred victories in one hundred battles is not the highest skill. To subdue the enemy without fighting is the highest skill."3 The goals and objectives of what I call digital influence warfare are thus mostly about power. The attacker (influencer) wants to gain the power to achieve the goals articulated in their strategic influence plan. The target, meanwhile, wants the power to resist and reject the influence attempts of the attacker.

By paraphrasing a recent report by the Rand Corporation, we can clarify some aspects of influence warfare. To begin with, a central goal of an influence attempt is "to cause the target to behave in a manner favorable to the influencer."4 The influencer may seek to disrupt the target's information environment—for example, interrupting the flow of information between sources and intended recipients within an organization or, on a broader level, between the target's government and its citizens. Similarly, the influencer may also seek to degrade the quality, efficiency, and effectiveness of the target's communication capabilities, which may involve flooding channels of communication with misinformation and disinformation—undermining the perceived credibility and reliability of information shared among the adversary's organizational members (government or corporate) or between the target's government and its citizens.5 In his book Strategy in Information and Influence Campaigns, Jarol Manheim offers a range of goals often pursued through influence efforts:6
• Reinforcing an existing, deeply held perception, preference, or attitude, or one firmly integrated into a belief system or attitude cluster;
• Reinforcing an existing, lightly held perception, preference, or attitude, or one that remains isolated or only loosely integrated into a belief system or attitude cluster;
• Attaching a new preference to an existing perception, or linking an existing perception to an existing belief system;
• Implanting a new perception where none existed previously;
• Introducing a belief system to integrate existing perceptions, or an attitude cluster to integrate existing lower-order attitudes;
• Stimulating action based on an existing attitude or preference, or on an existing attitude cluster or belief system;
• Stimulating action based on an existing perception that is not linked to a conscious preference;
• Stimulating action that occurs without reference to perceptions, preferences, or attitudes;
• Changing an existing, lightly held perception, preference, or attitude, or one that remains isolated or only loosely integrated into a belief system or attitude cluster;
• Changing an existing, deeply held perception, preference, or attitude, or one firmly integrated into a belief system or attitude cluster.
Whatever tactics are chosen, the influencer seeks to weaponize information against a target in order to achieve some kind of power. For example, as psychologist Kathleen Taylor explains, "Social power is the ability to impact other people."7 Sometimes the goal may be to change the target's beliefs and behaviors, prompting them to question those beliefs in the hope that, once the beliefs have been undermined, the target may change their

minds. You could even seek to manufacture uncertainty (see chapter 4) in order to convince the target that nothing may be true and anything may be possible. In other instances, the goals of your influence strategy could include strengthening the target's commitment to their particular beliefs and reinforcing their certainty, biases, and trust in things that are actually untrue. Another goal of influence warfare may be to identify and then exploit existing vulnerabilities within the target society—for example, exacerbating political, socioeconomic, and ethnic divisions to foment distrust, uncertainty, and fear. Further, using influence silos (described later in this book), the influencer can achieve their goals by gaining the active support and engagement of the targeted society's members. An underlying problem is that we really don't want our beliefs and ideas to be challenged by anyone, because it elicits uncertainty and discomfort. One of the core attractions of the Internet is the fact that people can find whatever kinds of information they want, especially if it serves to confirm their own biases. We tend to follow the social media accounts of only those individuals whom we are likely to agree with or learn from; the same decision-making informs which news articles and other sources of information we choose to access. Meanwhile, social media platforms and other Internet companies can access a wealth of data and algorithms (described in chapter 3) to identify an individual's beliefs and ideas, interests, likes and dislikes, patterns of online activity, and much more—all of which can then be used to tailor their online experiences in ways that satisfy their desire for self-validation and social proof. As a result, the influence silos we opt into help us create an informational barrier around ourselves that prevents us from seeing opposing viewpoints. This has the effect of reinforcing a sense of self-perception that may be far removed from reality. Chapter 4 will review this topic in much greater detail. Furthermore, because there are so many different influence silos, each with its own identity orientations and goals, there is a diminishing likelihood that individuals will communicate across silo boundaries to discuss disagreements and resolve differences. By aligning a narrative with emotional provocation, tactics of persuasion, and detailed knowledge about attributes of the target, the influencer can enlist the target in amplifying the narrative, usually without the target even recognizing that they are doing so. As the Nazi propagandist Joseph Goebbels once noted, "This is the secret of propaganda: Those who are to be persuaded by it should be completely immersed in the ideas of the propaganda, without ever noticing they are being immersed in it."8 This is the kind of power that malicious actors hope to gain by using the tools and strategies of digital influence warfare. The goals of an influence campaign are often political in nature and could include the mobilization of people and resources toward political

protests; the degradation of trust among members of a democratic society; and increasing societal levels of anger, animosity, disappointment, apathy, and disengagement. Further, according to Oxford researcher Philip Howard, politically oriented digital influence campaigns aren't really about achieving effects across an entire country. Rather, "it is network-specific effects, sited in particular electoral districts, among subpopulations, that are sought. The ideal outcome for political combatants is not a massive swing in the popular vote but small changes in sensitive neighborhoods, diminished trust in democracy and elections, voter suppression, and long-term political polarization."9 One of the most pernicious forms of digital influence warfare, as we'll examine later in this book, involves the attempts to fracture the bonds of trust between governments and the governed. While Russia's attempts in this arena have been most prominent (particularly in Ukraine, Europe, and the United States), other countries are also trying to do this in order to weaken the social unity of the target country. They do so on the premise that a divided populace is less likely to support the policies of its government, including foreign policies that matter to Russia (or whatever country is behind the influence efforts). A healthy democracy requires an ability to produce, share, and access quality information, supported by trusted institutions of government, media, academe, nonprofit organizations, and others. When the integrity of that information is compromised, attacks against the fabric of a democracy bear fruit. And there are also second- and third-order effects of a damaged citizen-state relationship—for example, lower levels of societal resilience, less confidence that the government will be there to aid you in times of crisis, and more skepticism and suspicion about the core intentions of anyone involved in public service at any level. Meanwhile, some influence goals could be exclusively economic rather than political. For example, an influencer's goal could be to undermine trust in a specific company, its leaders, or the quality of its products and services, leading to a plummeting value for that company's stock. An attacker could gain financially from these types of attacks by (for example) investing in a competitor to the company being targeted. In recent years, some businesses have been hit by fake correspondence and videos that have hurt their stock prices. Company executives now have to stay vigilant for erroneous online headlines or false information being disseminated about their company and to reach out to social media companies to get it taken down before it spreads. Some are hiring third-party firms that specialize in this kind of service.10 In early 2019, several news media outlets (including the Financial Times and CNBC.com) were deceived by a letter claiming to be from Larry Fink, the chief executive of the investment firm BlackRock, who typically sends out an annual note about the company's efforts around pro-social investing. The hoax also

involved several fake BlackRock Twitter handles and a sophisticated website made to look like the company's own.11 In addition to attacks that could impact a company's stock value, influence warfare measures can also be used to attack specific industries (like portraying tobacco companies as inherently deceitful and evil), which then use the same weapons of misinformation and disinformation in response. Even an entire country could be targeted by economic-oriented forms of influence warfare. For example, in 2013 hackers used the Twitter account of The Associated Press to send a fake message claiming that President Barack Obama had been injured in an explosion at the White House. Within minutes, the stock markets had dropped, and a flurry of anxious social media discussions ensued—until it was proven to be a hoax.12 The goal of manufacturing economic disruption in the United States was achieved, at least temporarily.

To sum up, the overall goal of influence warfare is to win. Each of us—you, me, the individual reader, viewer, voter, consumer—is the target and the spoils of victory. The influencer wants to shape your perception of what is important in this world and why, as well as how to feel about certain events, trends, people, and so forth. The influencer can be a government, a terrorist group, a politician, a social movement, or some other kind of entity that has the goal of affecting your perceptions and behaviors in ways they hope will benefit them. Others influence us as well, but perhaps without a specific (or ulterior) agenda in doing so, like a mainstream media service, a trustworthy blog or website, educators, community leaders, a friend or family member, or even an individual you admire but have never actually met. Indeed, influencers can (and sometimes do) try to shape a target's perceptions and behaviors in ways that will be most beneficial for the target. Within the framework of influence warfare, the battle is won when the target embraces the values, beliefs, and behaviors that the influencer wants them to adopt.

INFLUENCE WARFARE: A BRIEF HISTORY

Before addressing the goals and strategies of digital influence warfare, it is important first to acknowledge how this kind of activity merely represents online versions of something that is not new. In truth, various forms of influence warfare have been a part of domestic and international politics since the times of the first Roman Emperor.13 When the thirteenth-century Mongols were rolling across Eurasia, they deliberately spread news of the atrocities they had perpetrated on cities that did not surrender, the obvious goal being what Sun Tzu argued was the ultimate victory: to defeat the enemy before a single shot had been fired. As Marc Galeotti explains, fear is a powerful emotion, and in this instance, it was used to coerce the behavior of cities the Mongols had in their sights, preferring that they
surrender instead of having to spend valuable resources conquering them through force.14 Mongol hordes would also drag branches behind their horses to raise dust clouds, suggesting their armies were far larger than reality—an early and effective form of deception and disinformation.

Naturally, authoritarian regimes have long recognized that a cornerstone of any successful effort to influence society involves controlling at least some of that society's primary sources of information. Prior to the 1917 October Revolution, the Soviets established the newspaper Pravda. Decades later, the Nazi party in Germany established its own publishing company, and during his reign, Hitler held the press captive by systematically rewarding agreeable journalists (with choice interviews, promotions, and party favors) and punishing those who disagreed with Nazi policy (by limiting their access to news, subjecting them to government investigations, and suspending their operating licenses).15

Various forms of influence warfare played a major role in both world wars of the previous century. For example, the Committee on Public Information was created during World War I by President Woodrow Wilson to facilitate communications and to serve as the worldwide propaganda organization on behalf of the United States.16 Influence warfare also featured much more prominently during World War II—and not just on the part of Joseph Goebbels and the Nazi regime. In 1942, President Roosevelt established the Office of War Information with a dual purpose. Domestically, the office worked to mobilize the country in support of the war effort, through patriotic movies, artwork, radio broadcasts, and other means. Overseas, the purpose of their efforts (using the organizational title "U.S. Information Service") was to undermine the enemy's morale—often through various psychological and information operations—as well as to provide moral support and to strengthen the resolve of resistance movements in enemy territories. The Voice of America (VOA) was also established in 1942 as the radio and television broadcasting service of the federal government, broadcasting in English, French, and Italian. These foreign influence efforts were also responsible for countering and responding to the massive amounts of German propaganda disseminated during that time period.

During the Chinese Civil War, both the Communist and Nationalist (Kuomintang, or KMT) armies spread false information to sow discord in enemy-controlled areas, spreading rumors about defections, falsifying enemy attack plans, and stirring up unrest in an effort to misdirect enemy planning. After the Nationalist government relocated to Taiwan in 1949, the propaganda and disinformation war continued as the two sides flooded propaganda and disinformation into enemy-controlled territories to affect public opinion and troop morale.17

And as Daniel Baracskay explains, influence warfare also played a prominent role during the Cold War, when the United States was committed to an overseas information
effort targeting the Soviet Union and its ability to influence other nations.18 As Thomas Rid notes, "Entire bureaucracies were created in the Eastern bloc during the 1960s for the purpose of bending the facts."19 The Soviets used disinformation "to exacerbate tensions and contradictions within the adversary's body politic, by leveraging facts, fakes, and ideally a disorienting mix of both."20

In response, the United States Information Agency (USIA) was created in 1953 as a primary conduit for enhancing our own strategic influence during the Cold War. The director of USIA reported directly to the president through the National Security Council and coordinated closely with the Secretary of State on foreign policy matters. The success of USIA varied depending on the location of its public diplomacy efforts and the resources it had available for disseminating information. One of its earliest efforts occurred in 1961 when the VOA focused its 52 radio broadcast transmitters toward nations behind the Iron Curtain to notify the audience of "Communist nuclear testing in the atmosphere."21 USIA also produced for broadcast "The Wall," a program about the creation of the Berlin Wall in its first year. During this period, USIA was also beginning to expand into Africa and other developing countries throughout the world.22

Meanwhile, the Soviet Union's strategic communications efforts were similar to those of USIA. Their efforts were coordinated by Radio Moscow, which began broadcasting in 1922 and was initially available only in Moscow and its surrounding areas. However, due to the massive size of Russia, the Soviets soon began to broadcast via shortwave radio signals (shortwave broadcasts could travel significantly farther than AM signals). Soviet leaders continued to refine and add to Radio Moscow's broadcasting power, and by 1929, Radio Moscow was able to broadcast into Europe, North and South America, Japan, and the Middle East using a variety of languages.23 By 1941, the USSR was able to broadcast in 21 languages and, ten years later, had a program schedule of 2,094 hours.24

Because most citizens in the USSR had shortwave radios in order to obtain news from Moscow, they were also able to receive transmissions from the British Broadcasting Corporation (BBC)—perhaps the most well-known and respected global media service today—or the VOA. Of course, government leaders in the USSR did not want their citizens to hear alternative news from outside sources, so a considerable effort was made to jam them,25 but despite their efforts, the broadcasts were still able to get through. In fact, it was believed by the USIA that the Soviet Union spent more money on jamming efforts each year than the USIA had for its annual budget.26

In the first academic study of the Soviet-era "Active Measures" program, Richard Shultz and Roy Godson explain how information operations (and propaganda) were not the only kind of influence efforts pursued. For example, the Soviets cultivated several different types of so-called "agents
of influence . . . including the unwitting but manipulated individual, the 'trusted contact,' and the controlled covert agent."27 As they explain, "The agent of influence may be a journalist, a government official, a labor leader, an academic, an opinion leader, an artist, or involved in a number of other professions. The main objective of an influence operation is the use of the agent's position—be it in government, politics, labor, journalism or some other field—to support and promote political conditions desired by the sponsoring foreign power."28

Forged documents—including fake photographs—have also been a part of influence warfare for well over a century. For example, during the 1920s the Soviet Cheka used elaborate forgeries to lure anti-Bolsheviks out of hiding, and many were captured and killed as a result.29 During the Cold War, as Shultz and Godson note, many "authentic-looking but false U.S. government documents and communiqués" could be categorized mainly as either "altered or distorted versions of actual US documents that the Soviets obtained (usually through espionage)," or "documents that are entirely fabricated."30 Examples include falsified U.S. State Department documents ordering diplomatic missions to sabotage peace negotiations or other endeavors, fake documents outlining U.S. plans to manipulate the leaders of Third World countries, or even forged cables from an American embassy outlining a proposed plan to overthrow a country's leader.31 In one case, an authentic, unclassified U.S. government map was misrepresented as showing nuclear missiles targeting Austrian cities. A fabricated letter ostensibly written by the U.S. Defense Attaché in Rome contained language denying "rumors suggesting the death of children in Naples could be due to chemical or biological substances stored at American bases near Naples" (while no such substances were stored at those bases).32 Even a fake U.S. Army Field Manual was distributed, purportedly encouraging Army intelligence personnel to interfere in the affairs of host countries and subvert foreign government officials and military officers.33 Through these and other kinds of information operations, the Soviets tried to influence a range of audiences, and the lessons they learned—from both successes and failures—inform Russia's influence warfare efforts today, as we'll examine later in this chapter.

A SIMPLIFIED TEMPLATE FOR DIGITAL INFLUENCE WARFARE CAMPAIGNS

Drawing on lessons learned from our pre-digital era experiences with influence warfare, we can take advantage of an extraordinary wealth of opportunities provided via the Internet to achieve a broad range of objectives, from deception and distraction to sowing confusion, disorientation, and discord. As Thomas Rid notes, "The Internet has made open societies more open to disinformation."34 And as noted in the previous chapter, there are plenty of examples to emulate. Let's imagine how an architect
of a digital influence campaign would approach their work, involving at least five categories of effort: goal identification, plan development, target identification, tactical implementation, and assessment.

Identifying the Goals to Be Achieved

At the very outset, the key thing to remember is that any successful digital influence effort necessarily starts with clear goals and objectives. You, the influencer, must have a clear sense of what you want to accomplish before deciding what kinds of tactics you will want to use in trying to achieve those goals. Articulate in as much detail as possible what you want to achieve and why. As described in the first part of this chapter, the goals and objectives of influence warfare cover a broad spectrum, from deception and distraction to sowing confusion, disorientation, and discord. A useful list of potential goals that could be pursued through these influence efforts is provided by the independent analysis group Demos in their report Warring Songs (Table 2.1).

Table 2.1 Examples of goals pursued in Digital Influence Warfare

• Build political support
• Create confusion and anger
• Feign public support
• Denigrate compromise
• Encourage conspiratorial thinking
• Undermine channels of productive communication
• Promote sympathetic voices
• Reduce critical voices in media
• Undermine trust in political representatives and institutions
• Reduce trust in digital communications
• Disrupt channels of communication
• Undermine trust in institutions of government
• Undermine trust in media institutions
• Undermine trust in electoral institutions
• Blur the boundaries of fact and fiction
• Incite societal and cultural divisions
• Promote sympathetic content
• Voter suppression
• Shift the balance of content in actor's favor
• Abuse of legal systems
• Undermine trust in digital media
• Suppress critical content

Source: Adapted from Alex Krasodomski-Jones et al., "Warring Songs: Information Operations in the Digital Age," Demos/Open Society, European Policy Institute (May 2019), p. 8. Online at: https://demos.co.uk/wp-content/uploads/2019/05/Warring-Songs-final-1.pdf

The primary goal of digital influence warfare strategies and tactics involves power—you may seek to acquire the power to coerce others, or to diminish their power, or even to demonstrate your own power in order to demoralize your opponents and energize your supporters. Often an influence campaign will focus on changing or reinforcing the targets' perceptions about the influencer, promoting (or strengthening) a target's animosity and hostility toward some other entity (e.g., a peer competitor
of the influencer), raising uncertainty about what is known or knowable (e.g., questioning the scientific evidence about climate change or the harmful effects of tobacco), attacking the perceived credibility of a target (e.g., questioning the integrity of a political candidate or the objectivity of the World Anti-Doping Agency), or amplifying the fissures (socioeconomic, ethnic, political, identity, etc.) that exist within a society.

Additional goals of a social influence campaign include simply drawing attention to an issue or a narrative.35 This is particularly essential in today's attention economy; you may have to initially work hard to get the intended audience to listen to what you have to say. And, of course, once you have their attention you want them to accept your narrative, your characterization of that issue. You, the influencer, should have in mind a general sense of the perceptions you want to shape among your target audience. A goal may be to change their views about something, although influence campaigns are far more effective when they focus on strengthening and reinforcing the beliefs already held by the target (even if those beliefs are contradicted by factual evidence). And as we'll see later in this volume, influence silos (and particularly today's digital versions of these) play a key role in achieving the goals of an influence strategy. Overall, you can choose to pursue any number of influence goals discussed in this chapter or others that haven't been mentioned yet. The important point to make here is that you should always begin with some idea of what you want to accomplish.

Crafting a Plan to Achieve Those Goals

Once you have determined what goals you want to accomplish through your influence campaign, you'll want to devise a strategic plan that answers a broad range of important questions, such as:36

• What kinds of information do you want to communicate and how will this help you achieve the goals you have articulated?
• What kinds of target audiences will it benefit you the most to influence in the ways you intend?
• What do you know about those target audiences and what do you need to know? For example, what do those target audiences value, want, need, fear, and so forth?
• What are the preferred means of communication and information sharing among the target audiences?
• What peer competition, potential allies, and enemies should you keep in mind as you pursue your influence campaign objectives?
• How do the target audiences currently view you, the influencer? What are those views based on?
• Will you need to mask the origins of the influence campaign using deception, proxies, and digital influence mercenaries?37
• How will you determine which types of information are resonating among your target audience and how will you adjust your efforts according to that analysis?
• How will you know you have achieved your influence goals? What data will you analyze and when will you assess this?

Many of these questions, you will note, reflect what you would find among influence warfare strategies of the pre-Internet era, particularly during the Cold War. As we'll examine throughout this book, digital influence warfare is different mainly because it involves the use of new technological tools in pursuit of these goals and because there are new and powerful ways to acquire tons of data on the targets you seek to influence. The important point to make here is that digital influence strategies are most effective when driven by a clear understanding of goals and objectives and a well-designed plan for how those can be achieved.

Collecting and Analyzing Data on the Targets

Once you have resolved the many initial questions of your strategy and developed a plan for achieving the goals of your influence campaign, you will need to identify the targets you want to influence, based on a clear understanding of how influencing them will help you achieve your goals. Conduct as much research as you can about the target audience you are seeking to influence. What do they already believe about their world and/or their place within it? What do they think they know and what are they uncertain about? What assumptions, suspicions, prejudices, and biases might they have? What challenges and grievances (economic, sociopolitical, security, identity, etc.) seem to provoke the most emotional reactions among them? Throughout the history of influence warfare, this information has been relatively easy to identify in open liberal democracies of the West. In more closed or oppressive societies, an additional step may be needed to determine how the target audience's perceptions compare to the discourse in the public domain—e.g., what the news media (often owned and controlled by the government) identify as important topics and acceptable views within that society.

Influence efforts should always be guided by data on potential targets. You should never waste your resources on target audiences that are already well armed to repel your efforts. Instead, you should seek to identify vulnerable targets to exploit. For example, if your goal is to sow divisions and increase political polarization within a society, the United States offers a prime target for achieving that goal. Research by Oxford University's Internet Institute has found that people in the United States share more junk news than people in other advanced democracies such as France, Germany, and the United Kingdom.38 A study by the Pew Research Center
in 2017 found that 67 percent of U.S. adults received news through social media sites like Twitter and Facebook.39 Further, analysis by the Atlantic Council Digital Forensic Research Lab in 2018 found that "American society was deeply vulnerable, not to all troll farm operations, but to troll accounts of a particular type. That type hid behind carefully crafted personalities, produced original and engaging content, infiltrated activist and engaged communities, and posted in hyper-partisan, polarizing terms. Content spread from the troll farm accounts was designed to capitalize on, and corrupt, genuine political activism. The trolls encapsulated the twin challenges of online anonymity—since they were able to operate under false personas—and online 'filter bubbles,' using positive feedback loops to make their audiences ever more radical."40

This example highlights the intersection of what other chapters of this book will address: the ways in which data are made increasingly available to the influencer (chapter 3), especially by what I call data mining mercenaries, companies (like Cambridge Analytica)41 that specialize in collecting and analyzing targeting data for the purposes of maximizing the effectiveness of digital influence efforts; the ability to capitalize on a target's confirmation bias (chapter 4); and the importance of influence silos (chapter 5).

Implementing Tactics and Tools of Digital Influence Warfare

Having identified a set of influence goals, a plan to achieve those goals, and targets to focus on, you are ready to launch your campaign, using whatever means are at your disposal to influence those targets. In many cases, you will use a combination of public and covert methods of influence. This may include feeding information to mainstream journalists (e.g., through press releases and public statements) and posting information online. Obviously, the nature and format of the message matters and must be tailored according to both the plan's objectives and the target's attributes.

Before the days of the Internet, whisper campaigns and the spread of disinformation were more cumbersome and typically involved various kinds of interpersonal contact with members of the target audience (perhaps through proxies and intermediaries), along with newspapers, radio, and television. But today, the tools of the influence campaign include emails, blogs, websites, official social media accounts, fake news websites, fake social media accounts, and automated "bots" to spread or support a particular narrative. Most importantly, unlike the pre-Internet era, you now have the tools at hand to create (or manipulate) virtually any kind of image, audio, or video and provide it directly to the target audience, bypassing the editorial gatekeepers of the traditional news media. In some cases, you may want to spread disinformation and encourage others to disseminate it to their contacts; in other cases, you may want to
highlight a mistake and amplify its effects, or you may want to provoke heated debate and exacerbate differences of opinion about a particularly controversial topic within the target society.

To achieve your digital influence strategy goals, an important first step is to establish a foothold in the information environment of the target. You must establish a credible presence among an audience of like-minded social media users before attempting to influence or polarize that audience. A common approach involves initially posting some messages that the target audience is likely to agree with. The convention of "like" or "share" facilitated by social media platforms can draw in the user, and once you have observed someone expressing an endorsement of something you've said, you have a better chance of enticing them to do it again. Once a perception of the acceptable persona (the "like-minded, fellow traveler") has been digitally established, then when events or policies of concern become apparent, you can use that established influence to shape perceptions and behavior in ways that will benefit your influence strategy's goals. You can now much more effectively provoke outrage (for example) or utilize the strategy of escalating commitment (see chapter 4), among many other objectives.

As noted earlier, an initial goal of many influence efforts is just to attract the target's attention. The clever influencer will have a toolkit full of tricks and tips they can use to do this, including being louder and more provocative than others.42 In some instances, the more extreme the opinions being expressed, the more likely they are to get attention;43 this is also a useful tactic if the goal is to alter the target's perception of what's possible (e.g., see the discussion about the "Overton Window" provided in chapter 4). Repeating a phrase or narrative tends to enhance and prolong the influence of that message.44 And as we'll examine in later chapters, many effective influence strategies focus on manipulating a target's uncertainty and fear. As Zimbardo et al. note, "There is a positive relationship between intensity of fear arousal and amount of attitude change if recommendations for action are explicit and possible."45

Once you have the audience's attention, an effective way to get them to accept your argument is to make sure the audience is heavily stacked with people already inclined to believe in what you have to say. This is relatively easy to do at political campaign rallies, where a majority of attendees are already supportive of the candidate. And now social media allows a similar phenomenon, but with a twist: as we'll see in chapter 3, you can create the deception of a mass of online supporters using fake, automated accounts that signal their support for your message in a coordinated fashion. In turn, this kind of "engagement deception" (using automation to give the appearance that lots of other people are supportive of a particular narrative) provides an important indicator of "social proof," something
we'll cover in chapters 4 and 5. And if you are unable to fabricate the illusion of mass support, you can at least have a few planted supporters in your audience, either online or in a physical setting.

Several other prevalent tactics of influence warfare today involve deception as well. Perhaps the most well known in the public arena today is called disinformation or "fake news"—essentially, these are forms of information deception, and there are several variations to consider. Claire Wardle of First Draft has identified "seven distinct types of problematic content within our information ecosystem," which differ in specific attributes and intent to deceive:46

1. Satire or parody: no intention to cause harm, but has potential to fool; amid the flurry of parody websites and social media accounts, The Onion is the most well-known example of this;
2. False connection: when headlines, visuals, or captions don't support the substance or content of the story itself;
3. Misleading content: misleading use of information to frame an issue or individual;
4. False context: when genuine content is shared with false contextual information;
5. Imposter content: when genuine sources are impersonated;
6. Manipulated content: when genuine information or imagery is manipulated to deceive (altered videos and images, including deepfakes, are the most prevalent examples of this); and
7. Fabricated content: new content is 100 percent false, designed to deceive and do harm.

Each of these forms of "problematic content" has a role to play in achieving an influence warfare strategy, as illustrated by many examples provided throughout this book. Further, in many cases, the most effective means of utilizing these types of information (or disinformation) involve a careful integration between fake details and accurate details that the target already accepts as true. In the field of education, teachers often refer to the concept of "scaffolding" as a strategy to foster learning by introducing material that builds on what the student already understands or believes. For the purposes of an influence strategy, as Thomas Rid explains, for disinformation to be successful it must "at least partially respond to reality, or at least accepted views."47

Additional examples of deceptive digital influence tactics include identity deception (like using fake or hijacked social media accounts) and information source deception (like rerouting Internet traffic to different sources of information that seem legitimate but relay false information to the viewers). As with the other forms of deception, a primary intent of
these tactics is for the influencer to make the target believe what is not true. Similarly, the influencer may also spread disinformation through the target's trusted communication channels in order to degrade the integrity of their decision-making and even their understanding of reality.

Of course, as noted earlier, deception is only one of several digital influence strategies. Another, which we have seen frequently in recent years, is to provoke engagement—especially to provoke emotional responses—using information that may in fact be all or partially accurate. Unlike disinformation and deception, the primary focus here is less on the message than on provoking people to propagate the message. Effective targets for this approach are those who have higher uncertainty about what is true or not, but are willing to share and retransmit information without knowing whether it is untrue (and often because they want it to be true). And as noted in the previous chapter, fear is an exceptionally powerful emotion that can lead people to make a wide variety of (often unwise) decisions. There are many kinds of influence goals that can be achieved by intentionally provoking emotional responses, usually in reference to something that the target already favors or opposes. The tactic of provoking outrage can be particularly effective here against a target audience—as Sun Tzu wrote, "Use anger to throw them into disarray."48 With the right sort of targeting, message format, and content, the influencer can utilize provocation tactics to produce whatever kind of behavior they want from the target (e.g., angrily lashing out at members of an opposing political party or questioning the scientific evidence behind an inconvenient truth). And an additional type of influence warfare involves attacking the target directly—threatening or bullying them, calling them derogatory names, spreading embarrassing photos and videos of them, and so forth. We'll examine many more of these and other tactics in chapter 3.

Finally, it should be noted that there are many different kinds of people utilizing these tactics. In addition to state-sponsored hacking units from Russia, China, Iran, and North Korea, there are also new profit-oriented and highly skilled digital influence mercenaries who will attack targets on behalf of the highest bidder.49 And in deploying the tactics of digital influence warfare, you'll want more than just a team of hackers. Certainly, the same types of technical computer skills used to attack and degrade computer systems can also be used to access and manipulate the sources of information upon which we make decisions. If people believe that what they see online is true, the goal of the attacker then becomes one of altering what they see in order to influence those beliefs. Thus, in addition to computer hackers, to do digital influence warfare effectively you will also want to employ specialists in communication and language, social and behavioral psychology, marketing, and so forth. We'll examine these areas of knowledge and skills in later chapters of this book.
Monitoring, Evaluating, and Refining the Campaign

Finally, having progressed through this simplified template to the stage of actually deploying the tools and tactics of digital influence warfare against your chosen target audience, you will want to monitor the ways in which the target audience responds and reacts to the kinds of information you are directing toward them. Success in this arena can be determined by (for example) the target's behavior: Were they motivated to show support, buy, vote, protest, join, reject, or do something else that you (the influencer) wanted them to? Did they express some kind of emotional response (outrage, anger, sympathy, encouragement, etc.) that indicates your message had the desired impact? By the same token, failure in digital influence warfare can be assessed by the lack of such actions or emotional response.

If the target did not respond in ways you had hoped, this should lead to an examination of whether it might be a poorly worded (or timed) message, ineffective means of delivery, or perhaps even a poor choice of target. If you failed to provoke any kind of reaction at all from the target, your "after action report" should seek to understand why. What do we need to know about the failed attempt that can help us improve the next attempt? What tactics were you attempting? Would other tactics have been more effective? Is there something about the message itself that needs to be revised? Did the target see/hear your information? How and when was it presented to them and by whom? Would a different messenger be more effective? Did the same influence effort work against other (presumably similar) targets? If so, what made this target different—why was it not convincing enough for this target to react in the way you had intended?

As you assess the target's response to your influence campaign, revise and recalibrate as needed to ensure the highest likelihood of achieving the goals articulated in your plan. If members of the target audience(s) raise questions and concerns about what they are seeing or hearing, you'll want to deflect any criticisms as politically motivated by an ulterior agenda. Knowing what the target audience views (or fears) as an evil-oriented "other" will help you in this regard. The overall point to make here is that no plan or strategy is perfect—you must be willing to refine it as necessary after assessing the kind of impact your efforts are having.

Again, this is an overly simplistic outline for what can be a much more complex endeavor. As we will explore in other chapters of this book, there is a great deal of research on the psychology of persuasion (at both the individual and the group levels) that you can draw upon to ensure the effectiveness of your influence campaign strategy. Further, a state or non-state actor will likely be pursuing a number of influence campaigns simultaneously, presumably with some mechanism for ensuring they complement (and not undermine) each other. There may be considerable overlap as well. Thus, the same influence tactics and tools may be used simultaneously toward
achieving multiple strategic goals of the influencer. And, of course, it is quite likely you would be trying to influence not just one target audience but many targets, and as a result, it can be expected that some targets will respond differently than others to the tactics, tools, and messages being used in the influence campaign. In fact, as we'll see in the next part of this chapter, many states are seeking to influence both foreign and domestic targets simultaneously. The United States is definitely not the only—nor the most frequent—target of digital influence warfare.

STRATEGIC GOALS OF STATES ENGAGED IN DIGITAL INFLUENCE WARFARE

As described earlier (and in chapter 1), states have engaged in various kinds of influence operations for centuries. Modern forms of this have been variously referred to as information operations, psychological operations, "computational propaganda,"50 cyberwarfare, "virtual societal warfare,"51 information warfare, "media manipulation,"52 and even social media warfare. But whatever one wishes to call it, the central goal of influence warfare is—and has always been—fairly straightforward: the attacker wants to shape or reshape the reality in which the target believes in order to achieve some sort of strategic objective.53

One of the most well-known earlier forms of digital influence warfare was North Korea's attack against Sony. In the summer of 2014, Sony Pictures had planned to release a comedy, The Interview, featuring a plot in which two bumbling, incompetent journalists score an interview with Kim Jong-un, but before they leave, they are recruited by the CIA to blow him up.54 An angered North Korea responded by hacking into Sony's computer networks, destroying some key systems and stealing tons of confidential emails that they later released publicly in small, increasingly embarrassing quantities. Details about contracts with Hollywood stars, medical records, salaries, and Social Security numbers were also released. But unlike other well-reported cyberattacks of that era, this was—in the words of David Sanger—"intended as a weapon of political coercion."55 As with many other examples of this "hack and release" tactic (described further in chapter 3), the strategic goals are fairly straightforward: for example, to weaken an adversary by undermining its perceived credibility.

Today, states are engaged in these kinds of activities with increasing regularity and sophistication. As a July 2020 report by the Stanford Internet Observatory explains: "Well-resourced countries have demonstrated sophisticated abilities to carry out influence operations in both traditional and social media ecosystems simultaneously. Russia, China, Iran, and a swath of other nation-states control media properties with significant audiences, often with reach far beyond their borders. They have also been
implicated in social media company takedowns of accounts and pages that are manipulative either by virtue of the fake accounts and suspicious domains involved, or by way of coordinated distribution tactics to drive attention to certain content or to create the perception that a particular narrative is extremely popular."56

In a 2019 research report published by Princeton University, Diego Martin and Jacob Shapiro illustrate how "foreign actors have used social media to influence politics in a range of countries by promoting propaganda, advocating controversial viewpoints, and spreading disinformation."57 The researchers define "foreign influence efforts" as: (1) coordinated campaigns by one state to impact one or more specific aspects of politics in another state, (2) through media channels, including social media, by (3) producing content designed to appear indigenous to the target state.58 The objectives of such campaigns can be quite broad and to date have included influencing political decisions by shaping election outcomes at various levels, shifting the political agenda on topics ranging from health to security, and encouraging political polarization.59 Similarly, research by Philip Howard identified "eight countries with dedicated teams meddling in the affairs of their neighbors through social media misinformation."60 While Russia has been the most frequently identified culprit of such things, as we'll discuss in a moment, China has also significantly ramped up its digital foreign influence efforts, including disrupting Twitter conversations about the conflict in Tibet and meddling in Taiwanese politics.61

Further, digital influence warfare is not just something that involves one country meddling in the information ecosystem of another country. In some countries, politicians are using these tactics to influence elections in their own country. For example, during the 2012 presidential elections in Mexico, automated social media accounts (or "bots," basically computer programs pretending to be people, as described in chapter 3) helped propel Enrique Peña Nieto to victory.62 Dubbed "Peñabots" by locals and researchers, these accounts were produced by the thousands and then programmed to disseminate and endorse messages in support of the candidate, in essence manufacturing "social proof" (see chapters 4 and 5). Once in office, an estimated 75,000 of these Peñabots were used in 2015 to combat protests and attack critics of the government. The accounts were coordinated to drown out protestors and their hashtags with spam, as well as target individual journalists and activists for smear campaigns, death threats, and other forms of harassment.63 Similar tactics were used in 2016 by Rodrigo Duterte and his supporters to capture the presidency in the Philippines, and such tactics have been used against political opponents and journalists ever since.64

In fact, according to a 2019 report by Oxford University's Computational Propaganda Project, most of the 70 nations that have sought to manipulate voters and others online have focused mainly on their own
domestic targets rather than meddle in the political affairs of other countries.65 Organized propaganda campaigns were found on Facebook in 56 of these countries.66 These governments are spreading disinformation to discredit political opponents and to bury opposing views. In Ethiopia, the Philippines, Vietnam, and other countries, people have been hired to influence social media conversations by posting pro-government messages on their personal Facebook pages and attacking political opposition leaders.67 The Guatemalan government has used hacked and stolen social media accounts to silence dissenting opinions.68 In other countries, government-organized sock puppets—social media accounts that embed themselves within an online community and then manipulate it from the inside—have been used to cause confusion and misdirection at anti-government protests.69

In the Philippines, Duterte has turned the army of social media trolls that helped elect him into a weapon against any political opponents, including well-respected journalists like Maria Ressa. After years in CNN's Southeast Asia bureau, and then leading one of the Philippines' largest news networks, Ressa had launched Rappler, an online media outlet that began investigating some of Duterte's more horrific travesties of justice against his own people, exposing bot armies and corruption and documenting his brutal anti-drugs campaign.70 His supporters responded by attacking all the key players at Rappler with messages that felt "like an infestation of insects, swarming into the email inboxes," and then the hashtag #ArrestMariaRessa began to trend, and the government launched a series of legal cases against the group.71 A team of Rappler investigators identified a collection of fake accounts, all coordinated to say the same hateful things about Rappler and Maria Ressa (including rape threats) and most likely controlled by the same source. As Pomerantsev notes, each account looked realistic, appearing to belong to real Filipinos with real jobs, but an investigation found that nobody at their claimed places of employment had heard of them. These were 26 well-disguised fake accounts repeating the same messages at the same time and reaching an audience of three million.72

When Maria Ressa received the 2018 Knight International Journalism Award, one of the world's most prestigious, she noted that "exponential lies on social media incite hate and stifle free speech . . . We battle impunity from the Philippine government and from Facebook."73 In June 2020, Ressa was convicted in the Philippines of "cyberlibel" charges that all objective observers have decried as politically motivated and unfounded. An international team of lawyers representing Ressa stated that the court in Manila had become "complicit in a sinister action to silence a journalist for exposing corruption and abuse . . . . This conviction is an affront to the rule of law, a stark warning to the press, and a blow to democracy in the Philippines."74 We'll discuss more about Duterte's government-sponsored social media trolling and harassment campaigns in chapter 6.
Meanwhile, in November 2019, former Twitter employees were charged with spying for Saudi Arabia by digging into the accounts of kingdom critics.75 Then in December, Twitter announced they had identified 5,929 accounts that were part of "a significant state-backed information operation on Twitter originating in Saudi Arabia" and that the accounts had been "removed for violating platform manipulation policies."76 These and many other instances, according to Philip Howard, illustrate how ruling elites have determined that "the risk of being caught manipulating public opinion is not as serious as the threat of having public opinion turn against them."77 Further, he notes, "political parties worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues."78 If you've been following global news at all over the past decade, you already know two prominent countries whose leaders conduct extensive influence operations against both foreign and domestic targets: China and Russia.

THE CURIOUS CASE OF CHINA AND THE UNIQUELY UNPLEASANT CASE OF RUSSIA

Both China and Russia employ a wide range of overt and covert influence efforts to try to reduce U.S. power and influence around the world, viewing this as a necessary way to gain more power and opportunity for their own countries. Of course, China has already gained a formidable reputation for cyberattacks, hacking into systems not only to spy and cause trouble but also to conduct espionage, stealing scientific research, proprietary corporate data, and defense technology. But the Chinese Communist Party is also expressly committed to many kinds of information operations and media manipulation efforts in support of their overall strategy of developing hegemonic power throughout Asia. For instance, in 2007, the party unveiled a Grand External Propaganda Campaign, earmarking billions in an attempt to control external narratives about China, and Xi Jinping has vastly intensified that effort, waging a global "discourse war."79 As a recent Stanford Internet Observatory report notes, "China's extensive overt propaganda capabilities, on print, broadcast, and social media, are used to influence audiences both domestically and worldwide to embrace China's point of view and policy positions."80

Similarly, in 2003, the Communist Party's Central Committee and the Central Military Commission approved a new strategic warfare concept—the "Three Warfares" (san zhong zhanfa, generally abbreviated in Chinese as san zhan)81—which China military expert Larry Wortzel refers to as a "comprehensive information operations doctrine . . . reinventing our understanding of war."82 According to a 2013 Pentagon report, this doctrine reflects the Chinese regime's belief that "twenty-first century warfare [is] guided by a new and
vital dimension: namely, the belief that whose story wins may be more important than whose army wins."83 Further:

The Three Warfares is a dynamic three dimensional war-fighting process that constitutes war by other means. Flexible and nuanced, it reflects innovation and is informed by CCP control and direction. Importantly, for US planners, this weapon is highly deceptive. It proceeds in a dimension separate both from the well-worn "hearts and minds" paradigm and from the kinetic context in which power projection is normally gauged and measured by US defense analysts. The Three Warfares envisions results in longer time frames and its impacts are measured by different criteria; its goals seek to alter the strategic environment in a way that renders kinetic engagement irrational.84

The "Three Warfares" described in the doctrine are (1) public opinion (media) warfare (yulun zhan), (2) psychological warfare (xinli zhan), and (3) legal warfare (falu zhan).85 One of these—legal warfare—has limited ties to the central theme of this book. It involves what Orde Kittrie describes as "the leveraging of existing legal regimes and processes to constrain adversary behavior, contest disadvantageous circumstances, confuse legal precedent, and maximize advantage in situations related to the PRC's core interests."86 According to the Pentagon report, efforts in this realm can "range from conjuring law to inform claims to territory and resources, to employing bogus maps to 'justify' territorial claims."87

But the other two dimensions of this doctrine have clear and direct implications for digital influence warfare. In their conception of public opinion warfare, the goal is to influence both domestic and international public opinions in ways that build support for China's own military operations while undermining any justification for an adversary who is taking actions counter to China's interests.88 But this effort goes well beyond what Steven Collins refers to in a 2003 NATO Review article as "perception management," in which a nation or organization provides (or withholds) certain kinds of information in order to influence foreign public opinion, leaders, intelligence agencies, and the policies and behaviors that result from their interpretation of this information.89 According to the Pentagon report, China "leverages all instruments that inform and influence public opinion . . . and is directed against domestic populations in target countries."90 As Laura Jackson explains, "China's extensive global media network, most notably the Xinhua News Agency and China Central Television (CCTV), also plays a key role, broadcasting in foreign languages and providing programming to stations throughout Africa, Central Asia, Europe, and Latin America."91 In turn, Western media outlets then repeat and amplify the spread of messages to a broader international audience, lending a perception of legitimacy to what is in
The “Three Warfares” described in the doctrine are (1) public opinion (media) warfare (yulun zhan), (2) psychological warfare (xinli zhan), and (3) legal warfare (falu zhan).85 One of these—legal warfare—has limited ties to the central theme of this book. It involves what Orde Kittrie describes as “the leveraging of existing legal regimes and processes to constrain adversary behavior, contest disadvantageous circumstances, confuse legal precedent, and maximize advantage in situations related to the PRC’s core interests.”86 According to the Pentagon report, efforts in this realm can “range from conjuring law to inform claims to territory and resources, to employing bogus maps to ‘justify’ territorial claims.”87 But the other two dimensions of this doctrine have clear and direct implications for digital influence warfare. In their conception of public opinion warfare, the goal is to influence both domestic and international public opinions in ways that build support for China’s own military operations while undermining any justification for an adversary who is taking actions counter to China’s interests.88 But this effort goes well beyond what Steven Collins refers to in a 2003 NATO Review article as “perception management,” in which a nation or organization provides (or withholds) certain kinds of information in order to influence foreign public opinion, leaders, intelligence agencies, and the policies and behaviors that result from their interpretation of this information.89 According to the Pentagon report, China “leverages all instruments that inform and influence public opinion . . . and is directed against domestic populations in target countries.”90 As Laura Jackson explains, “China’s extensive global media network, most notably the Xinhua News Agency and China Central Television (CCTV), also plays a key role, broadcasting in foreign languages and providing programming to stations throughout Africa, Central Asia, Europe, and Latin America.”91 In turn, Western media outlets then repeat and amplify the spread of messages to a broader international audience, lending a perception of legitimacy to what is in

50

Digital Influence Warfare in the Age of Social Media

fact Chinese state-directed propaganda.92 On the digital front, they use a collection of websites and social media accounts to influence domestic and international perspectives associated with ongoing disputes involving their national interests.93 Their efforts incorporate content farms, “astroturf” commenter brigades, and fabricated accounts and personas on social media channels—and for the first time in August 2019, clusters of fake accounts and content were concretely attributed to the CCP by several tech companies including Facebook, Twitter, and YouTube.94 Recent examples include China’s attempts to influence global perception of the 2019–2020 Hong Kong protests, in which both overt state media outlets and fake accounts on Western social media platforms aimed to shape the global perception of the protesters and to project that the Chinese Communist Party’s control over Hong Kong was not in jeopardy.95 Similar efforts, along with the use of messaging apps and YouTube channels, were made to promote the party’s preferred narratives in the 2020 Taiwanese election, as well as to create and amplify misinformation (including rumors about incumbent presidential candidate Tsai Ing-wen).96 And China’s influence strategy regarding the COVID-19 pandemic involved a full spectrum of overt and covert tactics. As the Stanford Internet Observatory report explains, “English-language state media Facebook Pages and Twitter accounts, as well as Chinese diplomats and embassies, took part in an overt messaging effort to amplify the CCP’s preferred narratives on COVID-19. Covert statesponsored activity leveraging fake Twitter accounts paralleled these efforts, praising the CCP’s pandemic response and criticizing the responses of other actors, such as the United States, Hong Kong and Taiwan.”97 The goals of these and other public opinion warfare efforts, according to the Pentagon report, include “generating support for the Chinese government both at home and abroad, and weakening their enemy’s will to fight.”98 They reflect a long-standing commitment to the ideas of Chinese military theorist Sun Tzu, who argued 2,500 years ago in The Art of War that “supreme excellence consists in breaking the enemy’s resistance without fighting.”99 As Doug Livermore observes, the Three Warfares Doctrine “seeks to break adversary resistance and achieve Chinese national objectives with little or no actual fighting.”100 Essentially, public opinion warfare and psychological warfare are closely intertwined. According to the Pentagon report, China’s approach to psychological warfare “seeks to influence and/or disrupt an opponent’s decision-making capability, to create doubts, foment anti-leadership sentiments, to deceive opponents and to attempt to diminish the will to fight among opponents.”101 A primary objective, as Laura Jackson explains, is “to demoralize both military personnel and civilian populations, and thus, over time, to diminish their will to act.”102 Further, the doctrine “aims to undermine international institutions, change borders, and subvert global media, all without firing a shot.”103

Goals and Strategies: Influencing with Purpose51

Meanwhile, on the domestic front, the Chinese government also uses an endless stream of pro-regime propaganda to ensure conformity to the Communist Party policies and objectives.104 Their public opinion warfare efforts include the use of television programs, newspaper articles (particularly in China Daily and the Global Times), books, films, and the Internet, with 2 million official “public opinion analysts” monitoring and censoring social media networks and blogs (including Sina Weibo, China’s equivalent of Twitter).105 Since the mid-2000s, the CCP has been replacing editors and publishers at many of the more popular media outlets to reassert control over domestic information, and in 2018, the party tightened its control of the media by shifting direct oversight of print publications, film, press, and key broadcast properties to a central committee.106 And as we’ll examine further in chapter 6, China broke new ground when it launched (in 2015) a new “social credit” system to create an “upward, charitable, sincere and mutually helpful social atmosphere” through which all Chinese citizens receive a numerical score reflecting their “trustworthiness . . . in all facets of life, from business deals to social behavior.”107 But while much attention has focused on China in recent years, arguably the most frequent perpetrator of digital influence operations today (internationally as well as domestically) is Russia. On the global front, Putin’s approach to deriving power through the spread of lies and uncertainty reflects an extensive Cold War experience with dezinformatziya, which Shultz and Godson describe in their 1984 book by that title.108 In fact, as noted earlier, Russia has pursued various sorts of influence strategies for well over a century. There are also other important differences between China and Russia in terms of the goals, strategy, and tactical execution of their influence operations. According to a report by the Stanford Internet Observatory, for example, the two countries have different core operational objectives: China is focused on a strategic mission of establishing the country as a leader in the international order and maintaining positive global opinion. Russia, meanwhile, seeks to create a positive perception in regions where it seeks strategic relationships, while also working aggressively to erode the international perception and domestic social cohesion of its rivals.109 This is why, as we’ll examine below, a core objective of its digital influence efforts against U.S. targets is to confuse, distract, and encourage more heated squabbling among our citizens and politicians so that we turn a blind eye to what Russia is doing in Ukraine, Syria, and elsewhere. Both countries have what the Stanford report calls “full-spectrum propaganda capabilities,” and each has amassed prominent overt Facebook Pages and YouTube channels targeting regionalized audiences, though the use of those pages diverges in service to their differing objectives.110 Additionally, both actors have run fake Facebook pages and Twitter persona accounts. However, the execution of this covert strategy varies significantly: Russia’s covert operations include sophisticated personas

informed by ethnographic research and development of relationships with influencers (enabling them to reach their audience and amplify their content). China’s own efforts to leverage fake personas have resulted in unsophisticated accounts, far less engagement, and no clear influencer amplification.111 And most importantly, unlike China, Russia has actively and aggressively sought to influence democratic elections in the United States, Europe, Africa, and elsewhere, as well as sow confusion and encourage widespread societal polarization and animosity. As noted earlier, Russia has been heavily engaged in various forms of influence warfare for well over a century. Russia’s modern approach to digital influence warfare was formally described in one of Vladimir Putin’s first pieces of legislation as president of Russia, the Information Security Doctrine of 2000.112 In it, the government describes the “information sphere” as an arena of conflict, one in which Russia is facing both external and internal threats. External threats described in the document include “activities of foreign political, economic, military, intelligence and information entities, directed against the interests of the Russian Federation in the information sphere” and “the striving of a number of countries toward dominance and the infringement of Russia’s interests in the world information space and to oust it from external and domestic information markets.”113 This document also expresses concern about “disinformation being spread overseas about the foreign policy of the Russian Federation” and “the blocking of the activities of Russian media in explaining to foreign audiences the goals and major thrust areas in the Russian Federation’s state policy and its view of socially significant events in Russian and international life.”114 As a result of these perceived threats, Russia has engaged in a broad, multifaceted influence warfare campaign involving all of the former tools and tactics of active measures program along with a flurry of new technological approaches. Like many other authoritarian regimes (including China, Saudi Arabia, Egypt, Turkey, and Iran), Russia has invested heavily in online troll farms, armies of automated “bot” accounts, cyber hacking units, and other means by which they can pursue their foreign influence goals using the most modern tools available to them.115 While the “agent of influence” of the Cold War may have been a journalist, a government official, a labor leader, or an academic (among many other examples), today the agent is more likely to be a social media user with enough followers to be considered a potential “influencer.”116 Naturally, since there have already been a number of lengthy congressional investigation reports, books, and articles published in recent years on the subject of Russian digital influence operations,117 there is far more that one could say about this than I have room for in this chapter. But let’s just review some of the main highlights of what a mound of evidence has revealed. To begin with, we know that similar to China, Russia uses state-controlled and

well-resourced media outlets to spread narratives globally in support of their foreign policy objectives. As Margarita Simonyan, chief editor of RT, noted in 2018, these media outlets view themselves as equal in importance to the Defense Ministry, using “information as a weapon.”118 Russia uses information, along with the tactics and tools of digital influence warfare, as components of what various experts have referred to as “hybrid warfare,” “new generation warfare,” “political warfare,” “ambiguous warfare,” “fullspectrum warfare,” or even “nonlinear war.”119 Many different kinds of tactics have been used to achieve their strategic objectives. For example, we know Russian operatives have launched direct cyberattacks against other countries, like the April 2007 attack that crippled government, banks, newspapers, and other targets in Estonia.120 In February 2014, the world saw what NATO’s Supreme Allied Commander called “the most amazing information blitzkrieg in history” when Russia launched a full-scale digital influence campaign against Ukraine in order to justify its invasion of the Crimean peninsula, using proxy militias intertwined with Russian troops. Months earlier, a popular uprising in Ukraine had led to the resignation of pro-Russian president Yanukovich, and Putin’s response eventually led to a Russia-sponsored insurgency in that country which continues to this day. And in one of the worst tragedies linked to that conflict, Malaysian Airlines flight MH17 from Amsterdam was shot down near the village of Grabove, in rebel-held territory close to the border with Russia, on July 17, 2014. Investigations eventually showed pro-Russian separatists were responsible for the attack, but in its immediate aftermath, Russia deployed a furious arsenal of influence operations to deny, distract, disinform, and disorient the information circulating online about the event. As Singer and Brooking explain, “Russian media and proxies spun at least a half dozen theories regarding the MH17 tragedy. It hardly mattered that these narratives often invalidated each other . . . The point of this barrage was to instill doubt—to make people wonder how, with so many conflicting stories, one could be more ‘right’ than any other.” 121 We also know for certain that Russia attempted to influence the 2016 U.S. presidential elections. In March 2016, Russian media began openly supporting Donald Trump’s candidacy to English-speaking audiences, portraying him as the target of unfair coverage from traditional U.S. media that was subservient to a corrupt political establishment.122 Their online influence efforts during this campaign involved a broad range of tactics, including direct attack, deception, and provocation. Research also shows the Russian intelligence agencies subcontracted much of these efforts to what Thomas Rid called “third-party services providers.”123 Among these, the most prominent was known as the Internet Research Agency (IRA), though they have recently rebranded themselves under a new name.124 In early 2018, the U.S. Special Counsel Investigation (the Mueller Report) found that the operations of Russian trolls had created thousands of fake

accounts, groups, and messages, posing as genuine Americans; right-nationalist, gun-loving Americans who supported the election of Donald Trump; and Black civil rights campaigners who promoted the idea that his rivals weren't worth voting for.125 According to Oxford University's Internet Institute, "Over 30 million American users, between 2015 and 2017, shared the IRA's Facebook and Instagram posts with their friends and family, liking, reacting to, and commenting on them along the way."126 These and many other kinds of Russian digital influence efforts are often meant to exacerbate divisions and political polarization of a society. Of the 470 Facebook accounts known to have been created by Russian saboteurs during the 2016 campaign, 6 of them generated content that was shared at least 340 million times. These operations did not rely on a few powerful influencers (although in recent years, Trump has certainly been most helpful to their efforts), but rather on having a flood of seemingly regular, ordinary people communicating the same narrative in a mutually reinforcing way. Through such deception, their efforts at manufacturing the illusion of support for the narrative produce a form of "social proof"—an amplifier of influence that we'll discuss in chapters 4 and 5. It is important to note the targeting choices made by Russian operatives during this influence campaign. As we'll examine in chapter 5, Trump supporters tend to be less politically compromising and more hardened in their views, which is why they are more susceptible to disinformation and other forms of digital influence warfare. This, in turn, is precisely why they were the number one target of Russia's massive fake news efforts during the last six weeks of the 2016 election. The explicit message put forward by the Trump campaign (and by him personally) is that anyone who is not an active supporter of Trump and his agenda is somehow unpatriotic, worthy of scorn, and even an "enemy of the people." The angrily shouting, MAGA hat-wearing throngs at Trump rallies, provoked by all manner of emotionally charged rhetoric like "lock her up" and "send them home," were in retrospect the most obvious targets for the kind of polarizing ragebait that has been the hallmark of today's Active Measures and Dezinformatziya campaign, as described earlier in this chapter. The Mueller report also found that "average levels of misinformation were higher in swing states—like Florida, North Carolina, and Virginia—than in uncontested states."127 Clearly, the Russians had amassed a great deal of information for use in their digital influence strategies during 2016. And in addition to influencing targets through disinformation, trolling, provocation, and other tactics, Russia has also invested heavily in cyber espionage capabilities that they use aggressively against perceived threats to their foreign policy objectives. Multiple open-source reports have described two of Russia's more notorious hacking units: Advanced Persistent Threat, or APT 28 (Fancy Bear), is a hacker group linked to Russian military intelligence (GRU), and APT 29 (Cozy Bear) is a hacker

group linked to Russia’s Foreign Intelligence Service. The Russian tactic of kompromat—releasing controversial information about public figures—has long been a part of its Active Measures effort to intimidate and embarrass a target while also influencing public perceptions more broadly. As Thomas Rid explains, “Private correspondence gets stolen and leaked to the press for malicious effect.”128 Thus, during the 2016 U.S. election, Russian hackers penetrated a number of targets, including the Democratic National Committee and the Clinton Campaign, from whom they stole information (including 20,000 emails) that was subsequently leaked online via Wikileaks. On April 21, 2020, the U.S. Senate Intelligence Committee unanimously endorsed the U.S. intelligence community’s conclusion that Russia had conducted a sweeping and unprecedented campaign to interfere in the 2016 U.S. presidential election, affirming once again the findings of the January 2017 Intelligence Community Assessment129 that Russia sought to undermine American confidence in democratic elections, denigrate thencandidate Hillary Clinton, and boost her rival Donald Trump.130 The Justice Department has charged 25 Russian nationals for a covert effort to spread disinformation on social media and for hacking into Democratic emails. And while Trump has downplayed the threat of Russian meddling, he authorized a cyberattack against the IRA during the 2018 congressional elections.131 We also know that Russia has meddled in other country’s political elections as well. For example, during the 2017 French election they created fake social media accounts (most prominently on Facebook, Twitter, and Instagram) that spread disinformation about then-candidate Emmanuel Macron. And then email accounts of his campaign members were hacked and information was released publicly via Wikileaks in an effort (dubbed #Macronleaks) to embarrass and undermine his candidacy.132 While Macron’s campaign team denounced a “campaign of digital disinformation on a scale and with a level of professionalism that is troubling,”133 later investigations proved that most of the anti-Macron propaganda came from two sources: Russian media, trolls, and influence mercenaries and American far-right operatives. Just 10 days before that election (on April 23, 2017), Facebook announced the suspension of over 30,000 accounts that they suspected were automated and linked to Russia.134 Other examples include Russia’s efforts to influence elections in Germany and the Netherlands, as well as the Brexit referendum in the United Kingdom.135 For example, in 2015 thousands of Russian-linked trolls targeted German Prime Minister Angela Merkel with disparaging Instagram messages.136 In 2018, Russian trolls and bots targeted UK Prime Minister Theresa May, particularly via Twitter. In these and other instances, the goal was fairly similar and straightforward: defame and undermine the perceived credibility of a foreign country’s leader and their policy decisions.

Meanwhile, recent reports have indicated Russia is increasingly meddling in African political elections as well, often paying local actors to post messages in order to evade Facebook's efforts to monitor and remove foreign-based disinformation.137 In late 2019, Facebook revealed details of a new Russian influence operation targeting the 2020 U.S. election. Using roughly 147,000 accounts on Instagram, operatives encouraged rancorous debates surrounding the Democratic presidential primary.138 And their continuing efforts have sought to capitalize on other issues as well. In late July 2020, U.S. officials announced that Russian intelligence services were "using a trio of English-language websites to spread disinformation about the coronavirus pandemic, seeking to exploit a crisis that America is struggling to contain ahead of the November 2020 presidential election."139 Specific websites were publicly identified—including InfoRos.ru, Infobrics.org, and OneWorld.press—that were spreading disinformation, with "about 150 articles about the pandemic response, including coverage aimed either at propping up Russia or denigrating the U.S."140 Stories published included claims that the United States was using the pandemic to impose its view of the world and that the coronavirus was originally an American biological weapon.141 Some of these stories were then amplified by other social media users inclined to believe such things were true, or at least could be true, regardless of the fact that there was no supporting evidence (and often there was ample evidence to refute such claims). Through these and myriad other examples, Russia is aggressively pursuing a multipronged strategy against the United States and other countries it views as obstacles to achieving its foreign policy goals. Their aims are to incite confusion, polarization, animosity, and distrust among the members of an increasingly divided society. During the 2016 U.S. elections, the Facebook page for a fake group, Blacktivist—which stoked racial tensions by posting militant slogans and stomach-churning videos of police violence against African Americans—garnered more hits than the Facebook page for Black Lives Matter.142 A pair of Russian operatives posing as Black Americans posted over a hundred videos on Facebook, Twitter, and YouTube claiming that Democrats exploited Black voters; that a Black American was ejected from a Clinton campaign rally; that liberals and Democrats "want Blacks to be slaves again"; that the KKK was supporting Hillary Clinton; and that a race war was inevitable. Caroline Orr explains that their videos also encouraged African Americans to stock up on guns and stay home on Election Day and that if the Democratic candidate won the election the government would "take our weapons; then they will come to our homes and then . . . they will kill us."143 And Russian trolls also used an imposter Facebook account called "Heart of Texas" to organize a protest called "Stop the Islamization of Texas" in May 2016, while simultaneously using another imposter Facebook account called

“United Muslims of America” to organize a counterprotest at the same time and place. When citizens of adversary nations have greater suspicion toward each other, this produces the kind of indifference, disengagement, and apathy that gives Russia a free hand to pursue its foreign policy objective unopposed. Ben Nimmo, director of investigations at technology firm Graphika, describes Russia’s approach as simply “divide and conquer,” where the overall goal is “to divide and discredit the countries and institutions it targeted, setting allies against one another and driving wedges between Kremlin critics.”144 Russia’s use of deception and disinformation tactics, honed through nearly a century of Active Measures, also brings them additional benefits. Like all state governments (and many politicians), Russia denies any culpability for these efforts, but as Pomerantsev explains, “It is when the Kremlin’s efforts are unveiled that they have perhaps their most significant effect. When one hears so many stories of fake accounts that seemed to be supporting freedom and civil rights but that in fact turn out to be fronts of foreign governments, one starts doing a double take at everything one encounters online.”145 As a result, those of us targeted by such efforts are becoming increasingly dismayed by the overwhelming flood of information and disinformation swirling around us, a flood that is designed to stimulate emotions (from outrage to false pride) and entertain, while also making it impossible to discern fact from fiction. And the exposure of these disinformation efforts actually helps further Russia’s goals by raising the levels of uncertainty and fears about future possible disinformation efforts. Not only have they succeeded in duping the public but also now the public has a heightened concern about being duped again and as a result are more suspicious about virtually everything—including factual truth conveyed to them by formerly trusted sources. When we can’t trust what we see and hear, we begin to think that nothing may be true, nothing can be believed, and anything may be possible. Over time, this results in precisely what Russia wants (as noted earlier): an information environment where the audience is unwilling to place its full faith in anything unless it conforms to what they already want to believe. We will examine much further these facets of uncertainty and confirmation bias in chapters 4 and 5. And of course, Russia attacks democracies not only because they are vulnerable targets but also because Putin and his colleagues have grown incredibly wealthy and powerful through an authoritarian system and see democratic systems as a direct threat. It is in their best interests to demonstrate—both to the world and to the people of Russia—that democracy as a political system is chaotic, inefficient, unjust, and destined for failure. For their part, democratic governments are recognizing this aspect of Russia’s efforts, though none have quite figured out what to do about it yet. For example, a UK House of Commons committee report in February

2019 noted that the British legal framework was "no longer fit for purpose" and that "in this environment, people are able to accept and give credence to information that reinforces their views, no matter how distorted or inaccurate. This has a polarizing effect and reduces the common ground on which reasoned debate, based on objective facts, can take place . . . the very fabric of our democracy is threatened."146 So, to sum up this brief overview of many state-based digital influence efforts, we see a broad range of tactics being used against foreign and domestic targets, with Russia and China providing some of the most prominent and frequent examples. The strategies pursued through these efforts are fairly straightforward: the pursuit (or retention) of power. And it is this same perspective that sheds light on why many groups and individuals are also engaged in similar kinds of influence efforts, using similar kinds of tactics including deception, provocation, and direct attacks.

STRATEGIC GOALS OF NON-STATE ACTORS ENGAGED IN DIGITAL INFLUENCE WARFARE

While much of this chapter (and book) addresses state governments engaged in digital influence warfare, we have also seen many instances of individual politicians, anti-science crusaders, and others using the same kinds of tactics online for their own purposes. For example, you can easily find influence campaigns online that will try to convince you that the world is flat, smoking is not harmful, and the moon landing was a hoax orchestrated by the U.S. government. Others engage in digital influence warfare for profit-oriented goals, including efforts to undermine the credibility and reputation of peer competitors in a particular market. Within the United States, we have seen an increasing prevalence over the past decade of Americans using all sorts of online tactics to deceive, provoke, or even silence other Americans. In fact, as Nathaniel Gleicher—Facebook's head of security—noted in October 2018, "If you look at volume, the majority of the information operations we see are domestic actors."147 Further, according to Philip Howard, "In political battles it is now a normal campaign strategy to employ some communications experts to use social media algorithms and automation to amplify a political voice."148 Advertisements on Google and Facebook are inexpensive and can reach millions of voters, so it's unsurprising that from June to November 2016, Trump's campaign ran 5.9 million ads on Facebook (compared to just 66,000 by Clinton's campaign).149 And there is also a burgeoning industry of what I call digital influence mercenaries, whose high levels of technical skills can be enlisted by political parties for the purpose of achieving the strategies and goals described in this chapter, including manipulating voters or shaping public opinion over social media networks.150

Recently we have seen social media platforms respond to politicians spreading disinformation online by inserting a warning to viewers. For example, in late May 2020, Trump posted on his Twitter account a claim (without any evidence) that mail-in ballots led to voting fraud. This was of course not the first instance of Trump spreading lies online, but in a new twist, Twitter added its "fact check" label to this message, which then led to an angry tirade from Trump against Twitter, accusing it of interfering in the 2020 presidential election.151 As Yale University law professor Jack Balkin notes, Trump also threatened to impose an Executive Order or find some other means to regulate social media platforms, apparently an attempt "to frighten, coerce, scare, cajole social media companies to leave him alone and not do what Twitter has just done to him."152 But much like Russia's influence strategy described above, Trump's strategy here seems fairly transparent: raise doubts about the integrity of an election that you might lose, and then if you do lose, simply claim the election was fraudulent. We saw this strategy pursued during the 2016 election as well, with Trump claiming that he might not accept the election results if he lost, something he implied again in a July 2020 interview with Chris Wallace on Fox News.153 Throughout these and myriad other examples, the overriding goal being pursued has something to do with power. Politicians will use all available means on the Internet to acquire, keep hold of, and utilize power for the sake of their political agenda. In a very similar way, extremists and terrorists also use the Internet to acquire, keep hold of, and utilize power for the sake of a political agenda. Thus, recruiting for a cause, provoking fear and uncertainty, capitalizing on a target's biases and prejudices, and encouraging belief in a narrative of crisis and solution are all common among many of today's non-state digital influence efforts. This is not meant to suggest some kind of moral equivalence: using the tactics and tools of digital influence warfare for political gain pales in comparison with using them to mobilize lethal terrorist attacks against innocent civilians. But the essential point to make here is that while the goals and strategies for influencing others cover a very broad and diverse terrain, the tactics and tools they use are actually quite similar. For example, much like a political campaign will create an easy-to-remember slogan, like "Make America Great Again," a terrorist movement will do the same. The central theme of al-Qaeda's message to its followers can easily fit on a T-shirt or a bumper sticker: "Think global, act local."154 Throughout much of the Muslim world today, there is a longing for retribution against others for perceived injustices and a desire to address a power imbalance.155 Jihadists can tap into these sentiments by offering a promise to empower the disenfranchised and to right the global wrong. Anti-American sentiment—built largely on animosity toward certain U.S. policies and the perceptions of intent behind these policies—does

not necessarily lead to an acceptance of violence, but it can lead Muslims to suppress their moral doubts about global jihadist ideology and in the process give al-Qaeda's leadership cadre more room to maneuver. Thus, according to Steven Kull, majorities of Muslims reject the legitimacy of terrorist attacks against civilians, and yet they also favorably interpret a core jihadist ideological goal as "stand up to America and affirm the dignity of the Islamic people."156 Just like the state governments described above, non-state actors of many types want to influence perceptions. Political campaigns have huge and active websites. In 2014, according to Gabriel Weimann's research, there were over 10,000 individual websites associated with some form of support for terrorism.157 Both politicians and terrorists use social media and the Internet for communicative, social, symbolic, and operational purposes.158 Another similarity is that both politicians and terrorists describe themselves as a solution (often the only viable solution) to an existential crisis faced by their supporters, who are all characterized as good people threatened by forces that are beyond their ability to confront. They may capitalize on a major economic downturn, or demographic changes (like immigration), or perceived corruption and injustices in their calls for action. And of course, politicians and terrorists routinely incorporate the psychology of persuasion and influence in their constant appeals for money to support their cause. Politicians and terrorists also use the Internet to build a community in which members receive some kind of personal validation while contributing to the validation of others. These communities of supporters establish and nurture a type of in-group identity, while out-groups (external populations) are demeaned, dismissed as illegitimate, and sometimes even targeted for violence and destruction. As we'll examine more closely in chapter 5, provoking emotional reactions about the out-group and reinforcing certainty in an individual or group identity through "othering" (e.g., turning people against "nonbelieving others") are used to polarize society, making people increasingly angry and defensive about what they believe to be true—emotions that often bring out the worst behaviors in some people. Since much of this book explores the political (but mostly nonviolent) uses of digital influence warfare, let's focus here for a bit on violent extremists and terrorists. In truth, there is already a tremendous amount of news media coverage and research literature on how violent non-state actors utilize the Internet for their purposes.159 We know that extremists and terrorists use the Internet to spread ideological propaganda, recruit new members, and mobilize people to act. We know a great deal about how global jihadists like al-Qaeda and the Islamic State have used discussion forums, YouTube videos, online magazines (like Inspire and Dabiq), text messaging, and social media to achieve specific inspirational and instructional goals of their movement.160 Research by Ali Fisher has

revealed how this movement’s “media mujahideen” operate “through a dispersed network of accounts which constantly reconfigures much like the way a swarm of bees or flock of birds constantly reorganizes in midflight,” which he refers to as “swamcast.”161 Based on research by Katherine Brown and Elizabeth Pearson, we also know how “members of this movement developed their own Twitter app, ‘Dawn,’ which automatically downloaded ‘tweets’ to a given account’s timeline. This subverts attempts to close accounts and vastly increases reach through synchronization with peak viewing times of different time zones.”162 Social media also facilitates “narrow-casting,” which Brown and Pearson describe as “channeling internal private communications between members or sympathizers of extremist groups. The online environment can provide safe spaces for debate, organization and networking, and risks of detection are perceived to be lower than offline. ‘Broadcasting’ on social media can assist in ‘narrowcasting.’ On 26 September 2015, just four days after Telegram launched its new ‘Channels’ tool, Daesh [Islamic State] set up its own, ‘Nashir’ (‘Distributor’). Extremists favor Telegram because of its ‘secret chat’ function, which encrypts messages and avoids security service detection.”163 Meanwhile, we also know how right-wing extremists have—since the early 1980s—pioneered the use of the Internet for disseminating hate speech through message boards, videos, music, and even video games.164 Prominent manifestos in this genre range from The Turner Diaries (a personal favorite of Oklahoma City bomber Timothy McVeigh)165 to Anders Breivik’s 1,500-page manifesto attempting to justify his horrific July 2011 terror attack in Oslo, Norway.166 Researchers have also shown how farright extremists use search engine optimization methods, incorporating politicized keywords to get their content ranked higher in search results.167 These extremists have favored the Telegram messaging app for the same reasons that Jihadists do—and increasingly so after August 2017, when Stormfront—one of the oldest and largest neo-Nazi discussion forum— was taken offline following a series of lethal hate crimes and mass killings linked to members of the site.168 And after the 8chan messaging board was taken down in 2019, a study found that “white nationalists have a much more robust presence on Telegram than they did two years ago” and that “their channels have grown more sophisticated, violent and terroristic over time,” with some channels offering instructional guides for building pipe bombs, stockpiling weapons without the feds noticing, and preparing for a mass shooting.169 Right-wing extremists have also increasingly used Discord channels to meet online and plan digital influence tactics such as organizing a “dislike” campaign on YouTube, where the goal is to vote down videos of your ideological opponents.170 Some communities of extremists would likely not even exist were it not for the Internet. The QAnon conspiracy phenomenon, for example, was

launched and sustained primarily through online message boards (like 4chan and 8chan), YouTube videos, and social media platforms. According to a study published in the CTC Sentinel, “QAnon represents a militant and anti-establishment ideology rooted in an apocalyptic desire to destroy the existing, corrupt world to usher in a promised golden age.”171 Adherents of this ideology formed an online community through which they reinforced increasingly radical beliefs, fueling several high-profile cases of violent attacks and plots. And because of the profit incentives built into the attention economy (described later in this book), YouTube and social media platforms allow extreme and misleading information to proliferate because it increases “engagement” and allows them to draw in more advertising revenue. But in addition to spreading ideological messages, forging online group identities, and motivating certain kinds of behavior among supporters, we are also seeing a wide variety of individuals and groups increasingly use the same tactics and tools often ascribed to state-based digital influence efforts. For example, terrorist networks like Islamic State and al-Qaeda use these tools to sow discord and confusion among target audiences. According to Dell Dailey, a former U.S. State Department counterterrorism chief, “Al-Qaeda and other terrorists’ center of gravity lies in the information domain, and it is there that we must engage it.”172 Both jihadists and rightwing extremists use social media to provoke people with their information, not only to get a reaction but also because frequently these people express their outrage about what they have just seen, which in turn helps spread the original information to others. They also seek to polarize “the enemy” much like Russia does. This is why we see a range of examples today of extremists who engage in trolling, harassing, gaslighting, attacking, using deepfakes for perceptions manipulation, and other tactics described in chapter 3. Often, a primary goal of these online efforts has been to provoke fear in order to influence the policies and behavior of the target. For example, in early January 2015, members of the Islamic State posted a video to several websites showing a Jordanian pilot they had captured after his plane went down in Syria. The pilot was shown injured but standing upright, locked in a cage at the center of a village square surrounded by onlookers. After several minutes, the video shows the lighting of a fire that proceeds to engulf the pilot, who dies in agony.173 This ghastly video was posted on multiple Jihadist websites and via social media accounts for many days. The purpose of this video dissemination effort was at least fourfold: first, elicit fear and panic among Islamic State’s enemies, as well as those who might be only tentatively committed to the coalition of states aligned against it. In this instance, Jordan’s government responded to public pressure and did not participate openly in the fight against Islamic State following this incident. Second, the video was intended to send a message to other nations about the atrocities that would await them if they didn’t keep

their distance from this conflict. Third, the video was meant to embolden the active fighting cadre of Islamic State (at one point estimated to be over 15,000) by promoting a narrative of justified vengeance (e.g., "Here's a Muslim who has turned his back on Islam by not accepting that we, and al-Baghdadi, are fighting to liberate the global umma!"). And fourth, this was also meant to encourage recruitment of foreign fighters from abroad, showcasing with sheer brutality the power that Islamic State had amassed. By any measure, the video proved quite effective toward achieving each of these objectives. And although most Western news media refused to show it, allowing its reporters to only describe a few details from it, Fox News posted the complete 22-minute video on its website. In a similar example, on March 15, 2019, a right-wing extremist in Christchurch, New Zealand, killed 50 people in two mosques in a mass shooting attack. Some people viewed this as merely the latest in a series of mass casualty attacks by right-wing extremists against houses of worship (e.g., Pittsburgh, PA; Poway, CA; Charleston, SC; and several others) in recent years. However, what made this murder spree unique was that the attacker used a GoPro camera to provide a live, real-time video of his attack on Facebook. Clearly the killer's intent was not simply to kill; using these digital tools to "livestream" the attack online was meant to influence and inspire others. During the attack, the killer referenced various ideas associated with the accelerationist movement, whose proponents believe modern society must be destroyed before something better will come in its place. These beliefs are then used to justify acts of violence meant to "accelerate" conflicts (like racial wars) in which a spiral of violent actions provokes overreactions by the government, eventually resulting in an all-out civil war through which all parties literally destroy each other. The attacker's video was viewed 4,000 times before it was removed by Facebook,174 and within the 24 hours after the attack, the company also located and deleted from its social media platform an additional 1.5 million videos containing footage of the bloodshed.175 Of course, by then it had already been duplicated and reposted to other websites and social media platforms beyond Facebook's control. The attacker also posted online a 74-page anti-immigrant manifesto that reflected his beliefs (influenced by right-wing extremists in Australia and Canada) in a far-right conspiracy theory called "The Great Replacement," which describes the extinction of "the white nation" through uncontrolled immigration.176 In addition to provocation, extremists and terrorists also use digital influence tactics of deception for their own purposes. For example, extremists in the United States (both left wing and right wing) have been aggressively creating networks of Facebook pages and accounts—many of them fake—that make it appear as if the ideas they are promoting enjoy widespread popularity.177 Right-wing extremists have also tried to incite violence and riots at otherwise peaceful Black Lives Matter protests and even

encouraged violence against members of law enforcement.178 In Europe, far-right propaganda and disinformation flooded Facebook ahead of the 2019 European Union parliamentary elections. In Italy, they used a movie clip of a car being destroyed and claimed it was news footage of migrants wrecking a police vehicle. In Poland, they disseminated a fake news story about migrant taxi drivers raping European women. In Spain, they shared lies about Catalan separatists shutting down a child cancer center. In the United Kingdom, they shared a blog post with a beheading photo and a sensationalist headline, claiming "A Billion Muslims Want Sharia Law."179 Terrorists and extremists have also hacked into individual social media accounts, according to one study, in order "to optimize their influence online and counteract the effects of account suspensions and removals by social media providers."180 And extremists and terrorists also use direct attack methods of targeted harassment, as well as steal information and release it publicly in attempts to embarrass or silence their opponents. During the pre-social media era, right-wing extremists were particularly known for using "mail bomb" tactics (flooding a target's email inbox with an overwhelming number of hateful messages) and distributed denial-of-service attacks against websites (through which a server would be overwhelmed with a flood of requests for information and freeze up). Now, as research by Audrey Alexander and Bennett Clifford demonstrates, terrorist groups' hacking capabilities have evolved to include vandalizing websites and "doxxing"—gathering and disclosing or publishing an individual's personally identifiable information online, with the intent of harming a target with acts like public humiliation, stalking, identity theft, or harassment.181 Through the tactics and tools of digital influence warfare, extremists and terrorists are seeking to achieve a variety of goals, including:

• to boost morale and support among the "in-group" of true believers (e.g., "we are the chosen few, the enlightened ones"), providing them with an eagerly desired certainty in an uncertain world;
• to intimidate, demean, and silence their ideological opposition by attacking them, their websites, and social media accounts;
• to provoke dissension, disagreements, and debates among nonbelievers, knowing that a divided enemy falls faster than a united one;
• to diminish faith in established norms, accepted narratives, and conventions of behavior in society in order to facilitate greater acceptance of their own views; and
• to influence beliefs and behaviors among a society's members in ways that align with their political and ideological goals.

And of course, the same tools and tactics of online deception, provocation, and direct attacks that are used by politicians and extremists are also

used by criminal enterprises to scam innocent consumers (e.g., by using a realistic-looking website offering discount prices that in the end are too good to be true or by provoking a target to reveal sensitive information that is then held for ransom). This is a different matter beyond what we have time to cover in this book, but it should be kept in mind that deepfake images and videos, fake websites, and other such topics discussed in this book can be (and increasingly are) used to generate illicit profit rather than for the pursuit of social or political goals. And we should also remember that to the degree any kind of influence effort is successful, it can be assured that others will learn from that success and try to emulate it for their own purposes.

CONCLUSION

To sum up, a brief review of the digital influence landscape reveals that the strategies and tactics described in this chapter can be (and are being) used not only for political purposes but also for criminal purposes, social movement purposes, anti-science purposes, and extremist and terrorist purposes, among many others. Digital influence warfare thus involves much more than just meddling in elections or spreading fake news—it is about influencing the beliefs and behaviors of targets, domestic and foreign, in ways that the state or non-state actor believes will be beneficial to them. While there are many different kinds of examples of these efforts throughout the digital information ecosystem, a comparative analysis reveals a number of common themes. First, they are most often driven by clear strategic objectives the influencer wants to achieve, and usually at the target's expense. Some influence campaigns may focus on attracting supporters, or seek to undermine faith in scientific facts or a democratic election, while others try to provoke fear or polarization among the members of a community. Some may want to increase uncertainty, while others want to reinforce certainty. The tactics chosen for use against the target(s)—such as direct attack, provocation, or deception—are meant to achieve those objectives. And whatever the goals and objectives to be achieved through an influence effort, one should research and choose targets who are most likely to help achieve those influence goals. Using local proxies to disseminate and reinforce a particular narrative can be a particularly effective approach. And an influencer should monitor and assess the impact of their efforts, refining as necessary to improve effectiveness. Why are these strategies and tactics effective? The answer is complicated, as we'll see reflected in the chapters of the second part of this book. Chapter 4, for example, explains how humans are natural seekers of information, often driven to do so in order to manage uncertainty. Even though we are surrounded by multiple forms of uncertainty—it is an inherent part of the human experience—we don't like it much, and for many people

a higher level of uncertainty causes anxiety and fear. So, our seeking of information can facilitate exposure to disinformation, provocation, and other digital influence tactics described in chapter 3. At the same time, as Katherine E. Brown and Elizabeth Pearson explain, the Internet "works like a virtual echo chamber" in which "confirmation biases are tapped, further polarizing group identities."182 As we'll examine extensively in chapter 5, this is another vector of vulnerability, through which an influence aggressor can manipulate us into believing (and even fiercely defending) something that is completely untrue. This is particularly the case when we cocoon ourselves within certainty-reinforcing influence silos, where information that is not consistent with the approved narratives and beliefs of those within can be increasingly shut out and ignored. And when information dominance or attention dominance has been established, the influencer can literally lie at will. But before we get to those important discussions, let's turn our attention toward addressing some important technical details. Underlying all of this is a proliferation of technology companies whose platforms are designed to make money by attracting attention through any means necessary. As Pomerantsev notes, this creates "an information environment in which accuracy, fairness and impartiality are at best secondary."183 These platforms, and the tactics and tools used to manipulate their users, are the focus of chapter 3.

CHAPTER 3

Tactics and Tools: Technical Dimensions of Digital Influence

In this chapter, we’ll look at a broad sampling of tactics and tools in which online technologies are used to manipulate the perceptions and beliefs of a target. To begin with, it is essential to remember that strategic goals and objectives (like those described in chapter 2) should determine the kinds of tactics that would be most effective within any digital influence effort. Often, goals will be complementary or overlapping, and the influencer needs to ensure that the tactics they choose to achieve one goal do not undermine their ability to achieve other goals. And naturally, once you have determined your strategic goals and objectives, you will need to identify your target(s) and begin gathering data on them. From that data, you can determine the contexts in which specific kinds of information will become relevant for your target. As Jarol Manheim notes, “The single most important key to success in an information and influence campaign is good intelligence . . . Any well-crafted campaign will incorporate in its overall planning and implementation a significant strategic research component.”1 Chapter 4 of this book examines various attributes of potential targets, so let’s just say here that the targets you choose for your influence effort should naturally have direct relevance to achieving your goals and objectives. Further, the more data you can gather and analyze about your target, the more effective your influences efforts can be. A unique aspect of the Internet is that it allows us—encourages us, even—to share information about us and our lives in ways (and frequency) that are unprecedented. We post photos, make personal revelations, tell people where we are at a given moment, showcase who our friends and family are—before the Internet, it could take an intelligence or law enforcement agency weeks to compile this much information, but now hundreds of millions of people worldwide are providing free and

unfiltered access to themselves. This is the "privacy paradox" described by Susan Barnes2—while most people understand privacy issues, they are posting tons of information on their social media profiles. As a result, teenagers become surprised when their parents find out—after viewing their social media account—what sorts of things they've been up to. You can't claim to have been home studying all weekend when you have posted a bunch of photos of you and your friends at a late-night party. This aspect of the profile, crafting and maintaining an online identity—which may or may not truthfully reflect who we are in real life—is a particularly critical dimension of digital influence warfare. Further, our online activities (emailing, website surfing, using social media accounts, etc.) are somewhat akin to an iceberg—the fun stuff we do in the visible online arena is just the part of the iceberg we see above water; there is a lot more below water that we cannot see. Overall, there are many kinds of data gathered about individual users of Internet platforms and social media—some of which we may not even be aware of. Does each of us have the power to shape what data is provided about us online and how it is used? No, not really. We have given away that power to the service providers, email servers, website trackers, social media companies, and others. Remember all those "Terms of Service" agreements you didn't read and just clicked the "Accept" button so you could get on with your business? Because of the profit models that pervade the attention economy, Internet firms need to track a user's identity and patterns of behavior, so they can formulate the right kinds of advertising campaigns. Just as every click and keystroke can be monitored, recorded, and used for analysis that generates advertising profits for the Internet companies, the same data can inform an influence strategy. One of the most popular and heavily used email services in the world is Google's "Gmail" platform, which provides free email accounts to anyone. The service is very user-friendly and works on all smartphones and computer operating systems. It also deceives its users into believing that their email is actually theirs—in reality, all your messages are stored on Google's mail servers, along with metadata about when and to whom you send messages (and from whom you receive them). By analyzing this data, Google can then tailor specific ads of interest to you based on algorithms that monitor your usage of their email system. Companies will pay Google a handsome price for the opportunity to place those ads in front of people most likely to respond favorably (in terms of consumer purchasing decisions). Meanwhile, websites use cookies and other information-gathering devices to monitor visitor activity. As described in other chapters of this book, Internet and social media platforms routinely track our website visits, the terms we search for, the videos we watch, the things we download

or upload and send to others, and much more. Additionally, Internet service providers can also gather a lot of information on visitors to websites they host, and many websites appreciate receiving regular traffic analysis reports from them to answer important questions, like: Where do my website visitors come from? How do they find me? How long do they stay at my website and what do they do (i.e., what do they read, click on, search for, etc.)? The collection of this data can also be used by hackers to identify an audience whose members have shown interest in the subject matter of that website. Some websites use a technique known as fingerprinting, a way to force your browser to hand over innocent-looking but largely unchanging technical information about your computer, such as the resolution of your screen, your operating system, or the fonts you have installed. Combined, those details create a picture of your device as unique as the skin on your thumb, as Geoffrey Fowler explains: It doesn’t matter whether you turn on “private browsing” mode, clear tracker cookies or use a virtual private network. Some even use the fact you’ve flagged “do not track” in your browser as a way to fingerprint you. Sites can use your digital fingerprint to know if you’ve visited before, create profiles of your behavior or make ads follow you around. They can also use it to stop you from sharing a password, identify fraudsters and block harmful bots.3
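To make the fingerprinting concept more concrete, here is a minimal sketch in Python (purely an illustration under assumed inputs, not any site's actual tracking code). It shows how a handful of largely unchanging attributes can be combined into a stable identifier that survives cleared cookies and "private browsing" sessions; the attribute names and values below are hypothetical, and real trackers collect far more signals, usually via JavaScript running in the browser.

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Combine largely unchanging browser/device attributes into one stable ID."""
    # Serialize with sorted keys so the same attributes always hash the same way.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical signals a tracking script might report back to a server.
visit = {
    "screen": "1920x1080",
    "os": "Windows 10",
    "fonts": ["Arial", "Calibri", "Times New Roman"],
    "timezone": "America/New_York",
    "do_not_track": True,  # ironically, itself a distinguishing signal
}

print(browser_fingerprint(visit)[:16])  # same device, same short identifier
```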

The data collected by this fingerprinting method could certainly be used to target users for a disinformation campaign. More importantly, the transparency of a social media user profile and activity lends itself to an environment that is rich for data gathering and analysis. Just looking at a quick snapshot of a person's account—examining the number of followers, the number of likes, the number of shares, etc.—offers visual indicators of that person's potential value as a target. Then you collect data on that individual's activities—sometimes called "scraping"—which will help you identify how the target can be influenced. For various reasons, I won't go into the specific details in this book about how to scrape social media platforms or websites, but you can find plenty of instructional guides, YouTube videos, and tools to assist with this kind of activity.4 Some of these tools do not require any coding knowledge, and they instead have been set up as ready-to-use resources for the collection of data. In general, all of our profile information, status updates, interactions, and much more can be harvested and used for advertising purposes by the Internet companies whose tools we use. All this can also be used for identifying the kinds of users most suitable for an influence effort. A combination of data analysis and algorithms will reveal specific identity attributes

about each user that can then aid in devising an appropriate targeting set in support of your influence goals. For example, from their user profile and their activities online, we can find answers to specific questions:

• What can we determine about the user's education, occupation, interests in certain sports teams, movies, music, etc.?
• When are they most likely to be online?
• Who are their friends and associates?
• What kinds of hobbies, interests, and concerns do they indicate?
• What kinds of messages do they post or share?
• Who and what do they tend to like or dislike?
• What can we determine about their preferences and attitudes?
• What sources of information do they provide links to?
• Do they show a pattern of frequently favoring information posted online by specific individuals or organizations, news media, websites, etc.?
• How long or short is the duration of their online sessions?
• What format of information (text, image, video) do they respond to and interact with most often?

Data mining of these and many other kinds of information reveals the social media users who would be most beneficial to target in your influence effort. The amount of information available helps refine targeting of influence efforts. For example, putting forth a convincing profile that reflects shared identity and values with the target can make the influencer more believable to the target. Connections with others—an underlying motivation of many Internet users—also play into the strategies of digital influence efforts. By knowing your connections, the influencer can work to influence some of them, and the result of seeing your friends embrace a certain narrative is that you are more likely to embrace it yourself. Basically, our online connections can serve as a new means of exerting peer pressure on an individual's views and behaviors. Social media services allow us to gather our connections (bonding and bridging) into one place, but they generally don't provide a way to organize them into a hierarchy of importance.5 Nor might we actually want to—imagine the stress you would face if you were forced to identify which of your connections on Facebook or Twitter were your top 20 "most important connections." Would you worry about how your other connections would feel if they were not selected to be atop your list? Would you occasionally modify the list over time, and if so, what impact would that have on your relationships with connections who were added to, or removed from, your "most important connections" list? Further, would you want any random visitor to your profiles (say, a Russian troll) to know which of your connections might have the most influence on your opinions and behavior?
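As a purely hypothetical sketch of how answers to the questions listed above might be turned into a targeting set, consider the short Python fragment below. The profile fields, weights, and thresholds are invented for illustration; a real influence operation would apply far more elaborate analytics to far larger datasets, but the underlying logic of scoring each user on reach, activity, and receptivity and then ranking them is the same.

```python
# Invented profile records standing in for harvested social media data.
profiles = [
    {"user": "A", "followers": 12000, "posts_per_day": 9,
     "interests": {"politics", "guns"}, "shares_unverified_links": True},
    {"user": "B", "followers": 150, "posts_per_day": 1,
     "interests": {"cooking"}, "shares_unverified_links": False},
]

def target_score(profile: dict) -> float:
    """Score how useful a user might be to a hypothetical influence effort."""
    score = min(profile["followers"] / 1000, 20)              # reach, capped
    score += profile["posts_per_day"]                         # activity level
    score += 5 if profile["shares_unverified_links"] else 0   # receptivity proxy
    score += 3 if "politics" in profile["interests"] else 0   # topical relevance
    return score

ranked = sorted(profiles, key=target_score, reverse=True)
print([p["user"] for p in ranked])  # highest-value targets first: ['A', 'B']
```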

The 2016 Trump campaign’s use of data on Facebook users (provided by Cambridge Analytica) offers a useful case study. In its investigation after the election, Facebook revealed that the data of as many as 87 million people may have been shared improperly with Cambridge Analytica, which used this information to help create tools designed to predict and influence voter behavior. Microtargeting strategies basically involve collecting data and then crafting messages most likely to influence specific people in specific ways. The Trump campaign’s digital director (and briefly director of his 2020 reelection campaign) once described how they used Facebook’s advertising tools to microtarget potential supporters with customized ads, making some 50,000–60,000 ads a day, continually tweaking language, graphics, even colors, to try to elicit a favorable response.6 Of course, the same strategies used by political campaigns to identify the voters most likely to support their candidate’s ideas are also used to identify individuals most susceptible to disinformation, and most likely to share it with others. Effective influence campaigns will identify key targets based on their analysis of user attributes. Identifying and then targeting “established local influencers” (i.e., individuals with large numbers of followers) within an influence silo is particularly important. You can use the tools of social network analysis to determine who influences whom within a particular group or network, identifying individuals with high numbers of others who listen and respond to those individuals. In short, because of the massive amount of data available on users of Internet and social media platforms, shaping and controlling the flow of information to the target of one’s influence effort has become far easier than ever before. Finally, keep in mind that different technologies and social media platforms offer different opportunities. Thus, once you have formulated your influence strategy and goals, and determined what targets to influence, you will also need to decide which online communications and social media platforms are best suited for achieving your objectives. The choices you make here will be influenced by the strategic goals you want to achieve and will also determine the tactics you can deploy on those platforms—for example, hashtag flooding can be effective on a variety of social media platforms, but not so much in mass email distribution schemes or when providing information on fake news websites, file-sharing networks, or YouTube. Digital influence warfare campaigns generally seek to manipulate targets (e.g., social media users) as well to alter the broader information environment. In both instances, the strategy must be tailored to fit the specific Internet or social media platform of interest. The variety of social media platforms complicates this somewhat. Facebook, Instagram, Twitter, Tumblr, YouTube, etc. have a common desire to attract as many users as possible, so they can display advertisements in front of those users.

72

Digital Influence Warfare in the Age of Social Media

This is their basic profit model in the attention economy, the source of the platform's revenue stream. Thus, the more users a social media platform has, the more it can earn from advertising revenues. Each platform also lets users interact with others in similar yet subtly different ways. For example, Twitter is a microblogging platform that facilitates a great deal of public engagement, with back-and-forth tweets among those willing to engage in a discussion that others can view. You can also communicate privately via direct messages with other users on the platform. Twitter allows you to broadcast information in many different formats (text, image, video clip, links to other sources, etc.) to your followers, and by extension—should they choose to retweet your message—to the users who follow your followers, allowing for a potential cascade of repetition. On Instagram, you can comment on photos and mention other users, but there is less opportunity for open dialogue. And unlike Twitter, you can disable the ability for people to comment on an individual post. Tumblr allows you to post and share links; text; quotes; and audio, photo, and video posts from other Tumblr blogs, and people who see the information can share/repost and comment on it. Meanwhile, YouTube does things a bit differently, because it is a different kind of platform dominated by content producers and content consumers. YouTube allows you to host your own "channel" with video content, and like the other platforms, viewers of your content can like and comment on your videos. Further, instead of a scrolling "feed" of information in your account, algorithms are used to encourage visitors to view other videos similar to the one you've just watched. So, if you were to watch one video full of inaccurate information, the algorithm would promote other similar types of disinformation.7 But (as with some of the other platforms) there is less opportunity for open dialogue. The important thing to keep in mind here is that there are a number of similarities and differences among the many social media platforms that an influence strategy should take into account. These differences naturally impact the tactics you can use within a particular influence strategy. Not all tools and tactics work the same, but there are a number of similarities when it comes to the ability to collect and analyze data, use algorithms to make predictions, and so on. Whether it's Facebook, Twitter, Instagram, Twitch, TikTok, YouTube, WeChat (the biggest one in China), or VK (formerly VKontakte, the largest platform in Russia), all have hidden means of logging data on users, their patterns of activity, and other information that can be used to influence the targets. Each of the platforms also has its own policies and regulations about what kinds of information are or are not allowed. All the major social media platforms routinely remove accounts that have violated their policies. For example, Twitter and Facebook have banned noted conspiracy theorists and hate-mongers like Alex Jones, Louis Farrakhan, and others from their platforms.8 Google (which owns YouTube) announced in 2019 that it would ban websites that spread fake news from using its online
advertising services and that it would remove videos from YouTube advocating Nazi or other hateful ideologies.9 Twitter announced it would ban all political ads, and Facebook has banned fake and manipulated videos from its platform.10 They also announced that they would no longer permit ads from websites that displayed misleading or illegal content—with a notable exception. As of June 2020, Facebook CEO Mark Zuckerberg remained insistent that political ads, even when they contain obviously false information, would still be allowed on the platform.11 Obviously, this gives tremendous power to those seeking to spread disinformation and has direct implications for what to expect in the 2020 U.S. elections. Notably, some videos containing false information can still be posted to YouTube. This was the centerpiece of much public debate when in late 2019 a Trump campaign ad attacking Democratic candidate Joe Biden was posted online. While the New York Times, the Washington Post, and many other news outlets assessed the video to be making completely false allegations, and CNN refused to air it, the video was hosted on a YouTube channel.12 Google, which owns YouTube, has policies in place that do not allow a user to post politically oriented videos that can be proven fraudulent. For example, stating that people can vote via text message, or providing the wrong time or location for casting your vote on Election Day, or making easily debunked claims about a particular candidate—these are all disallowed on the platform. But when a video provides tidbits of real information melded together with carefully edited inferences and allegations that are more difficult to prove false, such a video apparently can be allowed. Anyhow, the point to make here is that these various differences across the social media platforms naturally have significant implications for conducting an effective digital influence strategy, as some tactics are better suited than others to a specific platform. Further, as a recent UNESCO report notes, "Some social media and messaging platforms have limited quality control standards for determining what constitutes news, make it easy to counterfeit and mimic legitimate news brands to make frauds look like the real thing."13 Today there is an array of online tools and tactics that can be used to shape people's perceptions about virtually anything. An influencer looking to engage in digital influence warfare will want to have a solid understanding of these tactics and their purposes and then choose among them the tactics best suited to achieving the strategic goals they've previously identified. So let's have a look now at a relatively modest sample of these tactics before concluding the chapter with some thoughts about assessing the effects of digital influence efforts.

TACTICAL CHOICES

There are many tactics14 that can be pursued in support of these strategies.

Table 3.1  A sampling of tactics used in Digital Influence Warfare

• Astroturfing (fake grassroots support)
• Playing both sides
• False amplification of critiques of opponents
• Shocking or graphic content
• False amplification of marginal voices
• Hashtag poisoning
• False amplification of news
• Impersonation of public figures
• Impersonation of political allies
• Defamation
• Doxxing
• Hacking and leaking documents
• Scare stories
• Communications disruption
• Spam
• Algorithm exploitation and manipulations
• Deepfakes
• Dissemination of doctored images, videos, and documents
• Dissemination of false, misleading, or misattributed content
• Interference with political processes
• Impersonation of websites
• Intimidation and harassment
• Restriction of availability of information to the public
• Dark advertising
• Exploitation of content moderation systems
• Dissemination of conspiracy theories

Source: Adapted from Alex Krasodomski-Jones et al., "Warring Songs: Information Operations in the Digital Age," Demos/Open Society, European Policy Institute (May 2019), p. 8. Online at: https://demos.co.uk/wp-content/uploads/2019/05/Warring-Songs-final-1.pdf

These may include altering the nature and quality of information that can be accessed (e.g., hacking a website or blog in order to insert, remove, or amplify information); changing the visibility of information, like making formerly secret documents available to the public; degrading the information environment, like flooding an online conversation space with confusing, mixed messages in support of or against a certain topic and preventing the spread of opposing information; limiting the ability or willingness of politically, culturally, or socially opposed voices to participate in the discussion. A fairly extensive list of tactics was identified by the independent analysis group Demos in their recent report Warring Songs, as shown in Table 3.1. I'll describe each of these, as well as several others, throughout this chapter. While analyzing many examples of digital influence efforts (and their impacts), I found that the majority of them fall into one of three categories. One category of efforts involves deception. Basically, the influencer seeks to benefit by deceiving the target in some way. We'll examine several specific examples of this in a moment. The second category involves provoking emotional responses, usually about something that the target already favors or opposes. Exacerbating fear and anger appear to be among the most common goals here. And the third category involves attacking the target
directly—bullying, calling them derogatory names, spreading embarrassing photos of them, and so forth. Of course, these three categories of tactics are not mutually exclusive: a digital influence effort could involve a combination of them—for example, using deception (like a hijacked account and fake images) to attack the target in order to provoke an emotional response. The more I searched for examples of these various tactics, the more I found—far too many to represent adequately in this one chapter—so I'll provide just a handful of examples of what I learned during this journey through the murky world of digital influencing. The bottom line here is that despite the public handwringing about fake news, the strategies and tactics of digital influence warfare encompass a much broader terrain of activity. As the Demos report (mentioned above) noted, "Fake news is a tiny cog in a much larger machine."15 So, let's begin our discussion on tactics by exploring the many different ways deception is deployed in digital influence warfare.

Category #1: Digital Tools and Tactics to Deceive

In the early days of the Internet, it was already well known that certain individuals were posting information (usually text and a few images) meant to portray a false narrative as believable. For example, during the mid-1990s I remember attending an educator's workshop in which the audience was shown a series of websites that claimed—rather convincingly to some viewers, sadly—that the Holocaust had never happened; it was all a hoax. The main takeaway from this was the caution that teachers at all levels of education would need to find new ways of developing the qualitative assessment and critical analysis capabilities of our students, in order to counter the influences of these false narratives. Over time, while the "Internet literacy" movement in education evolved, new tools were developed for crafting and delivering such information in ways that proved more convincing than ever before. Meanwhile, other forms of deception became commonplace. Some examples involved criminal hackers using fraudulent emails to try to trick you into clicking on a link, which would then expose you to malware that could reveal personal information, infect your computer with viruses, and so forth. People were routinely deceived into donating money online to a fraudulent cause. Remember the Nigerian prince who wanted to give you millions of dollars, and all he needed was for you to provide him with your private bank account details and pay the transfer fee? And there are numerous stories of "catfishing," in which some poor victim (often through a dating website) is lured into a romance scam by a con artist using a highly convincing and attractive (yet completely fake) profile. Because of the way we interact with information online today, we are vulnerable to the tactics of deception. Recent decades have seen an
accelerated shifting of power in the social and political communication arenas from media and educational institutions to individuals. The elevation of individuals to the role of unmediated information providers has had an unprecedented effect on how information is distributed and consumed and the kinds of information we can see on our laptops and handheld devices today. We have limitless choices, and many of us choose information sources that conform to our beliefs, values, hopes, and desires. Typically the information sources we choose will be aligned with our sense of identity. This gives a huge advantage to those who want to deceive us. Once the influencer knows their target's beliefs, they can tailor information—entirely false information—that conforms to those beliefs and satisfies their desires for social validation about those beliefs. Of the several kinds of deception possible on the Internet, let's focus the discussion on three subcategories that seem most prevalent: information deception, identity deception (including deceiving people about the source of information), and engagement deception.

Information Deception

One of the easiest forms of information deception involves taking a photo and altering it in some way. You or I could do this fairly easily—heck, my kids can do it better than I can. There are any number of tools you can use, some of which you can even download for free off various websites, to make it look like a person was somewhere they've never been or doing something they've never done. Most of us are familiar with the concept of air-brushing, "Photoshopping," and other means of manipulating an image. Many of today's smartphones contain built-in software and applications that allow you to "fix" a photo in a variety of ways, from removing red-eye and altering hair color to inserting background images or even people who had nothing to do with the original photo. There must be billions of fake images populating the Internet by now, many of them quite harmless, maybe even moderately funny. Today's technology enables you to create and post online an image of Pope Francis on a surfboard looking stoked as he cuts back onto a clean wave. But alas, the same photo editing software is also increasingly used to deceive people about something more serious, with images of ordinary, real things with small, nuanced alterations. A smile changed to a painful grimace; a calm, blank stare made to look menacing; a clean-cut professional made to look haggard, with dirty hair and a thick five o'clock shadow. These are examples of how digital influence can really be effective. Rather than producing images about things that are obviously fake—a shark leaping out of the water to attack a helicopter, for example—the goal is to have us see things we are likely to believe as plausible evidence of something the influencer wants us to believe.
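Detecting this kind of subtle alteration is an active area of work for researchers and fact-checkers. One common first-pass screen is perceptual hashing, which reduces an image to a compact fingerprint that changes as the pixels change. The minimal sketch below assumes the third-party Pillow and imagehash Python libraries and a known original to compare against; small, localized edits will not always move the hash enough to trigger the threshold, so this is a screening aid rather than proof of tampering.

```python
# pip install Pillow imagehash   (third-party libraries, assumed here)
from PIL import Image
import imagehash

def likely_altered(original_path, suspect_path, threshold=6):
    """Compare perceptual hashes of a known original and a suspect copy.

    Identical or lightly recompressed copies hash to nearly the same value;
    a doctored copy (a changed expression, an inserted figure) usually
    drifts further apart. The threshold is a tunable judgment call.
    """
    h1 = imagehash.phash(Image.open(original_path))
    h2 = imagehash.phash(Image.open(suspect_path))
    distance = h1 - h2  # Hamming distance between the two fingerprints
    return distance > threshold, distance

# Hypothetical usage: compare a wire-service photo against a viral copy.
# altered, dist = likely_altered("wire_original.jpg", "viral_copy.jpg")
```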

Fake photos inspired the development of fake animated GIF images (Graphics Interchange Format—basically, a sequence of photos merged together into one file, like a very short video clip). These can also be found throughout the Internet, from harmless ones involving Kermit the Frog shaking his head or drinking tea to more steamy ones on myriad pornography websites, where the image creator would superimpose the face of a celebrity (or virtually anyone) onto the body of a porn star engaged in the kinds of activities porn stars do. Like the fake photos, these animated GIFs are also used to make it look as if someone is doing something they actually never did. The next threshold of deception was the video. At first, altered videos (sometimes called "cheapfakes") were not difficult to spot. For example, the audio track would be slightly off, or the quality of the video images left one unsure about their authenticity. Some videos have obviously been wholesale fabrications—staged, with actors, scripts, and so forth—and comically amateurish, like throwing a pie pan into the air, taking a short video clip of it floating in the sky, and calling it proof of UFOs, aliens visiting Earth from another planet. But manipulating videos became easier over time—as a recent UNESCO report notes, it is increasingly possible to "engineer audio and video in ways that go beyond legitimate news editing in order to make it appear that a particular individual said or did something in some place, and to pass this off as an authentic record, sending it viral in the social communications environment."16 Chapter 1 described a recent case where a video of Congresswoman Nancy Pelosi was slowed down slightly to make it seem that she was slurring her words and was possibly inebriated. Another worrisome example of altered videos was the clip showing Donald Trump and CNN White House correspondent Jim Acosta arguing. The altered version of this video—distributed by the InfoWars conspiracy website—sped up the motion of Acosta's arm, making him appear to strike a White House aide as she reached for his microphone. The Trump Administration then used this manipulated video to justify revoking Acosta's press privileges.17 The technology also enables you to edit out certain frames of the video in order to depict an event differently from what it really was. For example, for several decades, one could use simple video editing tools to alter a speech in order to make it seem like the speaker said something different. Imagine this hypothetical scenario: a soldier has committed a sexual assault against a victim in Iraq. The military spokesperson gives a speech including the words "we do not condone sexual misbehavior by our soldiers." Someone edits out the frames in which the spokesperson says "do not" and then splices the remaining frames together. To anyone viewing the edited video, it seems like the spokesperson said something quite different—in essence, provoking anger through disinformation. If the editing is competent enough, viewers may find it nearly impossible to tell that the video has been altered from its original version. Now envision what a terrorist
group like al-Qaeda or Islamic State can do with this technology—for example, taking a speech by the U.S. president in which they said, "The U.S. is not at war with Muslims around the world," and then simply editing out the word "not," splicing the rest back together, and using this as part of the group's own propaganda videos. Fake videos come in a wide range of styles and forms. There are videos that purport to "reveal" some sort of truth that is being hidden from you by conspiratorial shadowy networks, sometimes even the government. Many of these videos use real video footage but interpret the context and event in ways that are untrue yet could be perceived as possible. The videos distributed by InfoWars questioning the official investigation into the 9/11 attacks are a classic example. There are also fake video clips in which someone's face is superimposed onto someone else, similar to the fake GIFs described earlier. For example, several have shown the actor Nicolas Cage's face placed on various other actors (and actresses), who are in various states of undress or doing things Mr. Cage would never be doing in real life. These types of videos have been created more for laughs and entertainment than to cause significant harm or spread misinformation. These are usually easy to spot, though the technology for video deception continues to improve. For example, Adobe recently released "Project Cloak," an After Effects software update that allows users to easily make aspects of a video vanish. And now there are newer, more sophisticated—and very real-looking—so-called "deepfakes" videos. The term "deepfake," a combination of neural network-based "deep learning" and "fake media," is used to describe videos that incorporate the use of artificial intelligence (AI) to seamlessly integrate misleading alterations. Joe Litell, a U.S. Army officer with expertise in information operations and machine learning, provides a good succinct explanation of how it works:

Neural networks are computer algorithms that are meant to mimic the processes of the human brain. Interconnected layers of nodes, which represent neurons, conduct mathematical computations on inputs that pass their result on to the next layer. Each node in the next layer receives results for each node in the previous layer until a prediction is made. The decision is then compared with the actual outcome, and each connection between the nodes is adjusted to carry more or less weight. The process is repeated until the difference between predictions and outcomes is minimized. These implementations allow experts, and in some cases mere enthusiasts, to manipulate images, video, audio, and text in such a way that even the keenest observers can be deceived. This capability could be used to interfere in an election, sow political chaos, or frustrate military operations, making this a national security issue.18
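For readers who want to see the compare-and-adjust cycle Litell describes in its simplest possible form, the toy sketch below trains a tiny two-layer network (in Python with NumPy) to learn the XOR function. It has nothing to do with faces or video, but the same loop of predicting, comparing with the actual outcome, adjusting every connection weight, and repeating is what deepfake generators rely on at vastly larger scale.

```python
import numpy as np

# A toy network that learns XOR: inputs flow through layers of nodes, a
# prediction is made, the prediction is compared with the actual outcome,
# and every connection weight is adjusted. Repeat until the difference
# between predictions and outcomes is minimized.
rng = np.random.default_rng(0)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # last column acts as a bias input
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(3, 8))              # input-to-hidden connections
W2 = rng.normal(size=(8, 1))              # hidden-to-output connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    hidden = sigmoid(X @ W1)              # forward pass through the layers
    pred = sigmoid(hidden @ W2)           # the network's prediction
    error = pred - y                      # compare with the actual outcome
    grad_out = error * pred * (1 - pred)  # backpropagate the error
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out       # adjust each connection's weight
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(pred, 2))  # converges toward [[0], [1], [1], [0]]
```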

In these videos, AI and advanced computer algorithms are used to identify facial features, movements, lighting, and so forth in one video that

match (or approximate) what is seen in another video. The implications of algorithmically generated face-swapping videos are fairly clear for disinformation efforts, particularly when focused on boosting (or undermining) support for a political candidate. As Will Knight observed in a recent article about one of the world’s top deepfake video experts, “When fake video footage is as easy to make as fake news articles, it is a virtual guarantee that it will be weaponized. Want to sway an election, ruin the career and reputation of an enemy, or spark ethnic violence? It’s hard to imagine a more effective vehicle than a clip that looks authentic, spreading like wildfire through Facebook, WhatsApp, or Twitter, faster than people can figure out they’ve been duped.”19 Deepfakes are far more complicated—and realistic—than the earlier versions of fake videos, but unfortunately, the technology is advancing rapidly. It is only a matter of time before undetectable deepfakes can be created with just a few simple clicks, even by amateurs. The technology is already being used to recreate lifelike video of individuals from mere paintings of them. Essentially, you can take a single painting or photograph and generate a series of images that when viewed sequentially give the appearance that the person is speaking, making facial expressions, moving their head, and so forth.20 As researcher Samantha Cole described it, “The machine learning technology brought the Mona Lisa, Fyodor Dostoevsky, and Salvador Dali to life, using just a few portraits . . . If deepfakes are scary because they’re relatively easy to create, it’s even scarier that a program can do the same thing, with only a handful of photos of your target.”21 As Singer and Brooking explain, “Using such technology, users will eventually be able to conjure a convincing likeness of any scene or person they or the AI can imagine. Because the image will be truly original, it will be impossible to identify the forgery via many of the old methods of detection.” The technology can generate images and videos of something that has not happened in real life; “events that never took place may nonetheless be presented online as real occurrences, documented with compelling video evidence.”22 In a recent Foreign Affairs article, Robert Chesney and Danielle Citron argue that as deepfakes develop and spread, “the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields. . . . Legal and technological solutions—forensic technology, authenticating content before it spreads, ‘authenticated alibi services,’ criminalizing certain acts—may help. But deepfakes will become better and cheaper, and democracies will have to learn resilience and how to live with lies.”23 Of similar concern, we have seen the emergence of deepfake audios. As Singer and Brooking explain, AI can be programmed to study a database of words and sounds to infer the components of speech—pitch, cadence, intonation—and learn to mimic a speaker’s voice almost perfectly. Moreover, the network can use its mastery of a voice to approximate words

and phrases that the machine has never heard. With a minute’s worth of audio, these systems might make a good approximation of someone’s speech patterns. With a few hours, they are essentially perfect. One such “speech synthesis” start-up, called Lyrebird, shocked the world in 2017 when it released recordings of an eerily accurate, entirely fake conversation between Barack Obama, Hillary Clinton, and Donald Trump. Another company unveiled an editing tool that it described as “Photoshop for audio,” showing “how one can tweak or add new bits of speech to an audio file as easily as one might touch up an image.”24 This is a far cry more sophisticated than the “celebrity voice changer” you might have downloaded to your smartphone. Finally, there are the advances in CGI, where actors perform a scene on a soundstage with a green screen, and then an array of computer-generated, real-looking animals (e.g., in Chronicles of Narnia) and others (including some people who have passed away) are digitally added to the film along with an incredibly detailed background scenery. Here’s a fun assignment: have a look at some Hollywood films of years ago that adopted early forms of CGI and then look at the CGI of today’s blockbuster movies (like the Star Wars or Marvel franchises). This kind of comparison typically illustrates how advanced this CGI technology has become—which in turn gives us ample reason to expect it will continue advancing in sophistication for many years into the future. Great for entertaining us, but not so great for defending against perception manipulation and influence attempts. Many observers of disinformation today believe that these “deepfakes” and other kinds of image and video manipulation will be increasingly problematic in years to come. One report suggests that the photo-sharing platform Instagram may be the most prominently used distributor of disinformation during the 2020 election, with altered “deepfake” videos of candidates meant to deceive and provoke emotional responses.25 Particularly troublesome will be entirely realistic-looking videos that show a person saying something they’ve never said—a fake celebrity endorsement of a political candidate, for example, or a video made to look like a surreptitious recording of a candidate doing or saying something that undermines their credibility. As Michael Mazarr and his colleagues noted in a recent Rand report, “Simply put, the ability to manufacture seemingly tangible reality from scratch has now become commonplace.”26 Meanwhile, building your own website that looks credible enough to be viewed by some as credible and authentic is a bit more complicated than altering an image or video, but there are plenty of tools available to help you (or you can hire any number of digital influence mercenaries to help you with this).27 It is relatively inexpensive to create a website that has the trappings of a professional news organization, and you can easily monetize your content through online ads and social media dissemination. One reason fake news websites are so common in the arsenal of

digital influence warfare is that it seems to be quite effective. According to a 2016 BuzzFeed News investigation, viral fake news stories on Facebook during the final months of the U.S. presidential campaign gained more shares, reactions, and comments than top articles by the New York Times, the Washington Post, and other major news outlets.28 Notably, nearly all of the top-performing fake election stories had either an overtly pro-Trump or an anti-Clinton bent. In May 2014, the Washington Post launched a series called “What was fake on the Internet this week,”29 in response to what it described as “an epidemic of urban legends and Internet pranks.” At the outset, the typical Internet hoax du jour was a lighthearted, silly affair, false stories on subjects like pregnant tarantulas roaming the streets of Brooklyn or the makers of Oreo launching a fried chicken flavor. By the end of 2015, the series was shelved—not because of a dearth of fake content online but because the pace and tenor of online disinformation had become much more difficult to stomach. The fakes were easier to spot, but garnered evermore traffic. The subject matter had grown hateful, gory, and divisive. It was less funny and more upsetting.30 For example, a completely fake story, “Pope Francis Shocks World, Endorses Donald Trump for President,” blasted across American social media networks in July 2016 like wildfire.31 Although it was proven false—by a stern denial from Pope Francis himself32—three times as many Americans read and shared it on their social media accounts as they did the top-performing article from the New York Times. This “fake news” story was created by a few young Internet entrepreneurs in Macedonia who had previously built up a following of Trump supporters on social media through a variety of popular (though completely false) news stories on the dozens of websites they operated, including the claim of “proof” that former President Obama was born in Kenya. Fake news can be a complete fabrication (the pope didn’t really endorse Donald Trump), but often there’s a kernel of truth that’s taken out of context or edited to change its meaning. The information may appear on something that resembles a legitimate news website—with names such as ­newsexaminer​.­net or ­WorldPoliticus​.­com​—­and go viral when it’s tweeted by someone with lots of followers or turned into a “trending” YouTube video. The most sophisticated disinformation operations use troll farms, artificial intelligence, and automated accounts (or bots, described later in this chapter)—what some researchers call “cyber troops”—to flood the zone with social media posts or messages to make a fake or doctored story appear authentic and consequential.33 As David Lazer notes, “The Internet not only provides a medium for publishing fake news but offers tools to actively promote dissemination.”34 Furthermore, once fake news has been spread by the influencer and believed by the target, any attempt at correcting those beliefs will have

only limited success. Many studies have shown that a provocative statement or accusation can spread quickly throughout a society, but a follow-up corrective statement or admission of error rarely has the same effect. In fact, the more extreme a headline is, and the more time the targets spend processing it, the more likely they are to believe it.35 At the same time, influence operators have learned how to game the system on Facebook, how to attract attention and gain followers to the level at which they can monetize their social media activities. They post professional-looking material that has no basis in fact, and in most cases, their website even acknowledges (usually a few clicks deeper than the home page) that they are a "fantasy news" and information source only. Some websites may be part of a network of locally oriented fake news organizations, using low-cost automated story generation and news pieces copied from elsewhere. An investigation in 2019 by the Tow Center for Digital Journalism at the Columbia Journalism School discovered at least 450 websites in a network of local and business news organizations, each distributing thousands of algorithmically generated articles and a smaller number of reported stories. Of the 450 sites discovered, at least 189 were set up as local news networks across 10 states with titles like the East Michigan News, Hickory Sun, and Grand Canyon Times. According to the report's author, "These networks of sites can be used in a variety of ways: as 'stage setting' for events, focusing attention on issues such as voter fraud and energy pricing, providing the appearance of neutrality for partisan issues, or to gather data from users that can then be used for political targeting."36 Further, these kinds of "websites and networks can aid campaigns to manipulate public opinion by exploiting faith in local media. The demise of local journalism in many areas creates an information vacuum, and raises the chance of success for these influence campaigns. The strategy is further made possible by the low cost of automating news stories, repurposing press releases (including obituaries from funeral homes), and replicating design templates, as well as the relative ease with which political or single-issue campaigns can obscure their funding and provenance."37 In addition to creating and hosting your own fake news website, some digital influence strategies involve altering the information provided on legitimate websites. For example, Wikipedia is a well-known source of information, in part because the crowd-sourced, open-access nature of it has allowed the site to cover more topics than the entire multivolume Encyclopedia Britannica ever could. Virtually anyone can easily update Wikipedia page entries or create new ones. College students (much to the chagrin of their professors) routinely use the site as a resource for their course term papers. Journalists use the site for background information when preparing a news story about people, events, countries, and much more. Google search engine results almost always provide a link to something on Wikipedia. It is likely one of the most frequently visited websites in the world.

Wikipedia is also an excellent resource for digital influencers. The steps to using that website for spreading disinformation and fake news are quite simple: (1) choose a Wikipedia page; (2) edit some information on that page, perhaps with a quote or some background material that could be authentic but would take some time to verify by Wikipedia editors; (3) write a story for your Facebook or Twitter account, blog, or fake news website that links to the information in Wikipedia you have just manipulated. Be sure to include the URL to the Wikipedia page in a footnote citation. When people read your story and see that your information came from Wikipedia, many will assume it’s accurate. If they seek verification, they can click on the link, and if the entry has not been fixed, your edited version will still be there for all to see. Many pranks have been pulled in this manner (e.g., the entry for a particular celebrity was altered to change the name of his mother, and when he arrived at the Academy Awards with his mother, a reporter approached them and on live television said, “And let’s introduce the world to your mother [wrong name]”). Unfortunately, the ways in which Wikipedia can be used for deception are fairly obvious. But the same tactics for deception can be followed by more sophisticated hackers who can gain access to—and then alter the information on—virtually any website. The list of websites that have been hacked in this manner is quite long, but some recent examples include the January 2020 attack against the website of the Federal Depository Library Program by a group calling itself “Iran Cyber Security Group Hackers.” In this instance, the website was altered to show messages vowing revenge for the death of former Iranian military leader Qassim Sulaimani (who was killed in an American drone strike), accompanied by a doctored photograph of Trump being punched in the jaw, superimposed over a map of the Middle East.38 In mid-2015, the official U.S. Army website was hacked and defaced by a group calling themselves the Syrian Electronic Army. One of the messages shown on the altered website read: “Your commanders admit they are training the people they have sent you to die fighting.”39 A year earlier, the group was also responsible for a similar attack against the websites of Forbes, Ferrari, The Daily Telegraph, and hundreds of others.40 In sum, there are numerous ways in which various kinds of fake or altered content are being developed and used in digital influence warfare efforts. Many other examples are provided throughout this book. But in addition to falsifying the information communicated to the target, the influencers will also try to deceive the target about the source of information. As Jakub Kalenský notes, Russia has been experimenting with new ways of blurring the sources of disinformation, in one instance paying Ukrainian citizens to give a Russian agent access to their personal pages.41 The overall intent is to obscure the origin of the information communicated as part of the influence effort. This leads us to the second category of tactics that I found in studying this phenomenon: identity deception.

Identity Deception

Various forms of identity deception can include hijacking email accounts, direct messaging accounts, social media accounts, or others, which are then used to make it look like the real account owner is saying or doing something you want the target to believe. But the more common activities in this category include creating a false online identity (or in some cases multiple false identities) or programming and controlling a network of automated accounts (called "bots"), often for the purpose of coordinating them in pursuit of influence goals. A well-publicized example of this was "Gay Girl in Damascus," a blog whose author, "Amina Arraf," claimed to be a 35-year-old Syrian woman participating in an uprising against President Bashar al-Assad.42 The blog found a global audience, moved by vivid descriptions of queer life in the Middle East, and a May 2011 article in The Guardian described her as "an unlikely hero of revolt in a conservative country."43 Then in June 2011, a different kind of post appeared on the blog, a panicked update from Arraf's cousin explaining that she had been thrown into the back of a red minivan by three mysterious men in downtown Damascus. News of the kidnapping quickly spread around the globe, resulting in reports from The Guardian, the New York Times, Fox News, CNN, and more, while the U.S. State Department reportedly started an investigation into her disappearance.44 But six days after the alleged kidnapping, the hoax was revealed: The "gay girl from Damascus" was actually a straight 40-year-old American man from Georgia named Tom. The blog, social media accounts, and nearly six years of forum postings under the name Amina Arraf were all fake.45 As the old saying goes, on the Internet nobody knows you're a dog. Impersonating others has become an all too common annoyance on the Internet. Sometimes masking your true identity may be necessary, like for dissidents in authoritarian countries. In other instances, it is done for the purposes of stalking, cyberbullying, trolling (described below), and even for criminal activity. In terms of digital influence warfare, the advantages of identity deception are fairly clear. For example, one person can establish multiple accounts on a social media platform—hundreds or even thousands—and use them to orchestrate various influence operations against a target. The platform where this has taken place the most so far is Facebook, which admitted in late 2019 to hosting over 120 million fake accounts.46 Naturally, Facebook is trying to respond to this problem. In August 2019, Nathaniel Gleicher (Facebook's head of cybersecurity policy) announced the identification and removal of more than 350 Facebook accounts and pages connected to the Saudi Arabian government for engaging in "coordinated inauthentic behavior."47 These accounts were posing as news outlets and locals with fake names, and the primary target of this particular
influence effort appears to have been people primarily in the Middle East and North Africa.48 According to Gleicher, the goals of this effort were to “disseminate their content, increase engagement and drive people to an off-platform domain.”49 Separately, agents of Saudi Arabia have been implicated in online attacks targeting Amazon founder and CEO Jeff Bezos over the coverage of the murder of journalist Jamal Khashoggi by the Washington Post, which he owns,50 and for coordinating a network of fake accounts on Twitter sending pro-Saudi messages that often included the hashtag #We_all_trust_Mohammad_Bin_Salman.51 A particularly nefarious form of identity deception has been used in online microtargeting and influence campaigns against American veterans on Twitter, Facebook, and Instagram. Images of deceased veterans have been used as bait in romance scams, memes are spread about desecrated graves in order to provoke anger, and misleading articles about the possible loss of health benefits worry veterans and their families who rely on them.52 On November 13, 2019, the House Committee on Veterans’ Affairs convened a hearing titled “Hijacking Our Heroes: Exploiting Veterans Through Disinformation on Social Media,” in which veterans testified about many instances of these things. An extensive report was also published by the Vietnam Veterans of America, describing the “persistent, pervasive, and coordinated online targeting of American service members, veterans, and their families by foreign entities who seek to disrupt American democracy. American veterans and the social media followers of several congressionally chartered veterans service organizations were specifically targeted . . .”53 Sometimes a fake account is called a “sock puppet,” a term used to describe aliases or fake persona created by social media users to masquerade as someone or something else on the Internet. According to Robert Walker, the false nature of the sock puppet allows them to make controversial or offensive comments while taking sides on a particular issue without the risk of exposing their real identity. Sock puppets have been known to post commentary on content that they might have produced themselves under a different identity.54 As Singer and Brooking discuss in their book LikeWar, during the 2016 U.S. presidential election Russian operatives used three “sockpuppet” approaches to try and influence the U.S. electorate. One was to pose as the organizer of a seemingly legitimate group, like the Twitter handle @Ten_GOP (calling itself the “Unofficial Twitter account of Tennessee Republicans”). Another was to pose as a trusted news source, like “@tpartynews” (a “hub for conservative fans of the Tea Party”). Using both of these approaches, the Russians managed to trick several high-profile Republicans into forwarding and sharing completely false information to millions of Americans. Finally, a third tactic involved posing as a trustworthy “average Joe” blue-collar worker in middle America or an elderly (and presumably wise) grandmother, whose comments

on news items (which were actually fake news fabricated by other Russian operatives) would provoke further engagement and distribution throughout social media.55 In order to mask the Russian origin of these efforts, third-party service providers like the Internet Research Agency purchased space on U.S. servers and set up dedicated Virtual Private Networks and then routed the disinformation traffic into the United States through these encrypted tunnels.56 More recently, in October 2019, Facebook announced the takedown of 50 Instagram accounts that “originated from Russia” and “showed some links to the Internet Research Agency (IRA),” the Russian “troll farm” that had previously targeted U.S. audiences and the U.S. presidential election in 2016.57 An analysis of this influence operation noted that the operators went to great lengths to hide their origins and claimed to represent multiple politically active U.S. communities: Black activist groups, advocates speaking out against police violence, police supporters, LGBTQ groups, Christian conservatives, Muslims, environmentalists, gun-rights activists, southern Confederates, and supporters of Senator Bernie Sanders and President Donald Trump. Almost half the accounts claimed to be based in “swing states,” especially Florida. Multiple accounts praised Bernie Sanders or Donald Trump. Accounts from both sides of the political spectrum attacked Joe Biden; some also attacked Kamala Harris and Elizabeth Warren.58 Trolling, as we’ll examine later in this chapter, involves individuals who primarily try to provoke emotional responses by others online. They are often identity deceivers, but not always; what many of them do is deceive you about their identity as well as the purpose of the information they are communicating. Their goals are typically focused on provocation, disruption, and distrust, but as Judith Donath described in 1998: Trolling is a game about identity deception, albeit one that is played without the consent of most of the players. The troll attempts to pass as a legitimate participant, sharing the group’s common interests and concerns; the newsgroup members, if they are cognizant of trolls and other identity deceptions, attempt to both distinguish real from trolling postings and, upon judging a poster to be a troll, make the offending poster leave the group. Their success at the former depends on how well they—and the troll—understand identity cues; their success at the latter depends on whether the troll’s enjoyment is sufficiently diminished or outweighed by the costs imposed by the group.59

The congressionally mandated investigation into Russia’s attempt to influence the 2016 U.S. presidential election revealed the sophisticated capabilities of so-called “troll farms” (thousands of coordinated user accounts) operating on social media platforms like Twitter and how governments and others can use these to influence perceptions about a wide range of

topics in both domestic and foreign policies. The rationale for troll farms and massive collections of automated bots is fairly straightforward: there is perceived strength in numbers. As we'll examine more fully in later chapters of this book, the power of social validation means that people are more influenced when more than one friend makes a request or endorses a topic.60 Facebook pages can also be created by falsified accounts, allowing the influencer to provide false information and provoke engagement. This tactic involves luring Facebook users into joining and supporting the fake page because it presents information they agree with, a narrative they want others to see and agree with. As noted above in the Saudi Arabia case, these can be coordinated as part of a broader strategic influence effort. In another case, research by Judd Legum revealed a network of 14 large Facebook pages, all of which appeared to be posting the same information and links within 4 seconds of each other. The Daily Wire, a right-wing website founded by pundit Ben Shapiro, appears to be the origin of the information that is replicated on the other pages (which include The Right News and Conservative News).61 Automated accounts, known as "bots," are another form of identity deception. A study in 2017 suggested that as many as 48 million Twitter accounts aren't real people, and Facebook has admitted to finding tens of millions of "bots" on its platforms.62 Another report released in 2017 suggests that more than half of all Internet traffic involves bots. As the report's author, Sara Fischer, notes: "Google, Facebook and Twitter want to make it easy for users all over the world to get on their platforms, because they believe in free speech and open access. But this level of openness means the barrier to entry on these platforms isn't just low for users, but for bots and bad actors as well."63 Additionally, another reason for the relatively open access to these platforms is the profit model incentive (described in later chapters of this book): the more users a social media platform can claim, the more attractive they are to companies who want to place their advertisements in front of the largest numbers of eyeballs. Similar to the tools available for image and video manipulation, and for website hacking and defacement, the software used for creating and using large armies of "bots" can be found online fairly easily. There are companies that you can pay to provide you with trained professionals who can program the fake accounts, help you identify target audiences, and manage as many accounts for you as you want. Fake news perpetrators create fake stories that are often amplified by a network of bots that automatically like, share, or comment on the content. Algorithms on the social media platforms elevate content that is popular, further amplifying the effect. Bots can even be used to manipulate online polls and surveys. And there are also plenty of guides for those who want to program and control a collection of bots on their own.64
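One rough heuristic that researchers use to spot this kind of coordination, whether the accounts are automated or simply centrally managed, is to look for identical links pushed by multiple accounts within seconds of one another, as in the network Legum identified. The Python sketch below is a minimal illustration; the account names, link, timestamps, and thresholds are all invented for the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical post stream: (account, link shared, timestamp).
posts = [
    ("page_alpha", "http://example.com/story-99", "2019-06-01T14:00:01"),
    ("page_beta",  "http://example.com/story-99", "2019-06-01T14:00:03"),
    ("page_gamma", "http://example.com/story-99", "2019-06-01T14:00:04"),
    ("user_j",     "http://example.com/story-99", "2019-06-01T18:22:10"),
]

def flag_coordination(posts, min_accounts=3, window_seconds=10):
    """Flag links pushed by several accounts within a narrow time window,
    one rough signal of coordinated (possibly automated) amplification."""
    by_link = defaultdict(list)
    for account, link, ts in posts:
        by_link[link].append((datetime.fromisoformat(ts), account))
    flagged = {}
    for link, shares in by_link.items():
        shares.sort()                       # order each link's shares by time
        times = [t for t, _ in shares]
        for i in range(len(times) - min_accounts + 1):
            spread = (times[i + min_accounts - 1] - times[i]).total_seconds()
            if spread <= window_seconds:    # several accounts, seconds apart
                flagged[link] = [a for _, a in shares[i:i + min_accounts]]
                break
    return flagged

print(flag_coordination(posts))
# {'http://example.com/story-99': ['page_alpha', 'page_beta', 'page_gamma']}
```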

During the early years of Twitter—which appears to be the social media platform where we see the most automated activity—one of the easiest ways to give the illusion that you were important was to purchase tens of thousands of followers, fake accounts created and controlled by one individual. For a relatively small fee paid to that individual, your account would gain the illusion of having a mass following. Here's an example of an advertisement for this kind of service (Figure 3.1):

Figure 3.1  Larry Kim, "Why Buy Twitter Followers? 10 Things You Need to Know About Followers Campaigns on Twitter," MarTech (February 4, 2016). Online at: https://martech.org/buy-twitter-followers-10-things-need-know-follower-ad-campaigns-twitter/

Of course, none of these new followers are actually real people, just "bot" accounts. They won't engage with your tweets in any way (e.g., no retweets, no likes or responses, and no clicking on your links). These fake
accounts also won’t really have their own audience of followers anyhow. But the reason for doing this would be an attempt to give other people the impression that “surely if so many other people find this person worth following, I should as well.” The more sophisticated (and far more useful) social media “bots” are those that are programmed to give other people the impression that lots of people agree with a certain position or argument you are making. You can have thousands of fake accounts programmed to like, share, and retweet one of your posts in order to manufacture the illusion of social validation for what you posted. Repetition is made so much easier with automation, and messages become more believable to many people when they are repeated extensively. Sheer repetition and volume of a falsehood can lead to a perception of it being credible, something that American product marketing professionals know well. These automated “bot” accounts can also be programmed to interact with other people’s social media accounts in a variety of ways, in order to generate forms of reciprocity and in-group identity affiliation. Repetition of a message or narrative can be an effective way of fostering contextual relevance for an influence campaign. When the target receives the same or similar messages repeatedly, and especially from multiple sources, the message may be deemed worthy of more attention. There may be little or no effort to determine whether the multiple sources are automated bot accounts or real people. In a very real sense, the credibility of information received by the target can be deemed credible if others are perceived as deeming it credible.65 The perception of endorsement by large numbers of social media users can give a certain kind of social validation that leads the target to have higher trust and confidence in the information, even when it’s completely false. Again, this appears to hold true regardless of whether the sources of those endorsements are automated bot accounts or real people.66 Similarly, in his book Trust Me, I’m Lying, Ryan Holiday describes a process of grabbing attention by manufacturing controversy. First, use photo or video manipulation software to make it look like someone said or did something they actually did not. Then send this fake information to bloggers and journalists and also post it online to websites and social media accounts. Now, to give the illusion that others have seen and believed your material, use a collection of fake user accounts (“bots”) to “like,” “share,” and “retweet” links to it. Having these accounts add randomized snarky comments, or expressions of outrage, helps attract more attention to it. Eventually, your hustle pays off when real people start liking, sharing, and retweeting your post.67 These social media “bots” also play a central role in what’s known as “astroturfing.” Here’s an early example of what this looks like, as recounted by Singer and Brooking in the book LikeWar. In 2010, Massachusetts held a special election to fill the seat vacated by the late Senator Ted

Kennedy. Early on, it seemed unlikely that a Republican candidate would have a chance in this traditionally Democratic stronghold. But then a poll suggested Scott Brown might do well, and conservative advocacy groups launched a major social media campaign to try to swing the election his way. Thousands of fake accounts across Facebook and Twitter ("bots") promoted his candidacy, and the "Twitterbomb" tactic (automated replies) was employed to further encourage support for Brown. Solicitations and messages of support were disseminated far outside New England, expanding the Republican's donor base. When Brown became the first Republican to win a Massachusetts seat in the U.S. Senate since 1952, these efforts demonstrated how one could create the appearance of grassroots support (a tactic that became known as "astroturfing") and influence the outcome of an election.68 Finally, it should be noted that identity deceivers are getting more sophisticated in their efforts. Bots today are more likely to mimic humans by highlighting information most likely to be polarizing and by targeting people with lots of followers who can easily disseminate false or misleading information.69 They will also focus their automated amplification efforts on precision targeting within an environment that allows for information silos and echo chambers (see chapter 5)—that is, information environments in which accounts with opposing viewpoints can be blocked or ignored as if they don't exist. All these things combined lead to the impression that the narrative you believe in is reinforced by masses of others—you are right; the "others" are wrong.70 This, in turn, is a form of engagement deception, another of the most prevalent types of tactic within this broader category.

Engagement Deception

In the subcategory of engagement deception, some of the most common tactics include hashtag manipulation (to influence perceptions about "what's trending" on the social media platform) and manipulation of perceptions of social validation (i.e., creating an illusion of mass support, or mass outrage, or whatever mass reaction the influencer is looking to provoke). Automated repetition and creation of viral memes are other tactics that can produce similar outcomes. Exploiting hashtags involves "tagging" key words and individual users in attempts to get more attention. For example, anti-vaccine Instagram users—who know the hashtags and accounts that their target audience pays attention to the most—have employed more than 40 hashtags such as #learntherisk and #justasking. During the 2019 presidential impeachment hearings in Congress, the hashtag #DemocratsAreDestroyingAmerica became one of the top 10 hashtags used on Twitter. On August 19, 2019, the official account of the Russian Mission to the Organization for Security
and Co-operation in Europe (OSCE) announced the launch of a hashtag, #TruthAboutWWII, which was then used to promote the claim that an unwilling Soviet Union was forced to sign the 1939 Nazi-Soviet Treaty of Nonaggression—an attempt to encourage reinterpretation of historical facts.71 Hashtag flooding involves a coordinated effort to counter a trending hashtag with negative posts, and sometimes misinformation and disinformation as well. Here, a group of accounts (humans or, more likely, “bots”) will incorporate a barrage of messages containing that hashtag but with demeaning and derogatory information, the purpose being to grab control of the narrative and manipulate perceptions in a different direction.72 Alice Marwick and Rebecca Lewis describe this as an attempt to “hijack” the hashtag, like (for example) when right-wing extremists coordinate a huge collection of fake accounts for posting messages critical of #BlackLivesMatter, in order to diminish the ability of BLM supporters to use this hashtag to find each other.73 Another clever way to manipulate hashtags is to make it seem that something is trending, when actually it’s the opposition to it that’s trending. For example, in early 2020 the hashtag #NeverWarren began trending on Twitter, but not because of a groundswell of opposition to presidential candidate Elizabeth Warren. Instead, the most amplified tweets containing this hashtag were denouncing people using it. Basically, you had credible people with large followings inadvertently spreading a hashtag by overtly condemning its use. As noted earlier, the primary goal of many digital influence efforts is simply to provoke engagement, and both positive and negative reactions influence the algorithms used to determine what information is trending, which then attracts the attention of more social media users. Hashtag flooding is particularly effective when tagging content to strategically appear in conversations, trends, or search results when information on a topic is sparse or missing. For example, during the early weeks of the coronavirus (COVID-19) worldwide pandemic, tons of disinformation was easily spread by anti-vaxxers, conspiracy theorists, and others in pursuit of a political or ideological agenda. People were searching en masse for information on something relatively new, so using the right hashtags helped disinformation campaigns attract scores of viewers to their web of lies. In one instance, a 26-minute video titled “Plandemic” that went viral on social media featured the anti-vaccine activist Judy Mikovits describing the coronavirus as a conspiracy among people trying to profit from vaccines. She also proclaimed that wearing masks activated the coronavirus within people and criticized orders to stay away from beaches. Eventually, YouTube and Facebook announced they would remove the video, and Twitter said it had blocked users from using the hashtags #PlagueOfCorruption and #Plandemicmovie, although by then millions had already seen and reacted (often positively) to it.74
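Because trending algorithms respond to raw volume, a coordinated flood tends to leave a telltale signature: a hashtag whose activity is packed into a short burst rather than spreading organically over time. The sketch below illustrates one simple way to measure that pattern; the post stream, counts, and thresholds are hypothetical, and no platform's actual trending algorithm is being reproduced here.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical records: (timestamp, hashtag) extracted from a stream of posts.
stream = [
    ("2020-01-15T09:05:00", "#gardening"),
    ("2020-01-15T10:01:00", "#neverwarren"),
    ("2020-01-15T10:01:30", "#neverwarren"),
    ("2020-01-15T10:02:10", "#neverwarren"),
    ("2020-01-15T10:03:00", "#neverwarren"),
]

def hourly_counts(stream):
    """Count how often each hashtag appears in each hour of the stream."""
    counts = defaultdict(Counter)
    for ts, tag in stream:
        hour = datetime.fromisoformat(ts).replace(minute=0, second=0)
        counts[tag.lower()][hour] += 1
    return counts

def flag_bursts(counts, min_total=4, concentration=0.8):
    """Flag hashtags whose activity is packed into a single hour, a burst
    pattern more consistent with coordinated flooding than organic spread."""
    flagged = []
    for tag, per_hour in counts.items():
        total = sum(per_hour.values())
        if total >= min_total and max(per_hour.values()) / total >= concentration:
            flagged.append(tag)
    return flagged

print(flag_bursts(hourly_counts(stream)))  # ['#neverwarren']
```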


Manipulating perceptions of social validation is another form of deception tactics in the arsenal of digital influence warfare. Social media has shifted the terrain of influencing from mainstream media to individuals with significant numbers of followers, whose “retweets” or “shares” amplify the perception of endorsement and support that lend social proof and confirmation of a specific narrative. When these individuals are then linked in a concerted strategy to collectively amplify that narrative, the effects on the broader population of social media users can be overwhelming. This tactic is also sometimes referred to as “brigading,” a coordinated effort by one online group to manipulate another—for example, through mass commenting on a certain message.75 At the same time, our quest for social proof/validation online (which we measure by the numbers of followers, responses, reposts, retweets, etc.) opens us to a vulnerability in which scores of automated “bot” accounts lead us to believe something may be true (even when it is false) simply because there is a significant number of “likes” or “shares” or “retweets.” Engagement deception is all about enhancing perceived validity about certain views (e.g., “lots of people feel this way, so it must be true”) even when based on absolutely no evidence or even when it’s a lie. In another example of engagement deception, researchers at George Washington University found that in the weeks before the 2019 European elections, an incredible 86 percent of total shares and 75 percent of all comments on Facebook (regarding party political content in Germany) were supportive of just one party: the far-right Alternative für Deutschland (AfD) party in Germany (which received only 11 percent of the vote overall). That is, support for this party on Facebook was four times the comments and six times the shares of all the other parties combined. According to their report, “The vast majority of likes and shares came from a cluster of 80,000 accounts . . . In particular the most active—around 20,000 accounts—had random two letter first and last names. For example MX, CH, EW, which as it happens would not be legal names if you tried to register on a birth certificate in Germany.”76 And of course, as is widely known today, bots were used extensively during the 2016 presidential election.77 For example, as Marwick and Lewis note, “During the first presidential debate, bots generated 20% of the Twitter posts about the debate, despite representing only 0.5% of users. Significantly more of this traffic came from pro-Trump bots than pro-Clinton bots. This remained constant throughout the election; researchers estimate that about a third of all pro-Trump tweets on Twitter were generated by bots, more than four times that of pro-Clinton tweets. Many of these bots spread what is known as ‘computational propaganda’: misinformation and negative information about opposition candidates.”78 As these examples reflect, the overall strategic objective here is manufacturing a fake audience, whose members amplify the appearance of


support for your disinformation.79 You can manipulate perceptions of a large audience through automated activity based on algorithms that monitor online discourse and provoke preprogrammed responses from bots, generating a flurry of comments in support of a message, as well as a barrage of insults against users who post a message they disagree with. Engagement deception does not exclusively involve automated accounts, however. The central element is coordination, and in some instances, this has involved real humans who collectively operate through a dispersed network of accounts that constantly reconfigures much like the way a swarm of bees or a flock of birds constantly reorganizes in midflight. Research by Ali Fisher calls this a “user curated swarmcast,”80 a particularly relevant concept when describing how terrorist and extremist networks operate online. Other technical tools that can be used by the influencer include search engine optimization, as well as paying for targeted advertisements and sponsored search engine results. This can manipulate perceptions about what sources of information are more important, more “liked,” or frequently visited than others. You can also hack websites to plant autoredirect URL scripts that take visitors to another website. For example, imagine you point your web browser toward the BBC News website, and some hacker has managed to redirect traffic heading to that website over to a different website, where the look and feel are roughly the same but the content is much different. This could allow you to think what you are reading is the news as reported by the BBC, but what you are really seeing now are made-up stories. To sum up, there are many different kinds of deception that are used in all manner of digital influence warfare strategies. The power to deceive is especially strong among people who are deeply embedded in cult-like influence silos (described in chapter 5), where their cognitive abilities are constrained by a warped worldview, and where their decision-making faculties are held hostage by a political ideology. Even when the fact-checkers point out all the lies and deceit, these people will believe virtually any deepfake images and videos, any deepfake news, and any manufactured illusions of mass support so long as the narrative is aligned with the political biases and prejudices they hold dear. As Singer and Brooking note, “Bots, trolls, and sockpuppets can invent new ‘facts’ out of thin air; homophily and confirmation bias ensure that at least a few people will believe them. On its own, this is grim enough, leading to a polarized society and a culture of mistrust. But clever groups and governments can twist this phenomenon to their own ends, using virality and perceptions to drag their goals closer within reach. Call it disinformation or simply psychological manipulation. The result is the same, summarized best by the tagline of the notorious conspiracy website InfoWars: ‘There’s a war on . . . for your mind!’”81
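Researchers and platform integrity teams look for exactly the kinds of fingerprints described above, such as the cluster of AfD-boosting accounts with random two-letter names. The sketch below is a minimal illustration of that style of heuristic, not the methodology of the study cited; the field names (id, first_name, account_id) are assumptions chosen for the example.

    import re
    from collections import Counter

    TWO_LETTER = re.compile(r"^[A-Za-z]{2}$")

    def flag_filler_accounts(accounts):
        """Flag accounts whose first and last 'names' are bare two-letter
        strings (e.g., 'MX', 'EW'), one crude marker of bulk-created profiles."""
        return [a["id"] for a in accounts
                if TWO_LETTER.match(a.get("first_name", ""))
                and TWO_LETTER.match(a.get("last_name", ""))]

    def engagement_concentration(shares, top_n=1000):
        """Fraction of all shares produced by the top_n most active accounts.
        Values near 1.0 suggest a small cluster is doing the amplifying."""
        by_account = Counter(s["account_id"] for s in shares)
        top = sum(count for _, count in by_account.most_common(top_n))
        return top / max(sum(by_account.values()), 1)

Neither check is conclusive on its own, but together they surface the pattern described above: a small, artificial cluster manufacturing the appearance of a large, organic audience.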


Further, we have the counterintuitive problem that the threat of deception itself can raise a mountain of problems. Even when there are no actual disinformation attempts, there is already a level of uncertainty and fear that someone could be deceiving us, so all one needs to do is call out “fake news” or claim something is disinformation (when it really is true). In doing so, the influencer can encourage some members of the target audience to question or even dismiss something that is an inconvenient truth, especially if it involves rejecting facts that disagree with what they want to believe. In this way, the legacy of deception and disinformation we have endured already can turn us into our own worst enemy. Because our suspicion and distrust have been increased due to successful disinformation efforts of the past (including those by Russian operatives), the danger in the future is that some members of society can more easily claim a falsehood is real or that something true is a falsehood, and either way, there will be some members of society who will agree with them. Finally, it should be noted that the many tools and tactics of digital influence warfare can be used by multiple sides of any debate, political campaign, social movement, or whatever. This does not necessarily have to be a one-sided affair, with attackers constantly pummeling a defenseless victimized audience. Rather, we see examples of competing political parties, or people aligned with either a pro-science or an anti-science perspective, using many of these tactics against each other (as described in other chapters of this book). However, a problem arises when the amount of money one side is able to spend on these efforts greatly exceeds that of the other, tipping the scales heavily in favor of one particular narrative that is amplified by these various strategies and tools, while the opposing narrative is blocked, drowned out, muted, distorted, and manipulated. Being able to manipulate interpretations of reality in this way helps ensure the influencer can achieve their strategic objectives, and as we will examine in later chapters of this book, establishing information dominance (either through authoritarian force or through manipulation of influence silos) has become a cornerstone of such efforts. Category #2: Digital Tools and Tactics to Provoke Engagement Another significant category of digital influence tactics—one that is not wholly exclusive to that of deception, but rather overlaps it in some ways—involves efforts to provoke some kind of reaction among the targets. Reaction is a form of active engagement; the target feels compelled to do something about what they see. Their reaction can range from expressions of anger, disgust, and outrage to expressions of support— for example, the response could be to like, share, retweet, or to write something of their own and share it with others, either in defense of their beliefs or in declaring an even greater commitment to those beliefs.
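Because ranking systems generally count any interaction as engagement, provocation pays. The toy scoring function below uses illustrative weights only (not any platform’s real model), but it shows why a post that angers people can outrank one that merely pleases them.

    def toy_engagement_score(post):
        """Valence-blind scoring: an angry comment counts as much toward
        reach as a supportive one, so outrage is rewarded."""
        weights = {"like": 1.0, "angry_react": 1.0, "comment": 4.0, "share": 6.0}
        return sum(weights.get(kind, 0.0) * count
                   for kind, count in post["interactions"].items())

    ragebait = {"interactions": {"angry_react": 800, "comment": 450, "share": 120}}
    calm_post = {"interactions": {"like": 1500, "comment": 40, "share": 30}}
    assert toy_engagement_score(ragebait) > toy_engagement_score(calm_post)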


This is where the research on influence and social psychology intersects with Internet algorithms that encourage and reward engagement. Either a positive or negative reaction is still seen as a form of engagement, which is all that matters. The most common forms of this involve the dissemination of negative messages (e.g., those intended to harass, discredit, suppress, dissuade, or disparage the target audience members) or constructive messages that seek to encourage action (e.g., “If you love the President, RT this!”) or create a fake sense of consensus (e.g., “The #1 trending hashtag can’t be wrong”).82 Research from the field of psychology points to a powerful impact of writing something down; it commits us to defending a view or position publicly, even when presented with evidence that contradicts what we originally felt or believed. Social media posting on Twitter, Facebook, etc. functions as a sort of personal and public commitment solidifier. Once we have chosen a particular set of views or political stances, embraced a particular politician or celebrity, and so forth, it becomes difficult to reverse what we have publicly declared. This hardening of commitment then contributes to the kinds of political and social polarization we see today. Many people are increasingly unwilling to question the validity of their own opinions (however misinformed they may be) or their instincts. I’m sure you and I know several people who fit this description. This appears to be the case even when one engages in the seemingly innocuous act of liking, forwarding, or quoting something that was posted by another social media user. For example, publicly declaring yourself a supporter of Trump on one particular issue (just by “liking” something) makes it increasingly difficult to like or agree with opposing viewpoints or information that contradicts Trump’s views on that issue. The same thing holds true with regard to terrorist groups like ISIS or right-wing extremists. Having publicly declared our commitment for or against something, we tend to reform our self-image in order to remain aligned with that commitment (e.g., see chapter 4). Over time, the public nature of the declaration shapes an image of us that we are driven to conform with, despite what other information or beliefs may seem more valid later on. The influencer can not only use active methods like directly sending email or text messages to the target(s) and pushing content into their social media feed, but they can also do this in combination with more passive forms of information, like hosting a website that the targets are encouraged to go visit (and where they will see something meant to provoke their reaction). Influencers can provoke engagement of the targets with real or fake accounts, real or fake images and videos, and real or fake news websites. Deception brings certain advantages, but it’s not always necessary as part of the disinformation effort. As described in chapter 4, repetition is a well-known tactic for effectively influencing a target. This is why they will look for ways to ensure


that the narrative intended to provoke a reaction will be repeated in multiple formats. The influencer’s goal here, as Singer and Brooking explain, is to induce thousands (or even millions) of people “to take their messages seriously and spread them across their own networks.”83 The more times and places the narrative is seen, the more likely people are to believe it—particularly if they see other “like-minded believers” enthusiastically replicating and amplifying those messages. Using easily available data, the influencer can identify individuals with large numbers of followers like Donald Trump (whose fame is derived from wealth and position) or “celebrities” like Kim Kardashian (whose fame is derived solely from attention-seeking publicity). When individuals like these share or retweet the influencer’s messages, the messages seem even more trustworthy, since they now bear the stamp of approval by whoever shared them. In recent years, this tactic of provoking engagement by prominent voices (particularly if they have established a following within an influence silo) has been used as a conduit for influencing members of the American public. For example, we have seen numerous attempts (including several successful ones) by Russians, Chinese, and Iranians to use social media forums to influence Trump. As explained by Clint Watts, a former FBI agent and a cybersecurity expert who studies propaganda campaigns on social media, the time Mr. Trump spent on Twitter “gives you an amazing opportunity to game the president.”84 Trump follows relatively few Twitter accounts, fewer than a hundred. Among them are Republican politicians and Fox News hosts, several of whom have sent his way a variety of conspiracy theories and white nationalist and anti-Muslim messages, which he has then amplified by forwarding to his millions of followers. He has amplified disinformation on several occasions, perhaps unaware of (or not really caring about) its origins. In fact, as reported by the New York Times in November 2019, “Trump has retweeted at least 145 unverified accounts that have pushed conspiracy or fringe content, including more than two dozen that have since been suspended by Twitter.”85 Fake accounts tied to intelligence services in China, Iran, and Russia had directed thousands of tweets at Trump. Iranian operatives tweeted anti-Semitic tropes, saying that Trump was “being controlled” by global Zionists and that pulling out of the Iran nuclear treaty would benefit North Korea. Russian accounts tagged the president more than 30,000 times, including supportive tweets about the Mexican border wall and his hectoring of Black football players. Trump even retweeted a phony Russian account that said, “We love you, Mr. President!”86 The goal of influence strategies in these instances is to identify what themes and topics seem to get the most attention of your target audience. Employing the provocation tactic also requires identifying the most prominent voices among the members of your target audience and determining what topics have most often provoked their engagement—what are they


most likely to share, like, retweet? A pattern analysis of the “prominent voices” within the target audience will also reveal when certain individuals are most likely to be online (what hours of the day or night, which days are more likely than others, etc.). From all this information, the influencer can then craft a set of messages most likely to provoke the target’s engagement and to be distributed into their social media feed at the optimal times. As more people like, comment and share, and tag the original message, the algorithms used by social media platforms will push it into more people’s news feeds. As Singer and Brooking explain, “The best predictor for whether something posted online will become influential is not the accuracy or even the compelling value of the content: it is the number of friends and followers who share the content first. They are more likely to believe what it says, and then to share it with others who, in turn will believe what they say; it is all about us, or rather our love of ourselves and people like us.”87 Further, the information we see and then share with others (on Facebook, Twitter, Instagram, etc.) is most often based on our level of familiarity with (or trust in) whomever we received the information from. Basically, influence warfare strategies exploit our trusted relationships to their own advantage. Attracting media coverage is another primary goal of provocation tactics. As Marwick and Lewis note, “For manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.”88 One example of provoking the media was seen in November 2015, when Andrew Anglin (founder and editor of the right-wing outlet The Daily Stormer) directed his followers to set up fake White Student Union pages on Facebook for universities throughout the United States—and then to contact local media outlets about the groups. If his goal was to provoke the media into expressing moral outrage, and simultaneously spread some racial tension throughout college campuses, he was highly successful. Local media outlets promptly reported on these Facebook pages (although some did note it was unclear whether the groups existed outside of Facebook). USA Today picked up the story and covered it nationwide, followed by coverage in Gawker, The Daily Beast, and even the Washington Post—even after the whole thing was exposed online as a hoax engineered by Anglin. By then, the tactic had already worked—the media had greatly amplified his underlying message that there are legions of white people on university campuses with racial grievances, and they are seeking opportunities to unite their efforts.89 Ryan Holiday’s controversial book about media manipulation is largely about using tactics of provocation to gain the attention of journalists and the general public.90 For example, he recounts the time that he was hired to help promote a new movie release. In order to attract attention, he created


some fictitious protesters who proclaimed the movie was the most disgusting and controversial thing they had ever seen. This manufactured controversy then led to media coverage about how the movie was offending people, and naturally more people decided to go see the movie themselves, just to determine what the whole fuss was about.91 He also describes several instances where he took advantage of online bloggers’ constant need for material to write stories about by feeding them some kind of “scoop”—exclusive information that the blogger would publish online in order to grab the attention of more established media services. It is telling that one of the chapters in his book is titled “Just Make Stuff Up,” and he describes how one journalist created a fake institute online that then released a press release claiming (based on spurious connections in data published elsewhere) that people could lose weight by eating chocolate. Soon the story was being reported by the Huffington Post, the Daily Mail, and many other news outlets, reflecting how fake news on one site becomes the source for fake news on another “and again in turn for another, until the origins are eventually forgotten.”92 Some forms of provocation may be far more subtle, like breadcrumbing—a tactic that involves giving someone just enough information to keep them interested and pursuing more information, but without pushing them to risk doing something or endorsing something totally new. Each new breadcrumb leads to another through the woods, pulling them deeper into the rabbit hole of disinformation. This may be especially enticing for investigative journalists looking for a “scoop,” something that would attract attention to their media outlet, and specifically with their name in the byline. Further, if you can provoke the media to pay attention to rumors and allegations, questions about hidden conspiracies, and other such controversies, it can be a win for your influence strategy. For their part, the media know they will attract more visitors to their websites and viewers of their television shows by framing these controversies as things worth looking into (but without fully endorsing or rejecting the claims therein). Basically, the media have an economic incentive to report on controversies, and in doing so, they amplify the narratives embedded within, many of which encourage distrust and heightened uncertainty. In contrast to subtle breadcrumbing, influencers may decide the best route to provocation is to make the most sensational claims you can get away with. This is related to the concept of the “Overton Window” (explained further in chapter 4), which involves finding ways to change perceptions about the range of publicly acceptable ideas by promoting ideas outside of that range. Further, the more “outer fringe” ideas you promote the better, because in comparison, other “less fringe” ideas begin to appear more reasonable or even acceptable. This describes what Rush Limbaugh does on his radio talk show, or what Alex Jones repeatedly does on his InfoWars website—they make sensational claims and repeat them


excessively, drawing on the power of repetition to convince some listeners that the claims may indeed be true. Often, the more outlandish the accusations, the better, as Trump learned by amplifying and repeating the whole “birther” conspiracy (the claim that President Obama was not born in the United States). Outrage and/or humor can be used to provoke; demeaning and disparaging humor can be particularly effective in an environment where “othering” is celebrated. These things all capture the attention of an audience and provoke emotional responses. So-called “rage engagement stories” (also called “ragebait,” a variation of the “clickbait” term I’m sure you’re already familiar with) are built on the same kind of understanding about what is most likely to provoke the target. As Ryan Holiday notes, “The best way to make your critics work for you is to make them irrationally angry. Blinded by rage or indignation, they spread your message to every ear and media outlet they can find.”93 This is the value proposition of trolling, which began in the early days of the Internet—with individuals on Usenet discussion communities and email listservs—and today can be found on any social media platform and many discussion boards. Trolls are individuals who primarily try to provoke emotional responses from others online and often find humor in sowing discord and confusion.94 Frequently (but not always), a troll will use a fake username or anonymous accounts in order to post things that they might not want to be associated with in real life. The more effective trolls are those who have a knack for offending others—and of course, people who are easily offended make perfect targets. Trolls will often use inflammatory and degrading language, images, and videos (often manipulated and fake) in order to disrupt discussions; spread fake news; provoke anger; disseminate bad advice; and damage the feeling of trust within an online community.95 A troll may also incorporate random, off-topic messages in their provocation strategy or even words of support for others whose provocative views and messages are aligned with that of the troll. A variant of this is called “concern trolling,” in which the goal is to manipulate the target’s perceptions about something by agreeing with them in principle but expressing concerns about a particular aspect, in order to encourage higher levels of uncertainty. And of course gaslighting (described in the next chapter) is another favorite tactic of online trolls and provocateurs. This involves deceiving the target into believing things that aren’t true, while expressing concern about the fact that others are questioning the target’s sanity because of those false beliefs. Trolling can also involve encouraging members of an in-group to embrace a more radical interpretation of the out-group. This involves a sequence of steps leading to infiltration and radicalization. First, pick an issue—for example, gun control, women’s rights, abortion rights, Black Lives Matter, climate change, or whatever. It should be something around


which there are competing narratives and hotly contested opinions. Next, pick a community of users and (after doing some data collection and analysis) begin to masquerade as a member of that community. Try to endear yourself to other members, and make them believe you are one of them. Once you have established a level of “street credibility” among the group, you can begin to exploit existing tensions for your own purposes. Gradually switch from providing positive comments about what members of the group say to posting increasingly negative—perhaps even hostile and violent—things about “the others” (the out-group who are perceived as illegitimately supporting the opposing view). Once you begin to move in this direction, however, you will need to be patient. An effective campaign of disinformation may evolve over the course of a few months by gradually introducing an intentionally false narrative in an organic manner. A “troll farm” is a term used to describe an organization that arranges collective trolling efforts, usually supervised by someone with resources, like a state sponsor—for example, the Internet Research Agency in Russia is one of the world’s more notorious troll farms. There are many private sector examples of troll farms as well.96 In general, these outfits describe themselves as marketing or public relations organizations that work to promote certain perceptions and images (largely on social media). In 2019, an investigative journalist in Poland went undercover for six months and then reported what life was like as an employee at one such company. She described how she was first instructed to establish a user account (using a fake identity) for sharing “social and political content,” and then after attracting a minimum of 500 followers, she would receive guidance and instructions on “what issues to engage with, who to promote, and who to denigrate. The accounts produced both leftwing and rightwing content, attracting attention, credibility and support from other social media users, who could then be rallied in support of the company’s clients.”97 According to Wojciech Cieśla of Investigate Europe (a consortium of European investigative reporters involved in producing this story), the overall goal of these efforts is “to build credibility with people from both sides of the political divide. Once you have won someone’s trust by reflecting their own views back at them, you are in a position to influence them.”98 As noted earlier in this chapter, in addition to provoking a target to react or respond in certain ways, trolling is also largely about identity deception. Creating and spreading memes can be another way of provoking engagement on social media. The term “meme” was coined by Richard Dawkins in 1976 to describe small units of culture that spread from person to person by copying or imitation, and an “Internet meme” commonly describes the propagation of items such as jokes, rumors, videos, and websites from person to person on the Internet.99 As Limor Shifman explains, memes reflect—and influence—people’s views in many different ways and usually represent something very contemporary, like a sound bite


from a speech (“read my lips,” “lock her up,” or “mission accomplished”), a political or social phenomenon (flash mobs dancing “Gangnam Style”), or even a mistake that cost the team the winning score (e.g., the Seahawks throwing an interception at the goal line in Super Bowl XLIX).100 Memes are quite popular on certain sites, like Tumblr, Instagram, and Reddit. Influencers want to create memes that will “go viral”—that is, something that encourages other users to like and share it with family and friends. The general rule is to create something that resonates with the social norms, perceptions, and preferences of your intended audience and facilitates the transmission and reinforcement of an idea.101 The idea could be something that makes us laugh or makes us angry. It could be derisive and derogatory (e.g., an unflattering portrayal of a political candidate) or uplifting and inspirational. The idea conveyed in the meme could reflect gender or age differences, racial or ethnic stereotypes, or something that depicts injustice or the abuse of power. For example, consider the image of a campus police officer casually using pepper spray in an assault on students who were sitting peacefully in a political protest. The officer is dressed in riot gear and heavily armed and seemingly has no reservation or concern about what he is doing to the unarmed students on the ground. To some viewing this picture, it’s almost as if the officer were spraying his garage to wipe out a termite infestation. Many who viewed the photo online were aghast, and in their outrage, they forwarded the photo to friends and family, sometimes with comments attached. Others, however, saw no fault in the officer’s actions—indeed, they even applauded the officer for doing a good job pacifying those students who were portrayed as uppity and probably obnoxious and getting what they deserved. As we’ll explore in later chapters of this book, the desire for social validation has led to people “sharing” a mountain of personal information. From weekly (or daily) postings and photos of the kids and cats to product reviews, personal reflections, and extensive videos of all kinds, hundreds of millions of people worldwide have offered us windows into their lives. And not all of them share only the good stuff; the desire to appear authentic and “real” has led some to reveal their dirty laundry online as well (not recommended, in my opinion). But anyhow, sharing content—which includes spreading memes—has become commonplace online. By sharing a meme, you are telling the world that you felt something about it and that you feel they will, too. You are sharing an idea with others that resonated with you. In many instances, people alter the meme to include different words or images, in order to amplify a certain dimension of the core idea reflected in the original version. As Shifman notes, this “repackaging” of the meme can extend the long-term impact of the original, while also expanding its potential resonance with others. Two primary ways in which people do this are through mimicry and remix.102 While some have suggested that mimicry is a form of flattery, it’s not always so. The re-creation


of a text, image, or video surrounded by a different context could generate wildly positive or negative reactions. The same goes for remixing, which involves manipulating the original meme (e.g., using Photoshop) to alter the underlying message or idea. For example, a “mission accomplished” banner was used at a President George Bush speech following the 2003 Iraq war. As a photo of this banner began circulating throughout social media (often by supporters of President Bush), it could also be digitally replicated to be superimposed on a wasteland of destruction in Iraq, with bodies and wreckage strewn about everywhere. Doing something like this would surely be motivated by feelings of scorn and ridicule rather than support for President Bush. Message type, content, format, contextual relevance, influencer attributes, target attributes all matter—especially when the goal is to provoke an emotional response from the target. Research has confirmed that people share content that arouses them emotionally, both positively and negatively.103 Clear and simply packaged information spreads easier than complex representations of ideas. If we have to figure out what it means, we’re less likely to share it with others and burden them with the same challenge. Prestige of the messenger matters: the more famous the person from whom you received the meme, the more likely we’ll share it with others. Thus, if Brad Pitt or Bill Gates posts something on social media, it will be viewed and shared quite a lot more than anything posted by James Forest. Further, many attempts are made to attract the attention of celebrities (e.g., tagging your message with #Oprah or #Trump in the hopes that those individuals see your message and then react to it). Positioning is also important for memes to spread. If you have a sense that the idea expressed in your text, image, or video will be most well received among a certain type of people (e.g., based on political party affiliation), you will want to target them first. And timing is essential: rarely do memes pick up steam when reflecting “yesterday’s news.” Having some contemporary significance, some relevance to what other people are already talking about or debating, aids the resonance of your meme and makes it more likely that it will be shared with others, particularly if it provokes high-arousal feelings of anger and anxiety.104 The retransmission of the idea reflected in a meme thus does not necessarily mean endorsement. In fact, the kind of digitally replicated image described above could become a separate meme of its own, liked and shared among Internet users who were opposed to the war in Iraq. In a sense, there would now be two competing memes—one that supports a certain narrative and the other that is opposed to it. This competition of ideas—and support for those ideas—creates a sort of spiral of sharing activity that catapults the original meme and its variants toward viral status.105 This, in turn, reflects a unique opportunity for those looking for ways to digitally influence a society. As we will explore later in this book,


the Russian digital influence warfare efforts against the United States over the last several years have been focused not on a single narrative but on a flood of competing narratives. While fake news about a specific candidate was replicated and amplified with regularity, for example, the more prominent and frequent narrative was the loss of faith in the integrity of our political system and its leaders. This reflects the higher-level strategic goal of Russia’s influence warfare effort. Finally, the importance of contextualizing a meme underlies a point made earlier in this chapter: fake social media accounts and automated bots can only amplify a disinformation campaign, not initiate it. To be successful, you must first ensure that the underlying idea(s) expressed in the meme resonate among your target audience. Only then will they want to share it with others. This, in turn, means that you must gather as much detailed information as you can about the preferences, attitudes, beliefs, likes, and dislikes shared by your target audience before you can succeed in provoking them in the ways you intend. This holds true for basically any other form of provocation tactics as well. To sum up, provocation can involve lies, scorn, trash talk, rumors and conspiracies, and images and videos that are either fake or real—as long as you know your audience well enough, you can find ways to provoke either positive or negative emotions among them. You can provoke outrage by playing the role of an angry, aggrieved, isolated, self-absorbed hothead constantly insulting and denigrating your enemies. If your target responds in any way, you win. Sometimes the target will be strongly urged to “take it into real life”—by organizing a protest, maybe even reaching out to established organizations for assistance. Earlier I described “concern trolling”—acting like you’re upset and offended in order to exploit the ethics and empathy of your target. In the case of the well-reported #Pizzagate incident, so many people repeated a child exploitation conspiracy and said, “Somebody should do something about this,” it eventually inspired Edgar Welch to leave his home in North Carolina on December 4, 2016, drive to Washington, DC, with an assault rifle, and shoot up a completely innocent pizza parlor.106 He was quickly arrested, and the following year, he pleaded guilty and was sentenced to four years in prison. Earlier that year, Russian trolls organized real-life protests against both the Republican and Democratic nominees. In that instance, the overall goal was to encourage greater polarization, to make Americans see each other as enemies. On multiple fronts, Russia is continually trying to divide, misinform, and manipulate our society. If Americans come to believe they can’t trust each other (or the government) to do the right thing, faith in democracy ultimately declines. And a democratic society that is perpetually disunited and unable to agree on anything substantive surely poses no real obstacle to Russia’s ability to achieve its long-term foreign policy and economic goals.
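The claim that bots can amplify a campaign but cannot make it resonate can be put in rough arithmetic terms. In the toy branching model below (deliberately simplified, ignoring network structure and overlapping audiences, so the absolute numbers are illustrative only), spread compounds only when the product of the re-share probability and the average audience size exceeds one; below that threshold, even a heavily bot-seeded meme stays close to its starting audience.

    def expected_reach(seed_views, reshare_prob, avg_followers, generations=10):
        """Toy branching estimate of meme spread. The branching factor
        r = reshare_prob * avg_followers determines whether the cascade
        compounds (r > 1) or fizzles (r < 1) regardless of bot seeding."""
        reach = current = float(seed_views)
        r = reshare_prob * avg_followers
        for _ in range(generations):
            current *= r
            reach += current
        return reach

    # Heavy bot seeding, weak resonance: ends up near its seed audience.
    low_resonance = expected_reach(100_000, reshare_prob=0.001, avg_followers=200)
    # Modest seeding, strong resonance: quickly dwarfs the seed audience.
    high_resonance = expected_reach(5_000, reshare_prob=0.02, avg_followers=200)

The threshold behavior is the point: resonance with the target audience, not the volume of fake accounts, determines whether a narrative takes off.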


Category #3: Digital Tools and Tactics for Directly Attacking the Target A third category of tactics for digital influence warfare that I observed when conducting research for this book involves directly attacking a target. There are several terms used to describe the different forms these attacks may take, like cyberbullying and online harassment. Dorothy Denning’s research on what she calls “hacktivism” describes how attackers may use “virtual blockades, automated email bombs, webhacks, computer break-ins, and computer viruses and worms” against a target.107 Some attacks may involve shutting down a target’s web server or email service using tools that are widely available today. The “denial-of-service” (DoS) attack and the related “distributed denial-of-service” (DDoS) attack both use similar methods to overload a server’s capacity, causing it to malfunction. Other attacks may involve hacking a website and then changing the information it provides. For example, in October 2019 one of Norway’s best-selling newspapers was forced to take its website offline after hackers inserted false stories and quotes, including a pro-pedophilia comment attributed to Norway’s Prime Minister, Erna Solberg.108 Sometimes the purpose of an attack may be to intimidate or frustrate the target or to provoke an individual into saying something or behaving in some self-sabotaging way—for example, continually prodding them until they lose their cool and irrationally lash out at others, showcasing their temper in front of all who might be watching. Or you can use the tools of deception to trick the target into saying controversial or embarrassing things on an audio or video recording that undermines their own credibility. There have also been efforts to try to mute opposing messages and accounts. As Bradshaw and Howard describe, one social media takedown strategy involves “mustering an array of human and automated accounts to falsely mass-report legitimate content or users in an attempt to have the social media platform (e.g. Facebook, YouTube, or Twitter) to suspend the accounts of those with whom they disagree. Even a temporary suspension of an account could affect the spread of information promoted by that opposing viewpoint,” thus tipping the digital influence scales in favor of the attacker.109 The overall goal here would be to drown out the opinions of “the other” (whoever they may be). Other direct attacks used in digital influence warfare may involve hacking into servers and accounts in order to steal compromising information or gain access to key networks and then expose a target’s secrets, leaking documents and images that were not meant for public consumption. It is particularly easy to capture media attention when you “leak” official-looking documents or “secretly record” audio or video. This adds perceived value to the material, allowing a journalist or blogger to trumpet “Exclusive!” or some other attention-grabbing title when announcing


the material’s content. As discussed in chapter 2, the Russian tactic of kompromat—releasing controversial information about public figures—has long been a part of its active measures effort to intimidate and embarrass the target, while also influencing public perceptions more broadly. This was why in July 2018, 12 Russian intelligence officers were indicted for hacking into the computers of at least 300 people connected to the Democratic Party and the Clinton campaign.110 The indictment maintained that after infiltrating their computers and implanting malware, the Russians leaked stolen files “in stages,” a tactic “that wreaked havoc on the Democratic Party throughout much of the election season.”111 Everyone has skeletons in their closet; nobody has lived a mistake-free life. Digging up dirt and sharing it publicly in smear campaigns is a time-honored element of influence campaigns. The tools of direct online attacks seem to be most frequently used for the harassment and abuse of journalists, academics, scientists, opposition politicians, activists, and celebrities. But ordinary people have also been the victims of online harassment. In one case, a woman in a Seattle suburb suffered a variety of attacks over several months.112 Images and videos were posted to a fake Facebook page, and she received a barrage of phone calls and email messages. Even her mother and coworkers received calls. Police were sent to her home numerous times after receiving “tips” about nonexistent child abuse. At least fifteen of her neighbors received a “community alert” in the mail warning them that they were living near a dangerous abuser. She and her husband eventually won a lawsuit against her cyberattacker, and yet the abusive emails continued. Unfortunately, the ability to mask your identity online has prompted a toxic lack of accountability and enabled a proliferation of harassers and trolls.113 Individuals feel emboldened to say and do things online that they would never say or do in the physical world. Researchers have described several dimensions of this so-called “online disinhibition,” including the ability to manufacture an online “presence” that is separate from reality—like being able to schedule messages to be sent at a time that we’re not actually online or to use different accounts under fake identities. Individuals may consider themselves “invisible” when online, scrolling through websites and social media feeds without overtly revealing the fact to others that they are online. Another dimension is treating our online activity as a game or fantasy (“it’s just online; it’s not the real world, so whatever I do here doesn’t really matter”).114 Similarly, many people believe that different rules and laws apply when they’re online. There is no central control or overarching authority policing your behavior or the things you say. Your ideas, beliefs, and opinions are just as valuable—or perhaps even significantly more valuable—as those of any other user (or at least, that’s what certain individuals may tell themselves when firing off a barrage of angry tweets early in the morning).115


In short, there are many tactics and tools through which you can attack people online as part of a digital influence campaign. These attacks could help you achieve various goals and objectives described earlier, like creating confusion and anger, encouraging conspiratorial thinking, undermining trust in institutions of government and law enforcement, disrupting channels of communication, or intimidating and suppressing voices of a political opposition. And as mentioned before, the influencer is more likely to be effective when they adopt multiple tactics as part of their effort. Various forms of deceiving, provoking, and attacking can all complement each other in support of your overall digital influence strategic goals. CONCLUSION Finally, the influencer will always need to assess the impact of their tactics and tools by gathering and analyzing data on the target’s reception and reaction to the influence efforts. Success in digital influence warfare can often be evaluated by the target’s behavior. Did they do something that you wanted them to—did they vote, buy, protest, join, reject, or choose some other behavioral response? Did they express some kind of emotional response (outrage, anger, sympathy, encouragement, etc.)? Successful examples of DIW are already available for us to study and learn from, to an increasingly alarming degree. For instance, we can study how Russians successfully influenced the heated debates over Brexit in the UK, the 2016 U.S. presidential election, or the reluctant acquiescence to their annexation of Crimea and continued insurgency in Ukraine. We can study how conspiracies have been spread and have influenced people to do seemingly outlandish things like attack a pizza parlor (#pizzagate) or threaten a mass protest at Area 51. How and why did the disinformation narratives spread during these incidents resonate with the target audience, to the degree that at least some of them responded in these ways? By the same token, failure in DIW can be measured by the lack of such actions. For example, did you fail to provoke any kind of reaction or response by the target? If your narrative was ignored completely, what potential deficiencies with the narrative (or the transmission of the narrative) can you identify that might explain the lack of influence? What tactics of persuasion were you using? Could other tactics have been more effective? Did this same approach influence other targets previously? If so, how was this target or context different? In short, you would be seeking to understand all that you can about a failed influence attempt in order to make improvements for the next attempt. From this information, you can then refine and recalibrate your tactics and tools as needed, including trying different message formats and contents or choosing new targets to try to influence. Incorporating what Philip Howard refers to as “constant message testing” helps the influencer determine the most successful


social media messages on one day that will get even wider distribution the next day.116 Monitoring the success of an influence campaign is made relatively easy by social media. We can quickly see how many users (and which ones) liked and/or shared your message with others or how many engaged in other ways (like comments with language indicating that you were successful at provoking certain emotional responses). An effective digital influence strategy should take into account the contexts in which these tools and tactics are to be used. If your strategy was initially designed to take advantage of a particular context—for example, to amplify existing social fissures and exploit societal vulnerabilities like distrust, the decay of truth, and the decline in deference to expertise or an objective reality—you will want to be sure the context has not changed in any meaningful way. If it has, you will want to evaluate whether the strategy and tactics you chose are still relevant or if you need to make adjustments. Also, a certain flexibility may be warranted. For the most part, conducting digital influence warfare effectively usually requires careful planning and disciplined adherence to specific, narrow objectives. However, there may also be instances in which it can be very much an opportunistic sort of enterprise, where contextual dynamics or triggering events can significantly amplify the impact of a specific political or psychological messaging effort. Thus, the smart influencer will invest in developing an infrastructure (e.g., a troll farm, a conglomerate of automated accounts or “bots”) at their disposal that can be deployed when and where the occasion requires. Throughout my research for this book, I found that the more one learns about the landscape of tools available to the digital influence entrepreneurs and mercenaries, the more one discovers new tools that are being developed, and others that are being modified or used in new ways, a process that will continue long after this book is published. As a result, the digital ecosystem has transformed the world of influence warfare in many ways. Think of all the things you can do now that were not possible before the Internet: mining social media accounts to identify patterns of activity that reveal beliefs, values, etc.; manipulating images via Photoshop; using automated “bot” accounts to give the sense of widespread support; defacing the websites of adversaries, or even attacking an adversary’s web servers and social media accounts in order to shut down the sources of competing narratives; employing phishing and other hacking strategies to gain access to accounts and then hijacking and spoofing those accounts—that is, posting messages to embarrass the account’s true owner; spamming massive audiences; stealing and publishing embarrassing or damaging information on anonymous file-sharing sites; posting videos that have been altered in any number of ways; and so much more. Of particular note is the advancing presence and sophistication of artificial intelligence—computers that are able to analyze patterns of behavior and communication faster than


humans and can respond with preprogrammed tactics to manipulate and capitalize on those patterns. With the global rise of the Internet, individuals—more than institutions—have become the primary sources of perceptions-shaping information in the twenty-first century. Anyone can publish any kind of information, true or false, from confessions to conspiracy theories. As noted in a recent German Marshall Fund report, digital platforms provide several advantages for modern information operations, including the following:117
• Low-cost publishing: Only minimal resources are needed to create seemingly credible and professional “news” websites. The equipment needed to capture and manipulate photos and video is as ubiquitous as the smartphone. And translation of digital stories into multiple languages and multiple social media platforms is cheap and easy.
• Elimination of “gatekeepers”: Information is published with no editorial oversight, and anyone can be a source of information. There is no need for the information you publish online to be true (particularly if you are running political ads on Facebook).
• Anonymity: You can publish and disseminate information of any nature without having to reveal your true identity. You can be anonymous or have a fake online identity, even using fabricated credentials giving the illusion of expertise and authority. You can also use other accounts (automated) to corroborate your perceived expertise and authority, which deepens the convincing nature of the illusion.
• Precision targeting: Because the target audiences are already segmenting themselves into ideological comfort zones and echo chambers (or information silos, as described in chapter 5), it is now easier to tailor your narrative according to what that audience has already agreed they mutually believe in.
• Automated amplification: The use of automated accounts allows a relatively small number of individuals to disseminate any number of narratives across a broad spectrum of social media platforms and target audiences. These can also be used to create a “manufactured consensus” in order to make something (or someone) appear more widely supported than they actually are.
All these things facilitate a wide array of deception, provocation, and attacks in support of a digital influence strategy. The tools and tactics described in this chapter are deployed by influence aggressors in both authoritarian and democratic societies, often trying to achieve the same kinds of strategic objectives. They work best when deployed against targets who inhabit the kinds of influence silos described in chapter 5, where the echo chamber effect helps the messages proliferate and reverberate, increasing the likelihood of achieving the influencer’s strategic goals. They


are able to exacerbate differences and to encourage conflicts by bringing people from opposing viewpoints into direct and frequent contact with no filters or self-constraints. While digital influence silos reinforce a perception of like-mindedness within a specific ideological milieu, the filter bubbles that are insulating us are still bumping up against other bubbles that we just want to go away and leave us alone. Their very existence aggravates us, which is something that influencers can take advantage of to sow further discord and animosity. This leads us to the psychology of influence and persuasion. Once the influencer has developed a keen understanding of what the target audience wants—and what they don’t want—crafting and delivering effective influence operations against the target becomes fairly straightforward. As we’ll see in the next chapter, there are specific attributes of an influencer, target, and message that can all be tailored in ways that will maximize the impact of the influence effort.

CHAPTER 4

Psychologies of Persuasion: Human Dimensions of Digital Influence

The strategies and tactics described in the previous chapters work best when combined with an appreciation for the psychology of influence and persuasion. However, there are many different theoretical frameworks on influence and persuasion, far too many to cover in a book like this. So, in this chapter I have chosen just a representative sample to demonstrate what the broader terrain of research has to offer for the study of digital influence warfare. Please see the reference endnotes of this chapter for suggestions of additional places to look for greater depth and breadth of research on the psychology of persuasion and influence. I have also placed a collection of online resources on the intersections of psychology, technology, and persuasion on the website for this book (www.DIWbook.com). To begin with, a simple way to summarize this broad topic is to organize the discussion around three main themes: attributes of effective influencers, attributes of receptive targets, and the kinds of information conveyed from the former to the latter that is meant to influence their behavior. This interdependent triangle of themes draws from Aristotle’s early theory of persuasion, which highlights the source (ethos) of the message (logos) and the emotions of the audience (pathos). For each of these, as Pratkanis and Aronson explain in their book Age of Propaganda, Aristotle provided recommendations for the would-be communicator.1 For example, he recommended that the orator present himself in the best possible light and that the message should be tailored to fit the preexisting beliefs of the audience. These recommendations are easy to implement with social media, as described in the previous chapter. First, the influencer can produce a carefully crafted persona (real or fictitious) that is attractive to the target. Second, the influencer can gather and analyze a significant amount of information about the target, including their beliefs, hopes, and fears.


Of course, any effective influence effort needs to be predicated on a clear understanding of whatever goals and objectives you want to achieve. Ultimately, the influencer wants the target to commit themselves to some kind of behavior that will be beneficial to the influencer—and typically, it’s behavior that the target is unlikely to commit themselves to without a concerted effort to persuade them. In some cases, the goal may be coercing people to conform to certain rules and regulations, while in other cases the goal may be the opposite—perhaps even to provoke a rebellion or insurgency. Authoritarian regimes (e.g., in China, Iran, North Korea, Russia, and Turkey) have developed various means of controlling the behavior of their own populations through state-owned media and control of Internet access (which we’ll examine in chapter 6). Political entities want to influence voters and win elections, while social movements and activists want to mobilize popular support, and a wide variety of entities simply want to influence consumer behavior in ways that will generate profits. Further, we know that individuals make decisions based on a wide variety of information—and more specifically, their interpretations of that information. Thus, many of the efforts described in this book involve manipulating the kinds of information the target receives, as well as how they interpret and process that information in ways that will ensure the influencer achieves their goals. And how we interpret and process information is directly connected to the source of the information (the attributes of the influencer) and our own core values, beliefs, sociodemographic background, and much more (the attributes of the target). So, let’s turn now to look at what the research literature suggests about the attributes that contribute to the effectiveness of an influencer. ATTRIBUTES OF THE INFLUENCER The research literature on attributes of effective persuaders contains a number of common themes. We know, for instance, that physical characteristics of the influencer (including age, race, gender, etc.) can significantly impact the acceptance of the messages they communicate. But the deeper and more impactful attributes of an effective influencer include more nuanced things like personal charisma, perceptions of authority, and the importance of having a clear and compelling style of communication (including voice tone and inflection). Further, some of these attributes are difficult to explain and almost impossible to measure in any scientific way. There is ample anecdotal evidence that some of the best public speakers or teachers are known for their storytelling prowess, an ability to captivate an audience by weaving together an intriguing narrative that both informs and entertains. But there is disagreement over how (or whether) such ability can be consciously developed. Similarly, there is disagreement about how to define charisma, and whether it can be developed or is an innate

Generally speaking, the literature describes various charismatic leaders as exuding a type of confidence and positive energy, while at the same time coming across as down-to-earth and relatable. In one account, the ability to influence was described as being “based on the confidence of the speaker—the more self-assured and confident a communicator appears, the more likely that we will accept what is said.”2 Emotional intelligence, a sense of humor, physical posture, and many other aspects have been connected to the idea of charisma. Research on the relationships between personality and social influence has also suggested that “more agreeable people might be more authentic.”3 But how do you persuade people if a large number of them consider you to be untrustworthy, unbelievable, and disliked? Research has found that people who want more control over others will use Machiavellian tactics of manipulation,4 but generally speaking, people don’t like being overtly manipulated. This is a challenge that all propagandists and would-be influencers face—how to make the untrustworthy seem trustworthy, how to make the unbelievable seem believable, and overall how to make a target willingly comply with whatever the influencer wants them to say or do.

To begin with, the appearance of legitimacy matters very much. As Zimbardo et al. note, an influence effort will always be more successful “if the communicator has high credibility than if he or she has low credibility.”5 So, because the influencer must appear to be legitimate in the eyes of their target, they must find ways to manufacture the perception of legitimacy. According to research by Mark Suchman, legitimacy-building “falls into three clusters: (a) efforts to conform to the dictates of preexisting audiences with the organization’s current environments; (b) efforts to select among multiple environments in pursuit of an audience that will support current practices; and (c) efforts to manipulate environmental structures by creating new audiences and new legitimating beliefs.”6 If the influencer chooses the first kind of legitimacy-building, they will gain acceptance from their audience if they are seen as champions of the values, norms, and expectations of that audience. For the second type of effort, the influencer will seek out an audience that is more closely aligned with their own values, norms, and expectations. And in the third approach, they will seek to create new audiences, something that television and social media are particularly well-suited for.

An obvious example is how Trump was able to trick millions into believing him to be a legitimate, qualified leader—through a so-called “reality TV show.” These shows are extremely deceptive and misleading, in that they present as real something that is highly scripted, manipulated, and not real at all, yet believable and dramatic. Think about it for just a moment.

The truth is ordinary reality isn’t likely to be all that interesting or entertaining, so why would anyone bother watching it on television? And thus, why would advertisers sponsor something unlikely to attract larger numbers of viewers? By necessity, “reality” television is staged, manufactured to look real yet still be entertaining. Now, imagine a reality show in the United States centered around a well-known business executive, who is surrounded by a supporting cast of individuals whose main objective is to impress him. Naturally, the scripted role of this central character is to be the unquestionably powerful leader, the infallible decision-maker whom others will revere and fear in equal measure. But by convincing millions of television viewers of this leadership persona, and gaining nationwide name recognition, the actor who plays this role on television could catapult himself into the political arena as a viable candidate for a national position. The candidate could even pursue a strategy of provoking outrage and other emotions, as described in chapter 2, in order to dominate the news cycle and keep the mainstream media discussion all about him. He could say horrible things on the campaign trail and be outwardly (even belligerently) disrespectful toward others, from political opponents to minorities and decorated military veterans. The candidate could even claim (in a recorded conversation broadcast by several news outlets) that fame permits him to get away with sexually assaulting women. But millions of voters already feel they “know” this person and trust his decision-making abilities, and they believe that voting for him will result in a strong, wise leader in the Oval Office. After all, his decisions were never questioned on his television show, so they are likely to expect good decision-making here.

Another approach to manufacturing legitimacy that has been shown to be effective is the influencer giving the appearance of doing something that does not openly seem to be in their own interests. They will try to convince you they are making a sacrifice on your behalf, thereby creating a form of influence that social psychologist Robert Cialdini describes as reciprocity: our tendency to feel obliged to a person who gives us something, no matter how trivial or unwanted the gift. This is why the social media convention of “liking” can be a powerful source of influence. Often when someone “likes” what you post online, or follows you, there is a tacit expectation that you are obligated to follow them in return and/or like something of theirs in return.7 There are in fact many ways in which the principle of reciprocity influences our behavior. We become open to persuasion by the giver, and to get rid of the feeling of obligation we will say or do something that benefits the influencer.8 In fact, we may even agree to give back a much larger gift than we received. This is why a nonprofit organization will send those stickers, stamps, photos, pens, gift cards, and other such things in their solicitations for donations.

It also explains why a theme of the Trump campaign was that of the wealthy and powerful candidate “victimized” by an unsupportive news media, yet willing to reluctantly “lead” the country on behalf of the American public in order to tackle a range of perceived national crises like illegal immigration, rising criminality, terrorism, and greedy corporations moving jobs overseas. He claimed the mantle of being a “blue collar billionaire” and the “voice of the people.” In his Republican National Convention address, he declared, “I’m with you—the American people. I am your voice.”9 The dark irony, of course, is that “the people” Trump claimed to speak on behalf of truly have no relation whatsoever to Trump’s millionaire upbringing and lifestyle. He is no more one of “them” than the rotting carcass of an Egyptian cat, but because he provided verbal confirmation of their anger and grievances from the stage, attendees at Trump’s campaign rallies became reliable voters for him.

Society also venerates those who are seen as risk takers, pushing the envelope, exploring new boundaries, and going harder, faster, higher than before. From athletes to scientific and technological pioneers, we celebrate those exceptional few who have gone beyond the norm, and this is an attribute that an influencer can use to their advantage in getting the target’s attention. Similarly, recognition of authority is another attribute connected to the effectiveness of influencers. Many of us have at some point in our lives held a job in which we were “supervised” by (or at least answerable to) someone else. This sort of formal authority is a given, and naturally, the employer/manager/supervisor will have some power and influence over the employee. Similarly, teachers influence students and senior military officers influence enlisted soldiers, and the same holds true for many other kinds of hierarchical organizational relationships. But then there is the more informal kind of authority, one that we grant to individuals based on their status. According to Albert Bandura’s social cognitive learning theory, viewers exposed to messages depicting rewards for actions exhibited by credible role models are more likely to perform similar behaviors.10 Wealthy individuals, star professional athletes, and celebrities are often awarded a certain kind of authority, even when speaking about something on which they have no real authority. As Guadagno notes, “Many actors who have played an authority figure (e.g., doctor, president, and police chief) on television have successfully used their status on TV to endorse products and services.”11 And of course, we’re all very familiar with certain television personalities who were elected to political office (like Ronald Reagan, Jesse Ventura, and Arnold Schwarzenegger). In whatever way their perceived credibility is established, these individuals can influence the behavior of millions of television viewers.

Finally, an effective influencer knows their audience and tailors their messages accordingly. The most powerful influencers have a captive audience and control of the message and the means of communication—what I term “information dominance” in chapter 6.

Authoritarian governments in places like Russia, China, Iran, and North Korea have an uncontested, virtually limitless stranglehold on the means of communication, including anything digital, which gives them tremendous control over what their populations believe to be true. However, true democracies don’t really offer that level of power concentration, so the alternative strategy (described later in this book) involves attaining a level of “attention dominance” through which the influencer can achieve their goals. And in order to dominate the attention of a target audience, it is essential to have a solid understanding of what that audience values, fears, wants, likes, dislikes, and much more.

ATTRIBUTES OF THE TARGET

The most effective influence campaigns are those that capitalize on the attributes and concerns of the target. At its core, influence revolves around how the target processes information. What the influencer wants, then, is to maximize the likelihood that the target will focus (at least at some level) on the information the influencer wants them to see and that they will process that information in ways that ensure the success of the influencer’s goals. But every human being processes information of considerable scope and variety, and as noted earlier, the underlying goals of the effort are to influence the kinds of information the target receives and how they process that information. If the influencer is seen as lacking credibility or other attributes, or if they are unable to capture the attention of the target, their influence effort will fail. If the target rejects the information as lacking contextual relevance (for example), the influence effort will fail. I’m sure you get the point. Further, the attributes of the target will impact their information processing in ways the influencer is unlikely to have control over. As a result, the influencer will be most effective when they can ensure that the attributes of the target are already aligned in some ways with the goals and objectives, the message content, and the context for information relevance of the influence effort.

In other words, effective influence efforts require that you have a good sense of your target audience. Jerrold Manheim provides a list of questions that the influencer should try to answer when determining whether the attributes of the intended target are amenable to the influence strategy.12 (One rough way of recording answers to such a checklist as structured data is sketched just after the list.)

• Questions about the target’s existing cognitive state. What is the nature and distribution of existing attitudes, predispositions, preferences, perceptions, beliefs, needs, and expectations? How much has the target of persuasion thought about the attitude in question; how personally engaged is he or she with that attitude? How salient is the campaign or its objectives? How open is the target to persuasion? To what degree are the target’s emotions in play? What are the cultural or linguistic hurdles in play in the situation? How great is the discrepancy between the target’s existing beliefs and the new information? What motivates the target? Will the persuasion be beneficial to the target—or readily capable of being portrayed as such?

• Questions about the target’s general availability for persuasion. Is the target actively seeking change or direction or guidance? Has the target been previously persuaded by a similar persuader or information? Is the target aware of the persuasion attempt? What does the target think of the persuader?
• Questions about the target’s access to channels. What channels does the target use in information seeking? What channels are available to the persuader? To what extent can the campaign control the channels it employs for persuasion? What are the target’s social/demographic characteristics?
• Questions about the setting in which persuasion will be attempted. How many opportunities will likely be available to engage in persuasion? Is there a counter-persuasion effort in play? What are the target’s expectations, if any, regarding the persuasive situation? To what extent can the campaign control the persuasion setting?
• Questions about the source of any persuasive message. Which sources are judged as authoritative or credible (or likeable)? What attributes or behaviors might enhance the legitimacy of the persuader? What affinity, or points of similarity of self or circumstance, can the persuader establish with the target?
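As noted above, the answers to a checklist like this lend themselves to being recorded in a structured way. The sketch below is my own hypothetical illustration (the field names and example values are not Manheim’s) of how an analyst might organize such an assessment so that several candidate audiences can be compared side by side.

```python
from dataclasses import dataclass, field

@dataclass
class AudienceAssessment:
    """Hypothetical record of answers to a Manheim-style target assessment."""
    audience: str
    existing_attitudes: str = ""            # cognitive state: beliefs, predispositions, salience
    openness_to_persuasion: int = 0         # 0 (closed) to 5 (actively seeking guidance)
    preferred_channels: list = field(default_factory=list)
    trusted_sources: list = field(default_factory=list)
    counter_persuasion_present: bool = False

# Fabricated example: screen for audiences that appear receptive enough to target.
rural_swing_voters = AudienceAssessment(
    audience="rural swing voters",
    existing_attitudes="distrust of national media; strong local identity",
    openness_to_persuasion=3,
    preferred_channels=["Facebook groups", "local radio"],
    trusted_sources=["community leaders"],
)
print(rural_swing_voters.openness_to_persuasion >= 3)  # True: crude receptivity screen
```

Nothing about the checklist requires code, of course; the point is only that the answers lend themselves to being organized, compared, and scored across audiences.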

Social media and Internet platforms offer a wealth of user information that can be used to gain a solid understanding of a target audience. Further, identifying and targeting the members of an influence silo (described in chapter 5) in ways that reinforce their sense of in-group identity and personal biases can be a significant contributor to the success of your influence campaign. But first, it’s important to acknowledge there are many different personal attributes that affect how we process information, including our intelligence, personal experiences, education, occupation, emotional intelligence, sociodemographic background, and much more. For the purposes of this book (and in the interests of limited time and space), let’s look at just a representative handful of things that an influencer would want to take into account when identifying a suitable audience for their influence effort. While there is a lot of research in the field of psychology that tries to classify individuals into different categories or typologies according to how susceptible they may be to persuasion, there is no widely agreed-upon theory on this topic. In fact, the diversity of models and approaches is probably due to the vast diversity among people in general. For example, the moral values and standards that have been inculcated in us by our family and by society’s institutions (such as school and church) differ, resulting in each of us having our own personal code of what we consider “right” and “wrong.” Similarly, our attitudes have often been shaped by the education and training experiences we have received, and there is wide variety here as well. Generally, as Zimbardo et al. observe, “the level of intelligence of an audience determines the effectiveness of some kinds of appeals.”13 However, the ways in which our education impacts our receptivity to influence efforts can include our level of education (e.g., high school graduate, college graduate, and postgraduate degree), disciplinary specialization (medicine, engineering, economics, political science, and criminology), and even the specific college or university we attended.

Despite the wide diversity in our individual backgrounds, there are some common human traits that researchers have identified as affecting our susceptibility to influence and persuasion. For example, research indicates that people who are low in emotional stability are usually more susceptible to certain kinds of persuasion. Further, the more we care about our public persona, the more likely we can be influenced. Emotional self-centeredness is a common problem among insecure people, and it makes them overly anxious about what other people think about their opinions.14 That is, the more concern we have about how others view us, think about us, and talk about us, the more likely we are to pay attention to influence efforts that are meant to make us feel important. This, in turn, underscores the psychological research on conformity and perceived authority.

Conformity and Perceived Authority

A target’s propensity for conformity is an important psychological attribute for influence efforts to be effective. Conformity is, to some degree, required for any civil society to function. Imagine you’re in a theater watching a performance, and instead of sitting quietly, the person next to you is chatting loudly on their cell phone. Or imagine you’re in the checkout lane at the grocery store, and a customer shoves everyone out of their way to get to the front of the line. If each of us did not conform to the rules of driving, what chaos there would be. Of course, it is natural—even advisable sometimes—to have doubts about certain kinds of conformity. Nobody likes to see the lemmings following their leader over the cliff in a form of mass suicide. Nobody likes to see examples of individual citizens blindly conforming to a political regime that by any objective measure would be seen as morally deficient.

But there are specific psychological advantages derived from conforming. For example, nonconformity can introduce too much uncertainty, which (as we’ll examine later in this chapter) can be very discomforting, even frightening, for many people. Conformity allows us to take some comfort in the perceived certainty that we’re choosing something that others have also chosen. And our willingness to conform is an important target attribute for any influence effort to consider.

During the mid-1950s, Crutchfield used an experimental design known as the “question booth” to explore the relationship between conformity and influence.15 This technique involves asking participants in the study to indicate whether a series of statements projected on a screen are true or false. At the bottom of the screen, the answers of other participants are provided. Conformity is measured by the number of times the participants agree with obviously incorrect answers. After this exercise, a series of personality tests were administered to the participants. The researchers found that those who conformed were intellectually less effective, submissive, and inhibited and had stronger feelings of inferiority.16 Essentially, if we are uncertain about our own cognitive abilities, our convictions may often become dependent on what others believe. In fact, as Nezlek and Smith observe, “The study of social influence reveals that people who depend more on others for guidance are more susceptible to influence than those who depend less on others.”17 These people are often viewed as having generally weak convictions and sometimes are called “seekers” (but not the Harry Potter book series kind). In fact, each of us has probably known a seeker or two in our lives, people who still haven’t found what they’re looking for. Good title for a song, by the way. However, as Zimbardo et al. note, the problem here for the influence effort is that individuals who are highly persuadable—and whose beliefs and behaviors can be easily changed by a particular argument—are then equally persuadable when provided a compelling counterargument.18

Anyhow, a similar manifestation of the tendency to depend on others for guidance is found in research on the impact of perceived authority.19 While the research is inconclusive on the topic, a small number of studies suggest that personality may affect obedience to authority.20 Researchers have referred to this as authoritarianism or a “dependent personality,” and it basically indicates that in some instances a target may be psychologically predisposed to submit to authority. In 1950, Theodor Adorno and his colleagues proposed the idea of “the authoritarian personality”21—loosely defined as a person who is excessively deferential to those in authority and has negative opinions or even hostile attitudes toward others outside their in-group identity. (We’ll examine this dimension of in-group/out-group perceptions and behaviors extensively in chapter 5, on the power of influence silos.) Some researchers have also suggested that a person who appears to be very authoritarian in manner and thinking will probably be more impressed by status sources and appeals to power, control, decisiveness, and one-sided generalizations than by informational appeals, expert testimony, unbiased presentation of both sides of the issue, and so on.22

Finally, according to Nezlek and Smith, “Individuals high on authoritarianism follow the dictates of authority figures and norms more closely than those who are low on authoritarianism . . . Although authoritarians may see themselves as strong and people of action, weakness and insecurity underlie the authoritarian personality, and it is these characteristics that make authoritarians susceptible to social influence.”23 However, there is far more research that describes how obedience to authority is less related to personality attributes than to a wide variety of other mechanisms, including ideological and political convictions, social connections, family bonds, the media, religious and political organizations, and the education and criminal justice systems (to name just a handful).24 In fact, some studies have suggested that obedience to authority may be far more dependent on situational contexts than ingrained as part of someone’s personality. For example, in the well-known obedience experiments conducted by Stanley Milgram in the early 1960s, the majority of subjects (representing a fairly diverse collection of personalities) delivered on demand what they believed were potentially lethal shocks to an innocent victim. Milgram found that even highly educated and liberal-minded American college students would be willing to inflict dangerous levels of electric shock on people when instructed to do so by a perceived authority figure. In his subsequent book Obedience to Authority, Milgram concluded: “Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process . . . even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality.”25

The use of the term “agents” is important here, as he then proceeded to develop what he called the theory of the “agentic state.”26 As Kathleen Taylor explains, this research argued that humans could operate in two states: “autonomous” and “agentic.”27 When acting autonomously, humans are essentially free, acting in ways that serve their own needs and are under their own control. However, we live in highly complex groups, from which we derive considerable survival benefits. Within these groups, structures exist for coordination and decision-making authority, and by extension, groups can only achieve certain goals if each individual member sacrifices some personal autonomy. So, a shift in behavior and attitude—the agentic shift—is required, as outlined by Milgram: “Specifically, the person entering an authority system no longer views himself as acting out of his own purposes but rather comes to see himself as an agent for executing the wishes of another person.”28

Obedience to authority is instilled by a variety of institutions, from the family and schools to religions, political organizations, and the criminal justice system. Compliance is rewarded within these social systems, while dissent (and particularly rejection of authority) is disapproved of and may be punished. Similarly, one could argue that social media activity is driven in part by a form of agentic facilitation, through the constant appeals to (and willingness to) like, share, and retweet messages. From this perspective, it makes intuitive sense that provoking such actions is a common goal of digital influence efforts, as described in chapter 3. And when the appeal to do such things comes to us from an authority figure (who often views people as a means to an end), the “agentic state” of an obedient person ensures their compliance. In summarizing his research, Milgram concluded that obedience to authority was dependent on the psychological and social context and the beliefs of the person involved. Resisting this impulse to obey authority is not easy for some. As Milgram noted, “Relatively few people have the resources needed to resist authority.”29 Further—and similar to the discussion earlier about an individual’s connections to others and conformity—researchers have noted that personal compliance is significantly higher when a request or command comes from an authority figure within the target’s in-group.30

Obedience (or a social expectation thereof) is a powerful source of influence and persuasion throughout our lives. We are taught from early stages of childhood the importance of obeying parents, grandparents, clerics, educators, and many others. At a very basic level, obedience to laws and authority is of course necessary in any functioning society. Altogether, the links between perceived authority and the effectiveness of an influence effort are fairly well established.

Milgram’s research experiments demonstrate a troubling aspect of conformity. An important component of that research, which others have also expanded upon in later years, is that context has a great deal of influence on a target’s propensity to conform and obey authority. The participants in Milgram’s study were ordinary people who under different circumstances may not have been willing to administer painful electric shocks to the person role-playing the “victim.” But two-thirds of them did just that, leading Milgram to conclude that if the target of influence is expected to obey authority, they must view that authority as legitimate and relevant, factors that can be context-dependent.31 In other words, there are more than just personality attributes that determine whether a target will or will not respond to the influence of an authority. In fact, some researchers have looked for patterns (e.g., age, religion, gender, and moral compass) that might be common among individuals who refuse to obey orders they disagree with, though studies in that area have thus far proved inconclusive.32 Instead, context plays a significant role in determining whether or not you can influence someone else to believe or do something.

Today there is a wide diversity of opinion in terms of whether someone should be viewed as an authority worth listening to. Recent decades have seen public confidence decline overall in religious leaders (due to sex scandals among priests and evangelical leaders bilking television viewers), political leaders (too many scandals to keep track of), the police (too many instances of overt brutality), journalists, lawyers, and other professions as well. Even members of the medical profession are seeing their authority called into question by proponents of the anti-vaccination movement (as well as those opposed to abortion). Unfortunately, an overall decline in deference to experts and authorities has provided opportunities for new kinds of influence efforts, including the spread of disinformation and conspiracy theories.

Of course, as noted earlier, responsiveness to authority is just one of many ways in which the target processes the information they receive. Researchers have also found that people are more easily influenced when their self-esteem is low.33 Similarly, people who are uncertain about how to assess facts, or who tend to accept ideas without question, are more susceptible to influence. As Denise Winn notes, “If one has never cultivated the capacity to think for oneself, to ask questions, to check facts, or take responsibility for the information one accepts,” they are more easily persuaded. “The result is often an inability to distinguish between fact and opinion.”34 This inability is directly related to how we process information. For example, research has shown that higher levels of weekly television viewing are directly correlated with higher levels of belief that what is depicted on the screen is reality. Television is a very passive means of exposure to information: we sit and watch. There is no opportunity to directly question whether the information is accurate or not, and if we don’t like what we are seeing or hearing, we can simply change to another channel that we find more appealing. All the while, we are processing the information we see on our screen, allowing it to influence us (whether we consciously realize it or not).

Central and Peripheral Routes of Information Processing

Several researchers, including Richard Petty and John Cacioppo, have argued that there are two primary ways in which we process information—central and peripheral—which have direct implications for attempts to persuade or influence us.35 As Manheim explains, the central route involves a thoughtful consideration and careful evaluation of the information presented.36 For example, in the central route the person may actively argue against the message, may want to know the answers to additional questions, or may seek out new information. The persuasiveness of the message is determined by how well it can stand up to this scrutiny. The ability to persuade someone through the central route is built around more or less intense engagement and argumentation. It achieves persuasion by prompting the individual to think thoroughly through the issue and the alternatives, and to come to appreciate the persuader’s position.37

Meanwhile, persuasion through the “peripheral route” involves a much more passive form of information processing, where a solid, evidence-based argument is unnecessary. This route arguably facilitates greater success for the digital influence efforts discussed in this book. In the peripheral route, a message recipient devotes little attention and effort to processing a communication. Some examples might include watching television while doing something else or listening to a debate on an issue you don’t really care much about. In the peripheral route, persuasion is determined by simple cues, such as the attractiveness of the communicator, whether or not the people around you agree with the position presented, the pleasure or pain associated with agreeing with the position, or whether a reason is given (no matter how bogus) for complying with a request.

We tend to be cognitive misers, trying to expend as little mental effort as possible. When faced with complex problems, we prefer shortcuts that don’t require as much thought and effort. We typically don’t want to think too hard about whether something we see online is true or not, particularly if there is nothing overtly suspicious about it. This is why effective influence efforts incorporate simple messages that can be easily absorbed with minimal cognitive effort. According to Brooking and Singer, “The first rule of building an effective narrative is simplicity. In 2000, the average attention span of an Internet user was measured as twelve seconds. By 2015, it had shrunk to eight seconds—slightly less than the average attention span of a goldfish. An effective digital narrative, therefore, is one that can be absorbed almost instantly.”38 Often, a person’s aversion to sufficient analytical thinking means they will already have a fairly high level of receptivity for information that has little or no basis in truth. This makes them an ideal target for influence efforts, particularly those that include deception and misinformation.39 People are generally lazy, which means peripheral information processing is much more attractive and likely. And unfortunately, television and social media feeds promote a lot more peripheral route persuasion, because users are often scrolling through their Twitter or Facebook feeds while doing something else, like riding the train to work, or working on a paper. We have no desire to spend more time and effort than we have to, particularly when the information—the message and the context for relevance—is already aligned with what we already want to believe, want to do, want to hear, etc.

Further, according to Manheim, the peripheral route relies not on reason per se, but on packaging—attractiveness of the message or the medium, use of evocative symbols, use of an attractive or credible source, and so forth.40 Similarly, researchers have described how humans tend to rely on “heuristics”—generally defined as assumptions or artificial constructs—when we make quick judgments using the peripheral route of information processing.41 Heuristics are indicators we use to make quick judgments, sort of like cognitive shortcuts for processing the vast amounts of information presented to us, and the problem is that they often lead to incorrect conclusions. For example, you might rely on a “social endorsement heuristic”—that someone you trust has endorsed (e.g., retweeted) a post on social media—to judge how trustworthy it is.42 But however much you trust that person, their endorsement is not a completely reliable indicator and could lead you to believe something that isn’t true.
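To make the social endorsement heuristic concrete, here is a toy sketch (my own illustration, not drawn from the cited research): a “credibility” score that depends only on who shared a post, never on its content, and therefore passes along whatever a trusted account happens to share, true or not.

```python
# Toy model of peripheral-route judgment: trust is inherited from the sharer,
# not derived from the content itself. All names and values are hypothetical.
TRUST_IN_SHARER = {"close_friend": 0.9, "celebrity": 0.7, "unknown_account": 0.2}

def perceived_credibility(post):
    """Score a post by who endorsed it -- a cue, not evidence of accuracy."""
    return max((TRUST_IN_SHARER.get(s, 0.2) for s in post["shared_by"]), default=0.2)

false_rumor = {"claim": "fabricated health scare", "shared_by": ["close_friend"]}
careful_report = {"claim": "sourced news item", "shared_by": ["unknown_account"]}

# The heuristic rates the rumor as more credible than the sourced report.
print(perceived_credibility(false_rumor) > perceived_credibility(careful_report))  # True
```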

In contrast, sometimes the problem is not lazy reasoning but motivated reasoning, where the target wants (sometimes desperately) to believe in what they are hearing. Thus, even if the message may appear suspiciously questionable to the ordinary observer, motivated reasoning will lead some individuals to subdue any doubts they may have and accept it as truth. And as we’ll examine later in this chapter, motivated reasoning can be a problematic source of vulnerability to digital influence efforts.

Altogether, the point to keep in mind from this part of the discussion is that one must gather as much information as possible about the attributes of the target in order to prepare the most potentially effective influence strategy. In addition to their sociodemographic background and current occupation, the influencer will want to know the target’s preferences, beliefs, prejudices, wants and needs, likes and dislikes, family, friends and associates, and many other attributes. Before the Internet, gathering this kind of data about one individual could take a fair amount of time, and gathering this much information on a large group would naturally be more difficult. But as described in the previous chapters, the Internet changed all that—we can now gather massive amounts of data on a user or a group of users, data on a wide range of things that can then be scrutinized and incorporated into a digital influence strategy. For example, whom the target is connected with (e.g., their list of Facebook friends and Twitter followers) is a particularly important thing to know.

Intelligence

Other attributes of importance include a target’s level of intelligence. As research by psychologist Kathleen Taylor reveals, an individual’s processing of information can be affected by “low educational achievement, dogmatism, stress” and other factors that “encourage simplistic, black-and-white thinking.”43 Similarly, ignorance is already well known as a vector of vulnerability for influence. While smart people have frequently fallen prey to the kinds of deception, provocation, and other tactics described in chapter 3 of this book, people who are just plain ignorant are especially likely to believe a broad range of lies and conspiracy theories. And if someone refuses to acknowledge they are ignorant about something (or even that they might be ignorant about anything), this is a particularly dangerous form of arrogance that leads to higher susceptibility to influence efforts.

According to Lee McIntyre, “The Dunning-Kruger effect (sometimes called the ‘too stupid to know they’re stupid’ effect) describes how individuals with low cognitive abilities are often unable to recognize their own ineptitude. This ‘overconfidence bias’ has serious consequences; ‘I am an excellent driver, at any speed and in any weather conditions’ is a bias that can frequently get people hurt or even killed.”44 Further, as Dunning and Kruger put it, “incompetence robs [people] of their ability to realize it; the greatest inflation in one’s assessment of one’s own ability comes from the lowest performers. . . . What seems to be going on here is self-deception. We love ourselves so much that we cannot see our own weaknesses.”45

Academic researchers generally agree that increasing levels of knowledge typically reveal a greater breadth and depth of questions that remain unanswered. And as higher levels of intelligence lead to greater uncertainty about what we know, this should produce a sense of humility. But many people do not want to be humbled by the recognition of all that they don’t know or understand and so opt for strategies to deflect that kind of information. This is particularly true when confronting uncertainty about the many complexities in life. Meanwhile, because of the overwhelming modern information ecosystem, people have less confidence in what is true or not, leading some to embrace fake news and disinformation. At the same time, the proliferation of fake news and disinformation makes people even less confident about what is true and what is not, creating a mutually reinforcing spiral of increasing uncertainty, which in turn provides numerous opportunities for exploitation by a variety of influence strategies.

Researchers have also pointed to the effects of “pluralistic ignorance”—a lack of understanding about what others in society think and believe. According to Stephan Lewandowsky, “Pluralistic ignorance refers to the divergence between the prevalence of actual beliefs in a society and what people in that society think others are believing. For example, in 1976, more than 75 percent of white Americans actually thought that a mother should allow her daughter to play with an African American child at home; but only 33 percent believed that that was the majority opinion—the remaining 67 percent thought that it was only a minority of people who would endorse cross-racial friendships. In other words, the vast actual majority of people felt that they were in the minority, whereas the bigoted minority felt that they were dominant in society.”46 A variant of this is the false consensus effect: when someone incorrectly thinks that the majority of society shares his or her belief, when in fact it is a view held by very few people. This is how, as Lewandowsky explains, “people who hold extremist minority opinions often vastly overestimate the support for their own opinions in the population at large.”47

Further, as Tommy Shane notes, “this can be made worse by rebuttals of misinformation (e.g., conspiracy theories), as they can make those views seem more popular than they really are.”48 Unfortunately, human history is rich with examples of how the combination of ignorance, prejudice, and arrogance has contributed to injustice, crime, and war, among many other evils of our world. And of course, arrogance also leads some people to falsely assume they are impervious to the effects of misinformation. As noted earlier, many people of high intelligence have still been successfully targeted by the digital influence efforts described in this book. As an academic, I see this frequently among both colleagues and students. And yet a recent study found that people frequently rate themselves as better at identifying misinformation than others. This means people can underestimate their vulnerability and fail to take appropriate precautions.49 Sometimes it’s not ignorance but arrogance that is our own worst enemy, in terms of how strategies of persuasion can influence our behavior. Let’s turn now to explore some of those strategies.

STRATEGIES FOR INFLUENCING A TARGET’S BEHAVIOR

To begin with, effective persuasion requires communication that is contextually relevant. The strategies of influence warfare won’t have much effect on an audience unless you tie your narrative or problem statement to an issue that is already (or is very likely to be) deemed relevant by a large portion of that target audience. But beneath this simple fact, we find a broad range of challenges in determining what may or may not be relevant at any given point in time. Are there political, socioeconomic, or other issues that generate (or give legitimacy to) certain grievances shared by your audience? Examples might include widespread poverty, political corruption, structural disadvantages in a society that create hardships for certain ethnic or racial minorities, and so forth. Further, are there specific triggers—for example, events or new government policies—that enhance those grievances among your target audience? In short, relevance is framed by contextual factors that the target often has limited control over. This is why influence warfare competency requires an understanding of what the target audience feels is contextually relevant and why.

But what is meant by the term “contextual relevance”? Basically, if we feel that something is not all that relevant, we don’t really care whether or not we know anything about it at all.50 A massive amount of information fits this description for each of us. In reality, given the vast diversity of things we could care and know about in this big world of ours, we really only care and know about a small fraction of them. So, relevance is the cognitive function that separates for us what matters and what does not.

The influencers and information aggressors we examine throughout this book often try to make something matter to you, even when it is something that you might not initially feel is relevant. And as you might imagine, this can be quite a challenge for them, at least initially.

One way to determine what is contextually relevant for members of the general public is to conduct surveys and polls that ask a question like “What are the most important problems facing the country today?” Compiling the answers to such a question then helps identify concerns and issues that are mentioned more frequently than others. Issues that are mentioned the most can be said to have higher contextual relevance, and as a result, media coverage and policies that address those issues will garner the most attention and have the most influence in that society. Of course, responses to a question like this will also vary over time, often influenced by current events. For example, one can expect that in the aftermath of Hurricane Katrina in 2005, people would highlight concerns about infrastructure and emergency response, while cybersecurity would be emphasized as a major problem in the wake of reports about the growing number of cyberattacks against the United States by Russia, North Korea, and Iran. Responses may also vary geographically. During a severe drought, for example, Midwest farmers tend to respond differently than residents of large cities on the East and West coasts. Further, how each individual views “what is important” will vary according to their sociodemographic and educational background, personal experiences, occupation, and many other variables. But when we see issues and concerns mentioned frequently despite all these differences among the respondents’ backgrounds, we can be certain those issues have contextual relevance.
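As a concrete (and deliberately simplified) illustration of the polling approach just described, open-ended answers to a “most important problem” question can be tallied in a few lines of Python; the responses below are fabricated for the example.

```python
from collections import Counter

# Fabricated open-ended responses to "What are the most important problems facing the country today?"
responses = [
    "the economy", "health care", "the economy", "immigration",
    "health care", "the economy", "political corruption",
]

# Issues mentioned most often can be treated as having the highest contextual relevance.
ranked_issues = Counter(responses).most_common()
print(ranked_issues)
# [('the economy', 3), ('health care', 2), ('immigration', 1), ('political corruption', 1)]
```

Real survey analysis would of course involve coding free-text answers into categories and weighting the sample, but the principle is the same: frequency of mention serves as a rough proxy for contextual relevance.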

Some issues have contextual relevance for virtually everyone—like medicines and illnesses, something we saw demonstrated quite frequently during the COVID-19 global pandemic of 2020. Here, the source of contextual relevance was multidimensional. Millions of people became sick, with over 500,000 dying. Tens of millions of people lost their jobs and filed for unemployment. A lack of testing capability created confusion about the sources, means of transmission, and ways to mitigate the virus. Out of an abundance of caution, schools and churches were closed, businesses shut down, and public events and public transportation resources were reduced to a minimum. Clearly, information about COVID-19 had great contextual relevance during this period of time—socially, personally, and emotionally. Unfortunately, the culmination of all this contextual relevance created an opportunity for massive amounts of misinformation and disinformation, including a broad range of conspiracies and politically volatile debates—all against the backdrop of a hotly contested presidential election.

Research on how the American media and public view public affairs identified five main areas of heightened interest—conflicts, economics, human interest, powerlessness, and morality.51 By addressing one or more of these topics, a digital influence effort can capture the attention of the target audience. Further, influence efforts should take advantage of whatever current events are most relevant to the target audience, before that relevance declines. Given the ever-increasing landscape of issues competing for our attention, we can expect that the life span of relevance for an issue may become shorter unless it is considered uniquely important. A political scandal, a natural disaster, a major terrorist attack—dramatic events like these can remain relevant for several months, even several years. But those can be considered exceptions to the rule, compared to the myriad issues that may make headlines one week, yet be forgotten entirely weeks later. This, in turn, can fuel a sometimes volatile public agenda, as issues will rise and fall in contextual relevance according to factors beyond your control.52

So, the influencer will first need to identify ways in which their information will be seen as more important to the target than the hundreds of other issues that are competing for their attention at any given moment. A commonly used strategy for doing this is called “framing.” As Singer and Brooking explain, nearly all effective narratives conform to what social scientists call “frames,” products of particular language and culture that feel instantly and deeply familiar, and resonate among the target audience.53 Research has identified three main types of relevance framing that appear to have the strongest impact on us:54

• Social relevance: These can often involve concerns about civic duty and peer influence; things are relevant if they are deemed “important to us” as a society.
• Personal relevance: These are informed by the individual’s self-interest and avocation; things are relevant if they are deemed “important to me.”
• Emotional relevance: When things are emotionally arousing, interesting, exciting, etc., they are deemed relevant.

Each of these is used to frame information that the influencer wants to resonate among the targets of the influence effort. The effectiveness of the information “framing” is vital to the success of an influence effort. If you can make something socially, personally, and emotionally relevant to your audience, they will have a more difficult time ignoring you. According to a recent report by researchers at the University of Washington, “A frame is a way of seeing and understanding the world that helps us interpret new information. Each of us has a set of frames we use to make sense of what we see, hear, and experience. . . . Framing is the process of shaping other people’s frames, guiding how other people interpret new information.”55 Additionally, framing a question in certain ways can be a form of pre-persuasion, setting the stage upon which there are only a limited number of acceptable answers for a particular question.

If fully successful, pre-persuasion establishes “what everyone knows” and “what everyone takes for granted” (even if it shouldn’t be and should, instead, be taken as a point of discussion). By cleverly establishing how an issue is defined and discussed, however, a communicator can influence cognitive responses and obtain consent without giving the appearance that they are attempting to persuade us.56 Media interviewers, politicians, skilled debaters, and salespeople do this all the time, in order to direct your thoughts and emotions toward a perspective they hope to convince you of. And typically, the influencer does this in order to lead you toward an outcome that is more beneficial to themselves than to you. In sum, the ways in which a question or issue is framed as contextually relevant to us can influence our subsequent behavior and decisions.57

As you might expect, given what we’ve already covered in this book, there will undoubtedly be different frames at play within any politically diverse society, and these frames influence key differences in how people interpret a specific event, policy, or social problem. For example, health care has contextual relevance for virtually everyone on a social, personal, and often emotional level. However, there are widely divergent opinions about it. Doctors and hospitals are expensive everywhere, but who should pay the bill? Understanding these divergent opinions gives us clues for how to frame a question in ways that will elicit a particular kind of response. Here’s an example: In 2013, the popular television show Jimmy Kimmel Live conducted an experiment by going out on the street with video cameras and asking people which was better: Obamacare or the Affordable Care Act.58 Of course, these are one and the same: the official name of the health-care legislation (passed by Congress in 2010) is the Affordable Care Act, while the nickname “Obamacare” was often used by the media and some politicians (particularly those opposed to the health-care legislation). What was fascinating (and disturbing) about this experiment was how it revealed that many ordinary citizens were adamantly opposed to Obamacare and yet were strongly in favor of the Affordable Care Act. The underlying reason, of course, was how the question was framed: individuals would automatically condemn Obamacare when asked, simply because they were opposed to President Obama (e.g., “anything Obama must be bad”). Further, as we’ll see in chapter 5, the power of influence silos—particularly in defining in-group and out-group identities—had much to do with this. Certainly, there was some level of ignorance on display here, but for many of the respondents an understanding of a health and economics issue had been perverted by their political identity and preferences.

We now have the capability to surround ourselves exclusively with sources of information that confirm what we want to believe, and we can ignore, block, and denigrate competing sources of information that may challenge or contradict those beliefs.

Republicans can now ensure they see and hear only information favorable to the Republican point of view, while Democrats can ensure they see and hear only information that supports the Democratic point of view. The combination of cable news and social media has strengthened political partisanship and mutual animosity, as we’ll discuss later in this book. The individuals mentioned above had become convinced, based on the sources of information they preferred within their influence silos, that if they wanted to be identified as Republican they must demonstrate that they were opposed to “Obamacare.” Any factual evidence that contradicted this—for example, the fact that Obamacare is the Affordable Care Act—was not deemed relevant.

More recently, as a group of Washington researchers describe, competing frames were used to portray a group of Central American migrants trying to cross the border from Mexico into the United States. One framing portrayed these people as refugees trying to escape poverty and violence and described their coordinated movement (in the “caravan”) as a method for ensuring their safety as they traveled hundreds of miles in hopes of a better life. A competing framing portrayed the caravan as a chaotic group of foreign invaders (“including many criminals!” according to Trump59) marching toward the United States (due to weak immigration laws created by Democrats), where they would no doubt cause economic damage and perpetrate violence.60 This was the same event, but a completely different way of framing it in order to elicit the kind of emotional reaction desired by the influence effort.

Of course, contextual relevance is also largely a matter of perceptions, and perceptions can change. For example, we all have perceptions about what we find acceptable versus what we find unacceptable. Regarding the former, researcher Joseph Lehman coined the term “Overton Window” to describe how a range of publicly acceptable ideas can be seen through a “window of political possibility” and how this range of ideas can be altered.61 This window presents a menu of policy choices to politicians and their supporters—relatively safe choices are seen to be inside the window, while politically riskier choices are outside. Given that an elected politician’s primary goal is to get reelected, they will support policies that are politically acceptable and avoid the politically unacceptable ones. But if you can shift the position or size of the window, you can change what is viewed as politically possible. Thus, a common type of influence strategy involves finding ways to adjust the window by promoting ideas outside of it. Further, the more “outer fringe” the ideas you promote, the better, because in comparison other “less fringe” ideas begin to appear more reasonable or even acceptable. In other words, the wider you can push the window, the more ideas can become politically viable instead of appearing overly radical or even unthinkable. Altering contextual relevance in this way thus creates new opportunities for framing issues and influencing targets.
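The window-shifting mechanism can be pictured with a toy numeric model; this is my own illustration, not Lehman’s formulation. Positions sit on an arbitrary 0-100 policy scale, the window is the range currently treated as acceptable, and loudly promoting a position outside the window drags the nearer edge toward it, making formerly radical ideas look comparatively moderate.

```python
# Toy model of Overton Window shifting on an arbitrary 0-100 policy scale (illustrative only).
PULL = 0.25  # fraction of the remaining gap the window edge moves per round of promotion

def promote(window, position, rounds):
    """Return the window after repeatedly promoting an idea outside of it."""
    lo, hi = window
    for _ in range(rounds):
        if position > hi:
            hi += PULL * (position - hi)   # repeated exposure makes the upper edge creep outward
        elif position < lo:
            lo -= PULL * (lo - position)
    return [round(lo, 1), round(hi, 1)]

print(promote([40, 60], position=90, rounds=3))
# [40.0, 77.3] -- ideas up to roughly 77 now look "less fringe" than the one being promoted
```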

Provoking Emotions Once the influencer has determined the kinds of information that will have relevance for the target, they can begin conveying the kinds of information that support the overall influence effort’s goals. The effective influencer knows that, as Rachel Barr observes, “emotionally provocative information stands a stronger chance of lingering in our minds and being incorporated into long-term memory banks.”62 And research has shown that a highly effective way to provoke more engagement is by using hyperbolic, emotional, and negative language. Further, as Roger McNamee (an early investor in Facebook) observed, social media platform’s algorithms “give an advantage to negative messages . . . fear and anger produce a lot more engagement and sharing than joy. The result is that the algorithms favor sensational content over substance.”63 People are not logical robots, but rather they respond to incentives and are led by emotions.64 In his early writings on persuasion, Aristotle considered an understanding of the feelings of the audience to be essential. An angry person will act differently from one who is pleased. The orator must be able to direct these emotions in ways that will benefit their strategic goals. In the end, Aristotle described how to evoke emotions in an audience—anger, friendship, fear, envy, and shame—and discussed how to put such emotions to effective persuasive use.65 So, the clever digital influence entrepreneur will craft messages that elicit the kinds of emotional response that will be of direct or indirect benefit to the overall strategic objectives of the influence campaign. According to psychological research on the topic, some common themes among messages that are likely to provoke emotional responses include supremacy, injustice, vulnerability, distrust, and helplessness.66 Other message types that have been shown to induce some kind of reaction by the target include responsibility invocation, reciprocity, pleasure induction, and social comparison.67 As Pratkanis and Aronson explain in Age of Propaganda, “Effective influence controls the emotions of the target and follows a simple rule: Arouse an emotion and then offer the target a way of responding to that emotion that just happens to be the desired course of action. In such situations, the target becomes preoccupied with dealing with the emotions, complying with the request in hopes of escaping a negative emotion or maintaining a positive one.”68 One powerful influence technique of communication is called a “vivid appeal.” Vivid appeals are messages that are (1) emotionally interesting (it attracts our feelings), (2) concrete and imagery-provoking, and (3) immediate (discussing matters that are personally close to us). Vivid appeals attract attention; influencers can encourage the target to use their own imagination to create images in their mind based on the information provided by the influencer. Vivid information can make the appeal more concrete and personal; it directs and focuses our thoughts on issues and arguments

An influence campaign will garner more attention if its messages stimulate arousal, especially by being entertaining or visually appealing. This reflects the power of novelty: as Singer and Brooking note, “Content that can be readily perceived as quirky or contradictory will gain a disproportionate amount of attention.”70 Further, there are specific forms of novelty that can distract people from central issues of concern. Being overly provocative, for example, could distract an audience from the specifics (or lack thereof) offered by a candidate’s policy agenda. Incorporating dramatic images is another common strategy—it is widely understood that images can have even more emotion-provoking power than words. When you read a story that contains a phrase like “the young Syrian refugee was killed,” it has less impact than seeing a photo of a young Syrian refugee who has been killed. In order to connect with readers and viewers—as well as encourage more media coverage—an influence campaign should incorporate imagery that evokes outrage, anger, joy, fear, or some other kind of powerful emotion.71

As described in the previous chapter, image-based memes have a unique potential for influencing a target. According to researchers Alice Marwick and Rebecca Lewis, “A meme is a visual trope that proliferates across Internet spaces as it is replicated and altered by anonymous users.”72 Memes, and particularly viral memes, often involve images that have captions embedded, sometimes funny, other times provocative. If you want to include some motion in the image, you can use an animated GIF file, which is essentially a sequence of still images stitched together into one file, so that when it is viewed it’s basically like seeing a very brief (maybe a few seconds) video clip. A meme could be something that makes us laugh or makes us angry. It could be derisive and derogatory (e.g., an unflattering portrayal of a political candidate) or uplifting and inspirational. Particularly effective are memes that resonate with the social norms, perceptions, and preferences of your intended audience and facilitate the transmission and reinforcement of an idea.73 Examples include jokes, rumors, videos, or commentary about something very contemporary, like a sound bite from a speech (“read my lips,” “lock her up,” or “mission accomplished”), a political or social phenomenon (flash mobs dancing “Gangnam Style”), or even a mistake that costs the team the winning score (e.g., the Seahawks throwing an interception at the goal line at the end of Super Bowl XLIX).74 And a single image can be used to provoke several different emotional responses depending on the target.
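
As a brief aside on the mechanics described above, an animated GIF really is just a stack of still frames saved into a single file. The sketch below assumes the widely used Pillow imaging library for Python; the colors, captions, and file name are placeholders rather than anything from the text.

```python
# A minimal sketch of how an animated GIF is a sequence of still frames
# packaged into one file. Requires the Pillow library (pip install Pillow);
# colors, captions, and the output filename are placeholders.
from PIL import Image, ImageDraw

captions = ["frame one", "frame two", "frame three"]
frames = []
for i, caption in enumerate(captions):
    img = Image.new("RGB", (320, 240), color=(30 * i, 90, 160))  # simple colored background
    ImageDraw.Draw(img).text((10, 10), caption, fill=(255, 255, 255))  # embed the caption
    frames.append(img)

# Save the first frame and append the rest; each frame shows for 300 ms,
# and loop=0 makes the animation repeat indefinitely.
frames[0].save("example_meme.gif", save_all=True, append_images=frames[1:], duration=300, loop=0)
```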

In contrast, words, sentences, and paragraphs can only do so much. Carefully arranging these words in the form of poetry and song lyrics might help enhance their effects. Audio recordings—the spoken word—may have more emotional impact than the written word, because the speaker can use various inflections and tone to convey feelings. We also know that incorporating music can have an emotional arousal effect. Overall, digital influencers want to create messages that solicit some kind of response by the target, including inspiring them to like and share those messages with family and friends in order to spread and maximize their potential influence.

Further, as Pratkanis and Aronson explain, good propaganda often leaves it up to the audience to decipher the full meaning of a word or phrase.75 “Sometimes a message can be persuasive even if its arguments are not fully understood or comprehended.”76 Examples include “A kinder, gentler America,” or “Let’s make America strong again,” or “The best money can buy.” This was a key reason that Trump’s campaign slogan of “Make America Great Again” appealed to so many in 2016—exploiting natural patriotic pride in wanting America to be great. However, there were vast differences in terms of how each citizen qualified what they meant by “great.” Some interpreted the phrase to mean supporting white nationalism (it clearly had a particularly exclusive meaning for racists and white supremacists who violently attacked minorities and others). Researchers have suggested that the underlying objective of a vague, nonspecific phrase in this kind of campaign is to transfer ownership of it to the target audience. The effective influencer is one who conveys messages that the target accepts and then adopts as their own, willing to defend them against competing arguments or counternarratives. The goal is to enlist the target’s pride and ego in defending their decision to accept your message as credible. We’ll discuss this more as it relates to cognitive bias and the avoidance of cognitive dissonance later in this chapter.

Of course, some observers responded to the Trump campaign slogan by protesting that America already is great, but the campaign naturally sought to undermine that patriotic sentiment. A perceived crisis must first be manufactured and believed by the target audience, for if there is no crisis, there is no need for Trump to be the solution to that crisis. And so millions of Americans were drawn to his populism, harmful trade sanctions, draconian anti-immigration rhetoric, and all the rest of it as necessary in order to make things better—with little or no reflection on the possibility that (a) they have been misled about how bad things are and the reasons for those things being bad; (b) whether better alternatives should be pursued beyond that which the influencer is promoting; and (c) whether their preferred remedy for making things better is actually what the candidate meant in the first place.

In sum, provoking emotions is a proven way to get the attention of your target. Further, one of our strongly felt emotions—one that motivates a broad range of positive and negative behavior—is fear. Knowing this, many influence strategies have incorporated various means of manipulating fears. For example, fear appeals are quite common during democratic elections.

Of course, elections are a natural source of heightened uncertainty, and in times of greater uncertainty, it is easier for influencers to spread disinformation and provoke emotional responses of all kinds. But in addition to provoking fear (e.g., “If you don’t elect me, the future will be much worse!”), we have also seen political candidates try to create doubt about what is or is not true. Lies are all too common, especially lies that serve the interests of those seeking to obtain (or remain in) positions of power. The influence strategy pursued here is to undermine both the capacity and the desire to separate truth from falsehood, to drown out facts and truth with salacious, misleading, and downright false information that confirms the target’s fears and prejudices and secures their political support in the process. Essentially, it involves exploiting our uncertainty as a means for manipulating fear.

Manipulating Uncertainty

Uncertainty is an inherent part of human existence. We are mortal beings, surrounded by uncertainty all the time. We ask ourselves whether certain foods are safe to eat, whether we will keep our jobs amid a severe economic downturn, or whether that dull pain in our back is just a pulled muscle or something more serious. As human beings, we are plagued by millions of questions about the world and our place within it. A rational human being discovers that the future is unknowable, and no matter what we do today, there can never be absolute certainty about what will happen tomorrow. We learn about cause and effect; yet we also learn about the many exceptions to the idea of a cause-and-effect relationship. We learn that some things are simply uncontrollable, particularly regarding the natural world—the daily weather, for example, or natural disasters like earthquakes and hurricanes.

Uncertainty is also inherently uncomfortable. We are psychologically hardwired to want certainty in our lives, and being uncertain about something implies some amount of risk is possible. Further, it is widely understood that doubt, uncertainty, and fear create significant opportunities for influence and persuasion. Digital influence strategies for manipulating uncertainty can be placed within two general categories: increasing (or fabricating) uncertainty and reducing uncertainty (typically by providing some explanation for complex issues). Regarding the former, increasing uncertainty creates opportunities for a variety of other influence strategies, because uncertainty creates a lot of discomfort for everyone, and we are constantly looking for ways to minimize it. This creates opportunities for malicious influencers to provide disinformation that responds to uncertainty. Increasing uncertainty is a prominent goal of influence strategies, because it can lead a person to question what they believe and begin to consider that anything—including false information—could be true.

A prime example of this is called the “gaslighting” strategy, through which the influencer not only tries to deceive the target into believing things that aren’t true, but they also show concern about the fact that others are questioning or attacking the target because of those false beliefs. The term derives from a 1930s play (and subsequent film in the 1940s) about a woman whose husband slowly manipulates her into believing that she is going insane. He secretly orchestrates a variety of mysterious experiences (including remotely raising and lowering the lights) and then eventually convinces her to voluntarily check into an asylum.

Gaslighting is one of several tactics that involve both deception and emotional provocation. It often involves a relationship initially based on trust or respect, which devolves into a series of increasingly damaging exchanges between deceiver and victim. The deceiver will typically try to convince the victim that everyone else is lying to them and cause them to question their own beliefs or decisions. Meanwhile, the deceiver is also repeatedly lying to the victim and will criticize the victim for raising any questions about those lies, using derogatory terms like “hysterical” or “insane” (gaslighters are often patronizing and downright ruthless). Then they may suddenly switch from criticizing the victim to supporting and praising the victim in order to provoke confusion and uncertainty about what the victim should believe. The deceiver may say something or promise to do something and then later deny they ever said that or made that promise.

The most effective gaslighting is done slowly over an extended period of time. According to Robert Walker, this will “keep the victims unaware of the process to convince the targets that false information is factual, which then slowly erodes the targets’ grip on reality and thus builds reliance on the purveyors of the false information.”77 In the end, the influencer wants the victim to distrust their own memory or perceptions and to doubt their own judgment or understanding of reality, which will then lead them to make a choice or behave in some way that will benefit the deceiver’s strategic objectives.

Authoritarian figures (cult leaders, dictators, domestic abusers, etc.) use the tactic of gaslighting to raise the target’s level of confusion and uncertainty and then provide the illusion that they have access to privileged information (“what’s really going on”) due to a position of relative authority. Based on this perceived authority, the influencer can then blatantly lie to their target with impunity. They can lie to their target repeatedly and then also deny having said anything at all—even when the target believes they have proof that the person did indeed say it. Eventually, gaslighting will result in the victim questioning his or her own sense of reality; they also accept the perpetrator’s false remedy because they become accustomed to questioning their own grasp of reality. Due to the powerful urge to avoid cognitive dissonance—loosely defined as being faced with information that conflicts with what we believe—the target can choose to ignore or reject the contradictory information, or change what they believe (which happens far less often). In this instance, the target will simply convince themselves that someone they respect and trust would not behave in any sort of unhinged manner, and thus, they will turn their focus toward internal doubts about whether they misunderstood or heard something incorrectly.

Meanwhile, another form of manipulation in this category involves creating uncertainty where none existed before. Similar to gaslighting, we see a two-part strategy here: first, create uncertainty and then provide a false remedy to mitigate that uncomfortable uncertainty. The underlying strategy is to undermine trust in anyone other than the malicious influencer. Higher levels of uncertainty allow the influencer to present the “alternative facts” they want the target to believe.

Manufacturing doubt and uncertainty is the approach that earned the nickname “the tobacco strategy”—a type of influence war launched by the tobacco industry seeking to raise uncertainty about the evidence linking cigarette smoking with various illnesses, including cancer.78 In response to the overwhelming scientific evidence, the tobacco industry funded a multi-decade, well-resourced effort to confuse the public about the dangers of smoking. “Doubt is our product,” proclaimed a 1969 memo written by a tobacco industry executive, “since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.”79 Essentially, their strategy involved identifying a handful of seemingly reputable professionals who were willing to refute established science, or argue that more research is needed, and then publishing their arguments as broadly as possible. This was not a new strategy for influencing mass audiences to reject scientific fact. For example, the Nazis argued that there was no singular term “science,” but rather, there was “German Science” and “Jewish Science” among others. As Kakutani notes, this fragmenting of truth and facts according to the origin of the scientists then leads to an obvious conclusion: some scientific truths are superior to others.80

The tobacco industry faced quite a challenge during this time period. In 1957, the U.S. Public Health Service had concluded that smoking was “the principal etiological factor in the increased incidence of lung cancer.”81 In 1959, leading researchers had declared in the peer-reviewed scientific literature that the evidence linking cigarettes and cancer was “beyond dispute.”82 That same year, the American Cancer Society had issued a formal statement declaring that “cigarette smoking is the major causative factor in lung cancer.”83 By 1967, the U.S. Surgeon General reviewed over 2,000 scientific studies that all supported three conclusions: “One, smokers lived sicker and died sooner than their nonsmoking counterparts. Two, a substantial portion of these early deaths would not have occurred if these people had never smoked. Three, were it not for smoking ‘practically none’ of the early deaths from lung cancer would have occurred. Smoking killed people. It was as simple as that.”84

As Naomi Oreskes and Erik Conway explain in their groundbreaking book Merchants of Doubt, the tobacco industry responded to these reports by spending billions on fabricating the impression of uncertainty about the scientific evidence that linked smoking and illnesses. Their strategy involved attacking the legitimacy of scientists and institutions behind the research that produced these inconvenient facts. During the 1950s, they created the Tobacco Industry Research Council (TIRC) in order to “cast doubt on scientific consensus that smoking cigarettes causes cancer, to convince the media that there were two sides to the story about the risks of tobacco and that each side should be considered with equal weight.”85 They paid for massive public relations campaigns to shape public opinion, hiring famous athletes and Hollywood celebrities to endorse their message and sponsoring full-page newspaper ads across the country claiming that “no conclusive link” between cigarettes and cancer had been found.86 They also funded biased research projects and pro-smoking scientists. When Congress held hearings in 1965 on bills for requiring health warnings on tobacco packages and advertisements, the tobacco industry responded with “a parade of dissenting doctors” and a “cancer specialist [who warned] against going off ‘half-cocked’ in the controversy.”87 As a Rand Corporation report explains, “The goal was to frame the issue and shape the narrative, controlling how at least a significant minority of the population understood the issue.”88

Even if most people hearing the tobacco industry’s counternarrative rejected their argument (or rather their lie) that the scientific evidence against smoking was inconclusive, some individuals—as noted in earlier chapters of this book—can believe anything. The goal, therefore, in pursuing the strategy of “doubt is our product” is to convince just enough people that there might be something to reconsider and that there might be an alternative view of the scientific evidence. Just because something is not true does not mean it can’t be believed, as we know.

For nearly 50 years, the tobacco industry sought to have scientists raise questions about whether we could (or should) be certain about the research findings demonstrating the links between smoking and illness. They realized “that you could use normal scientific uncertainty to undermine the status of actual scientific knowledge.”89 They consistently repeated the mantra “no proof” for decades, including during the 1990s when attention turned to the impact of secondhand smoke on public health. As Oreskes and Conway explain, this massive industry campaign was designed to confuse the public in order for the industry to “defend itself when the vast majority of independent experts agreed that tobacco was harmful, and their own documents showed that they knew this”90—in fact, “it was part of a criminal conspiracy to commit fraud.”91

The strategy of manufacturing doubt and uncertainty has now been adopted by right-wing think tanks, the fossil fuel industry, and other corporate interests who are intent on discrediting science about the reality of climate change, the hazards of asbestos, the impacts of secondhand smoke or acid rain, and even the scientific recommendations for wearing protective masks during a global pandemic. In particular, as Judith Warner notes, “questioning accepted fact, revealing the myths and politics behind established certainties . . . [and] attacking science became a sport of the radical right.”92 The strategy involves enlisting a number of individuals with adequately impressive credentials who will try to refute (or at least question) whatever facts or truths people have come to accept.93 It does not matter if their scientific expertise is in a different field, as long as they have impressive enough credentials.

Intentional efforts to foment uncertainty—like gaslighting or the so-called “tobacco strategy”—have largely had their desired effect. Increased insecurity and distrust lead to anger, frustration, and a refusal to validate other perspectives. Influence strategies that increase uncertainty provide particular advantages to deceivers, because the target begins to consider that a falsehood could be true (even if it is clearly not). This is how people come to accept baseless factoids and conspiracy theories.

The Lure and Impact of Conspiracy Theories

Increasing uncertainty is not only about getting individuals to question scientific facts—it is about providing alternative narratives, even conspiracy theories, as a potentially attractive means for the target to lower their uncomfortable uncertainty. Further, there are obvious benefits from pursuing both types of strategies in tandem: the more uncertain we are, the more receptive we may be to the alternate narrative (the influencer’s preferred interpretation of events). The more we become uncertain about whether there is actually a truthful answer to our questions, the more likely we are to accept a variety of information even if the source is of questionable credibility. In this way, conspiracy theories—attempts to convince you of something that is not, or cannot be, proven—thrive on uncertainty. The purveyors of rumors and conspiracies naturally claim to base their knowledge on “secret information” that inherently cannot be verified or factually discredited. Further, spreading these kinds of rumors, conspiracies, and factoids to others can give some people a false sense of superiority, a feeling that they “know” some kind of secret information that others are currently “in the dark” about; an ego-driven rush of adrenaline serves our psychological need to feel “right.”94 And most critically—as Pratkanis and Aronson explain—factoids, rumors, and conspiracies swirl around us in a form of pre-persuasion; they influence “social reality . . . as bits and pieces that are used to construct our picture of the world.”95 When we don’t know the answers to important, contextually relevant questions, it is human nature to search for those answers. We are frequently seeking a form of reassurance in order to reduce the terribly uncomfortable uncertainty in our lives. But this in turn provides an advantage to information aggressors, who may be able to convince us of many kinds of disinformation.

A core purpose of conspiracy theorists is to manipulate uncertainty about something by encouraging alternate—and often wildly controversial—narratives. Vaccines are bad for you! The CIA created the crack epidemic and is covering up the fact that the Earth is actually flat! Aliens built the pyramids; the CIA is covering that up, too! NASA is a tool of U.S. imperialism! A common tactic in promoting “alternative facts” is to claim various conspiracies have suppressed the truth of the matter. For example, during a discussion with a caller to The Rush Limbaugh Show in April 2015, Limbaugh denied secondhand smoke was a danger. “That is a myth. That has been disproven at the World Health Organization and the report was suppressed . . . . it will not make you sick, and it will not kill you,” he claimed.96 Using a classic conspiracy theory model, his claim to the truth was supported by an additional claim that it can’t be proven because the evidence for the truth is being suppressed by some entity. Meanwhile, according to the Centers for Disease Control and Prevention, approximately 2,500,000 nonsmokers have died from health problems caused by exposure to secondhand smoke since 1964.97

Efforts to replace uncertainty with conspiratorial explanations of complex phenomena have been part of human history forever. In addition to our desperate search for certainty in a confusing and often incomprehensible world, being part of a community of “true believers” in a conspiracy (or a charismatic cult leader, a terrorist group’s ideology, or many other things) helps us feel special, one of the “enlightened few” or the “chosen ones.” As Tom Nichols explains, conspiracy theories “appeal to a strong streak of narcissism: there are people who would choose to believe in complicated nonsense rather than accept that their own circumstances are incomprehensible, the result of issues beyond their intellectual capacity to understand, or even their own fault.”98 Since the best predictor of believing in one conspiracy is that you already believe in another,99 once an initial lie is accepted on the basis that it cannot be disproven the liar can proceed to make other false claims with impunity, like “crime is at record rates” or that Russian interference in 2016 is just “a hoax.” Those who disagree with your political positions or policies can simply be dismissed as unenlightened “sheep” not worthy of your attention.

Perhaps the most prominent conspiracy theory in recent years is known as “QAnon” or just “Q,” a reference to an anonymous individual who has been posting ominous predictions, cryptic riddles, and often provocative messages online since October 2017. As a conspiracy theory, QAnon is complex and confusing, but an overview is the following: Q is an intelligence or military insider with proof that corrupt world leaders are secretly torturing children all over the world; the malefactors are embedded in the deep state; Donald Trump is working tirelessly to thwart them.100 It’s absolute garbage, and disseminators of this nonsense have been deplatformed by Facebook, Twitter, and YouTube. But for hard-core supporters of Trump, the QAnon conspiracy theory is attractive because it provides a narrative that confirms their pro-Trump bias despite all the harm he has caused the American people.

This reflects how influence strategies can also focus on increasing certainty about something you want to believe and on provoking emotional and behavioral responses from the target that benefit the influencer. Essentially, too much certainty can be just as harmful as too much uncertainty. Weak convictions can be called into question, manipulated, and altered, while strong convictions can be strengthened despite evidence that contradicts those convictions. When the message of an influence effort aligns with prior beliefs, convictions, and prejudices—things the target is already comfortably certain about—the message is likely to be believed even when it’s entirely untrue.

Strategies for Manipulating Certainty

We like being certain about what we believe, and we want more certainty in our lives because we like the comfort and sense of security it brings. For many of us, our beliefs are sacred, things to be defended at all costs. As Cailin O’Connor and James Owen Weatherall observed in The Misinformation Age, “We generally expect our beliefs to conform with and be supported by the available evidence.”101 However, when evidence is unavailable, or when the available evidence points in a different direction, we have a tendency to ignore the evidence and continue to believe what we want to believe. Further, as Denise Winn explains in The Manipulated Mind, “the framework of assumptions we each construct around our world is very precious—it acts as a kind of road map. If it is found that a particular set of assumptions does not correspond to reality, a disabling emotional upheaval is experienced. The assumptions have to be changed, and that is not easy if they are deeply entrenched.”102

Our pride in what we think we know and what we believe to be true is a form of hubris that makes us vulnerable to influence. In The Social Animal, psychologist Elliot Aronson notes that “often beliefs that we hold are never called into question; when they are not, it is relatively easy for us to lose sight of why we hold them.”103 When we view something as highly relevant, and we feel we have a high level of certainty in what we know (or think we know) about it, we tend to seek (or require) much less information about it. For example, individuals may have all the information that they desire about an issue or a political candidate. This low degree of uncertainty makes them fairly confident in the decisions they make as a result.

In fact, we may be so confident in our convictions that we ignore new information and reject any alternative or competing interpretations of information altogether. Further, as described earlier, humans tend to find cognitive dissonance deeply troubling and adopt a range of strategies to avoid it—even if it means increasing our level of support for something that an overwhelming amount of evidence shows to be untrue.

When processing new information, our brain often looks for connections with what we already know as a way of simplifying the navigation of our complex neural pathways. As noted previously in this chapter, we tend to be cognitive misers, forever trying to conserve our cognitive energy.104 We prefer the peripheral route over the central route of information processing. So, given our finite ability to process information, we often adopt strategies that simplify complex problems and make information processing easier. However, this introduces a variety of tendencies called “cognitive biases”—mental shortcuts that influence our decision-making.105 Some of these biases may lead us to make assumptions based on stereotypes or personal memories. Other biases may help us see patterns in data, or notice things that we have seen before, or identify particular flaws in others more easily than we recognize the same flaws in ourselves. Cognitive biases can lead us to misinterpret information or even draw the wrong conclusions about a situation.

A particularly powerful form of cognitive bias is called “confirmation bias,” which happens when you interpret new evidence as confirmation of your existing beliefs or theories. When we have an idea about something, or especially a strongly held belief about it, we tend to look for confirmation, some kind of supporting evidence to help us feel validated in our belief. We certainly won’t be looking for information that seems contrary or contradictory, and if we see such information, we may choose to ignore it or dismiss it as invalid. Among academic scholars and scientists, there is a term for this kind of tendency: “cherry picking.” This term refers to the unacceptable practice of looking for data that supports your hypothesis and disregarding data that does not support it. The scientific method requires objective, neutral gathering and analysis of all available data, which is then used to confirm or reject a hypothesis. Thousands of important scientific discoveries would never have happened if the researchers did not hold true to the basic tenets of the scientific method.

As Elizabeth Kolbert observes, “Assorted theories have been advanced to explain confirmation bias—why people rush to embrace information that supports their beliefs while rejecting information that disputes them; that first impressions are difficult to dislodge, that there’s a primitive instinct to defend one’s turf, that people tend to have emotional rather than intellectual responses to being challenged and are loath to carefully examine evidence.”106 Confirmation bias also holds a huge amount of appeal for many people. Having confidence in what you believe is far more appealing than confronting the fear and ignorance imposed by the unknown or the uncertain.107
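
The “cherry picking” problem described above can be illustrated with a few lines of arithmetic. The numbers below are invented for this sketch, but they show how quietly discarding unfavorable observations lets an analyst “confirm” nearly any hypothesis.

```python
# Invented data illustrating cherry picking: keeping only the observations
# that favor a hypothesis produces a very different answer than using them all.
measurements = [1.2, 0.8, -0.9, 1.5, -1.3, 0.1, -1.1, 1.4, -1.6, -0.1]

full_mean = sum(measurements) / len(measurements)

favorable = [x for x in measurements if x > 0]            # discard anything "inconvenient"
cherry_picked_mean = sum(favorable) / len(favorable)

print(f"All data:      mean = {full_mean:+.2f}")          # +0.00: no real effect
print(f"Cherry-picked: mean = {cherry_picked_mean:+.2f}") # +1.00: an apparently strong effect
```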

However, confirmation bias can also lead to the problem of having too much certainty. In fact, some people have way, way too much certainty in their own convictions and instincts. As social psychologist Bert H. Hodges explains, “There are clear cases where people trust themselves too much, and others far too little.”108 For example, perhaps the one person in the world who is least plagued with uncertainty and doubt is Donald Trump, who famously declared on the campaign trail in 2016: “My primary consultant is myself and I have a good instinct for this stuff, I’m speaking with myself, number one, because I have a very good brain.”109 Similarly, when asked in the summer of 2016 whether he read much, he replied, “I never have. I’m always busy doing a lot.”110 Of course, the obvious problems associated with overconfidence in one’s own knowledge and abilities have been well documented throughout human history, even dramatized in ancient Greek tragedies, so I’ll assume there’s no reason to discuss the issue of hubris further here.

The critical takeaway here is that, as Lee McIntyre notes: “we have a built-in cognitive bias to agree with what others around us believe, even if the evidence before our eyes tells us otherwise. . . . If we are already motivated to want to believe certain things, it doesn’t take much to tip us over to believing them, especially if others we care about already do so. . . . Our inherent cognitive biases make us ripe for manipulation and exploitation by those who have an agenda to push, especially if they can discredit all other sources of information.”111

If you are predisposed to be against certain government policies, watching a news program in which convincing or compelling arguments are made in support of those policies begins to make you uncomfortable. As you become slightly less confident in your opposition to those policies, you begin to question your judgment, and uncertainty rises. At this point, it is most likely that you will simply change the station and watch something else. A study by Lance Canon found that, as one’s confidence is brought into question, a person becomes less prone to listen to arguments against his or her beliefs.112 Influence efforts that exploit these aspects of human nature are not trying to convince a large group of people about something they didn’t already believe. Rather, their goal is to make people defend what they already believe. Once the influencer understands the beliefs and values of a particular target, they can provoke emotional responses (including fear and anger about a perceived threat to those beliefs) in order to achieve the strategic goal of increasing divisions within the society. This also relates to the previous chapter’s discussion on the impact of postmodernism. If anything and everything is open to interpretation, confirmation bias allows you—even encourages you—to interpret facts (or lack thereof) any way you like. When truth is seen only as an interpretive commodity, this allows the strategies and tactics of digital influence warfare to become especially powerful. The influencer can obfuscate and distort facts that aren’t aligned with their strategic goals, and then shape the target audience’s perceptions at will—and the audience will embrace those facts just as long as they conform to their predetermined values and beliefs.

Unfortunately, human history is replete with examples of people believing in something that was proven untrue. Even when confronted with factual evidence that undermines that belief, some people still chose to believe in the lie. Years ago, people believed the Earth was flat—amazingly, there are still some who claim this today. As a Rand report on “truth decay” explains: “The ways in which human beings process information and make decisions cause people to look for opinions and analysis that confirm preexisting beliefs, more heavily weight personal experience over data and facts, and rely on mental shortcuts and the opinions of others in the same social networks. These tendencies contribute to the blurring of the line between opinion and fact and, in some cases, allow opinion to subsume fact.”113 And once we have arrived at a sense of certainty about something, we will defend it with increasing ferocity. We have made a personal investment in this belief, so any challenges to it are seen as a personal attack. Admitting that you have made a mistake, that you have been fooled or deceived, is not easy for many people, especially those with fragile egos. In a sense, our own psychological makeup explains why there are a surprising number of people who adamantly believe conspiracy theories. Even more surprising, there are some Americans who have convinced themselves that a certain political leader is blessed by God, despite a mountain of evidence revealing the absolute moral bankruptcy of that politician. In truth, many kinds of beliefs are quite troublesome, but as the writer C.S. Lewis once noted, “A belief in invisible cats cannot be logically disproved.”114

Exploiting Our Reliance on Group Identity and Social Proof

As we navigate a contested terrain of issues vying for our attention, using perceived relevance as one of our core navigational tools, our social identity and our relationships with others frame our perception of reality and provide contextual relevance for processing information. As Gaffney and Hogg observe, “Sitting at the heart of social influence is the relationship of the influencer to the target of influence.”115 The nature of our relationships with others in society influences the kinds of speech and behavior we feel are acceptable and what is unacceptable. Because we generally want to avoid public condemnation or humiliation, we actively look at the behavior of others when determining the appropriate response to a given situation, something we often refer to as social validation, or social proof.116

Research by Cialdini and others has also found that we intuitively seek this information because the actions of others provide a good indication of what is likely to be approved and effective conduct in a given situation117—something particularly important when managing uncertainty. Researchers have also found that people who depend more on others for guidance are more susceptible to influence than those who depend less on others.118 In fact, the ability of a group to exert influence on you depends on how strongly you identify as a member of that group.119 Most importantly, when an individual’s sense of self is heavily influenced by their concerns over connections to members of their in-group, conformity with the narrative proposed by key members of that in-group will be seen as in their best interests.

As noted earlier, the rules of appropriate behavior have contextual relevance particularly because we tend to want people to like us. In his book The Need to Be Liked, Roger Covin argues that this is a fundamental human need because (1) the brain and body are designed to acquire it and (2) not fulfilling this need has negative effects on the person. The primary function of this need is to “ensure that we form relationships with other people.” There is a clear evolutionary reason that being liked would have mattered for our ancestors: those who were able to form relationships and work together were more likely to survive.120 As Jack Schafer explains, “Human beings are social animals. As a species we are hardwired to seek out others. This desire is rooted in our primitive beginnings, when togetherness gave us the best chance to move up the food chain as we emerged from our caves and struggled for survival in a hostile and unforgiving world.”121

Information about what people like and don’t like is thus contextually relevant, and we see this type of information in many forms. For example, product reviews are a form of social proof, especially on websites like Amazon.com or Yelp. None of us want to consciously make bad choices, particularly when spending our money on products and services, so we look to others for some sort of confirmation about whether something might be a good purchase. Consumers are significantly more likely to buy something when they see that others have rated it highly (the more reviews, the better), which is why all the major online retailers today offer and encourage product reviews. Online reviews can be so influential that some companies now follow up your purchase with a free offer if you post a review of their product at the website where you purchased it (e.g., Amazon.com).

Research examining Facebook user activity found that when members of a person’s social network “like” a product, other members of that network are more likely to click on an advertisement for that product. Open-access review sites such as Yelp or TripAdvisor are popular for the same reason. Before we go to a restaurant or stay at a hotel—particularly in a city we’ve never visited before—we want to reduce the uncertainty that our choice will be an unpleasant one. So we will look to others for their opinions.
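
One way to see why “the more reviews, the better” matters is the kind of weighted average that review and retail sites are often described as using, in which an item’s own score is blended with a sitewide average until it accumulates enough ratings. The formula, parameter names, and numbers below are illustrative assumptions rather than any particular retailer’s actual method.

```python
# Illustrative weighted ("Bayesian-style") rating: items with few reviews are
# pulled toward the sitewide average, so manufactured social proof tends to
# come in volume. The parameters here are invented for this sketch.
def weighted_rating(avg_rating: float, num_reviews: int,
                    site_avg: float = 3.8, prior_weight: int = 50) -> float:
    """Blend an item's own average with the sitewide average.

    prior_weight acts like 50 phantom reviews at the sitewide average, so a
    handful of five-star ratings barely registers, while hundreds of them do.
    """
    total = num_reviews + prior_weight
    return (num_reviews / total) * avg_rating + (prior_weight / total) * site_avg

print(weighted_rating(5.0, 3))     # about 3.87: three perfect reviews barely move the score
print(weighted_rating(4.6, 800))   # about 4.55: a large volume of reviews dominates
```

In a scheme like this, a handful of glowing reviews barely moves an item’s score, which is precisely why fabricated social proof is usually produced at scale.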

Further, as noted earlier in this chapter, when information is received from someone who appears to have the same in-group identity as you, we are likely to view it as credible and potentially more persuasive. And more importantly, if the recommendation comes directly from a close friend or family member, this has considerably more influence on our decisions. For example, if they recommend a particular movie or Broadway show, it increases the likelihood that we would consider buying a ticket to see it. Again, it’s all about social validation, and the stronger your connection with the individual, the more likely they will influence your purchase and provide their approval afterwards.

However, because we look for social proof regarding contextually relevant information, we run the risk of being influenced improperly. According to Philip Howard, Director of Oxford University’s Internet Institute, “Social validation from a few humans can result in a cascade of misinformation,” particularly because this kind of “validation can drive social media algorithms to treat the content as socially valuable, distributing it even more widely.”122 Our quest to manage uncertainty often makes us vulnerable to manipulation through confirmation bias and falsified social proof.123 Worse, our desire for social validation can form the foundations for influence silos (which we’ll discuss in chapter 5), especially when using new social media tools to manufacture credibility. For example, having hundreds of thousands of followers on a person’s social media account gives them the appearance of social proof, a confirmation of perceived credibility (even if most of those followers are actually fake accounts). This in turn can convince some people that what the person says has importance, while at the same time downplaying the relevance of information provided by someone with only a few dozen followers.

Social proof is also a key aspect of how we try to manage another prominent type of uncertainty, called “fear of missing out” (FOMO). Essentially, a target with higher levels of FOMO will be more susceptible to an influence effort. It will probably come as no surprise to you that virtually everyone in the world feels at least some sort of psychological dissatisfaction. Researchers have found that many people indicate not being psychologically satisfied in the areas of personal autonomy, relatedness, and competence. These areas, in turn, are related to higher levels of FOMO. If you feel like you lack independence, closeness to other people, and general capability, the chances are higher that you will often look for—and find—information about other people’s lives that you then compare to your own (favorably or otherwise). Research has found that FOMO is primarily experienced by young people, and by young men more than young women.124

Marketers know very well the power of FOMO. They use this psychological tool in a rather manipulative way to convince audiences that failure to do “x” or buy “y” will result in missing out on a chance for happiness. There is also ample research that explains why “scarcity sells”—clever marketing often includes convincing the prospective buyer that there are “only a few left on the shelves, buy now before they’re all gone!” The goal of this marketing ploy is to make the item appear virtually unavailable and therefore exotic and special, something to covet even more.

Meanwhile, social media companies purposefully try to manufacture and increase FOMO in order to keep you more engaged on their social media platform. The basic principle here is to ensure users constantly want to log in and see what is going on, what people are saying, etc. As noted earlier, this is a core part of the attention economy. The more people log in to the social media platforms, and the more time they spend there, the more advertising revenues the social media platforms can generate. The information feed presented on your screen is constantly updated 24 hours a day, 7 days a week, so if you haven’t logged in to the platform in a while it is very likely you have missed a lot of information posted to the platform. In order to better capitalize on FOMO, the social media platforms will even send you “helpful” push notifications (emails and text messages) whenever something of interest (or potentially of interest) was posted while you were away from your social media feed.

So, if social media companies are set up specifically to exacerbate FOMO and make you keep returning to log in to the platform (or else you will miss what your connections are saying, doing, sharing), how do digital influence warriors use this to their advantage? To begin with, they will use hashtag flooding to manufacture the perception that something is trending—an indicator of popularity—in order to draw attention to their information. When thousands of user accounts are all mentioning, discussing, debating, or sharing opinions about the same topic and tagging it with the same hashtag, the social media platform can amplify and elevate that topic’s importance in its “What’s Trending Now” list. The sense of immediacy suggested by that list also contributes to FOMO—the individual sees that other people are focused on a particular topic, so they feel left out unless they also join in and focus on that topic as well. And using teasers or hooks (e.g., “I can’t believe he just said this!” or “This is the funniest thing I’ve ever seen!”) is another means of using FOMO to entice people to click on your links. This kind of “clickbait” is described further in chapter 3.
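
The mechanics that hashtag flooding exploits can be sketched in a few lines. If a “trending” list is built, roughly speaking, by counting how often each hashtag appears within a recent time window, then a few thousand coordinated accounts posting the same tag can crowd out organically popular topics. The windowing and counting below are a deliberately simplified stand-in for whatever proprietary signals a real platform uses.

```python
# A deliberately simplified "trending" calculation: count hashtag mentions in
# a recent time window and surface the most frequent tags. Real platforms use
# proprietary signals, but this counting step is what flooding exploits.
from collections import Counter
from datetime import datetime, timedelta

def trending(posts, now, window_minutes=60, top_n=5):
    """posts: iterable of (timestamp, [hashtags]) tuples."""
    cutoff = now - timedelta(minutes=window_minutes)
    counts = Counter()
    for timestamp, hashtags in posts:
        if timestamp >= cutoff:
            counts.update(hashtags)
    return counts.most_common(top_n)

now = datetime(2020, 1, 1, 12, 0)
posts = [(now - timedelta(minutes=5), ["#localnews"])] * 300          # organic interest
posts += [(now - timedelta(minutes=2), ["#manufacturedtag"])] * 5000  # coordinated flood
print(trending(posts, now))  # the flooded tag tops the list
```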

Commitment and Consistency

Building on the target’s sense of certainty (in what they believe, who they trust, etc.), an influencer often wants the target to make a commitment of some kind, usually to the direct advantage of the influencer. One way to secure that kind of commitment is to manipulate the target’s self-perception about consistency. In his book Influence: The Psychology of Persuasion, Cialdini refers to a range of commitment and consistency traps, where the influencer makes use of the fact that we prefer to appear consistent to ourselves.125 Generally speaking, societies value individuals who do what they say they are going to do, while those who say they will do something and then forget or refuse to do it are not as well respected. An influencer can take advantage of this in at least a couple of different ways.

The first is referred to by scholars as the “door in the face” technique and unfolds in two parts. First, the influencer will make a large request that the respondent will most likely turn down, much like a metaphorical slamming of a door in the persuader’s face. The respondent is then more likely to agree to a second, more reasonable request than if that same request had been made in isolation.126 For example, the political campaign may first ask for a $1,000 donation, but when that request is turned down, it will come back with something like, “Okay, but if you believe in our candidate, could you help us out with just $100?” This is also referred to sometimes as the contrast principle (providing different messages, a process during which the target will view the second message as it relates to the first). If you’ve bought a car anytime recently, this will sound familiar: you are first shown the manufacturer’s suggested retail price (MSRP) and then shown the dealer’s price, “a special price, just for you, today only.” In the end, this type of messaging helps the influencer to get the target to make a commitment that achieves the influencer’s goal.

Another approach is more common and is referred to as the “foot in the door” technique, an influencing strategy of escalating commitment.127 Here, individuals are asked to commit themselves in a small way, and then the likelihood that they will commit themselves further in that direction is increased over time in order to remain consistent with the first commitment. Escalating the target’s commitment to do something can be facilitated by providing opportunities for participation that take minimal effort. Here’s an example: it is early 2016 and you have registered to vote for the Republican Party. You place a sign on your lawn, begin wearing a hat with the Republican candidate’s name or slogan on it, and display a bumper sticker declaring your support for that candidate. You may even attend a Republican political rally. Any of these actions means you are far more likely than others to vote for the Republican candidate on Election Day in November. This holds true even if a variety of scandals (e.g., financial improprieties or an “I grab women by their p—” audio recording) and blatant falsehoods are brought to light in between your initial purchase of the bumper sticker and Election Day. Once we have been convinced of a particular direction, even just a little, our self-image often requires us to do whatever it takes to justify continuing in that direction. Having done the small favor creates pressure to agree to do the larger favor; in effect, we comply with the large request to be consistent with our earlier commitment.128

In the world of digital influence warfare, this translates to similar kinds of behavior manipulation techniques. Once you have followed, shared, liked, or commented on a certain account or message, social media platforms are primed to use that information to suggest other accounts to follow and to show you other messages similar to those for which you have expressed an affinity. The same user experience takes place in online shopping, particularly at the Amazon.com website, where a sophisticated system uses algorithms and data about your recent purchases to present you with suggestions about other items you may be persuaded to purchase next. In both cases, the goal is to keep you engaged on the social media platform or to keep you purchasing more and more.

The digital influence aggressor can use this information to their advantage. By tracking your online behavior—and the digital evidence of your likes and dislikes, personal values, in-group identity, and so forth—they can develop a pre-persuasion profile of you. Once they know what little commitments you have already made, it becomes easier to identify what sorts of larger commitments you would be ready to make in order to be consistent with your earlier ones. If you have at one point expressed support for the statement, opinion, or platform of a political candidate, you should expect to receive further information about how to support that candidate by purchasing a bumper sticker or T-shirt and attending meetings or campaign rallies planned for your region. Because you made the earlier commitment, the assumption will now be made that you will be more likely to be persuaded by a request to make a more significant commitment in the future.

These are the “commitment and consistency traps” that Cialdini refers to in his analysis.129 If we have agreed to do something small and can be persuaded to at least consider doing something greater that is consistent with the previous action, our internal drive combined with social pressures will often compel us to follow through with that greater commitment. As Pratkanis and Aronson explain, “Commitment can be self-perpetuating, resulting in an escalating commitment to an often failing course of action. Once a small commitment is made, it sets the stage for ever-increasing commitments. The original behavior needs to be justified, so attitudes are changed; this change in attitudes influences future decisions and behavior. The result is a seemingly irrational commitment to a poor business plan, a purchase that makes no sense, a war that has no realistic objectives, or an arms race gone out of control.”130

We see a lot of this in our social media interactions today, for example, the many “like and retweet” or “please comment” appeals seeking to provoke our engagement and to solicit our commitment to a position on something. A typical approach was seen in early 2020 during the impeachment proceedings against Trump. First, you would be asked to respond to an online poll: “Do you agree that President Trump has done nothing wrong?” Once you have indicated your agreement, you are then invited to join the Official Impeachment Defense Task Force (“All you need to do is DONATE NOW!”).
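
As a rough illustration of the “suggested for you” logic mentioned above, a recommender can be approximated by counting which items have appeared together in past purchase histories and surfacing the most frequent companions. The purchase data and item names here are invented, and real recommendation systems are far more sophisticated than this sketch.

```python
# Invented purchase histories illustrating simple item-to-item co-occurrence:
# recommend whatever has most often been bought alongside a given item.
# Real recommender systems are far more sophisticated than this sketch.
from collections import Counter
from itertools import combinations

purchase_histories = [
    {"campaign hat", "bumper sticker"},
    {"campaign hat", "flag", "bumper sticker"},
    {"bumper sticker", "flag"},
    {"cookbook", "apron"},
]

# Count how often each pair of items appears together in one history.
co_occurrence = Counter()
for history in purchase_histories:
    for a, b in combinations(sorted(history), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item: str, top_n: int = 2):
    related = Counter({other: n for (first, other), n in co_occurrence.items() if first == item})
    return [other for other, _ in related.most_common(top_n)]

print(recommend("campaign hat"))  # e.g., ['bumper sticker', 'flag']
```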

When it comes to social or political movements not centered on a specific individual, as Manheim notes, commitment and participation can be generated through protests, public demonstrations, attendance at rallies, or other events; through grassroots lobbying activities; or through invitations to help shape the influence campaign itself—for example, by developing your own blog or by creating and sharing campaign-related videos and other materials.131 Further, once you have demonstrated a response to these kinds of requests, more will typically follow. Throughout these types of messaging efforts, the influencer will attempt to apply enough pressure to override the target’s capacity to think rationally about his or her situation and beliefs. To be effective, the influencer can’t allow the target enough time to pause and say to themselves, “Am I sure about this?” The target’s behavior will largely be shaped by their desire to remain consistent with their earlier decision.

Repetition

Finally, another effective way to ensure the impact of a message is repetition. This is something well known in the world of marketing psychology, and educators have also found that student learning is enhanced through repetition. Instead of trying to use compelling arguments in an appeal to the target’s logic, simply repeating something over and over leads them to perceive that there must be something to it. Calling someone a derogatory nickname repeatedly will have an impact, over time, on how others view that person. This is why Trump would choose a derogatory nickname for a political opponent—Crooked Hillary, Lying Ted, Sleepy Joe—and repeat it at every opportunity. Over time, his supporters latched onto that characterization, as did other listeners, influencing their perceptions of that candidate.

As Pratkanis and Aronson explain, the power of repetition in propaganda was well understood by Joseph Goebbels, the head of the Nazi propaganda ministry. His propaganda crusades were based on a simple observation: What the masses term truth is that information which is most familiar. As Goebbels put it: “The rank and file are usually much more primitive than we imagine. Propaganda must therefore always be essentially simple and repetitious. In the long run, only he will achieve basic results in influencing public opinion who is able to reduce problems to the simplest terms and who has the courage to keep forever repeating them in this simplified form despite the objections of intellectuals.” According to Goebbels, the repetition of simple messages, images, and slogans creates our knowledge of the world, defining what is truth and specifying how we should live our lives.132

The way this connects to the previous discussion about an individual’s information processing is called “fluency.”133 Basically, people are more likely to believe something to be true if they can process it fluently. The information “feels right,” so it is deemed believable by the target. In other words, easy-to-understand information is more believable, because it’s processed more fluently. This is why repetition is so powerful: if you’ve heard this information before, you process it more easily this time, and in turn, this fluency makes you more likely to believe it. And if the information is repeated multiple times, this increases the fluency effect. Further, even when the information is proven to be false, the amount of repetition of the original claim will still lead individuals to believe it.134

CONCLUSION

As I discussed in the preface, a puzzling question that led me to write this book is why—despite an overwhelming mountain of evidence to the contrary—people believe things that are just not true. In seeking answers to this question, I examined research on several aspects of human nature—including the psychological impact of uncertainty, cognitive dissonance, motivated reasoning, confirmation bias, social conformity, and group identity—that help us understand why, as Lee McIntyre notes, “so many people seem prone to form their beliefs outside the norms of reason and good standards of evidence, in favor of accommodating their own intuitions or those of their peers.”135 This chapter is meant to provide just a representative sample of a much larger landscape of theories and research findings that scholars have generated about the ways in which attitudes and behaviors develop and how these can be influenced.

What I have learned from studying this research is that when we don’t know for certain what is true and what is not, this uncertainty can fuel a range of compensating behaviors, including the acceptance of disinformation and lies. This, in turn, explains how today’s digital influence efforts can be so successful. Manipulating uncertainty (toward more or less of it) has become a major part of the modern social and political landscape in many democracies. While our educational systems struggle to cope with the changes wrought by the Information Age, a range of media, corporate, and political entities seek to blur the line between opinions and facts, thereby contributing to an already dangerous spiral of increasing uncertainty. Meanwhile, the Internet (especially social media) provides new and powerful ways to reinforce cognitive bias and confirmation bias, avoid cognitive dissonance, and find the comfort of certainty. We can block social media accounts expressing opinions that we disagree with or simply don’t want to hear. We can choose which websites we visit and which ones we ignore. We can—like never before—tailor our information consumption in ways that minimize the chances of that uncomfortable uncertainty. And the social media platforms encourage us to do so. In fact, it’s largely what they are designed to do.


the chances of that uncomfortable uncertainty. And the social media platforms encourage us to do so. In fact, it’s largely what they are designed to do. Roger McNamee, an early investor in Facebook, describes how a social media platform’s algorithms give consumers “what they want” with “an unending stream of posts that confirm each user’s existing beliefs  .  .  . [E]veryone sees a different version of the Internet tailored to create the illusion that everyone else agrees with them . . . while also making them more extreme and resistant to contrary facts.”136 In seeking to lower our uncertainty about complex issues, we look for perceived authority and social proof among other indicators of information that we should trust. And yet, social proof and consensus of opinion can be all too easily manipulated via social media. The appearance of many shares or retweets can give someone a false sense of confidence that something—no matter how false or stupid—is credible. And this is particularly the case when we surround ourselves with information sources that we have already determined likely to reinforce what we want to believe. As Lee McIntyre observes, it is somewhat “ironic that the Internet, which allows for immediate access to reliable information by anyone who bothers to look for it, has for some become nothing but an echo chamber.”137 Influence efforts succeed most when information is conveyed by trusted members of an in-group and is information that provokes emotional responses—especially when it involves defending something that members of the group strongly believe. The actions of others within our in-group shape our perceptions of acceptable (even celebrated) behavior. As Gaffney and Hogg explain, “People often make decisions based on the norms and values of their important group memberships.”138 In addition to establishing and celebrating in-group identity (e.g., wear this MAGA hat and display this Trump flag or bumper sticker), these efforts also frequently include some form of enemy construction, defining some “other” that must be defeated. Once the influencer has identified what the target audience considers important moral or ethical positions to take, the influence strategy involves making the defense of that position part of the fear appeal and/or emotionally provocative message.139 Finally, influence efforts are especially effective when ensuring that the narrative and the way it is presented conform with the target audience’s previously demonstrated beliefs, biases, and prejudices. This is how influence silos—a natural response to managing too much (and too conflicting) information and uncertainty—play an important role in digital influence warfare, as we’ll examine in chapter 5.

CHAPTER 5

Exploiting the Digital Influence Silos in America

Given your interest in this book, it’s highly likely you have already heard about, read about, and thought about this thing I’ve called the influence silo in previous chapters. It’s been given many other names as well, like echo chamber, filter bubble, and information silo. So, what is an influence silo? Essentially, it’s a place where both the goals of the influencer and the goals of the target can be met—in other words, influencers and targets have a mutual desire for the influence silo to exist. Target audience members are drawn to the influence silo because of their quest for validation. They want confirmation of their beliefs, and the influencer wants to provide that confirmation as a means of achieving their own influence objectives. And as a result, within their preferred in-group influence silo individuals can be manipulated through various tactics, as shown in numerous examples provided throughout this book. There are, of course, variations in how an influence silo is created. As Richard Fletcher notes, sometimes we are overexposed to information that we like or agree with, potentially distorting our perception of reality because we see too much of one side, not enough of the other, and we start to think perhaps that reality is like this. This exposure could be a product of us consciously choosing to avoid or ignore what we don’t want to see and hear. Meanwhile, as described in previous chapters, an array of online technologies—including search engine optimization—and social media platforms can facilitate a kind of automatic “algorithmic filtering,” which essentially narrows the scope of what we are exposed to, without us really having a conscious choice in the matter.1 The creation of influence silos is also fueled by a range of push and pull factors, including conformity, social proof, a desire to avoid cognitive dissonance, and the attraction of group identity. These vary by strength and context across each individual


and result in different levels of commitment toward the collection of narratives within the influence silos. The true power of the influence silo is created over time when the more skeptical individuals within the target community opt out, leaving behind only those like-minded “true believers” for whom the echo chamber becomes an increasingly powerful barrier that repels (or “protects” them from) any differing points of view. Unfortunately, influence silos can make you more ignorant and arrogant at the same time, as we see reflected in many examples provided in this book, and can lead people to defend the indefensible. As noted in chapter 4, many people have a knee-jerk tendency to justify whatever it is that we have done, and the same instinct can apply when considering what other people do, have said, or have done.2 In fact, most of us will go to great lengths to justify our actions, as well as the actions of others within our influence silo. In this chapter, we will focus on the digital manifestation of these silos largely within the context of American political polarization, as well as the implications of these for modern forms of digital influence warfare. To begin with, the Internet (and especially social media) can help you find individuals who are like you in some way or another. It is relatively uncommon for a social media user to simply reach out and befriend a bunch of other users whose political views, religious beliefs, cultural norms, and so forth are significantly different from their own. Further, as Katherine Brown and Elizabeth Pearson note, the Internet “works like a virtual echo chamber” in which “confirmation biases are tapped, further polarizing group identities.”3 Social media platforms are especially geared toward insulating you in bubbles of cognitive and emotional comfort, because this keeps you engaged, and in the attention economy, this translates directly into profits for them. In other words, digital influence silos play a very important attention-generating role. According to Philip Howard, “An underlying driver of attention is the social endorsement that is communicated through the act of sharing: social media users will not pay attention simply because a piece of political news is from a credible source or generated by a political party; they will pay attention because someone in their social network has signaled the importance of the content.”4 Further, as the Wall Street Journal demonstrated in a 2016 experiment, social media users predominately see information exclusively from like-minded friends and media sources, essentially creating an entirely different “reality” about an issue depending on your individual preferences.5 Thanks to the Internet, and especially social media, we now have the ability to create and nurture what David Patrikarakos refers to as the “homophily” effect of a “cocoon in online bubbles of like-minded friends and followers.”6 Within each bubble, we believe “We are right. They are wrong,” which exacerbates prejudice, bigotry, and hatred of the out-group, the “other.” The comfort of certainty we find within a digital influence silo


also allows us to avoid discomforting cognitive dissonance when the facts presented to us indicate we are wrong—we can simply defend our position by claiming we’re not wrong and that others are just using the wrong information, the wrong “facts.”7 And as Jason Gainous and Kevin Wagner explain, “The users of social media can opt to follow particular flows of information creating not just polarization but entire networks of reinforced beliefs.”8 This is how our quest for certainty leads us to accept—and often actively seek out—lies and disinformation. And while finding solace within your influence silo, others on the outside may try to get your attention, get you to see something different. You may look up from time to time, and through the membrane of your bubble, you can see that others outside the bubble are speaking, but you can’t hear them. They may even try to show you images or even data to support what they want you to know about. This may or may not arouse a bit of curiosity, but in the end, you can easily choose to ignore them and turn back toward the reassuring information sources within your bubble. As Lee McIntyre explains, “If we get our news from social media, we can tune out those sources we don’t like, just as we can unfriend people who disagree with our political opinions. Whether our news feeds are reliable or fact free will depend on vetting by our friends and the algorithm that Facebook uses to decide which news stories we will ‘like’ more than others.”9 Perhaps Cailin O’Connor and James Owen Weatherall put it most succinctly in their book The Misinformation Age: Since the early 1990s, our social structures have shifted dramatically away from community-level, face-to-face interactions and toward online interactions. Online social media such as Facebook and Twitter dramatically increase the amount of social information we receive and the rapidity with which we receive it, giving social effects an extra edge over other sources of knowledge. Social media also allows us to construct and prune our social networks, to surround ourselves with others who share our views and biases, and to refuse to interact with those who do not. This, in turn, filters the ways in which the world can push back, by limiting the facts to which we are exposed.10

Unfortunately, today’s disinformation landscape is driven significantly by the “news silos” that feed polarization and fragmentation in media content, resulting in ever-stronger disagreements among the members of society. In the United States, partisan debates over climate change and vaccines are perhaps the leading examples, with most Americans no longer confident of any meaningful reality in either area outside the perceptions of their echo chambers. This trend has spread to other issues on which there ought to be some basis of objective agreement, such as the level of crime in society, the health threat posed by recreational drugs, and the reliability of treatments for various diseases. In these and other cases, digital influence campaigns have employed multiple


means of disinformation and provocation to undermine any potential for consensus.11 Strategies driving such efforts begin with identifying a specific influence silo and then targeting them in the most effective ways for their specific context, using tactics like coordinated bot attacks, fearmongering, and anonymous mass texting. An example is seen in the digital influence efforts orchestrated by Brad Parscale in support of Trump’s 2016 presidential campaign. As McKay Coppins explains, a central component of these efforts was “the campaign’s use of microtargeting—the process of slicing up the electorate into distinct niches and then appealing to them with precisely tailored digital messages. The advantages of this approach are obvious: An ad that calls for defunding Planned Parenthood might get a mixed response from a large national audience, but serve it directly via Facebook to 800 Roman Catholic women in Dubuque, Iowa, and its reception will be much more positive.”12 In Florida, Black voters were shown ads that read, “Hillary Thinks African-Americans Are Super Predators.” Instead of the political candidate needing to convince a broad, diverse audience to believe in their campaign promises, “microtargeting allows them to sidle up to millions of voters and whisper personalized messages in their ear.”13 And of course, the United States is clearly not the only country in which we see various kinds of politically oriented digital influence campaigns. As a 2019 report by Freedom House discovered, domestic governments and local actors engaged in various kinds of digital influence efforts in 26 of 30 national elections during the previous year. The report describes how Internet-based election interference—spreading disinformation, propaganda, conspiracy theories, and misleading memes—has become “an essential strategy” for political actors seeking to disrupt democracy. Further, governments had enlisted bots and fake accounts to surreptitiously shape online opinions and harass opponents in 38 of the 65 countries covered in the report. “Authoritarians and populists around the globe are exploiting both human nature and computer algorithms to conquer the ballot box, running roughshod over rules designed to ensure free and fair elections.”14 In the Philippines, for example, the research found that candidates had paid social media “micro-influencers” to promote their campaigns on Facebook, Twitter, and Instagram, where they peppered political endorsements among popular culture content.15 In a separate report, also released in 2019, Bradshaw and Howard described how “in the Philippines, many of the so-called ‘keyboard trolls’ hired to spread propaganda for presidential candidate Duterte during the election continue to spread and amplify messages of his policies now that he’s in power.”16 In fact, the government of President Duterte actively encourages “patriotic trolling” to undermine his critics.17 In India, the world’s largest democracy, both the ruling Bharatiya Janata Party (BJP) and the opposition Indian National Congress have


been linked to disinformation, including fraudulent or misleading pages and accounts. Before India’s 2019 elections, shadowy marketing groups connected to politicians used the WhatsApp messaging service to spread doctored stories and videos to denigrate opponents. The country also has been plagued with deadly violence spurred by rumors that spread via WhatsApp groups.18 Facebook, WhatsApp, and Twitter all announced efforts to remove fraudulent accounts and pages, and are collaborating with local fact-checkers and election authorities.19 Similarly, Facebook and Twitter removed at least 45 accounts and pages spreading pro-government disinformation in Bangladesh—accounts that were reportedly connected to the government.20 And the proliferation of false information in Brazil, particularly on WhatsApp, surged in the lead-up to the 2018 general elections. In July 2018, Facebook reportedly removed a network of pages and accounts that were “spreading misinformation.”21 A study of 100,000 political images shared on WhatsApp in Brazil in the run-up to its 2018 election found that more than half contained misleading or flatly false information.22 Sometimes the spread of disinformation online has very real—and potentially deadly—consequences. In Myanmar, a study commissioned by Facebook blamed military officials for using fake news to whip up popular sentiment against the Rohingya minority, helping to set the stage for what UN officials have described as genocide.23 In Sri Lanka and Malaysia, fake news on Facebook has become a battleground between Buddhists and Muslims. In one instance in Sri Lanka, posts falsely alleging that Muslim shopkeepers were putting sterilization pills in food served to Buddhist customers led to a violent outburst in which a man was burned to death.24 It’s worth noting that influence silos tend to be more common in democracies than in authoritarian regimes. As described earlier, several authoritarian states have been quite successful in maintaining direct control over the mass media, disallowing any kind of news coverage or narratives that question or criticize the country’s leaders. In several instances, the government owns and directly operates the main sources of news that their population has access to, and competing sources of information are prohibited. They have secured for themselves the kind of information dominance that allows a regime to oppress, deny freedoms, and lie to its people without consequences. When you own the information, you can bend it all you want. Meanwhile, in a liberal democracy like the United Kingdom or the United States, the different media environment requires a different strategy for establishing information dominance. You, the influencer, want the same power as that which governments in authoritarian countries enjoy, but here nobody has complete control or ownership of the media or the Internet. Instead, democracies typically have a private sector model in which a variety of media services compete against each other for audience


size, market share, and profit. Because of this, one might assume that citizens of open liberal democracies are far more resistant to disinformation and lying, compared to those ruled by authoritarian regimes (as we’ll explore in chapter 6). After all, one could characterize the oppressed populations in such countries as being forced to inhabit one huge, national influence silo where only government-approved narratives are heard and shared. However, we are increasingly seeing how influence silos evolve within an open democratic context and the ways in which they can help achieve the goals of influence warfare strategies. The push toward establishing and utilizing an influence silo is predicated on the quest for power. Here, the influencer wants to create an information microcosm (or utilize an already established one) in which they can establish a level of information dominance that gives them power to shape the beliefs, opinions, and perceptions of a target audience in ways that will help the influencer achieve their strategic goals. In the end, whether you control the country’s entire media ecosystem (as is the case in authoritarian regimes) or dominate a politically or ideologically oriented influence silo (e.g., within a liberal democracy), the central effort is to acquire the sole (or most powerful) means of controlling the narrative within the silo in order to achieve the goals and objectives of your influence strategy. Democracies have built-in divisions, tensions, and disagreements about what the country should do or not do. The core principles of democracy emphasize the values of negotiation and compromise in order to find resolutions to these disagreements, but there will always be members of a democratic society who are dissatisfied with this or that. Some will want to challenge the dominant narrative and influence the course of the political debate, and influence silos offer a prime opportunity to muster support toward this effort. Unfortunately, they also provide a prime opportunity for the spread of lies and disinformation, as we’ll see later in this chapter. Influence aggressors derive benefits from influence silos, because they can use our cognitive biases to make key decisions for us about what information is validating and what should be considered “fake news.” The influence silo offers them unique opportunities to amplify their message, enhance its contextual relevance for the target, and be more successful in their influence strategy. In many cases, the influence aggressor will never fully achieve the goals and objectives of the influence campaign, for reasons well beyond their control. But the manipulation of a target audience within an influence silo certainly increases their odds of success. For example, an influence silo full of conservative voters will offer a prime opportunity for messages about defending gun ownership, while an influence silo populated by liberal activists is fertile ground for messaging on income inequality or alleged threats to the social safety net.25 So, this accounts for why influencers want to find and utilize influence silos:


they can significantly enhance the achievement of an influence strategy’s objectives.

THE DIGITAL ECOSYSTEM AND INFLUENCE SILOS

In the real, offline world, it has become increasingly difficult to block out many kinds of unwanted information, noise, and people without going completely off the grid and thereby sacrificing all of your social interactions. You can put up walls and fences, but you can’t stay inside your comfortable fortress forever; when you attend a sporting event or a movie or travel on public transportation (like buses, commuter trains, and airplanes), you are forced to deal with diverse people. Some of them (hopefully most) have good manners, but others do not. Some are kind, while others are not. Some are loud, obnoxious, and offensive. They may say things you don’t want to hear or wear a T-shirt with a slogan that you don’t want to see. They may even force upon you information that contradicts what you want to believe, perhaps to the point of causing the distress associated with cognitive dissonance (described in chapter 4). Regardless, there is nothing you can do about any of this. You are outside your fortress and must now deal with the inevitable discomfort of reality. But contrast that with your typical online experience today. Larry Sanger (cofounder of Wikipedia) summed it up nicely in a piece he titled “Internet Silos” several years ago, in which he describes silos as “news, information, opinion, and discussion communities dominated by a single point of view.” His opinion of this aspect of the Internet was rather critical: Online silos make us stupid and hostile toward each other . . . Silos also make us overconfident and uncritical. Critical knowledge and objective decision-making requires a robust marketplace of ideas. Silos give too much credence to unsupportable views that stroke the egos of their members. In a broader marketplace, such ideas would be subjected to much-needed scrutiny. Silos are epistemically suspect; they make us stupider; they might be full of (biased) information, while making us less critical-thinking and thus lowering the quality of our belief systems. It can be social suicide to criticize a silo from within the silo, while external criticism tends to bounce off, ignored. So silos become hostile to dissent, empowering fanatics and power-seekers at the expense of the more moderate and the truth-seekers. Silos also alienate us from one another—even from family and friends who don’t share our assumptions and opinions.26

Chapter 4 of this book described how people are uncomfortable when confronted with information that is not consistent with what they believe. So, as Jason Gainous and Kevin Wagner explain, “They order themselves online into groups and networks with very real and consistent patterns of beliefs and understandings. These patterns of networks online are the inadvertent effect of human psychology and the desire


to avoid cognitive dissonance or any general discomfort. Challenging information causes discomfort. Agreeable information sources are preferred, because they prevent the user from experiencing discomfort by helping them avoid exposure to any contrary information which could cause confusion or doubt.”27 Our psychological view of ourselves is now reinforced by information sources tailored to what we want to see; there is no longer one set of facts and evidence; instead, there exists a set of “facts” for every conceivable point of view. We can now create what Eli Pariser calls “an endless you-loop”28 and what Singer and Brooking call “the echo chamber of me.”29 Instead of the Internet opening minds to new ideas, it has done the opposite: creating myriad opportunities to isolate ourselves from any discomforting, confusing alternate narratives that do not align with what we want to believe. You can now carve out a space online in which you communicate only with like-minded individuals, read or view only material that conforms to your preferences and confirms your sense of self-worth, your views of the world, and your place within it. Internet platforms and social media were originally envisioned as places to bring people together, perhaps even discover new ways of sharing information and addressing some of society’s complex problems and tensions. But they also introduced a variety of new dimensions to the arena of influence warfare. For starters, according to a 2019 report by the RAND Corporation, “The way that Americans consume and share information has changed dramatically. People no longer wait for the morning paper or the evening news. Instead, equipped with smartphones or other digital devices, the average person spends hours each day online, looking at news or entertainment websites, using social media, and consuming many different types of information.”30 New media sources have become peer competitors in the influence arena. Among these, the most-trafficked online conservative sources are Fox News, Breitbart, and the Daily Caller, while the most-trafficked left-leaning sources are BuzzFeed, Politico, and Huffington Post.31 The report also noted that the most frequently visited news sources tend to represent the more extreme biases of the political grouping with which they are affiliated. That is, “many of the most-balanced news sources according to bias ratings are associated with either print media . . . or major television networks.”32 Further, the ways in which news is presented in these online media sources differs noticeably from traditional mainstream media. For example, newspaper reporting has been characterized by the use of concrete objects, numbers, references to duration, retrospective reasoning, and event-based reporting that often referred to authoritative institutions or sources. In contrast, online journalism can be characterized by a personal and subjective style, with personal perspectives and opinions, argument, and advocacy, with an eye toward persuasion.33


Meanwhile, news media are no longer most people’s primary sources of information about domestic or foreign affairs. In 2017, the Pew Research Center reported that 67 percent of U.S. adults surveyed get their news from social media. Facebook is the most popular social media platform for this purpose, followed by Twitter and Instagram.34 The video-sharing website YouTube has also become a key source of information, entertainment, and influence efforts for many online users worldwide. As Lee McIntyre observed in his book Post-Truth, The rise of social media as a source of news blurred the lines even further between news and opinion, as people shared stories from blogs, alternative news sites and God knows where, as if they were all true. As the 2016 presidential election heated up, more and more content on social media skewed partisan, which fit well with a “motivated reasoning” vibe enabled by technology. We could click on “news” stories that told us what we wanted to hear (whether they had been vetted for accuracy or not) as opposed to some of the factual content from mainstream media that may have been less palatable. Without knowing that they were doing so, people could feed their desire for confirmation bias (not to mention score some free news content) directly, without bothering to patronize traditional news sources.35

We also have to take into account how the “attention economy” functions in today’s societies. In this digital ecosystem, attention is the coin of the realm. Profits are generated for websites and social media platforms by displaying paid advertisements for a wide variety of products and services, from low-interest credit card offers to car insurance. The profit model is dependent on the user seeing and engaging with advertising. Websites get paid by the click, and users can’t unclick once they have done so. Attention is what the advertisers are paying for. The more visitors to your website, the more views of your YouTube video, and the more users of your social media platform, the more you can charge to display those ads. In this attention economy of the Internet, a “provoking for profit” model fueled the rise of the blogging industry, as Ryan Holiday describes in his book Trust Me, I’m Lying: Confessions of a Media Manipulator.36 These bloggers write stories to provoke responses and gain attention, for the sole purpose of selling ad space to online advertisers. Often that’s their sole revenue stream, which incentivizes as much provocation and attention-getting as they can muster. Because media coverage is so instrumental in garnering public attention, a core objective in this arena is to say or do something that would be deemed “newsworthy” by a journalist or news editor. The more website traffic and page views you can generate, the more you can profit from advertising. Visitors to a media blog or website will invariably see some kind of “Top 10 Stories” or “Most Popular” list, perhaps in a scrolling sidebar. The purpose, of course, is to keep the visitor at that website for much longer than it will take them to read the news item that brought them there in the first place.
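To make the arithmetic of this incentive concrete, consider a minimal sketch. The figures below are invented assumptions used purely for illustration (the ad counts, viewability rate, and CPM price are hypothetical and are not drawn from any source cited in this book); the point is simply that revenue in this model scales with the attention captured, not with the accuracy of the content.

```python
# Toy illustration of the ad-funded attention economy; all numbers are assumptions.

def monthly_ad_revenue(page_views, ads_per_page=3, viewability=0.7, cpm=2.50):
    """Rough display-ad revenue estimate.

    cpm is the assumed price (in dollars) an advertiser pays per 1,000 viewable
    ad impressions. Revenue grows linearly with the attention a site captures.
    """
    impressions = page_views * ads_per_page * viewability
    return impressions / 1000 * cpm

# A sober, accurate story versus a provocative rumor shared twice as widely:
for label, views in [("accurate but dull", 200_000), ("provocative rumor", 400_000)]:
    print(f"{label:>18}: ${monthly_ad_revenue(views):,.2f}")
```

Nothing in that calculation rewards getting the story right, which is precisely the incentive problem the blogger faces.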


However, this also creates a massive need for content. Creating enough of that content takes time, so many bloggers will post information provided to them by others, often in some sort of profit-sharing arrangement. This opens the door wide for the dissemination of partially or completely fake news. The blogger has an incentive to ignore the obvious moral and ethical drawbacks of spreading disinformation. If the fake news is provocative, the fact that it generates revenue will often take precedence over the fact that it’s untrue. Mainstream news media also has a constant need for content—especially since the advent of 24/7 news channels decades ago. As a result, professional journalists and editors are also pressured to publish, and if the material is emotionally provocative and attracts more attention to your newspaper, broadcast, or website, the better. The fact that it later proves to be untrue can simply be blamed on the source (e.g., the blogger), with no real harm to the journalist who reported it. The rise of social media and other Internet platforms as a go-to source for news has given rise to an “influence industry”—a multibillion-dollar industry full of competitors trying to influence us.37 If you can master the tools and techniques of data-driven targeting and online campaigning, you could become quite rich. Driving traffic to your website could involve placing advertisements on social media platforms like Facebook and Twitter and paying for enhanced placement among Google search results. But there are also ways to drive traffic to your site that are much less expensive (and far more manipulative), as explained in chapter 3. Meanwhile, our daily online activity generates data that can be recorded, analyzed, and then used to influence us. As Singer and Brooking observed in their book LikeWar, “The amount of data being gathered about the world around us and then put online is astounding. In a minute, Facebook sees the creation of 500,000 new comments, 293,000 new statuses, and 450,000 new photos; YouTube the uploading of more than 400 hours of video; and Twitter the posting of more than 300,000 tweets. And behind this lies billions more dots of added data and metadata, such as a friend tagging who appeared in that Facebook photo or the system marking what cell phone tower the message was transmitted through.”38 In short, all of our online activities leave digital trails, a cookie crumb trail that researchers can follow, analyzing the crumbs for whatever they can reveal about us. You, the potential target of the influence attempt, can be identified by a broad range of data, including your IP address, allowing data analytics to establish patterns of behavior and patterns of preference for and reactions to online information that can be captured, stored, and converted to decision-making algorithms.39 According to Singer and Brooking, each tweet posted on Twitter carries with it more than 65 different elements of metadata, “digital stamps that provide underlying details of the point of origin and movement of any online data.”40 And it’s not just our Facebook and Instagram posts or our tweets and retweets that can be mined for


detailed information about us. There are others: our patterns of information sharing; the applications we habitually use; the websites we visit (how frequently, at what times of the day or night); the amount of time we spend online or at one particular website; whom we communicate with, how, and how often; what movie trailers we look for; what video clips we watch; what kinds of things we shop for online; and what we have purchased: If more expensive or less expensive items of similar quality were available, what influenced our purchasing decision? Were we perhaps influenced by reviews and testimonials by other customers who had previously made the same purchase? How can we be sure those reviews were written by real customers and not some individuals hired by the company selling the product? Social media platforms and other Internet companies can access a wealth of data and algorithms to identify an individual’s beliefs and ideas, interests, likes and dislikes, patterns of online activity (like what we click on and how long we spend looking at particular content), and much more—all of which can then be used to provide highly personalized messages and content that align with previously expressed preferences. As Manheim notes in Strategy in Information and Influence Campaigns, “Google, Amazon, Facebook, Twitter and many others have the capability to provide user-specific, real-time data about not only who is online and onsite at any given moment but what information they are receiving and, in some instances, how they react to it.”41 The same data provide attractive profit models for the social media platforms—advertising can be personalized, a type of microtargeting based on those user attributes in ways that increase the likelihood of predictable consumer behavior. From the huge amount of information available to them, algorithms can then communicate to us things like “Lots of others who bought that same item also bought this” or “Based on your prior purchasing history, we think you would like this new product.” Algorithms are large, complex pieces of computer code that decide how relevant specific kinds of information are to each individual user. The use of algorithms is how the online shopping website Amazon.com is able to present to you a list of “suggestions you might like” based on what you have looked at or purchased on the site previously. This is also how Google is able to tailor your search results to user preferences. There is no standard Google search engine today: each of us can search for the same term and be presented with different lists of search results. Give it a try sometime with a friend who has their own laptop. Sit with both laptops at a table, and search for the same term: Are your lists of search results identical? In his book The Filter Bubble, Eli Pariser describes how “with Google personalized for everyone, the query ‘stem cells’ might produce diametrically opposed results for scientists who support stem cell research and activists who oppose it. ‘Proof of climate change’ might turn up different results


for an environmental activist and an oil company executive. In polls, a huge majority of us assume search engines are unbiased. But that may be just because they’re increasingly biased to share our own views. More and more, your computer monitor is a kind of one-way mirror, reflecting your own interest while algorithmic observers watch what you click.”42 Algorithms are how Gmail—the wildly popular free email platform offered by Google—generates profits by providing microtargeted advertising to its users, based on the data they collect and analyze on each user. Social media platforms like Twitter and Facebook generate profits much the same way, using algorithms to place the kinds of advertising in a user’s page that are most likely to influence their decisions and behavior. Foreign and domestic influencers now have much more latitude to gather information on social media users and then target them with a barrage of fake news and disinformation. They can now convey personalized, influential messages to a wide range of individuals who will feel as if the influencer is speaking individually with them. They can use the “magic” of bots and zombies and computer programs that can give the illusion of personalized attention—the kind of attention we all thrive on, the kind that feeds our egos and makes us want more. The data available on each of us allows the influencer to develop “microtargeting” strategies: messages, tactics, and contextual relevance that directly relates to the attributes of the target. This is essentially what mercenary data mining firms do (like the now-defunct Cambridge Analytica).43 These data-driven microtargeting strategies tailor the user’s online experiences in ways that satisfy their desire for self-validation and social proof. Analytic tools and algorithms make extensive use of that data to filter out what we may not want to see and filter in what we probably do want to see; in a way, social media can automate confirmation bias, reinforce prejudices, and keep us within our influence silo. This has the effect of reinforcing a sense of self-perception that may be far removed from reality. Furthermore, because there are so many different influence silos today, with their own political orientations and goals, there is an increasingly diminished likelihood of individuals communicating across the silo boundaries to discuss with one another their disagreements and resolve differences. How we are online is that we avoid information we don’t want to see, much the same as we do offline. The Internet presents quite a paradox: here we have at our disposal virtually unlimited amounts of information, and yet we can easily surround ourselves with the ability to filter out unwanted information, and we are enabled to do so by the personalization functions and algorithms of these Internet service providers and social media platforms. In a society that is already divided into competing factions, with too few willing to engage with “others” outside our in-group or learn about other points of view beyond our own, polarization is exacerbated by the digital ecosystem.
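As a rough sketch of the kind of “algorithmic filtering” described above (this is not the actual ranking code of any platform; the interest profile and the scoring rule are invented for illustration), a recommender that ranks items by their similarity to what a user has already clicked will, by construction, keep surfacing more of the same:

```python
# Minimal content-based filtering sketch; hypothetical tags, not any platform's real algorithm.
from collections import Counter

def score(item_tags, interest_profile):
    """Score an item by how strongly its tags overlap the user's click history."""
    return sum(interest_profile[tag] for tag in item_tags)

# A hypothetical click history builds the interest profile.
clicked = [("politics", "party_a"), ("politics", "party_a"), ("sports",)]
profile = Counter(tag for item in clicked for tag in item)

candidates = {
    "Party A rally draws record crowd": ("politics", "party_a"),
    "Party B releases policy explainer": ("politics", "party_b"),
    "Regional weather outlook": ("weather",),
}

# Items that echo past clicks float to the top; everything else sinks.
feed = sorted(candidates, key=lambda title: score(candidates[title], profile), reverse=True)
print(feed)
```

Run repeatedly, with each new click feeding back into the profile, the loop narrows: the more a user engages with one side, the less of the other side they are ever shown.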


Beliefs spread and are reinforced through our social network connections and social media interactions. In an age of information overwhelm and political partisanship, “othering”—the good versus evil narratives described earlier—can be amplified in ways that allow no compromise, no alternative views to be considered valid.44 The more time we spend online (and avoiding the offline world), the more we are able to control information we see and hear; we can limit the universe of potential influences to only what we want to believe, blocking out other sources of information to which we don’t want to be exposed; this, in turn, makes it easier to persuade us of something that may seem to reinforce what we already believe (even when it’s completely false), rather than challenging us to learn, reconsider, expand our horizons, and reach higher levels of cognitive and emotional development. Essentially, the digital ecosystem strengthens influence silos, providing the optimum kind of social space in which fake news and disinformation can cascade. Through the use of Internet and social media platforms, influencers have at their disposal a broad array of “transmission lines” through which they are now able to convey messages of contextual relevance to the target. In short, our window onto the Internet can be tailored to show us only what it thinks we want to see and hear, without regard to what might be true or not. For the purposes of digital influence warfare, the technical means of influencing the target can be easily manipulated. The fewer people we follow online in our social media feeds, the more influence those people could have over our views and opinions. By blocking dissenting views, disagreeable individuals, and only accepting input from those we are likely to agree with, we set ourselves up for the kinds of influence tactics described in chapter 3 of this book. The influencer can study the target attributes (the data generated by our online behavior), and use that information to determine messages that will be contextually relevant, and tactics for conveying those messages in the most effective means. The attributes of the influencer can also be manipulated to make them appear a member of the target’s ingroup, increasing the likelihood that the messages will get their attention. An influence silo—digital or otherwise—is fueled by information sources that a significant number of people within the silo turn to. But unlike the traditional sources of information (publishing, radio, television broadcasting, etc.), anyone could potentially become an influencer with minimal resources. From websites and Facebook pages to blogs and online magazines, the Internet provides us with the easiest means of writing and publishing falsehoods. Social media platforms also give you the means to widely disseminate your opinions—even criticisms about things you know nothing about or cannot even comprehend—and obviously the entire range of lies imaginable. And, as we discussed in chapter 3, videos can be manipulated in various ways to shape our perceptions and behaviors according to what an influencer might want.
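To illustrate the mechanics in the simplest possible terms (a toy sketch only: the audience attributes, segments, and messages below are hypothetical, and real data-driven targeting is far more elaborate), microtargeting amounts to looking up a pre-crafted message keyed to whatever profile data the influencer holds on the target:

```python
# Hypothetical microtargeting sketch, for illustration only.
from dataclasses import dataclass

@dataclass
class TargetProfile:
    region: str
    interests: set

# Invented message variants keyed to audience niches.
TAILORED_MESSAGES = {
    ("rural", "gun_rights"): "They are coming for your right to defend your family.",
    ("urban", "climate"): "Only we will stand up to the big polluters.",
}

def pick_message(profile: TargetProfile) -> str:
    """Return the niche-specific appeal if one matches, else a generic fallback."""
    for (region, interest), message in TAILORED_MESSAGES.items():
        if profile.region == region and interest in profile.interests:
            return message
    return "Get out and vote on election day."

voter = TargetProfile(region="rural", interests={"gun_rights", "fishing"})
print(pick_message(voter))
```

The “personalized whisper” effect described earlier comes from serving each niche its own variant at scale, so that no two audiences ever see the same appeal or can easily compare notes.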


But by far the most effective means of influencing us online is to first create and utilize an online variant of the influence silo described earlier in this chapter—something I call the “digital influence silo.” The same elements as described earlier apply here as well: (1) the attributes of influencers and targets, (2) push and pull factors that lead them to engage in the influence silo, (3) the formation of an in-group identity, (4) out-group “othering,” and (5) the overall quest for information dominance. And as with the offline influence silos, the central issue here is that we really don’t want our beliefs and ideas to be challenged by anyone, because it elicits uncertainty and discomfort. One of the core attractions of the Internet is the fact that people can find whatever kinds of information they want, especially if it serves to confirm their own biases. We tend to follow the social media accounts of only individuals whom we are likely to agree with or learn from; the same decision-making informs which news articles and other sources of information we choose to access. As noted earlier, the influence silos we opt into help us create an informational barrier around ourselves that prevents us from seeing opposing viewpoints. At the same time, we are far more likely to pay attention to (and believe) information provided to us by those within these influence silos. So, the influencer will use the available data on the target (their interest, likes and dislikes, behavioral patterns, sociodemographic backgrounds, etc.) to develop a tailored influence strategy. Then they will manufacture relationships between themselves (or their automated proxy account) and the target, emphasizing perceived sameness in social identity, preferences, values, and so forth. Often the influencer will reinforce this perception of in-group identity “alikeness” using additional “peer group” voices that collectively define in-group/out-group expectations. Then, having established an accepted narrative of “We are x. We do x,” the influencer can articulate a crisis that threatens the in-group, followed by guidance about what the target should do (e.g., to reduce uncomfortable cognitive dissonance or to fight back against the “other”). Chapter 3 describes these and other digital influence tactics in greater detail. The digital ecosystem provides us with a unique ability to sequester ourselves as never before behind virtual firewalls inside which only the information we want is allowed in, and all else is blocked. The influencer can use this to their advantage by (1) identifying what we want and what kinds of information are being allowed to permeate our influence silo; (2) tailoring their narratives in a manner consistent with what is being allowed within the influence silo; (3) using that access to misinform and disinform using information that is false yet consistent with the narratives and values of the influence silo; (4) exacerbating “othering,” provoking deeper emotional responses (like anger and hatred toward the “other”); (5) strengthening the sense of identity within, perhaps even tied to a sense of victimhood and need to defend those within the silo; (6) making any


hint of compromise or alternative to the narrative appear irresponsible, disloyal, and unequivocally evil; and (7) ensuring the divisions within a society are amplified, magnified to the degree they become insurmountable. In doing so, the digital influence campaign achieves the goal of fragmenting and fracturing the society, making it more difficult to govern and more difficult to agree on any major foreign or domestic policy. As described earlier in this chapter, influencers want influence silos, either seeking them out or trying to establish them on their own. In addition to the monetary incentives discussed earlier, the proliferation of online blogs can be viewed as efforts to become a type of influencer. Your goal should be either to create an influence silo and attract people to it, or—as explained by Mazarr et al.—“find a silo and dominate it with a product, a channel of information, or a perspective.”45 Either way, instead of trying to develop some sort of global or national mass appeal, the digital influencer should seek to influence one silo at a time. To do so, you will first need to conduct extensive research on the values and beliefs of the target members within the silo, norms, established influencers, and so forth, and then work within that context to provide relevant messages that are crafted and delivered in ways that will achieve the goals of your digital influence campaign. Effective influencers know they can’t convince an entire population, so they engage in microtargeting: identifying those online users who would be most receptive to what they’re trying to get people to do, pushing information their way, and making it seem legitimate by having others endorse it. Once established, the influence silo can exert power over its members’ perceptions, behavior, and decision-making. As research by Zimbardo et al. explains, “People’s opinions and attitudes are strongly influenced by the norms and goals of groups to which they belong and want to belong.” These groups “induce conformity pressures on their members when they meet their members’ social-emotional needs as well as fulfill instrumental goals.”46 After constructing clear boundaries around in-groups and outgroups in ways that diminish uncertainty and replace complexity with simplified narratives, the nourishment of influence silos comes in the form of social influence pressures to conform and constantly prove your loyalty to the in-group. Influence silos create a kind of gravitational force that condenses within and repels without. The more cohesive the group, the more the members of the group will have power to influence each other, and people who are most attached to the group are probably least influenced by communications that conflict with group norms.47 Within the influence silo, only approved information is allowed—the agreed-upon narrative must dominate, even if it is based on wholesale fabrication. This is of course easier to do in a small, isolated charismatic cult than in a much larger “community,” but it can be done to scale with the right amount of resources. As it happens, we do have a unique example of information


dominance and the influence silo effect here in the United States, something I’ve begun calling the politically conservative influence silo, as explained in the next part of this chapter. It would be tempting to shorten this to “PC Silo,” but I suspect doing so would elicit howls of protest, so I’ll avoid that temptation.

THE POLITICALLY CONSERVATIVE INFLUENCE SILO IN A DIVIDED AMERICA

Portrayals of American revolutionary history often give the impression that a unified population rose up as one against an oppressive, universally despised occupying force. However, it should be noted that a sizeable portion of American colonial inhabitants opposed the Declaration of Independence and the subsequent revolution against British authority. The so-called “Loyalists” of that era lost, but they should not be forgotten because they represent a key fact: rarely (if ever) have all Americans been united in common cause. In fact, examples of a divided republic permeate our nation’s history: southern support for (and northern opposition to) slavery that culminated in a devastating civil war; public support for and against prohibition; various waves of anti-immigration sentiment; strong isolationist resistance toward involvement in World War II, until the 1941 Pearl Harbor attack; massive protests against the Vietnam War; and of course, the contested terrain of a two-party political system. But America is certainly not alone in being disunified. No country—even an ethnically and culturally homogeneous one like Japan—can claim to be completely unified in its approach to anything meaningful. Whatever an individual proposes, there will be others who will, no matter what, disagree with that proposal. As we’ll examine in chapter 6, authoritarian regimes are able to impose a sense of unity on the masses (or at least the illusion of it) through information dominance. But how can such information dominance be established in a democracy? Any information silo that emerges in a democratic society is naturally framed by contextual factors, as described earlier. Influence aggressors in a democratic society want the same power to shape beliefs and behaviors as authoritarian regimes have. They want information dominance over a sizeable proportion of the population, and they will keep that audience within the influence silo by any means necessary. And yet they are at a distinct disadvantage compared to authoritarian regimes, as they obviously have much less power and control over the information that citizens are allowed to see or not see. This was the central problem that most troubled Roger Ailes during the 1980s. By then he had served as a media consultant to Republican presidents, including Richard Nixon and Ronald Reagan. In his opinion, the mainstream press was dominated by liberal views and overly critical of the conservative agenda pursued by President Reagan. Even


though the so-called “Fairness Doctrine”—established in the late 1940s— instructed the Federal Communications Commission to ensure that the media covered issues of public importance in an “honest, equitable, and balanced” manner, he felt too little support was given to the conservative point of view. But how could this situation be altered in his favor? The beginning of the answer came in 1987, when the Reagan administration succeeded in repealing the Fairness Doctrine. This was immediately followed by a conservative radio talk show host named Rush Limbaugh, who established a program in which he could share his strongly held opinions on all kinds of topics, and nobody was allowed to contradict him.48 A radio talk show has proven to be significantly useful for establishing a type of politically oriented influence silo. First, you want to find a host for your talk show who is both provocative and entertaining, someone who will keep people listening to the show because he or she offers something that cannot be found on other radio stations. The unique advantages of the talk radio show include being able to repeat a narrative in multiple ways (as noted in chapter 4, repetition is one of the core strategies of effective influence) and to provoke emotional responses among listeners without giving them any real opportunity to address those emotions. The talk radio host can simply refuse to air anyone’s opinion or evidence that contradicts what the listeners believe or want to hear. If a caller tries to question the merits or veracity of a statement or narrative, the host can just hang up on them, even follow up with a barrage of derogatory “othering” statements that serve the dual purpose of delighting the true believer listeners (and reinforce their solidarity of “belonging” in the influence silo) and clearly demarcating the boundaries of what is to be considered unacceptable information or behavior. Those who are disbelievers in the narrative promoted by the talk show host can be deemed political infidels. If they happen to be members of the in-group’s political party, they can be called apostates, who are often deemed a more vile sort of species because they have seen the light but turned away from (our interpretation of) that light. Meanwhile, those who have differences of opinion altogether will turn away from the radio channel and choose some other source of information, leaving a much more homogenous audience of listeners, among which the percentage of those agreeing with the host naturally increases over time. In the end, you have established a radio-based influence silo. This is particularly feasible when your audience has come to view mainstream media as not representing (or even antagonistic toward) their values and beliefs. Thus, listeners who called into Limbaugh’s show were encouraged to discuss sympathetic opinions and topics, but he would hang up the phone on callers who challenged his characterization of events or people. This allowed him to spread an incredible plethora of rumors, distortions, innuendos, unsupported claims presented as the truth, and even outright fake


news disinformation. For example, he could argue with impunity that the “liberal” government was failing in areas such as civil rights, energy, education, and tobacco regulation, and nobody was allowed to offer reasonable counterpoints to his opinions. As a result, those Americans who shared his opinions and his anti-liberal sentiments became very loyal listeners, allowing him to build up a sizeable following. As Tom Nichols explains in his excellent book The Death of Expertise, Rush Limbaugh created a massive following for his radio show by presenting himself as “a source of truth in opposition to the rest of American media.” Millions of Americans identify themselves as conservative (or leaning in that direction) politically and fiscally, but in their eyes, the mainstream media was too liberal, not conservative enough. Limbaugh saw an opportunity, a school of hungry fish, and capitalized on it successfully. “Within a few years of his first broadcast, Limbaugh was heard on more than six hundred stations nationwide . . . [He built] a loyal national base of followers by allowing them to call in and express their support . . . The object was to create a sense of community among people who already were inclined to agree with each other.”49 Callers who disagreed were dismissed, disparaged, disrespectfully hung up on by the host, and even called all kinds of crass belligerent names. Listeners who were inclined toward the narrative loved it. His follower base increased dramatically. He spoke to working-class conservatives as if he were one of them, even though his eventual salary of $20 million per year clearly meant he was not. Limbaugh also showed a penchant for disparaging humor, encouraging his audience to laugh while absorbing his political rants, racism, and dehumanization of others. His argument for NAFTA was: “Let the unskilled jobs, let the kinds of jobs that take absolutely no knowledge whatsoever to do—let stupid and unskilled Mexicans do that work.” When challenging the political positions of Democratic leaders, he said: “Governor Ann Richards was born needing her face ironed. Hillary Clinton looks like a Pontiac hood ornament.” He also played on two sets of emotions—fear and pride. By arousing fear about impending social, political, or security crises, he got the attention of his listeners and had them focus on what he wanted them to hear. And by arousing feelings of pride at being a fan of Rush Limbaugh, he used derogatory comparisons with others (“you are morally superior to those liberal compassion fascists; you have a real job, they must beg for a living”), including minorities (“Have you noticed how all newspaper composite pictures of wanted criminals resemble Jesse Jackson?”). Far from encouraging thoughtful deliberation over whatever Rush Limbaugh was proposing, such disparaging humor resulted in what was known as jeer pressure; it induced conformity to others’ opinions out of a fear that we, too, could be the subject of ridicule.50 In line with the research literature on the psychology of persuasion, Limbaugh repeated his argument over and over. Once he had chosen a theme
for the day's show, it was repeated incessantly because (as noted earlier) repetition could increase the effectiveness of an influence attempt. As a result, he created a domain of information dominance in which only his interpretation of reality was considered valid. You could agree with him, and by doing so be considered part of the in-group, or you could disagree and be condemned to scorn, ridicule, and purgatory as a member of the out-group. Over time, he convinced millions of people they were getting smarter by listening to him. He played on their ego-driven need for validation, and in response, they became dependent upon him for that validation. As Pratkanis and Aronson explain in their book Age of Propaganda:

When a propagandist unscrupulously plays on our feelings of insecurity, or exploits our darkest fears, or offers fake hope, exploration and inquiry stop. We become snared in the rationalization trap. The goal becomes to prove yourself superior and right no matter what. We become dependent on those who will support our masquerade. Our emotions overwhelm our critical abilities. And we take actions that we might not otherwise consider wise to take.51

Limbaugh also portrayed himself as a champion for his listeners' hateful bigotry, misogyny, and stupidity, validating it all with his bullhorn and his bullying of anyone who disagreed with him. Suddenly it was okay to be racist, anti-Jew, anti-Muslim, anti-immigrant (at least, the nonwhite kind), and most importantly, anti-Democrat. The demonization and dehumanization of Democrats (and even some moderate Republicans who were deemed not conservative enough) has also been a constant part of Limbaugh's radio show tirades. Already inclined toward "othering" in principle, his listeners and supporters loved hearing all the creative ways he could insult "others" on a daily basis. The mere suggestion that anything a Democrat said or did could be okay was met with derision, scorn, condemnation, and accusations of disloyalty or a lack of patriotism. The fact that he could say such things with impunity was instructive. His power was derived from the fact that, within this setting, nobody could shut him up. He enjoyed pure, unbridled information dominance within an influence silo he created.

Building off the same profit model, like-minded conservatives like Roger Ailes and Rupert Murdoch recognized the value of investing in a television news network that was dedicated to a politically aligned interpretation of policies and events. Their goal at the outset was to attract much the same audience that identified themselves as listeners to the radio talk show described earlier. The network they envisioned would offer a mix of entertaining shows and sports programs along with news programs and analysis that supported the conservative political agenda and eventually inspired loyalty to this network as the exclusive go-to source for all their television viewing needs. Issues and narratives chosen for programs on
this network would highlight a certain worldview and avoid questioning or undermining the validity of that worldview. In time, those who shared some or all of that worldview would find enormous benefits of validation, affirmation, and self-esteem that would keep them glued to this source of information and tune out all others. Similar to Limbaugh's radio show, this network would want to use the same tactics of influence: ensuring the message content and format met what your audience wanted; using emotional provocation, repetition, and other tactics described in chapter 4; and focusing on context-relevant socioeconomic and political hot-button issues of importance to your audience. The larger your viewership and the more consistent they are in turning to your network and affiliates over all other options, the more profitable you will be; the larger your influence silo will be, the greater the information dominance you will enjoy within that silo.

When Roger Ailes and Rupert Murdoch launched the Fox News Channel in October 1996, this became their mission: to create an influence silo for conservatives who disagreed with the moderate or liberal-leaning mainstream media. The primary goal of Fox News was (and remains) the creation of a solid, cohesive social identity for conservative Republicans. They were convinced that there were no real significant "voices" representing the conservative viewpoint. Mainstream news media; academe; political think tanks in New York and Washington DC; Hollywood—all the major sources of information and entertainment—were, in their opinion, too liberally oriented. They believed (correctly, it was later proved) there was a large potential audience of viewers hungry for a nationwide media service focused specifically on a conservative political agenda. Taking its cues from the success of Rush Limbaugh's radio shows, Fox identified a specific target audience that felt underserved by the mainstream media. This audience did not like what they read, saw, and heard from the likes of ABC, CBS, CNN, and NBC—even if it happened to be the truth. What they wanted (and now desperately cling to) was a source of validation for their own version of the truth, and Fox sought to provide this for them, generating massive profits from doing so.

The primary goal, as conceived by Roger Ailes and colleagues, was to convince conservative viewers that Fox News was the only major news outlet that was not secretly beholden to liberal Democrats. Since many conservatives already suspected this about the mainstream media, the opportunity was there all along to be exploited. If Fox hadn't done it, some other entity would surely have come along and done this instead. By denigrating the integrity of other news media, convincing viewers that nobody but Fox was trustworthy, they effectively built a powerful influence silo. Perhaps more accurately, they strengthened and expanded an influence silo that began with the audience generated by Rush Limbaugh's radio show. Together, their efforts resulted in what I now refer to as the politically
conservative influence silo, an impenetrable bubble within which others could echo and reinforce an approved narrative, while anything that contradicted that narrative was rejected, filtered out, and sometimes even vilified as inherently evil. This gave oxygen to conspiracy theories among conservatives about all kinds of secret, nefarious, evil plots of anyone viewed as either nonconservative or not conservative enough. This was not really brainwashing as most people understood it, but it was certainly a close relative.

The rapid growth in viewership of Fox programming validated the Fox News founders' belief that a hungry audience was out there looking for some kind of media that validated their political views. Just like Limbaugh, this network offered them what they wanted, a cable network information silo in which they could enjoy the kind of information feed that supported and nurtured their beliefs, in the process solidifying a confirmation bias among millions. Viewership exploded—by the summer of 2006, Fox News had nearly double the primetime viewing audience of CNN or MSNBC.52 A 2004 Pew survey found that 44 percent of self-identified conservative viewers rated Fox News the "most trusted" news network.53 Another study published by the Pew Research Center in 2009 found that Fox News was considered by Americans as the most ideologically oriented network, with 47 percent of those surveyed describing it as "mostly conservative." Further, the study found that a majority of Fox News viewers believed that cable news hosts with strong political opinions were a good thing and that press coverage of President Barack Obama was not critical enough.54

Meanwhile, the profit incentive furthers the need to ensure the audience remains within the silo and does not stray. A core pressure drives the media—the ability to hold the audience's attention. All television programming, including the evening news, must strive for profits—and that translates into securing ratings and viewers that will attract advertising dollars. And what induces people to watch the news? As Pratkanis and Aronson explain, a study of why people watch the news concluded that most viewers want to be amused and diverted; being informed is only a secondary motive for watching. Further, as Susan Jacoby observes, we have in America today signs of an "addiction to infotainment," one that "equates intellectualism with a liberalism supposedly at odds with traditional American values."55 As a result, being a broadly informed critical thinker, one who cares about objectivity and fact-based decision-making, is now portrayed as being liberal and unpatriotic. The conclusion that television executives draw from these kinds of studies is that programming should be arousing, emotionally engaging, illiberal, and above all entertaining.56 This ethos of "infotainment" has become a hallmark of Fox News programming, generating profits and market share that continually prove Rupert Murdoch made a wise investment in underwriting the network.

Now, an entire generation of conservatives have been raised on Fox News and its biased, slanted, and often misleading coverage of major news events and people. In millions of households across America, children grew up with Fox News as their only source of television news, shaping their worldview day by day, cultivating an almost cult-like reverence for a specific version of conservative political values. They will defend these beliefs and values with all means necessary, even if it requires rejecting factual evidence and demonizing those who question their beliefs. The deeper and perhaps more sinister trick is to convince people they are being well-informed when they are not. As the Nazi propagandist Joseph Goebbels once noted, “This is the secret of propaganda: Those who are to be persuaded by it should be completely immersed in the ideas of the propaganda, without ever noticing they are being immersed in it.”57 People who watch Fox News believe they are informed. They are told repeatedly that they are smart and special for watching Fox News and that if they dare watch any other source of news or information they will be misled and deceived. But the deception is already upon them, in the form of broadcasts that ignore truth and facts, instead focusing on reinforcing a political narrative that its viewers have already approved. This helps explain why repeated public opinion polls—like the October 2020 survey by the Pew Research Center—consistently indicate strong beliefs in falsehoods and conspiracies (like QAnon) among respondents who get their election news primarily from Fox.58 Further, the rise of right-wing online news media has strengthened the sense of certainty among the politically conservative influence silo’s members. By amplifying narratives and tropes promoted earlier by Limbaugh and then Fox, these relatively new media outlets—like Breitbart and the Sinclair Broadcast Group—have all contributed to a mutually reinforcing conservative information ecosystem full of websites, blogs, YouTube channels, and radio broadcasts. Some have been founded and financed by wealthy conservatives in much the same way Rupert Murdoch underwrote the creation of Fox News. For example, the far-right cable channel One America News Network (a favorite of Trump) was founded by multimillionaire Robert Herring, and Breitbart was established with significant financial backing from billionaire Robert Mercer and his daughter Rebekah. Breitbart became especially known for its provocative columnists and for publishing inflammatory articles that championed nationalism and denigrated women, Muslims, immigrants, Democrats and many others, fueling its rise in popularity (its website recorded 17.3 million unique visitors in early 2017).59 Breitbart has also frequently criticized the Republican establishment, and even Fox News. For example, according to a study published in the Columbia Journalism Review, the “most-widely shared stories during 2016 in which Breitbart refers to Fox News were stories aimed to delegitimize
Fox as the central arbiter of conservative news, tying it to immigration, terrorism and Muslims, and corruption."60 This makes sense from a business strategy perspective: the goal would be to establish Breitbart as a more reliably conservative information source in order to grow a following among loyal Fox News viewers. By positioning itself to the right of Fox, a media outlet like Breitbart can carve out a place within the politically conservative influence silo and then work to expand its influence within that silo. It's a classic business case of trying to pry away market share from a larger competitor. And it proved rather profitable. But as noted earlier, despite this competition for market share, there is also a form of narrative unity: the political orientation of information provided by all these media outlets within the conservative information ecosystem has been purposely aligned to match the confirmation bias of their target audience.

Meanwhile, some politicians have apparently recognized that they can utilize this media-orchestrated influence silo to be the meanest, most arrogant, and disrespectful bastard imaginable, with no consequences. You can freely violate all the norms of moral and ethical behavior and eschew any need to be polite or respectful—especially toward those who have been identified as "the other." The influence silo gives you immunity for many of your faults. You can have a trail of failed marriages, scandals, and bankrupt businesses and still be endorsed by your influence silo to the highest political office of your country. To criticize your behavior from within the silo would be viewed as apostasy, a turning against the core identity and narrative that brought the members of the silo together in the first place.

The partisanship of the audience plays a major role in the effectiveness of any persuasion attempt. As we might expect, if a member of the audience is already predisposed to believing the communicator's argument, the presentation has a greater impact on his or her opinion. This is a core reason that political candidates (like Trump) favor campaign rallies over open debates in front of an audience of people who are undecided. Because they are speaking at the campaign rally to an audience of the party faithful, the politician can be as derisive and mocking as they want of other candidates and policy positions, without ever needing to buttress their own position with factual evidence or persuasive arguments. Not only does the audience not really care about such evidence, but they are also already inclined to believe anything the candidate says that is aligned with the political party's platform. Instead, this type of event offers a unique opportunity to motivate the audience through fear—unless they commit to doing something specific (e.g., vote for the candidate), dire consequences will result. For example: the security of our nation will plummet, our economy will go into depression, our international standing will rapidly diminish—unless you vote for me. Again, the influence strategy is to amplify uncertainty and fear combined with offering a simple remedy.

Another example of intense partisanship within the politically conservative influence silo is that its members tend to have lower regard for several important social institutions. For example, a 2017 Pew survey showed that 72 percent of Democrats and Democratic-leaning independents believe college and universities have a positive effect on the country, but a majority of Republicans and Republican-leaning voters (58 percent) expressed a negative view toward those institutions of higher learning.61 And similarly, a Gallup poll found that trust in the mainstream media is particularly low among right-wing and Republican survey participants, at only 14 percent—perhaps reflecting, as Alice Marwick and Rebecca Lewis explain, how much of the news coverage by “hyper-partisan right-wing media outlets . . . is devoted to attacks on the mainstream media, so those who gravitate to these sources may become increasingly distrustful of, and insulated from, outside coverage.”62 The implications of news coverage in today’s increasingly polarized environment have been staggering. Not only did Fox, CNN, and MSNBC (for example) come to provide markedly different interpretations of major incidents or trends, but they also differed in terms of what was or was not considered newsworthy or legitimate—for example, a 2013 study found that 69 percent of Fox News guests were skeptical of climate change.63 As Ted Koppel noted in 2010, Fox News and MSNBC no longer even attempt objectivity. “They show us the world not as it is, but as partisans (and loyal viewers) at either end of the political spectrum would like it to be. This is to journalism what Bernie Madoff was to investment: He told his customers what they wanted to hear, and by the time they learned the truth, their money was gone.”64 Essentially, the strategy of these kinds of politically oriented media outlets to gain power and money is to nurture a shared identity and common narrative within an increasingly self-reinforced influence silo. The result has direct implications of how the audiences within these silos process information. In chapter 4, we reviewed how the research literature in psychology has identified two different paths to persuasion. One, termed the “central route,” is built around more or less intense engagement and argumentation. It achieves persuasion by facilitating the individual’s thinking thoroughly through the issue and the alternatives and by coming to appreciate the persuader’s position. The second, termed the “peripheral route,” relies not on reason per se, but on packaging—attractiveness of the message or the medium, use of evocative symbols, use of an attractive or credible source, and so forth.65 This is where the influence silo is of greatest benefit to the influencer. The peripheral route of processing information is one that is comparatively easier and preferred by most people over the more effort-requiring central route. Thinking can be hard, especially about complicated issues. When an influencer is able to simplify those issues in ways that conform to your own values and beliefs, as well as confirming
your prejudices and biases, you tend to like that and want more of it. The act of sitting in one’s car or at one’s kitchen table listening to someone who sounds articulate while sharing his opinion, and perspective on these complicated issues is inherently peripheral: your only engagement with the information is of a passive nature. The speaker may provoke an emotional reaction—in fact, that may even be their primary goal—but it’s unlikely that you are being encouraged to think about the pros and cons, weigh alternative views and perspectives, or engage in any other kind of critical assessment of what is being said. Rather, you are being told what to think, and if you are inclined to already agree with this line of thinking, the speaker becomes a powerful influencer in your life. Confirmation bias, as described in chapter 4, is at work here. Unfortunately there are also millions of individuals whose natural inclination is to just be lazy, and their influence silo provides a comfort zone in which being lazy is accepted, even encouraged. “People tend to acquire information mostly about things that they find of interest and tend to avoid information that does not agree with their beliefs. Should someone find that they have been unavoidably exposed to uninteresting and disagreeable information, a common response is to distort and reinterpret that information, thus ignoring its implications for updating beliefs and attitudes.”66 Don’t touch that dial; we’ll be right back. As long as we say what you are already probably thinking, you’ll continue to watch us and nobody else; this is especially true when it comes to complicated issues that we don’t want to have to think about. The politically conservative influence silo described earlier has expanded into a multipronged network of silos in which core elements (like Limbaugh, Fox, and Breitbart) anchor the narratives, and multiple streams of online information feed into and reinforce those narratives. Examples like the Daily Caller, InfoWars, Newsmax, The Daily Wire, TheBlaze, National Review, The Federalist, the Drudge Report, and One America National News occupy various places of importance within this network, catering to the same audience that feeds from the trough of Fox News. Since the abandonment of the Fairness Doctrine during the Reagan Administration, there has no longer been any requirement or incentive to provide competing viewpoints about an issue. Your news station can, if it so chooses, broadcast all day long material with a specific political agenda, information that conforms to a specific worldview, regardless of the facts or truth. As Michiko Kakutani notes, Sinclair Broadcast Group—which reaches an estimated 38 percent of American households through local news broadcasts—has even forced local news anchors to read a scripted message about “false news” that echoed Trump’s own rhetoric undermining real reporting.67 In the media business, your news station will likely flounder and fold if there are not enough viewers who embrace that worldview. But as Fox
News discovered, there was a huge untapped market in the United States for a media outlet wholly committed to serving a conservative Republican viewpoint. By repeatedly attacking the credibility of the more established mainstream media, Fox was able to carve out its place of influence among an audience hungry for the politically skewed interpretation of events. As noted earlier, there has been an inherent distrust among conservatives toward the media; they believe the media are biased in favor of liberal political candidates and policies, in large part because the media comprise mostly liberal-oriented journalists and editors. By amplifying this narrative, Fox was able to exacerbate and deepen the level of distrust and animosity toward other media outlets to a point where no self-respecting conservative would be willing to give any credence to a story promoted by those media outlets, but would only find acceptance among “the tribe” by promoting the Fox news version of events. Further, all of the media outlets within the politically conservative influence silo today replicate the same “othering” narrative emphasized by Fox and the other core elements. It’s not enough for the silo audience to prefer sources of information from within the silo—any source of news outside the silo needs to be viewed not only with suspicion but also with animosity, even hatred. By the same token, a political leader who wants to ensure information dominance will naturally seek to denigrate and degrade all others and warn his audience about the evils of information sources outside the silo. For example, media within the pro-Trump influence silo are highly invested in a narrative that proclaims other sources of news are dead or dying. Trump himself often uses derogatory terms like “the failing New York Times” whenever that newspaper prints something he disagrees with. He has even referred to various news outlets as “enemies of the people.” The goal, of course, is to try and convince (or reinforce the conviction of) his audience that there is no need whatsoever to pay attention to information sources outside this influence silo. The evidence that this works can be seen in the example of a library in Florida that was not allowed to purchase a subscription to the New York Times in 2019. In this widely reported case, the librarians of Citrus County, Florida, had filed a request to purchase a digital subscription to the New York Times (which costs about $2,700 annually) in order to “offer their roughly 70,000 patrons an easy way to research and catch up on the news,” but local officials refused. At the county meeting to discuss the issue, one commissioner declared the paper to be “fake news,” noting “I agree with President Trump . . . I don’t want the New York Times in this county.” Another commissioner stated, “I support President Trump . . . I would say they put stuff in there that’s not necessarily verified.” It’s worth noting that in their interviews about the decision, four of the commissioners admitted that they did not read the New York Times, raising the obvious question about objective analysis versus believing what you have been
told by someone who is clearly seeking to benefit from disparaging rhetoric about the media.68 This also highlights how the power of the influence silo is manifest in the decisions made by individuals who have determined that "other" information sources outside their influence silo are illegitimate. Trump's verbal attacks against many traditional media outlets as the "enemy of the people" even led to an increasingly hostile atmosphere for journalists covering various political events, in some instances resulting in physical attacks against reporters at Trump rallies in Texas, Florida, and elsewhere.69

In essence, the politically conservative influence silo provides the strength of certainty that conservatives have so desperately longed for. As the conservative radio host Charlie Sykes observed, conservative media created an "alternate reality bubble" that "destroyed our own immunity to fake news, while empowering the worst and most reckless on the right."70 "In the new Right media culture," he wrote in 2017, "negative information simply no longer penetrates; gaffes and scandals can be snuffed out, ignored or spun; counternarratives can be launched. Trump has proven that a candidate can be immune to the narratives, criticism, and fact-checking of the mainstream media."71

A core reason for this is the commitment to vanquishing all kinds of contradictory information that does not conform to the preferred conservative narrative. As described previously in this book, research generally finds that the more objectively informed the members of an audience are, the less likely they are to be persuaded by a one-sided argument and the more likely they are to be persuaded by an argument that brings out the important opposing arguments and then attempts to refute them. This makes sense; a well-informed person is more likely to know some of the counterarguments, and when an influence attempt avoids mentioning these, the knowledgeable members of the audience are likely to conclude that the communicator is either unfair or is unable to refute such arguments. On the other hand, an uninformed person is less apt to know of the existence of opposing arguments. If the counterargument is ignored, the less informed members of the audience are persuaded.72

This is how media like Fox News and Breitbart have been able to so effectively mislead millions of Americans over the years, through a strategy of ignoring the existence of a counterargument or dismissing contradictory sources of information as illegitimate and irrelevant. Within this information silo, there is no need to provide evidence to support even the most ludicrous claims of Trump's brilliance or innocence of wrongdoing. Viewers are already primed to believe what is said, even when there is no factual evidence to support it. A 2017 Harvard study of more than 1.25 million stories (published online between April 1, 2015, and Election Day, November 2016) concluded that pro-Trump audiences relied heavily on this "insulated knowledge community," which reinforced users' shared
worldview while poisoning them against mainstream journalism that might challenge their preconceptions.73 As Jason Gainous and Kevin Wagner note, "Social media has the significant potential to polarize people by offering readily available one-sided information combined with an interface that easily rewards a preference for such information."74 Michiko Kakutani agrees: "Because social media sites give us information that tends to confirm our view of the world, people live in increasingly narrow content silos and correspondingly smaller walled gardens of thought. It's a big reason why a shared sense of reality is becoming elusive."75

The result is the entrenchment of tribal politics. As Kakutani notes, much of the Republican base now reacts with immediate rejection toward issues like gun violence, affordable health care, or global warming. Never mind statistics, expert analysis, or carefully researched university or government studies, in some cases even their own self-interests—a lot of hard-core Trump supporters dismiss such evidence as never-to-be-trusted liberal or deep state politics. Party loyalty and tribal politics matter more than facts, more than morality and decency; witness the Trump supporters who booed John McCain, a Republican war hero, and viciously claimed that God had punished him with cancer for standing up to Trump.76 As a recent study on "network propaganda" concluded, Donald Trump represents the present state of a dynamic system that has been moving Republican politicians, voters, audiences, and media to the right at least since Rush Limbaugh launched this model of mass media propaganda on talk radio in 1988 and became, as the National Review wrote in 1993, "the Leader of the Opposition." In that ecosystem, Trump now operates as catalyst in chief.77

Meanwhile, on the other side of the political spectrum, liberals appear to have a much different approach to news consumption and a much different information ecosystem. While conservative households report Fox News as their exclusive source of information, with the television on constantly and the station never to be changed, liberals don't really have the same exclusive relationship with any particular media outlet. In fact, the lack of a comparably strong, centralizing gravitational force among liberals can actually be viewed as a political disadvantage.

THE COMPARATIVE DISADVANTAGE OF POLITICAL LIBERALS

At this point, some readers of this book will undoubtedly complain that I am portraying influence silos in the United States as a mainly conservative-oriented phenomenon. But there is a research-based justification for doing so. According to the Pew Research Center, liberals tend to trust a wide (and sometimes eclectic) variety of different information sources,
while conservatives trust comparatively few. As a result, the nature of the conservative influence silos is more tightly bounded, ideologically consistent, and resistant to outsiders. The research also finds that some of the strongest measures of trust within an influence silo are found among influencers with relatively smaller audiences (e.g., Breitbart or InfoWars).78 As described earlier, the shared animosity among conservatives toward the mainstream media is based on a perceived legacy of liberal bias, one that energized the rise of Fox News to its level of profit and prominence in our information ecosystem.

In fact, no other major cable news network has done what Fox has done. MSNBC has tried to replicate it for a mostly left-wing, liberal audience, offering a noticeably anti-conservative, anti-Republican slant in programming and outspoken hosts like Keith Olbermann. But their efforts have produced a much smaller influence silo effect—in part due to the inclusivity problem mentioned earlier. They just don't get much traction in forming the same kind of solidarity-driven influence silo as the political conservatives have. Meanwhile, the traditional broadcast news stations—ABC, CBS, NBC—are trying to appeal to a national audience, while Fox clearly wants to attract and keep political conservatives only. Fox makes little to no effort to moderate the tone of its media coverage to be less derisive and derogatory toward nonconservatives. This is likely based on an understanding that to do so would seem disloyal to conservatives who have grown quite accustomed to the comfort and solidarity of a reinforced in-group identity and a consistency in "othering" all those not associated with that identity.

From what I have found in my own research, it makes sense that conservative audiences would be more unified in their "othering" when compared to the inclusive-minded liberals. For starters, the political liberals are too diversified; the liberal ethos of tolerating others apparently weakens their ability to replicate what the political conservatives have done in creating a massive, unified influence silo. While the politically conservative influence silo demands loyal adherence to a specific politically conservative doctrine, liberals promote the acceptance of diversity—intellectual, racial, ethnic, sexual, and so forth. In fact, one of the major criticisms that conservatives often make about liberals is that it is often next to impossible to establish for certain what a liberal truly stands for beyond a vague sense of equality for all. As a result, instead of matching the same level of "othering" and perceived threats that the political conservatives have infused in an entire generation of voters, the liberal appeal seems less powerful and less motivating in terms of a gravitational pull toward the center of any influence silo. An egalitarian ethos that celebrates diversity, tolerance of others, and equal rights is apparently not as emotionally arousing or compelling as the conservatives' ethos of tribal unity, exclusion of "others," and fear appeals about how terrible the future would be if the "others" (i.e., political liberals)
were in power. When it comes to influence silos and digital influence warfare, this translates into an advantage that favors the conservatives. In other words, it's not just the size of the political conservative silo that matters; it's the strength of in-group identity, brand loyalty, and outright hostility toward the "other."

Further, the overt emphasis on "othering" aligns well with an angry undercurrent of right-wing extremists in the United States. Their hatred of "others" (the opposite of the liberal emphasis on tolerance and diversity) is amplified and celebrated by Trump's periodic rage-tweeting; his anti-immigration policies; and his vilification of Muslims, women, people with disabilities, and of course political liberals. Thus, not only do right-wing extremists add to the strength of numbers aligned with the political conservatives, but they also reinforce the narrative of distrust toward information sources outside their influence silo and loyalty to those within. A certain motivating power comes from the combination of exclusion and perceiving an existential threat from those excluded. The political liberals have no comparable motivating gravitational force to coalesce a large, loyal membership within an internally cohesive influence silo, at least not at this moment.

The various forms of "othering" by conservatives may also underscore why they are more prone to believe threatening falsehoods than liberals do. According to a study published in the journal Psychological Science, when participants were presented with a series of false statements, there was no difference (in terms of political affiliation) in individual assessments about innocuous things like "exercising on an empty stomach burns more calories," but for statements that were threatening (like "terrorist attacks in the United States have increased since September 11, 2001"), conservatives had a much higher probability of believing those false statements.79 Further, there is another relevant difference among these voters: according to research by Pennycook and Rand, during the 2016 presidential election, "The overall capacity to discern real from fake news was lower among those who preferred Donald Trump over Hillary Clinton."80 Perhaps the many fear appeals and emotional provocations used among this target audience were compelling enough to increase the confirmation bias that allowed these voters to be more easily deceived by fake news and disinformation, as long as it provided narratives aligned with their political preferences.

Chapter 4 described how a frame is a way of seeing and understanding the world that helps us interpret new information. Each of us has a set of frames that we use to make sense of what we see, hear, and experience. Framing is the process of shaping other people's frames, guiding how other people interpret new information.81 Influence silos built on adherence to a specific ideology provide those frames for individuals who find resonance and validation from others who share a belief in that ideology. As former conservative talk show host Charlie Sykes observes, "Many Trump voters
get virtually all their information from inside the bubble . . . Conservative media has become a safe space for people who want to be told they don’t have to believe anything that’s uncomfortable or negative . . . the details are less important than the fact that you’re being persecuted, you’re being victimized by people you loathe.”82 Another recent example of this ideological narrative is seen in how American conservatives and white nationalists have fawned over Sebastian Gorka, a British-accented, Hungarian-born racial supremacist and noted Islamophobe. His ranting and frothing about the evil influences of immigrants is embraced by anti-immigration crowds in the United States, despite his obvious foreign background (he is, after all, an immigrant himself). And because he portrays Islam and Muslims as inherently evil and dangerous, millions of Americans tune into his radio show and follow him on social media in order to get their daily dose of bias confirmation. He is successful because, through his “othering” narratives, he has crafted the persona that he is “one of them” and that he shares their view that they are under siege.83 And as Limbaugh, Ailes, Bannon, Gorka, and many others have shown, an impressive amount of personal wealth can be gained by providing this type of confirmation and certainty reinforcement. Essentially, those who are seeking to manipulate political conservatives have several strategic advantages over political liberals in America. A focus on inclusivity is a less powerful and motivating force than the Trump supporters’ focus on “othering,” blaming those outside their ingroup identity for all the ills and misfortunes of America. It is much easier to provoke a strong emotional response with the narrative of threats and fear than with a narrative of compromise and equity. This is particularly true among members of a society who do feel threatened by heightened economic insecurity, identity insecurity, personal insecurity, and other insecurities—all of which can be blamed on others, routinely characterized as untrustworthy and evil traitors to society. The contrast in political party in-group identities along these lines is particularly striking. In December 2019, Charles Gaba analyzed the ethnic diversity of the 2019 House of Representatives, highlighting the differences between Republicans and Democrats. The results are shown in Table 5.1. Obviously, “othering” is particularly salient and powerful when the ingroup identity of the influence silo’s members is predominately homogeneous. Naturally, there are also a lot of other ways in which the two major U.S. political parties differ, but one must not overlook the importance of this basic in-group identity comparison and its impact on how the two parties behave differently. When one political party is so comparatively homogeneous compared to the other, it gives them a distinct advantage in maintaining a unified narrative and set of beliefs within their influence silo. Conversely, the Democratic party’s diverse, heterogeneous membership

Table 5.1  Ethnic breakout of the 2019 U.S. House of Representatives (as of December 29, 2019)

                                  2019 Republican House Caucus      2019 Democratic House Caucus
                                  Number      Percent               Number      Percent
White men                            177        89.4                    89        38.4
White women                           12         6.1                    44        19.0
Black men                              1          *                     28        12.1
Black women                            0          *                     22         9.5
Hispanic men                           5          *                     23         9.9
Hispanic women                         1          *                     11         4.7
Native American men                    2          *                      0          †
Native American women                  0          *                      2          †
Asian/Pacific Islander men             0          *                      6          †
Asian/Pacific Islander women           0          *                      7          †

* 4.5 percent combined for these Republican categories.
† 6.4 percent combined for these Democratic categories.

Not shown: 1 independent (white man); 4 vacancies (2 were GOP white men; 1 was a Dem white woman; 1 was a Dem Black man).
Source: Charles Gaba, posted online December 29, 2019 at: https://twitter.com/charles_gaba/status/1211505302476132352

places it at a significant disadvantage when it comes to creating and utilizing influence silos. Inclusive membership weakens the power of the influence silo, and the reverse—exclusive membership—increases the power of the influence silo. This is particularly the case when there is a concerted effort to “othering”—blaming the challenges faced by those within the silo on the “others” (in this case, members of the opposing political party). The implications of this for a power-hungry demagogue are clear. Your pathway to influence and power does not lie with the diverse, heterogeneous party but with the homogenous, unified one. There is far too much difficulty in mobilizing people in an information environment that embraces diversity of all kinds. Instead, the smart influencer will focus on convincing those within the more homogenous party that you are one of them and can demonstrate a commitment to ensuring the “other” party receives no quarter, no compromise, and no sympathy from you. Indeed,
as several recent scholarly books have noted, Trump's electoral victory in 2016 was fueled by amplifying a variety of polarizing identity issues, including illegal immigration, religious toleration, and resentment among whites that minority groups were benefitting at the expense of "real" Americans.84 This is how, as we'll examine in chapter 6, he was able to establish a level of attention dominance that no other presidential candidate was able to match.

Finally, the emotional dimension of today's digital influence warfare efforts should not be underappreciated. As noted in the previous chapters of this book, emotional provocation is a key component of most successful influence attempts. The message format and content need to have contextual relevance, and the most powerful forms of relevance are those that trigger our emotions. Further, we tend to want the kind of emotional satisfaction that comes from believing our views are correct and shared by others. This is found within the influence silos we choose to be part of. At the same time, this creates an opportunity for influencers within the silo to pursue the most important element of the strategy: information dominance.

Equally important, the political conservatives have established a significantly large and eternally loyal collection of voters. This has direct implications for local, state/provincial, and national elections. For its part, the conservative party can put forth a candidate that may be repulsive to independent voters or members of the liberal opposition party, but as long as that candidate can manipulate the loyalties of those within the conservative influence silo, the odds of winning the election are quite strong. Meanwhile, if an opposing political party (say, the liberals) does not have an influence silo of equal or greater size than that of the conservatives, the only way to win a large enough percentage of the electorate is to put forth the rare kind of candidate whose attractiveness transcends the different factions of the society and brings independent voters over to their side. Further, when the landscape of liberal voters is diversified—a widespread amalgam of competing influence silos of various sizes and effectiveness—there is no unified message, no cohesive narrative around which to coalesce enough voting support. This raises uncertainty and confusion that could be capitalized upon in an influence campaign, but having no information dominance means less power to influence your way to electoral victory.

CONCLUSION

So, to sum up, influence silos are entities in which both the goals of the influencer and the goals of the target can be met. The influencer wants to shape perceptions and behaviors, and the target wants sources of information that confirm the validity of their perceptions and behavior. And as
noted earlier in this book, influence silos are being used for domestic and foreign efforts to fracture and polarize American society. A divided America is one that is weaker, but we are divided by design. This did not come about by chance. As reflected in the efforts described earlier, silos in America emerged by conscious choice, driven by people with resources and a strong desire to shape the national discourse in a direction more aligned with their political beliefs. The results are well known to most observers of today’s political landscape. A 2018 Pew study found that political party “identification plays a role in how Americans differentiate between factual and opinion news statements. Both Republicans and Democrats show a propensity to be influenced by which side of the aisle a statement appeals to most. For example, members of each political party were more likely to label both factual and opinion statements as factual when they appealed more to their political side . . .”85 The central goal of influencing silos within a constantly divided society is thus often perceived as winning a zero-sum game. Whenever you adopt a particular position about an issue of political or social importance (e.g., gun registration, prayer in schools, abortion, same-sex marriage, marijuana legalization, and political elections), it is assumed that some part of the society will agree with you and that others will have strong opposition to your stance. Thus, your general strategy will involve finding ways to enlarge the segment of society that is already inclined to agree with you, at the expense of the opposing segment of society. In other words, it would be foolish to even try and convince the entire population of our nation to support (or acquiesce to) your position. As a result, the success you achieve in trying to influence the society will come at the expense of some other members of that society who do not want you to succeed. To further complicate matters, there are virtually no mass media outlets in the United States today that we can truthfully assess as being entirely objective. Rather, each of them has an agenda, a set of values and perspectives that they deliberately seek to convey in their selection and characterization of the day’s “news.” Even if they often select similar news stories to cover, no conscientious observer today truthfully could claim that Fox News does not have a significantly different agenda from that of the Washington Post or CNN. Other well-known media outlets embrace their agenda more overtly: readers with left-leaning political views will choose Huffington Post or Slate, while readers who lean toward the right will choose Breitbart or The National Review. The number of information providers has proliferated over the past half-century, and as a result, there is a “news” source for individuals whose preferences lie anywhere along the political spectrum. Liberal democracies appear far more susceptible to foreign-sponsored digital influence campaigns than their closed, autocratic counterparts, which in turn are more vulnerable to domestic influence campaigns. In
the latter, the government often owns and operates the principal media outlets and controls access to the Internet (and, for some, content as well). Those countries can thus exert information dominance over the messages and narratives available to their citizens to a much greater degree. Meanwhile, in a liberal democracy like the United States, the central importance placed on freedom of speech makes for a rich terrain in which the digital influencer can manipulate perceptions in many ways, using an increasingly vast spectrum of tools and social media platforms. With no restrictions on speech, influencers can disinform at will, aided by Facebook’s decision to allow completely false political advertisements on their platform. Right-wing media outlets like the Daily Caller can distribute its messages via multiple channels simultaneously, creating the illusion (through automated fake accounts) of a groundswell of grassroots support (e.g., astroturfing, described in chapter 3) for what they are trying to convince their readers to believe. Essentially, the digital ecosystem makes influence warfare much easier in all facets and in some ways more efficient and effective. Gathering a robust collection of data on your target of influence has never been easier. The tools of digital influence are easily acquired and learned. You have relatively easy access to the target, against whom you can use those tools to provoke and deceive (as we’ll examine in chapter 6). And you can also employ various sophisticated methods to assess the impact of your influence effort in real time, making adjustments and refinements along the way to ensure maximum effect. In December 2016, a TED Talk was posted online by Wael Ghonim, the Egyptian Google employee whose anonymous Facebook page helped to launch in early 2011 the Tahrir Square revolution that toppled President Hosni Mubarak. Following what should have been a liberating moment in Egypt’s history, he noted, social media began to amplify political polarization in his country “by facilitating the spread of misinformation, rumors, echo chambers and hate speech.” Now, he notes, “Rumors that confirm people’s biases are now believed and spread among millions of people. We tend to only communicate with people that we agree with, and thanks to social media, we can mute, un-follow and block everybody else. [And] online discussions quickly descend into angry mobs. . . . It’s as if we forget that the people behind screens are actually real people and not just avatars.”86 In the United States, the rise of influence silos—from the earlier media-based forms of the 1990s to the modern Internet and social media platform-based forms—has contributed significantly to our own kinds of polarization. There seems to be far more shouting and pointing fingers at “others” than ever before. As Larry Sanger notes, “Perhaps it is no coincidence that the rise of the Internet correlates with the rise in the late 1990s and 2000s of a particularly bitter partisan hostility that has, if
anything, gotten worse and made reaching political compromise increasingly unpopular and difficult. This threatens the health of the republic, considering that compromise has been the lifeblood of politics.”87 Meanwhile, the more our society is attacked by the spread of disinformation (from both foreign and domestic influence aggressors), the more polarized we become. At the same time, polarized societies offer greater opportunities for influence aggressors to exacerbate differences among us, drive deeper wedges between us, while sowing discord and confusion about what can or cannot be believed. The vicious cycle that results—polarization increases vulnerability to disinformation, which then increases polarization—is likely to get worse, particularly if our political leaders continue providing the fuel that energizes this cycle. And as we saw earlier in this book, Russian digital influence efforts contribute to this polarization and capitalize on it. A primary goal of their influence campaigns has been to exacerbate differences in a society and turn members of the society against each other with greater emotional ferocity. The more disunited we become, the less likely we are to agree on how Russia’s annexation of Crimea should be viewed, much less how we should respond.88 This and other aspects of Russia’s digital influence warfare strategies and tactics are explored much further in chapter 2. But for now, let’s conclude with the observation that since democratic societies have divided themselves into influence silos (and digital influence silos), the strategy described earlier—of infiltrating the silo and manipulating perceptions effectively—can be followed by foreign and domestic influence aggressors much the same way. Influence silos (including digital versions) compel an individual’s political beliefs to take precedence over factual truth. The most dangerous impact of the influence silo is that they can make you more ignorant and arrogant at the same time. As Lee McIntyre notes: “Is it therefore any surprise that to the extent we are emotionally attached to our political beliefs—and in fact may even see them as part of our identity—we will be reluctant to admit that we were wrong and may even be willing to put our own ‘gut instinct’ up against the facts of experts?”89 This, in turn, makes us vulnerable to the many kinds of digital influence efforts described throughout this book, and all signs indicate that in the future we will see increasingly sophisticated uses of artificial intelligence for target identification, deception, emotional provocation, and disinformation.

CHAPTER 6

Information Dominance and Attention Dominance

In this chapter, we expand the concept of the influence silo in two different directions depending on the type of regime found within a particular country. In authoritarian countries, as described in the first part of the chapter, we find governments seeking (and sometimes successfully achieving) a form of information dominance, in which all the information available to the country’s citizens is highly controlled. In one sense, they are able to establish and maintain a nationwide influence silo, within which individuals only see and hear information that has been preselected for their consumption. This kind of influence power, however, is not available in truly democratic countries, where freedom of speech and expression is protected and where citizens can access a broad range of information sources. So instead, anyone wishing to achieve a similar predominance of influence can pursue an approach that I call “attention dominance,” something that is made increasingly possible through the algorithms of social media platforms, search engines, and website trackers. We’ll discuss examples of this in the second part of the chapter. Conceptually, the idea of information dominance (or its sibling, attention dominance) is very attractive to many individuals seeking power; not just authoritarian governments but also corporate CEOs, military leaders, and others would certainly like to have much greater control over what members of their organization hear, see, and do. I know teachers who would much prefer their students not have access to information that questions what they are being taught and parents who would rather have their children not know that other families have fewer rules to follow. I’m sure if you think about it for a moment, you can identify some attractive benefits to the notion of having information dominance over others.

Regarding the subject of this book, the ability to dominate the information ecosystem (or dominate the attention of its inhabitants) can create the most pervasive context within which an influencer can effectively manipulate the beliefs and perceptions of the target. Authoritarian regimes have a unique advantage here, because the political system and institutions provide authorities with the power to decide what information is (or is not) relevant for their citizens. A premium is placed on compelling and enforcing conformity within such a system, deciding for its citizens what is deemed acceptable or unacceptable behavior. Certainty and uncertainty can be manipulated fairly easily once this level of information dominance has been established. Authorities in these regimes can even prescribe what are valid sources of "social proof."

In democracies, however, there are also ways to control the information that is heard, seen, and believed by many. As we'll see below, complete information dominance in a country like the United States is not really possible, but attention dominance over a portion of the population is increasingly possible in this Information Age. If an influencer can find a way to dominate one or more influence silos, as described in chapter 5, they can control the narrative, beliefs, and behaviors for those within those silos. Attention dominance can allow the influencer to define for the influence silo's members what is deemed acceptable or unacceptable behavior and what are valid sources of social proof. Both information dominance and attention dominance provide the influencer with the means to shape the perceptions of the target using the tactics and tools of digital influence warfare.

INFORMATION DOMINANCE IN AUTHORITARIAN COUNTRIES

Throughout the past century, several authoritarian states have established the means to ensure that their government leaders (the influencers) are the primary sources of information received by their population (the targets). This is what can be termed "information dominance." Authoritarian regimes want information dominance for an obvious reason: to ensure control and conformity of those whom they seek to rule. The leaders want their citizens to realize that it is in their best interests to believe whatever they are being told by the authorities. How do you make someone want to believe what you're telling them? As we discussed in previous chapters, uncertainty and fear are very powerful motivators of behavior. So, because people generally want to avoid uncertainty, they will want to believe the government is telling them the truth, because to deny that would create uncertainty for their national in-group identity. Further, they often have no real choice in the matter—they will believe what they are being told or go to prison.

Authoritarian regimes can achieve information dominance through various means, including restricting and constraining access to information or overloading the information ecosystem with so much disinformation it overwhelms the information processing abilities of its citizens. Another approach is to own and control the mass media—newspaper, radio, and television—and ensure that independent, potentially critical media coverage is disallowed. The ability to control the media and the Internet within these countries results in massive amounts of power to influence the reality that people believe in. Information dominance provides these regimes with the ability to discredit and diminish (or even destroy) opposing sources of information. Citizens are allowed to access only approved sources. When you can own and control the media and access to the Internet, you can convince the population of many different kinds of lies and disinformation about virtually anything. In pursuit of information dominance, authoritarian regimes routinely arrest and imprison or force into exile journalists, activists, politicians, and others who voice critical opinions or inconvenient questions about the regime. Typically, independent media and journalists are treated much the same as political opponents—threats to the regime that must be silenced or eliminated. For four years (2015–2018), Turkey led the world in arresting and imprisoning journalists, essentially for the crime of saying or printing something the Turkish government did not like or investigating something the government did not want them to. (China took the top spot in 2019, as we’ll describe later in this chapter.) As a report by Amnesty International observed, tens of thousands of people have been “locked up by a judiciary that lacks the most basic independence and incarcerates real or perceived critics of the government without evidence of any actions that can reasonably constitute offences.”1 Anyone—journalists, social media users, and protesters—who dares to deviate from the government’s official line (including forms of online speech) can find themselves in trouble. Even a satirical Instagram caption, penned by a former Miss Turkey, was enough to net her a 14-month jail sentence.2 Toward the end of 2019, at the start of a Turkish military offensive in Northern Syria, Amnesty International described how “language around the military incursion was heavily policed, as the government used the cover of the military operation to launch a domestic campaign to quash dissenting opinions from media, social media and the streets. Critical discussion on issues of Kurdish rights and politics has become even further off limits, with hundreds of people detained merely for commenting or reporting on the offensive. They are facing absurd criminal charges, often under anti-terrorism laws, and if prosecuted and found guilty, they could face lengthy prison sentences.”3 In one instance, Hakan Demir, a journalist with the daily newspaper Birgün, tweeted: “Turkish warplanes have started to carry out airstrikes on civilian areas.” His tweet was based on

a report by NBC. In the early hours of the next morning, police raided his house, and he was taken away for questioning for "inciting enmity or hatred." He was later released with overseas travel bans pending the outcome of criminal investigations.4 In other countries like Saudi Arabia and Egypt, there is also no real "freedom of the press." These regimes routinely impose sanctions (like house arrest) against dissenting voices, inflict physical harm on them and/or their loved ones, and in the most extreme cases even eliminate them altogether. The brutal murder in 2018 of Saudi journalist Jamal Khashoggi captured international attention,5 but it is clearly not the only instance. In Russia, some journalists have lost their jobs simply for asking a question the regime did not like. Others have disappeared, and some have been brutally murdered—like Anna Politkovskaya and Natalia Estemirova.6 And, as NPR's Scott Simon reported in 2018, "a surprising number have implausibly fallen or slipped to their deaths."7 High-profile examples include Maxim Borodin, who died from a fall from a 5th-floor balcony; Olga Kotovskaya, who fell to her death from a 14th-floor window; Ivan Safronov, who died after falling from a 5th-floor window; and Victor Aphansenko, who died after slipping in his home. And then there's the interesting case of Mikhail Lesin, who was found dead after a fall in his hotel room in Washington, DC. The FBI says he fell as a result of extreme drinking and had "blunt force trauma to the head" and injuries to his neck, arms, legs, and torso.8 While all of these journalists were killed in recent years, this is not a new phenomenon in Russia: according to records of the Committee to Protect Journalists, a New York-based organization that tracks violations against free journalism around the world, 42 journalists were killed and 3 disappeared under Vladimir Putin's predecessor, Boris Yeltsin.9 Most of the world is well aware of the high-profile Russian political figures who have been killed, like Boris Nemtsov, Boris Berezovsky, Alexander Litvinenko, and Sergei Yushenkov.10 But as Simon noted, "journalism is a dangerous trade in Russia." Dangerous indeed. In some countries, however, controlling the media by direct oversight, money, or the threat of mortal peril may not be entirely feasible. But here as well, the authoritarian leader can still find ways to compel journalists to fall in line, including the use of market-oriented and access-oriented incentives. For example, authoritarians like Recep Tayyip Erdoğan in Turkey, Viktor Orban in Hungary, and Rodrigo Duterte in the Philippines have found that the tycoons who own the major media outlets care far more about profit and market share than anything else. So, a simple bargain is struck: if you want access for your journalists and if you want government contracts and advertising, you will be loyal to the regime. Thus, we find the same kind of information dominance results, just achieved through different methods.

Authoritarian regimes can also prevent outside sources of information from penetrating their influence domain. Newspapers, radio broadcasts, and television channels from other countries—in which the regime cannot control the narrative—can simply be declared illegal and disallowed. Naturally, the creation and rapid global expansion of the Internet threatened to undermine the ability of these authoritarian regimes to block unwanted information. But many countries—including China, Russia, Turkey, Iran, and North Korea—have all devised highly effective means of creating filters and barriers that restrict online sources of information. This is what these kinds of regimes must do—control access to information, in their pursuit of information dominance. An openly accessible Internet, especially social media, poses a direct threat to their ability to control what influences their citizens. In instances where the authoritarian government controls the Internet, at least to some degree, they can censor online information sources at will. Examples include Iran and China, where not only is any anti-government sentiment prohibited but also anything that is deemed “immoral” or threatens to undermine the country’s policies is prohibited. In Iran, enforcers of its “clean” Internet routinely arrest human rights activists on charges that they pose threats to “public morality and chastity.”11 As we’ll discuss later in this chapter, China is particularly adept at controlling the information their citizens are allowed to see and hear, including filtering and prohibiting search engine results for particular terms. Similarly, the government in Bahrain has sought to filter unwanted search results as well as prevent dissidents from finding each other or debating politically dangerous topics online.12 Even semi-authoritarian regimes may impose constraints on a citizen’s freedom of online speech. In 2017, Pakistan became the first nation to sentence someone to death for online speech after a member of the Shia minority got into a Facebook argument with a government official posing as someone else.13 In many countries, the government can also control when the Internet gets shut off and what goes on it. Some countries employ a “throttling” strategy, which slows down connections but doesn’t completely shut down Internet access. As Singer and Brooking note in their well-received book LikeWar, “Designed as an open system and built on trust, the web remains vulnerable to governments that play by different rules. In less-than-free nations around the world, Internet blackouts are standard practice. All told, sixty-one countries so far have created mechanisms that allow for national-level Internet cutoffs.”14 In Kashmir, India’s shutdown of access to the Internet is the longest ever imposed in a democracy. It began on August 5, 2019, and by mid-December, the province had been without Internet access for 134 days.15 In Bahrain, authorities have cut Internet access to specific villages, individuals, and IP addresses.16 Within the past few years, the Russian government has turned off mobile Internet services

during local protests in Moscow and Ingushetia,17 and as described later in this chapter, a law passed in 2019 provides a new legal foundation allowing Russia’s government to cut off all access to the Internet if and when desired.18 Saudi Arabia’s government also uses its control of the country’s Internet for propaganda and disinformation, as well as to intimidate others. According to a Soufan Center report, “The Saudis command a legion of virtual bots and trolls that stalk Riyadh’s detractors online in an attempt to stifle dissent and silence all criticisms.”19 The harshest punishments are reserved for those who challenge the monarchy and the competence of the government. A man who mocked the king was sentenced to eight years in prison.20 And a well-publicized case in late 2019 found that the government of Saudi Arabia had cultivated ties with Twitter employees and persuaded them to spy on various user accounts and report back to Saudi leadership.21 Similar kinds of digital influence efforts are orchestrated even in a poor country like Sudan, as a recent Soufan Center analysis describes, where an online disinformation campaign was launched against pro-democracy protesters: “Self-styled cyber warriors sought to convey that the protesters represented a threat to the stability of Sudan and that only the military could protect the country and keep it from sliding into anarchy.”22 During the pre-Internet era, authoritarian regimes could effectively censor all information sources much more easily, while it is more difficult these days to achieve information dominance over a population solely through restriction and oppression. Perhaps the only exception to this is found in North Korea, where vast resources are devoted to ensuring information scarcity and controlling narratives. For example, people in this isolated country are told that the United States (and the West in general) poses an existential threat to the country’s survival. This narrative is used by the regime to justify the enormous sacrifices made to support the development and expansion of their nuclear arsenal. Those few who have escaped the totalitarian regime have expressed shock and bewilderment when they discovered that the outside world really does not have any intention to attack North Korea at all. In truth, the level of information dominance within that country is so extreme here that some observers consider the entire North Korean population to be essentially “brainwashed.” But in contrast, many authoritarian countries—like China and Russia— cannot enforce the same level of isolation or impose the same level of information scarcity because the forces of economic globalization require some level of online connectivity with other countries. So instead of completely restricting information, there is an alternative they can pursue called “censorship by noise.” They can distort their information ecosystem with distractions and disinformation, making it increasingly difficult for their citizens to discern fact from fiction. According to Columbia

Law School professor Tim Wu, we are now seeing new "information abundance" forms of speech control, through which authoritarians are "unleashing 'troll armies' to abuse the press and other critics" and using so-called "flooding" tactics "that distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots."23 As journalist Peter Pomerantsev explains, these techniques employ "information . . . in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze."24 A recent Soufan Center report on "The Social Media Weapons of Authoritarian States" describes how the intent of these influence operations is to make things so confusing that people "suffer from disinformation fatigue, unable to discern fact from fiction."25 The power to convince people to accept lies derives from raising uncertainty about what they think they know. We see the same strategy being pursued today in democratic countries as well, which we'll examine later in this chapter. The strategy of undermining faith in an objective truth is actually a core source of power for the authoritarian or totalitarian regime. Hannah Arendt observed that "the ideal subject of totalitarian rule is not the convinced Nazi or the convinced communist but people for whom the distinction between fact and fiction . . . true and false . . . no longer exists."26 As Timothy Snyder wrote in his book On Tyranny, "To abandon facts is to abandon freedom. If nothing is true, then no one can criticize power because there is no basis to do so."27 And Masha Gessen observed that lies are told by the authoritarian leader in order "to assert power over truth itself."28 So, beyond the ability to arrest (or eliminate) troublesome journalists and shut down various means of communication, authoritarian governments have also now developed and deployed various means to digitally influence their own citizens, and some of them are also deploying the same tactics and tools against foreign audiences (see chapter 2). As Singer and Brooking note, "Authoritarian leaders have long since attuned themselves to the potential of social media, both as a threat to their rule and as a new vector for attacking their foes."29 Enforcers of these regimes are tasked with finding online sanctuaries where dissidents might congregate and support each other's views. Their objective is to track down and monitor those individuals, and they also spread disinformation in order to provoke controversy, turn people against each other, and raise uncertainty about who can be trusted and who cannot. As Michiko Kakutani observes, "Troll factories and bot armies are used by political parties and governments of countries like Russia, Turkey and Iran to spread propaganda, harass dissenters, flood social networks with misinformation, and create the illusion of popularity or momentum through likes, retweets, or shares."30 In sum, authoritarian regimes have developed highly effective ways to impose and consistently maintain forms of information dominance.
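To make the logic of "flooding" more concrete, the short Python sketch below simulates what happens to a recency-ranked feed when coordinated accounts simply outproduce organic voices. It is a toy illustration only: the feed size, post counts, and multipliers are invented assumptions, and no real platform ranks content this simply. What it demonstrates is the point Wu and Pomerantsev make: nothing has to be deleted, because disfavored speech is displaced rather than removed.

```python
"""
Toy illustration of "censorship by noise" / flooding.
All numbers are invented assumptions; this is not a model of any real platform.
"""
import random

random.seed(42)  # reproducible toy run

FEED_SIZE = 50                      # posts a user actually scrolls through
ORGANIC_POSTS = 200                 # genuine posts on a sensitive topic in some window
TROLL_MULTIPLIERS = [0, 1, 5, 20]   # coordinated "flood" posts per organic post


def simulate(troll_multiplier: int) -> float:
    """Return the share of a recency-ranked feed that is organic content."""
    # Each post is a (timestamp, label) pair; timestamps are uniform in the window.
    organic = [(random.random(), "organic") for _ in range(ORGANIC_POSTS)]
    flood = [(random.random(), "flood")
             for _ in range(ORGANIC_POSTS * troll_multiplier)]

    # Rank everything by recency (newest first) and keep only what fits on screen.
    feed = sorted(organic + flood, reverse=True)[:FEED_SIZE]
    return sum(1 for _, label in feed if label == "organic") / FEED_SIZE


for m in TROLL_MULTIPLIERS:
    share = simulate(m)
    print(f"{m:>3} flood posts per organic post -> "
          f"{share:.0%} of the visible feed is organic")
```

Even in this crude setup, a twenty-to-one flood leaves only a handful of organic posts visible on screen, which is why drowning out speech can be as effective as deleting it.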

In some countries, like China and Russia, authoritarian leaders ensure their preferred narrative dominates public perceptions by subjugating the media, censoring and silencing independent voices, and eliminating political opposition. Here, information dominance is established by force and fiat, resulting in a nationally orchestrated filter bubble in which the government decides what kinds of information its citizens are allowed to see or not see. The most expansive and well-resourced example of this kind of information dominance is found in China.

China

As described in chapter 2, China's Three Warfares Doctrine emphasizes the need to be overwhelmingly effective in (1) public opinion (media) warfare (yulun zhan), (2) psychological warfare (xinli zhan), and (3) legal warfare (falu zhan).31 This doctrine guides many of their current efforts to generate support for Chinese government policies both at home and abroad. On the domestic front, the government has a multidimensional approach to information control, penetrating every channel of mass communication in China and policing content from print publishing and broadcast media to cyberspace, the arts, and education.32 Their public opinion warfare efforts incorporate the use of television programs, newspaper articles (particularly in China Daily and the Global Times), books, films, and the Internet, with two million official "public opinion analysts" monitoring and censoring social media networks and blogs (including Sina Weibo, China's equivalent of Twitter).33 The government also produces an endless stream of pro-regime propaganda to ensure conformity to the Communist Party policies and objectives.34 Local offices coordinate with regional and national counterparts to create and disseminate a common narrative of events across the country.35 According to a July 2020 report by the Stanford Internet Observatory, "Budgets allocated to propaganda organs at every level of government" under the current Chinese regime facilitate "the most sophisticated infrastructure of media surveillance and censorship on the planet."36 China also routinely imprisons journalists and forces newspaper editors to quit over articles that were written and/or printed and that it deems critical of the government.37 Since the mid-2000s, the government has been replacing editors and publishers at many of the more popular media outlets in order to reassert control over domestic information, and in 2018, the Communist Party tightened its control of the media by shifting direct oversight of print publications, film, press, and key broadcast properties to a central committee.38 Media companies are told to pledge loyalty to the Chinese Communist Party, and Chinese Internet firms are told to maintain "ideological security."39 In 2019, China occupied the top spot among countries locking up the most journalists, imprisoning 48 (compared to

47 in Turkey).40 China also imposes a variety of other censorship measures to control the information allowed within its media ecosystem. For example, in December 2019, under pressure from the Chinese government, the financial information provider that distributes Reuters news to investors blocked over 200 stories that could paint Beijing in a negative light. The censorship began earlier that year after the company feared its China operation would be suspended.41 In another example, a 2018 investigation published by The New Yorker recounts how China had forced over 500 academic journals to blot out a handful of selected words—including “Tiananmen,” “Dalai Lama,” and “Tibet”—from their articles.42 The government also actively blocks foreign television and radio broadcasts such as the British Broadcasting Corporation, Radio Free Asia, and the Voice of America as well as websites that it finds unsuitable. China now controls printed and broadcast media through the use of the General Administration of Press and Publication (GAPP). The GAPP holds a monopoly over what is allowed to be published or broadcast in the country, which enables the government to prevent material that it deems a threat to China’s interests. The GAPP also controls access to the Internet, and all websites that wish to be viewed by Chinese nationals must be licensed or registered with the Ministry of Information. Websites that are licensed must adhere to Chinese regulations or risk being shut down.43 In fact, as Singer and Brooking describe, China’s government has always sought to ensure their control over how their citizens interact with the Internet. In 1993, officials banned all international connections that did not pass through a handful of state-run telecommunications companies. The Ministry of Public Security was tasked with blocking the transmission of all “subversive” or “obscene” information, working hand in hand with network administrators. Although Chinese Internet users were allowed to build their own websites and communicate with others inside China, only a few closely monitored cable lines connected them to the wider world, resulting in the oft-cited term “the Great Firewall.”44 While Chinese authorities communicate to an international audience through an array of state-owned media outlets as well as on popular global social networks (such as Facebook, YouTube, and Twitter), Chinese citizens communicate and interact almost exclusively through state-approved online communities (such as Weibo and WeChat).45 In essence, the technical infrastructure built by Chinese authorities was carefully designed to maximize control and to block information sources that they did not approve. Because all Internet traffic in China (email, social media, websites, videos, etc.) passes through servers and hubs owned and controlled by the Chinese government, this allows them to prohibit terms, images, and anything else they feel would undermine their nation’s “harmonization” policy. Through an array of monitoring and filtering mechanisms, in support of their “cleanse the web” policy, the government can essentially

manipulate the perceptions of their citizens about their own country’s history. Billions of old Internet postings have been wiped from existence, targeting anything from the past that fails to conform to the regime’s “harmonious” history. Momentous events like the 1989 Tiananmen Square protests have been erased through the elimination of nearly 300 “dangerous” words and phrases. Baidu Baike, China’s equivalent of Wikipedia, turns up only two responses to a search on “1989”: “the number between 1988 and 1990” and “the name of a computer virus.”46 These efforts produce a uniquely powerful form of information dominance. While sitting at your computer in Europe or North America, using an Internet search engine to find information about “Tiananmen Square” will result in millions of web page results, but searching for the term within China will produce virtually no results. Basically, the Chinese government has produced a national Internet that is separate and distinct from the online spaces accessible to most of the rest of the world.47 This allows them to exert control over social media and Internet platforms in multiple ways, such as controlling access to information, censoring terms, and attacking dissidents with disinformation and other means. On China’s popular WeChat platform, messages disappear if they contain banned words. References to “Winnie the Pooh” have been scrubbed from web pages and blocked in social media posts, because of some Chinese users’ comparisons of the cartoon character to the physical appearance of their country’s leader.48 The motivation behind these government efforts is largely ideological: control of social media is an essential part of China’s “cyber sovereignty” model, a vision that rejects the universalism of the Internet in favor of the idea that each country has the right to shape and control the Internet within its own borders.49 The result is an all-encompassing structure intertwining Chinese regime policy goals with citizens’ rights and freedoms (or lack thereof). Of course, maintaining information dominance of this kind requires extensive and constant monitoring. In 1998, China formally launched its Golden Shield Project, transforming the country’s Internet connections into the largest surveillance network in history—a database with records of every citizen, an army of censors and Internet police, and automated systems to track and control every piece of information transmitted over the web. The most prominent part of this project is the ability to filter and block keywords. Web searches won’t find prohibited results; messages with banned words will simply fail to reach the recipient. As the list of banned terms updates in real time, events that happen on the rest of the worldwide web simply never occur inside China.50 China’s approach to censorship also seeks to suppress any messages that receive too much grassroots support, even if they’re apolitical. For example, what seemed like positive news of an environmental activist who built a mass movement to ban plastic bags was harshly censored,

even though the activist started out with support from local government officials. In a truly “harmonious society,” only the central government in Beijing should have the power to inspire and mobilize on such a scale.51 Additionally, China’s top court ruled in 2013 that individuals could be charged with defamation (and risk a three-year prison sentence) if they spread “online rumors” seen by 5,000 Internet users or shared more than 500 times.52 In order to further enforce its “harmonization” policies, the Chinese government also supports armies of bureaucrats and college students in publishing positive stories about the government. As a leaked government memo explained, the purpose of this effort is to “promote unity and stability through positive publicity.” According to an estimate by King et al., in 2017 the government employed two million people to write 448 million social media posts a year. Their primary purpose is to keep online discussion away from sensitive political topics.53 Chinese authorities also routinely employ what a recent Soufan Center analysis calls “social media disinformation campaigns”54 as a means of influencing its citizens. As the Stanford Internet Observatory report notes, the government uses “paid commenters and fake accounts to spread disinformation and preferred messages in an unattributed fashion.”55 They have also deployed an array of trolls and bots to confront any online discussion about controversial topics, including Tibet. These attacks have included direct harassment as well as the equivalent of denial-of-service attacks used to flood certain discussions.56 As detailed in a study by three researchers at Harvard University, Chinese troll farms may allow some criticism, but immediately censor any hints at protest: “The Chinese people are individually free but collectively in chains,” they concluded.57 With this background in mind, the Chinese government’s response to recent events in Hong Kong should come as no surprise. The large-scale Hong Kong protests in early June 2019 initially led the Beijing government to censor any reference to them, with the word “Hong Kong” becoming a censored search term on June 9. The government then followed with an official media campaign to portray the demonstrations as violent with “shades of terrorism.”58 Meanwhile, pro-government hashtags including #SupportTheHongKongPolice and #ProtectHongKong were aggressively rolled out by government media on the Chinese social media platform Sina Weibo.59 Some China-supported efforts went even further, including “doxxing”—releasing personal details (names, home addresses, personal telephone numbers) of hundreds of people, which are posted alongside details of their “misdeeds”—as a way of harassing and intimidating prodemocracy supporters in Hong Kong. Much of this activity was linked to a website HK Leaks, whose front page shows a photo of black-clad protesters with a Chinese-language banner saying: “We want to know who these people are and why they are messing up Hong Kong!”60
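Before returning to how these struggles played out on Western platforms, it is worth pausing on the basic mechanics of the keyword blocking described earlier in this section. The Python sketch below is not China's actual implementation; real systems operate at network and platform scale with far more sophisticated matching. It is a minimal illustration of the general mechanism, assuming a centrally maintained blocklist that can be updated in real time, and the terms and function names here are hypothetical.

```python
"""
Minimal sketch of blocklist-style keyword filtering, assuming a central
authority that pushes real-time updates to a list of banned terms.
Illustrative only; the terms and function names are hypothetical.
"""

# A central authority maintains and updates the blocklist.
banned_terms = {"tiananmen", "banned-topic", "banned-slogan"}


def update_blocklist(new_terms):
    """Simulate a real-time update pushed out by the central authority."""
    banned_terms.update(term.lower() for term in new_terms)


def deliver(message):
    """Deliver a message only if it contains no banned term.

    In the systems described above, a blocked message often simply never
    reaches the recipient, with no error shown to the sender.
    """
    text = message.lower()
    if any(term in text for term in banned_terms):
        return False  # silently dropped
    return True


print(deliver("Meet me at the square tomorrow"))  # True: delivered
print(deliver("Remember Tiananmen"))              # False: dropped
update_blocklist(["square"])                      # blocklist updated in real time
print(deliver("Meet me at the square tomorrow"))  # False: now dropped
```

From the user's perspective, the defining feature is silence: a blocked message simply never arrives, and neither sender nor recipient is told why.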

In August 2019, Twitter and Facebook revealed a Chinese governmentbased influence effort to delegitimize the pro-democracy movement in Hong Kong. Twitter said it had taken down 936 accounts that were “deliberately and specifically attempting to sow political discord in Hong Kong.” Facebook said it had found a similar Chinese government-backed operation and deleted fake accounts.61 During the same month, Google shut down 210 channels on YouTube it said were part of a “coordinated” attempt to post material about the ongoing protests in Hong Kong. The announcement noted that the perpetrators of this attempt had tried to “disguise the origin of these accounts” and engaged in “other activity commonly associated with coordinated influence operations,” and it concluded that “this discovery was consistent with recent observations and actions related to China announced by Facebook and Twitter.”62 Online harassment and intimidation of political dissidents is all too common in many authoritarian regimes, so China’s response to the Hong Kong protests is not really that unusual. In contrast, however, the government has recently launched perhaps the most far-reaching and ambitious digital influence effort of any country to date: the new “social credit” system. Unveiled in 2015, the vision document for the system explains how it will create an “upward, charitable, sincere and mutually helpful social atmosphere”63—one characterized by unwavering loyalty to the state. To accomplish this goal, as Jacob Silverman notes, all Chinese citizens will receive a numerical score reflecting their “trustworthiness . . . in all facets of life, from business deals to social behavior.”64 Blocking the sidewalk, jaywalking, fare evasion, and even loitering can lower your social credit score. Punishments for maintaining a low social credit score include being banned from taking trains, having your Internet speed cut, and being publicly shamed.65 The main premise behind the social credit system—which is meant to be fully implemented in 2020—is the perceived need for more “civilized behaviors.”66 According to China, the ranking system seeks to reinforce the notion that “keeping trust is glorious and breaking trust is disgraceful.”67 Much like a traditional financial credit score, each citizen’s “social credit” is calculated by compiling vast quantities of personal information and computing a single “trustworthiness” score, which measures, essentially, someone’s usefulness to society. This is possible thanks to Chinese citizens’ near-universal reliance on mobile services like WeChat, in which social networking, chatting, consumer reviews, money transfers, and everyday tasks such as ordering a taxi or food delivery are all handled by one application. In the process, users reveal a staggering amount about themselves—their conversations, friends, reading lists, travel, spending habits, and so forth.68 As Singer and Brooking explain, these bits of data can form the basis of sweeping moral judgments. Buying too many video games, a program

director explained, might suggest idleness and lower a person’s score. On the other hand, regularly buying diapers might suggest recent parenthood, a strong indication of social value.69 And, of course, one’s political proclivities also play a role. The more “positive” one’s online contributions to China’s cohesion, the better one’s score will be. By contrast, a person who voices dissent online “breaks social trust,” thus lowering their score.70 In an Orwellian twist, the system’s planning document also explains that the “new system will reward those who report acts of breach of trust.”71 That is, if you report others for bad behavior, your score goes up. Your score also depends on the scores of your friends and family. If they aren’t positive enough, you get penalized for their negativity, thus motivating everyone to shape the behavior of the members of their social network.72 As of December 2019, there are an estimated 854 million Internet users in China, the largest of any country by a significant margin.73 As you might anticipate, not all of them are generally enthusiastic about this new social credit system—especially if their relationship to the state is already punctuated by suspicion and mistrust. For example, residents of the Xinjiang region (particularly its Muslim minority population) have been forced to install the Jingwant (web-cleansing) application on their smartphones. As Singer and Brooking explain, “The app not only allows their messages to be tracked or blocked, but it also comes with a remote-control feature, allowing authorities direct access to residents’ phones and home networks. To ensure that people were installing these ‘electronic handcuffs,’ the police have set up roving checkpoints in the streets to inspect people’s phones for the app.”74 Finally, in addition to all these efforts to influence the behavior of its citizens online and offline, China is now instituting new controls before an individual even has access to an Internet device. In September 2019, government authorities announced that residents applying for a new mobile or Internet device will have their faces scanned by telecommunications carriers. This is the first country to mandate using facial identification in order to sign up for Internet services. According to the announcement by China’s Ministry of Industry and Information Technology (MIIT)—the state agency responsible for Internet and technology regulation—the decision was part of its moves to “safeguard the legitimate rights and interests of citizens in the cyberspace” and prevent fraud.75 To sum up, under the leadership of President Xi Jinping, China has had significant success in subordinating the Internet to the will of the state. The Chinese regime has expanded its grip over Internet use by continually developing and fine-tuning their ability to monitor and censor their citizens on social media; to access their private information; and to influence their decisions, speech, and behavior. China has also enacted a raft of new laws and regulations enlarging the legal framework for its control of the Internet, while centralizing power over social media in the hands of

high-level decision-makers.76 Following closely in China's footsteps is its neighbor to the north, Russia, which is also pursuing an "information sovereignty" approach to dominating the digital information ecosystem of its citizenry. And just like China, this is—as Peter Pomerantsev observes—"essentially a cover for censorship."77 But while restricting and controlling information is prevalent here as well, Russia has also aggressively embraced the use of social media to achieve the disinformation "censorship by noise" strategy described earlier in this chapter.

Russia

With a much smaller population (and economy) than China, Russia is a comparatively weak autocracy, but one with a huge chip on its shoulder for having lost its former "superpower" status. Similar to China, Russia's government invests heavily in the ability to maintain perceived legitimacy among its people by co-opting media services, eliminating troublesome journalists (as described earlier), and controlling Internet access. As discussed in chapter 2, Russian information dominance was formally recognized as a strategic goal in one of Vladimir Putin's first major policy documents as President of Russia, the Information Security Doctrine of 2000.78 At the time, the government was already reassembling a Soviet-like control of print and TV media outlets, yet Russian citizens freely used the Internet to share their thoughts and concerns with each other and the world.79 But in this new doctrine, the government describes the "information sphere" as an arena of conflict in which Russia is facing both external and internal threats, which explains in part why Russian media today (in print, television, and online) spends a good deal of time trying to convince its audience that the world outside Russia is dangerous.80 While this Information Security Doctrine was being drafted, independent media outlets were being forced by the government to shut down or transform into a distinctly pro-Russian platform for broadcasting disinformation and propaganda, monitored closely by Russia's media and censorship agency Roskomnadzor. Over the course of the next two decades, this strengthened the power of Russia's government to spread a wide variety of lies, disinformation, and conspiracy theories. Of course, any country that has a state-run news agency inherently has at its disposal the means of misinformation and disinformation. Rather than trying to convince independent journalists that the government's official story is credible, they can simply feed it to the journalists, editors, and broadcasters on the government payroll. Russia is not alone in doing this—in fact, they are joined by a significant number of other authoritarian states around the world—but they are particularly adept at it. For example, RT (formerly known as Russia Today) receives an annual budget from the government of several hundred million dollars. With this

sort of arrangement, Russian authorities have at their disposal what some have referred to as a “weapons system” of influence.81 RT broadcasts in English, Arabic, French, and Spanish. Its online content includes these four languages plus Russian and German. According to Singer and Brooking, it has more YouTube subscribers than any other broadcaster, including the BBC and Fox News. As Matt Armstrong explains, “RT is not about Russia as much as it is about everyone else. Their slogan ‘Question More’ is not about finding answers, but fomenting confusion, chaos, and distrust. They spin up their audience to chase myths, believe in fantasies, and listening to faux—or manufactured when convenient—‘experts’ until the audience simply tunes out or buys whatever RT serves up. Media illiteracy is the fertile soil on which RT thrives and that it in turn enriches.”82 Meanwhile, just like other authoritarian regimes, the Russian government has also developed many ways to coerce the online behavior of its citizens. Consider the case of VKontakte (VK), the most popular social network in Russia. The government asserted its control over VK by first extorting its founder Pavel Durov, threatening him with prison for a trumped-up crime, and eventually, he sold his shares to a friend of Vladimir Putin and fled the country.83 In 2011, tens of thousands of people took to the streets in anti-government protests after allegations of fraud in the recent parliamentary election. These protests were organized largely over Facebook and Twitter and spearheaded by leaders like the anticorruption crusader Alexei Navalny,84 who used LiveJournal blogs to mobilize support. The Russian government responded by launching a digital influence campaign of pro-Kremlin trolling. In his book This is Not Propaganda, Peter Pomerantsev describes how “social media mobs and cyber militias harassed, smeared and intimidated dissenting voices into silence or undermined trust in them,” and yet because the connections between the government and these trolling campaigns were unclear, Russian authorities “could always claim it had nothing to do with them, that the mobs were made up of private individuals exercising their freedom of expression.”85 Russia also responded by developing a massive system for monitoring social media activities called Prism that looks for indications of “protest sentiments.”86 Censorship and punishment of online expressions of political dissent have become common in Russia. For example, in 2016, a Russian woman who posted negative stories about the invasion of Ukraine was sentenced to 320 hours of hard labor for “discrediting the public order.”87 In November 2019, a new law was enacted in Russia giving the government even more control over the freedom of speech and information online. Human Rights Watch condemned the so-called “Sovereign Internet” law as “inconsistent with standards on freedom of expression and privacy protected by the International Covenant on Civil and Political Rights and the European Convention on Human Rights.”88 According to

their analysis, the law “obliges Internet service providers to install special equipment that can track, filter, and reroute Internet traffic.”89 Specifically, they are now required to install equipment that will conduct Deep Packet Inspection (DPI), an advanced method of network monitoring that can be used to block or surveil Internet traffic. If service providers fail to cooperate with the authorities’ demands, operators of Internet exchange points (key hubs for Internet traffic exchange) are required to disconnect them. “This equipment allows Russia’s telecommunications watchdog, Roskomnadzor, to independently and extrajudicially block access to content that the government deems a threat.”90 Russian authorities have a great deal of flexibility to decide which situations would require tracking, rerouting, or blocking. They can also base these decisions on the origin of the content or the type of application or platform through which it was conveyed. Blocking can range from a single message or post to an ongoing network shutdown, including cutting Russia off entirely from the global Internet or shutting down connectivity within Russia.91 But in addition to its focus on restricting and controlling information, Russia has also invested significantly in spreading an enormous amount of disinformation, targeting both domestic and foreign audiences. Of course, as described earlier in this chapter, many authoritarian regimes spread disinformation among their citizens. But Russia is uniquely aggressive in the scope and scale of its influence efforts. As Michiko Kakutani observed in her book The Death of Truth, Russia uses propaganda “to distract and exhaust its own people (and increasingly, citizens of foreign countries), to wear them down through such a profusion of lies that they cease to resist and retreat back into their private lives. A Rand Corporation report called this ‘the firehose of falsehood’—an unremitting, high-intensity stream of lies, partial truths, and complete fictions spewed forth with tireless aggression to obfuscate the truth and overwhelm and confuse anyone trying to pay attention.”92 That report, by Christopher Paul and Miriam Matthews, describes how “Russian propaganda makes no commitment to objective reality.” Manufactured sources of information and evidence are routinely used (such as fake photographs, fake on-scene news reporting, and even staged video footage with actors playing victims of manufactured atrocities or crimes). Overall, “Russian news channels, such as RT and Sputnik News, are more like a blend of infotainment and disinformation than factchecked journalism, though their formats intentionally take the appearance of proper news programs.”93 The extensive diversity of false narratives spread by Russia’s governmentcontrolled media outlets in recent years has been cataloged by several researchers and include such claims as “NATO is preparing for war with Russia,” “Fireworks are forbidden in Europe because of migrants,” “There is no freedom of speech in Estonia,” and “Poland started World War II.”94 In addition to sowing confusion about what is real or not, as Pomerantsev

notes, another frequent theme of Russia's false narratives is that "freedom does not lead to peace and prosperity, but to war and devastation (a message, first and foremost, meant for its own people so they didn't become overenthusiastic about the idea)."95 According to research by Anton Sobolev, these and other kinds of Russian state-sponsored influence activities targeting its domestic population are fairly successful in "diverting online discussions away from politically charged topics."96 In addition to dampening enthusiasm for democratic reforms, the main purposes of Russia's massive disinformation efforts are to distract and disorient its own people, overwhelming their capacity to process information and leading them to believe that nothing at all is true and anything is possible. Chapter 2 of this book describes the long history of Russia's commitment to using dezinformatziya against the populations of foreign countries. But as Kakutani explains, "The sheer volume of dezinformatziya unleashed by the Russian 'firehose of falsehoods' effort tends to overwhelm and numb people while simultaneously defining deviancy down and normalizing the unacceptable. Outrage gives way to outrage fatigue, which gives way to the sort of cynicism and weariness that empowers those disseminating the lies."97 And by some accounts, outrage fatigue has already settled in among the Russian population. A reporter went undercover in 2015 to try to expose the Internet Research Agency (IRA) in St. Petersburg, Russia. After working there for two and a half months, she revealed its operations to newspapers, which published her accounts as authored by "anonymous." As Pomerantsev explains, her hope had been that "by unmasking the workings of the IRA she would cause so much outrage that it would help stop its work, that she would shock people into seeing how they were being manipulated, shame those who work there into resigning. … But instead of an outcry, she found that many people, including fellow activists, just shrugged at the revelations. This horrified her even more. Not only did the lies churned out by the farm become reality, but the very existence of it was seen as normal."98 This is to be expected in an environment where information dominance has been established by a regime that has no regard for truth or inconvenient facts, but has regard only for power and control of its people.

Summary

Overall, the governments of China and Russia represent the two largest and most advanced examples of dominance over the digital information available to their citizens. Three main differences between China and Russia are evident regarding the ways and means of exerting digital influence over their citizens. First, China excels at this—they have more influence and control within their borders than Russia. While China's population of Internet users is several times larger than that of Russia, it's actually

quite impressive that China has still managed to successfully maintain constraints on media and the Internet in a more powerful and consistent manner than most other countries. Some might say China is the envy of other authoritarian regimes in this regard, particularly Russia. Putin clearly wants the same level of power to influence his citizens and is pushing his country in that direction, as reflected in the new "Sovereign Internet" law. The second primary difference is that China has not yet deployed its advanced, sophisticated digital influence machinery against the United States in an effort to sow confusion and distrust, amplify discord and "othering," and impact democratic elections as Russia has done (and continues to do today). While we are well aware that many countries are engaged in some form of digital influence efforts against the United States, none compares to that of Russia, as detailed in chapter 2. Numerous government reports have revealed the massive scale of those efforts. And regular "coordinated inauthentic behavior" investigation announcements by Facebook, Twitter, and other social media platforms in the United States—which have resulted in the suspensions and deletions of millions of pages and accounts—indicate these efforts are ongoing to this very day. Further, Russia's use of digital influence mercenaries99 is also unparalleled, and they will likely continue to find new and innovative ways to circumvent whatever regulations and other obstacles are meant to prevent or deter these activities. The third primary difference between domestic influence campaigns in China and Russia is that the latter is far more aggressive in its attempts to employ a "censorship by noise" strategy to disorient and discourage its own people. But with the relative success that Putin's regime has had with the "firehose of falsehoods" strategy, it is likely that others will follow the example they have set in pursuit of information dominance over their own citizens. And unfortunately, we are seeing the emergence of this strategy in both authoritarian regimes and democracies. In true democracies, where the freedom of speech and the press is legally protected, a politician or an elected leader faces considerable difficulty in replicating the kind of restrictive approach to information dominance found in the authoritarian regimes described in this chapter. The politician may instinctively want the same kind of power and control over their citizenry, but legal constraints impede them. This is where the Internet—and especially social media—has proven to be most valuable to such people. Because of the way the attention economy works, the means are now available to establish and maintain what I call "attention dominance," as we'll explore in the next section of this chapter.

ATTENTION DOMINANCE IN DEMOCRACIES

Of course, digital influence campaigns are not exclusive to authoritarian regimes. A recent report by the Oxford Internet Institute's Computational

Propaganda Research Project found evidence of formally organized digital influence campaigns in 48 countries in 2018—that is, in each of 48 countries there was at least one political party or government agency using social media to manipulate public opinion domestically.100 As described in chapter 2, politicians in many democratic countries have mustered armies of online trolls and automated bot accounts to mobilize (or give the false impression of) support for a political agenda and to attack political opponents and dissidents. But in addition to those kinds of tactics, we are also seeing the emergence of a new strategy, in which an influencer is able to capitalize on “attention scarcity” in the age of information overload.101 An unregulated news and social media environment, combined with the influence silos described in chapter 5, can now be utilized to manifest the kind of power in democracies that authoritarian regimes derive through their information dominance efforts. Both democracies and authoritarian regimes can use the “censorship by noise” strategy to muddle reality and get their citizens to question everything. The strategy for what I call “attention dominance” requires at least three elements: (1) an influencer who wants to dominate the attention of a target audience; (2) a target audience that wants (or is willing to accept) the kind of information offered by the influencer; and (3) a medium through which a loyal and consistent audience can be sustained. Earlier chapters of this book examined various strategies for influencing a target and a range of psychological dimensions that facilitate influence. The reasons for wanting to dominate an audience’s attention are fairly well established and understood, including power and profit. The second element—an audience seeking specific kinds of information—is also fairly well understood. Several chapters of this book have already described the ways in which people want the kinds of information that will help reduce uncertainty and fear, bolster their self-esteem and confidence in a particular position or group identity, and help them navigate an increasingly complex world. So let’s focus on this third element—the means by which the influencer can dominate the attention of their target audience. An obvious first step for the influencer is to identify a sizable target audience that has cocooned themselves within an echo chamber or influence silo. And if one doesn’t exist, build it. As we saw in the previous chapter, influence silos have been formed around specific audiences by the likes of Rush Limbaugh and the Fox television network—an audience whose members want (and now cling to) a source of validation for their own version of the truth. From the early years of his radio show, Limbaugh capitalized on a specific confirmation bias, reinforced it, celebrated it, and made his followers feel justified in having a certain set of biases and prejudices (particularly anti-liberal and anti-Democrat). And in the process of doing so, he helped create a segment of Americans with a proudly shared political identity that other people could then easily manipulate to serve their own purposes—including

Roger Ailes and Fox News, Alex Jones and InfoWars, Steve Bannon and Breitbart, and of course, the political campaign of Donald Trump. One could easily argue that the rise of Trump and his "Make America Great Again" political following would not have occurred without the overtly fawning support given to him by Rush Limbaugh and by Fox producers, news programs, and audiences. The most effective way to influence an audience is to "speak the language of the tribe" and ensure that they view you as one of them, and Trump spent years establishing his "street credibility" by repeating and amplifying the so-called "birther conspiracy" (a long-disproven claim that President Obama was not born on U.S. territory). He then proceeded to do the same with other anti-Democrat missives, establishing in-group identity credentials by out-group "othering," and over time, Trump began to capture and hold a center of gravity within the conservative influence silo. Because millions of Americans came to see these conservative media sources as part of their social identity, all other sources of information—particularly if they would be critical of Trump—were blocked from their consciousness. In this way, social identity markers bring with them blinders, like the horses running the Kentucky Derby. They're all running in the same direction and are very likely aware that others are heading in the same general direction, but because of those blinders, they are focused only on what they are allowed to see. Or to use a similar analogy, buffalo have eyes on the sides of their head, allowing them to only see what's on either side of them, but not what is in front. In centuries past, this allowed Native Americans to drive scores of buffalo over a cliff. The leaders and influencers within a silo can use fear appeals of the "other" and demands for in-group identity solidarity and conformity, which will be amplified significantly by the media outlets favored by the silo's members. As Pratkanis and Aronson observe, "A fear appeal is most effective when (1) it scares the hell out of people, (2) it offers a specific recommendation for overcoming the fear-arousing threat, (3) the recommended action is perceived as effective for reducing the threat, and (4) the message recipient believes that he or she can perform the recommended action."102 Influence silos can generate fear as a form of mass-based emotional arousal. In extreme cases, effectively using fear appeals can allow a prominent influencer within the silo to make repeated mistakes, spread lies and conspiracies, and commit crimes without any real consequences, so long as others within the influence silo still view the influencer as maintaining fealty with the in-group identity, and never giving an ounce of leeway to the out-group "other." The influencer can draw and sustain power over their target audience by exploiting irrational fears—for example, the specter of a monstrous terrorist or Communist under every bed, waiting to destroy you. Racial prejudices and biases also play a role in fomenting the kinds of fear and animosity that can divide a society into fragmented camps of those who believe the fear is justified and those who do not.

Fear is an extremely powerful emotion indeed, and irrational fears are rarely confronted effectively with rational, fact-based reasoning. Knowing this, a demagogue can build a following within an open democracy simply by amplifying those irrational fears in ways that resonate with large audiences.103 If the influencer has chosen the right kind of audience, the result will be a form of attention dominance, a type of power that approximates the information dominance enjoyed by leaders of authoritarian regimes. In the case of Trump's rise to power, the combination of Fox News, Breitbart, InfoWars, the Daily Caller, and other such media outlets regularly conveyed to their massive audiences an uncritical, pro-Trump narrative and harshly criticized any and all opposition. As detailed in previous chapters, the founders of Fox News wanted to establish attention dominance among American conservative households. Because they largely succeeded, Trump has a unique capability of utilizing that attention dominance to promote his own agenda. Having convinced Fox News viewers that he is the champion of what they believe, Trump has left Fox with no choice but to remain loyal to him or lose viewers, ratings, and profits. In a perverse sort of way, this symbiotic relationship now holds certain media outlets hostage. In order to maintain the level of attention they receive, they must ensure their viewers and readers continue liking what they see, even if it means abandoning any façade of credibility. For example, while other media services have revealed the massive scope of outright lies and half-truths told by Trump, the pro-Trump media outlets ignore such claims, denigrate and demean any fact-checking media services, and redirect people's attention toward other issues, particularly fearmongering about the supposed existential threat of illegal immigration. As McIntyre notes, "Without Breitbart, Infowars and all of the other alt-right media outlets, Trump likely would not have been able to get his word out to the people who were most disposed to believe his message."104 But beyond the media outlets that saturated their audiences with a pro-Trump narrative, the basic functions of social media platforms were also essential to his achieving attention dominance. Remember the earlier discussion in this book about how social media platforms want to dominate your attention. Your attention is their main source of revenue. As Tim Wu explains, "Facebook primarily profits from the resale of its users' time and attention: hence its efforts to maximize 'time on site.' This leads the company to provide content that maximizes 'engagement,' which is information tailored to the interests of each user. While this sounds relatively innocuous (giving users what they want), it has the secondary effect of exercising strong control over what the listener is exposed to, and blocking content that is unlikely to engage."105 Similar to the ways in which radio programs and television networks want to dominate your attention, social media platforms want users to engage with their platform as often and consistently as possible, because
Similar to the ways in which radio programs and television networks want to dominate your attention, social media platforms want users to engage with their platform as often and consistently as possible, because the more attention they can attract, the more revenue they can generate from advertising and from selling data about the account activities of their users. But we are only capable of focusing our attention on a finite number of things each day. None of us have the ability to spend 24 hours a day, seven days a week, focusing all our attention on a media network or on processing the overwhelming amounts of information flooding email inboxes and social media feeds. As a result, the basic economic forces of supply and demand respond to this “attention scarcity” by fueling steep market competition for a resource that has monetary value. Meanwhile, your attention is also what the influencer wants. So, there is a natural synergy here in that the influencer and the means of influence are both pursuing the same objective. In truth, digital influence efforts and the attention economy form a very happy marriage. It’s a symbiotic relationship, where the social media platform provides access to audiences who can be engaged, and the influencer provokes engagement. With the two working hand in hand, the ability to dominate your attention has never been greater. However, only certain kinds of messengers and messages will gain traction among certain targets, so the influencer should be strategic in choosing audiences to manipulate, which will result in the highest return on their investment. Outside the realm of the Internet, an influencer can capitalize on the profit-seeking tactics of politically biased media outlets by provoking constant media coverage and attention, which translates into mass audience attention in real life. But at the same time, the influencer can also use a social media platform to identify a target audience, cultivate a presence among them, and achieve attention dominance among that target audience. As described in chapters 2 and 3, there are already scores of opportunities to gather and analyze data in order to identify the optimal targets for your influence campaign. And if you don’t have the time or interest in doing so, hire some digital influence mercenaries to do that work for you. Data can be gathered and analyzed about the individual profiles of a target audience’s members, including their likes, dislikes, hopes, and fears. Monitoring the target audience’s engagement on social media can reveal patterns of interactions that allow you to determine when they are most likely to be online and, thus, the optimal time for launching influence operations. Gathering research on your target audience also sheds light on the kinds of things that have gone viral among the community in the past. These are all highly useful bits of information to incorporate into your attention dominance strategy. Once you have a clear understanding of what your target audiences believe, want, fear, and hate, you can develop an influence campaign that captures their attention and creates opportunities for attention dominance. Further, the digital influence silos described in the previous chapter become the most effective means of achieving attention dominance
over a target audience. As The Wall Street Journal demonstrated in a 2016 experiment, social media users predominately see information exclusively gathered from like-minded friends and media sources, essentially creating an entirely different “reality” about an issue depending on your individual preferences.106 When the information sources you turn to online for social confirmation are self-sequestered (along with you) in an ideologically defined influence silo, the outcome is fairly predictable. When those around you reject a particular narrative (whether it is true or false), so will you. And when those around you accept and embrace the lies of a dominant voice within the influence silo, the chances are high that you will accept them as well. This is how someone like Donald Trump has been able to build such a massive following on Twitter. People self-selected to be part of the pro-Trump influence silo by “following” his Twitter account. As described in the previous chapters, being provocative with an increasingly large social media following captures attention and generates a kind of online “social proof” that attracts more followers. Social proof creates contextual relevance for an influencer, and thus having a huge number of followers on your social media account virtually guarantees you some form of attention dominance. Trump’s promotion of conspiracies and other controversial statements attracted individuals who either agreed with his views or liked the stimulation of the emotional provocation on display here. As numerous reports and books have explained, Trump’s electoral victory in 2016 was fueled by amplifying a variety of polarizing identity issues, including immigration, fear of “others,” and resentment among whites who believe that minority groups are benefitting at the expense of “real” Americans.107 Meanwhile, Twitter loved it—the combination of a controversial celebrity and an audience hungry for certainty and emotional provocation produces higher levels of online engagement and advertising revenue. Trump’s online antics keep people coming back to the social media platform regularly to see what they may have missed. Those who change their mind and decide they don’t like what they see can simply exit by unfollowing (or even blocking) Trump. Those who remain have now formed a stronger kind of group identity within a pro-Trump influence silo. From this increasingly powerful position, Trump can tell those who remain loyal followers whatever he wants them to believe (whether or not he even believes it himself). And those who dare criticize what Trump says, or question his sense of reality, can be easily blocked from the conversation, ensuring that those within the self-selected bubble of followers are protected from any competing, alternate narratives. As the influence silo around his followers strengthens, the narrative of the influencer becomes “Keep the spotlight on me, I’m the only one who matters,” which basically forces a kind of contextual relevance (see chapter 4) that facilitates attention dominance. Over time, his followers come to rely on his manipulative
tweets for guidance and for cues on how to interpret confusing information and how to navigate this complex world of ours. Further, intense drama and emotional provocation stimulate the release of dopamine, a neurotransmitter closely tied to habit and craving, so becoming a significant provider of that stimulation can help the influencer capture and maintain the attention of the target audience. This is particularly the case when the messages being proliferated reinforce the beliefs, hopes, hatreds, and prejudices of those within the silo. Comforting lies will always gain more traction than uncomfortable truths. “Only to a limited extent does man want truth,” Friedrich Nietzsche argued. “He desires the pleasant, life-preserving consequences of truth; to pure knowledge without consequences he is indifferent, to potentially harmful and destructive truths he is even hostile.”108 As a result of this human tendency, news media and the entertainment industry have increasingly chosen to focus on giving people what they want (in order to generate the most revenues). In many cases, as Mazarr et al. observe, they have determined that what people want is “sensational, extreme, and targeted against some sort of out-group that allows the audience to deepen its sense of social membership.”109 The authoritarian-minded politician within a democracy can also use and abuse the mainstream media at will and get away with it because of the power amassed in the digital ecosystem and enabled by influence silos that support or defer to the authoritarian. It is highly likely that if Trump had had only a few thousand Twitter followers in 2016, he would not have succeeded in his political campaign. But by the time he launched his campaign for the presidency he had already (1) developed a specific kind of persona on a “reality” television show and (2) been exceedingly active on Twitter, amassing millions of followers. As a result, his ability to influence people’s views and behaviors was already considerably greater than that of any other Republican candidate. Media coverage of his provocative antics was off the charts, and his constant flood of controversial Twitter messages—combined with a much larger social media following than any other candidate—helped him establish a powerful form of attention dominance. What lessons can be learned from the model that Trump has demonstrated? To begin with, an influencer must be willing to utilize potentially uncomfortable strategies for gaining attention. Strategies for attracting attention are well known throughout any society and are typically learned at a young age (as any parent knows). Both children and adults know that throwing temper tantrums and shouting will attract attention. Violating norms works as well—people who follow all the rules are rarely the ones who get noticed. Be provocative; the more outlandish you can get away with, the better your ability to either provoke outrage or engender loyalty from those who share your views. You should also not be afraid to make enemies, but instead seek to cultivate them. Further, the more prominent
your enemies, and the more vocally they oppose you, the more attention you will get. Capitalizing on prejudices about “others” (like immigrants and other powerless subgroups) can be a particularly effective tool. Provoking emotional responses, either favorable or unfavorable, will always garner you attention. And if you happen to be uniquely gifted with lots of money, spend it on getting your name out there in the public domain on as many products and buildings as possible. Name recognition is often essential for attention dominance. For the same reason, you will want to spend your money on tons of social media advertising, particularly on Facebook (which to date still allows political ads to be completely false), as well as on television networks whose audiences are known to have a specific collection of sociopolitical beliefs. And as described in chapter 3, there are many other digital influence tactics (like search engine optimization) that can also facilitate greater amounts of attention. Flooding the information ecosystem with false and contradictory narratives can also push people deeper into the shelter of their influence silos, which can further amplify the effects of your attention dominance strategy. The fact that unambiguously disreputable media organizations have been credentialed to the White House press corps under the Trump administration is quite telling. Fear mongers, conspiracy theorists, racist right-wing extremists, and many others were treated by that administration as equals among professional journalists from national, Pulitzer prize–winning newspapers and television programs. These and other provocations result in a form of outrage fatigue that is similar to what the undercover reporter in Russia discovered in 2015. But it’s all part of Trump’s strategy for establishing the kind of attention dominance that approximates the information dominance enjoyed by authoritarian regimes in Iran, Russia, China, Turkey, and so forth. Like the “firehose of falsehoods” in Russia, the influencer’s objective here is to overwhelm the information processing capabilities of any reasonable human being. Disinformation fatigue naturally leads people to throw their hands up in frustration and conclude, “I don’t know what or whom to trust anymore!” This is an underlying objective of gaslighting (see chapters 3 and 4), which is something we have seen a lot by pro-Trump trolls and others in recent years. Once people have begun to question everything, an influencer can position himself as the one (the only one) with all the answers. When we lose our ability to discern the truth (or worse, lose faith in our own and everyone else’s ability to discern the truth), we lose confidence in what we think we know, as well as in what others claim to know. As a result, we become unable to make decisions that healthy democratic societies need to make in order to govern ourselves. Information overload can trigger an individual to simply give up trying to actively interpret or discern fact from fiction. It is far less taxing on the human mind and spirit to simply shut down your information processing efforts and instead focus your
attention solely on information that reinforces what you already want to believe. This, in turn, provides the influencer with opportunities to spread even more lies and disinformation. The same kind of “I give up” response has been studied by psychologists who wanted to figure out why people prefer extremist views over a more balanced or nuanced middle-ground perspective. In one study, people were exposed to evidence that both supported and contradicted two opposing sides of an issue, but instead of acknowledging the validity of both sides, they increased their support for a more polarized or extreme position than they had held before hearing the evidence. According to the authors of this study, “People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept ‘confirming’ evidence at face value while subjecting ‘disconfirming’ evidence to critical evaluation, and, as a result, draw undue support for their initial positions from mixed or random empirical findings.”110 From a psychological perspective, this makes sense: people generally want to avoid feelings of uncertainty, insecurity, and inferiority, and when the complexity or contradictions surrounding a particular issue create feelings of confusion and frustration, the easiest response is to pull back into the shelter of our previously established convictions and, if necessary, defend them more ferociously. Thus, the targets of the flurry of disinformation will respond in various ways that can benefit the influencer. Some will have greater uncertainty about what they think is real and greater fear about the unknown, creating opportunities for the influencer to convince (and often deceive) them with certain kinds of information. Meanwhile, others will have stronger certainty about what they believe, regardless of the truth, based on the information provided by the influencer. Members of the target audience will also look to others within their influence silos, where the only information permitted is that which is already filtered for them and where they can find ample sources of comforting reassurance in their prejudices, relying on group identity and conformity to provide answers and reinforce their certainty. Thus, the influencer should continually seek to identify and target specific influence silos in which the members are likely to be supportive of the views the influencer wants to spread. For example, since loyal fans of Fox News, Breitbart, and the Daily Caller are predominantly anti-immigrant, the influencer will naturally target those audiences with a campaign of anti-immigration messages. Influence silos make the job of effective targeting much easier. The members of the silo have already self-selected (or been drawn in by the gravitational forces described in the previous chapter) and will thus be more accepting of messages that conform to the beliefs, biases, prejudices, hopes, and fears embraced within the silo. Through these influence silos, the influencer can capitalize on confirmation bias to nurture and strengthen prejudices in favor of their preferred
narrative or political agenda, provoke outrage about political or ideological opponents, challenge inconvenient facts, raise suspicions and suggest conspiracy theories, and much more. As described in chapter 5, the true power of the influence silo is created over time when the more skeptical individuals within the target community opt out, leaving behind only those like-minded “true believers” for whom the echo chamber becomes an increasingly powerful barrier that repels (or “protects” them from) any differing points of view. In a politically divided society, any topic you want to embrace and promote will have both supporters and opponents. Thus, your influence strategy will involve finding ways to enlarge the segment of society that is already inclined to agree with you, at the expense of all other segments of society. Drawing from the extensive data available about your target audiences, you can choose the most optimum influence silos within which to pursue your attention dominance strategy. Unfortunately, influence silos can make us more ignorant and arrogant at the same time, as we see reflected in many examples provided in this book, and can lead people to defend the indefensible. Many people have a knee-jerk tendency to justify whatever it is that we have done, and the same instinct can apply when considering what others who represent our in-group have said or done.111 In fact, most of us will go to great lengths to justify our actions, as well as the actions of others within our influence silo. Your goal as an influencer, then, should be to get the members of an influence silo to defend you—your statements and actions—as a matter of self-preservation. Attention dominance can help you achieve this goal. Further, the tactics and tools described in chapter 3 can be employed to generate and amplify this attention dominance across multiple social media platforms. We have already seen many examples of authoritarian state-based efforts using a mix of real and automated accounts working together to spread a particular message. These same methods can be used to catapult an influencer to positions of power within a democratic country. The more social media followers you have, the more social proof that what you say deserves attention. These followers become the key vector of transmission for spreading your messages to their like-minded friends, and through this, you can capture more attention. And even if the influence aggressor has only a small number of followers, they can still gain a lot of attention by provoking a pop star, celebrity, or politician with millions of followers to respond in some way to a particular message. If they like it and share it with their followers, it grabs attention and can even become viral. If they denounce it publicly in some way, the influencer can still gain a lot of attention—either way, the influence of your message spreads to a sizeable audience. Within an influence silo, a lie can be amplified by the power of repetition and sharing—and even if the members of the silo respond negatively
to that disinformation narrative, the response itself (and its repetitive amplification) can also serve to achieve certain digital influence goals. Psychologists have found that repeating things that are untrue—through a constant barrage of advertising and disinformation—can actually create the illusion of truth even in people who are knowledgeable.112 In fact, a 2019 study published in the journal Psychological Science found that repeatedly encountering fake news makes it seem less unethical to spread— regardless of whether one believes it and even when it was later proven to be false.113 So, the digital influencer will seek to spark a cascade of message proliferation through a social network, particularly if it functions as an influence silo within which members are aligned ideologically, politically, demographically, and so forth. It becomes, as Manheim puts it, “an inherent facilitator of potential campaign diffusion through a given population.”114 And of course, the influencer can amplify that spread even further through an army of automated accounts that fabricate the illusion of social proof, which then provokes real users to pay attention and engage with the information. At the end of the day, what you want is to replicate in some fashion what authoritarian regimes are able to do with regard to information dominance. You want to be your target’s exclusive information source (or group of sources), while all others are banished as illegitimate. Your influence effort also benefits enormously from the unique ability on social media platforms to block unwanted sources of information. Anyone who wants to challenge your version of events is free to do so, but you can block their messages from being seen by your followers. The strategy allows you to build up a cult-like following, where your messages are the only ones considered acceptable by your target audience. This kind of attention dominance leads to power. Finally, the target audience for this strategy also bears some of the blame for the influencer’s ability to dominate their attention. Too often we have seen individuals avoid the work and energy needed for critically evaluating a political candidate’s platform or policies, and they instead embrace the much easier peripheral route of processing information. When an influencer is able to simplify complicated challenges in ways that conform to your own values and beliefs, as well as confirm your prejudices and biases, you tend to like that and want more of it. As a result, the audience allows the influencer to tell them what to think, how to feel about those challenges, and how to remedy them. Our search for certainty allows our attention to be grabbed and held by those clever enough to know how; they can then deceive even well-educated people who should know better. Essentially, influencers and social media platforms want the same thing—to dominate an audience’s attention—and to varying degrees, the members of that audience are guilty of aiding and abetting them, an accomplice to the crime. Not only have people become lazy and negligent
with regard to critically assessing information (and sources), falling prey to the peripheral route of persuasion, but they are also actively embracing narratives that reinforce their confirmation biases and allow the influencer to willingly disinform them, as long as what they see and hear reaffirms what they want to believe. As a result, the supply of disinformation we see online today is meeting the demand for disinformation. The supply of provocation is meeting the demand for stimulation. And the more an influencer can provoke and disinform their target audience with impunity, the greater the attention dominance they can achieve.

CONCLUSION

Modern forms of information dominance and attention dominance are siblings that were born from the same mix of technologies, human and social psychology, and lust for power. The tactics and strategies of digital influence are most effective when you can control both messengers and messages. Online disinformation campaigns are now routine business for several authoritarian regimes—as Singer and Brooking note, authoritarian regimes are constantly seeking the ability to control the information and control the people115—and we are seeing rapid growth of similar campaigns in democracies as well. A primary difference is that instead of the authoritarian regime’s efforts to impose information scarcity, the strategy in a democracy is to capitalize on attention scarcity within the environment of information overload. But across the spectrum of these efforts, there is a symbiotic relationship between the influencers and their targets. Information operations are not viable unless there is a significant market demand for disinformation among the target audience. Examples of both information dominance and attention dominance are made possible because the target wants what the influencer is providing more than the alternative. In both authoritarian states and democracies, conspiracies are being invoked by government leaders to explain events to an increasingly overwhelmed citizenry, encouraging people to simply throw up their hands in dismay and give up even trying to separate fact from fiction. As Garry Kasparov, former world chess champion and Russian pro-democracy leader, noted in December 2016: “The point of modern propaganda isn’t only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.”116 Peter Pomerantsev agrees, noting how the goal of these efforts is to “surround audiences with so much cynicism about anyone’s motives, persuade them that behind every seemingly benign motivation is a nefarious, if impossible-to-prove plot, so that they lose faith in the possibility of an alternative. . . . the net effect of all these endless pileups of conspiracies is that you, the little guy, can never change anything. For if you are living in a world where shadowy forces control everything, then what chance do you have to turn it around?”117


Arguably, today’s forms of attention dominance cannot exist without social media. As Philip Howard explains, “Social media companies have provided the platform and algorithms that allow social control by ruling elites, political parties, and lobbyists.”118 While a great deal of attention has been focused in recent years on foreign influence efforts, the development and use of digital influence tactics and tools (described in chapter 3) were initially meant to support domestic influence efforts in order to ensure an authoritarian regime remains in power. In truth, digital influence is still more focused on domestic targets than on foreign ones, and we are likely to see this expand in the coming years. Today, an army of automated “bot” accounts can be used for “manufacturing consensus” about opinions for or against a policy or political agenda in ways that are more all-encompassing and pervasive than any kind of media we have seen before.119 Social media allows the influencer to create an illusion of “normality” that conformity-seeking individuals naturally accept, especially when reinforced by their echo chambers and influence silos. Politicians and others can hire digital influence mercenaries to do this work, regardless of whether the targets are foreign or domestic. The basic goals and contents of digital influence efforts are often similar, and the potential impact on the target audiences is largely the same. Authoritarian regimes have invested heavily in these efforts and will no doubt continue to do so in the future. China and Russia utilize the most robust information dominance infrastructure for controlling the information accessible to their citizens. As Martin and Shapiro note, “Both have large state-run media organizations that spread propaganda locally and conduct influence operations on their own citizens.”120 We are also seeing a huge growth of digital influence efforts in democratic countries. The primary difference is whether the influencers behind these efforts are pursuing their goals in an environment that allows information dominance or if they must instead seek to orchestrate attention dominance. Both information dominance and attention dominance equip the influencer with the power to lie with impunity. And that impunity serves as a power-reinforcing mechanism, allowing the liar to build a mountain of lies upon lies, picking up momentum and scale to where even 30,000+ lies by a politician won’t deter his base of supporters. The attention dominance approach to digital influence certainly helps explain the amount of lying and disinformation that has been produced by the Trump administration. Given the amount of division and polarization already prevalent in the United States, the information environment was primed for someone like Trump to adopt a “censorship by noise” strategy that emulates what Putin has done in Russia, overwhelming citizens and the media with a firehose of falsehoods. He recognized how social media disinformation can become a major source of power. As a 2019 report from Disinfo Portal notes: “The fact is that today’s digital ecosystem presents possibilities and
incentives to lie at scale with lightning speed. The past few years have shown the digital ecosystem to be an incredibly effective environment through which a variety of actors can embed disinformation in the public digital sphere.”121 Unfortunately, with the right combination of provocation and influence silos, a clever politician in an open democracy can essentially build up a sizeable following online and disorient and fragment the opposition—all facilitated by the attention-seeking algorithms of social media platforms.

CHAPTER 7

Concluding Thoughts and Concerns for the Future

Digital influence warfare involves the use of technology to exploit our doubts and uncertainty about what we know. This uncertainty has been heightened through the rise of postmodernist arguments used for political, social, anti-science, and other purposes. In turn, postmodernism has led to “post-truth,” a framework of discourse that questions the very existence of truth (that which is supported by agreed-upon facts and evidence) and suggests instead that we should be able to embrace “alternative facts.” This discourse has combined with the rise of digital influence silos in which individuals eagerly embrace and defend different interpretations of truth, many of which have no factual evidence to support them. All of this disinformation and self-deception has produced fractures in society that undermine faith in political and educational institutions, scientific research, and so forth, making it impossible for Americans to maintain a healthy, civil society based on a shared perception of an objective reality. These fractures are being exploited by malevolent actors, foreign and domestic, to increase our uncertainty and make us more suspicious and disagreeable toward each other and overall weaker as a nation. “The strategy is to take a crack in our society and turn it into a chasm,” said Senator Angus King of Maine during a Senate Intelligence Committee hearing on Russian interference in the election.1 Meanwhile, as this book goes to press a global pandemic is also offering new opportunities for digital influence warfare against American citizens. In late July 2020, U.S. authorities announced that “Russian intelligence services are using a trio of English-language websites to spread disinformation about the coronavirus pandemic, seeking to exploit a crisis that America is struggling to contain ahead of the November 2020 presidential election.”2 Further, it was revealed that the main thrust of this effort involved
digital influence mercenaries,3 including an outfit called “InfoRus” which described itself on its English-language Facebook page as an “Information agency: world through the eyes of Russia.”4 There are also concerns that Russian influence mercenaries are becoming increasingly adept at masking their efforts. One investigation found that digital influence mercenaries linked to Russia’s intelligence agencies were infiltrating the networks of an elite Iranian hacking unit and then attacking governments and private companies in the Middle East and Britain—hoping Tehran would be blamed for the havoc.5 In another case, Facebook announced in October 2019 that it had suspended three networks of Russian accounts that attempted to interfere in domestic politics of eight African countries—all tied to a Russian businessman (Yevgeniy Prigozhin) accused of meddling in past U.S. elections.6 Some of these accounts had been promoting Russian policies, while others criticized French and American policies in Africa. A Facebook page set up by the Russians in Sudan that masqueraded as a news network, called Sudan Daily, regularly reposted articles from Russia’s state-owned Sputnik news organization. Both examples illustrate how the increasing use of local proxies will also pose a major challenge for identifying and confronting foreign digital influence and disinformation efforts in the future. Meanwhile, the COVID-19 pandemic also demonstrated the challenges America faces in the form of domestic-origin influence efforts. On the one hand, the massively increased levels of uncertainty and fears about a previously unknown virus compelled millions to seek information and answers via the Internet. Being compelled by policy or very real health concerns to remain self-isolated for months on end, the television and especially the Internet became the exclusive source of information for many. But this proved very advantageous for those seeking to spread disinformation in the form of fake cures (e.g., hydroxychloroquine, which governments across Europe—and eventually the FDA in the United States—denounced after finding it did far more harm than good) and conspiracy theories about the origin of the virus (“Bill Gates wants to use vaccines to cull humanity”) or how it spreads (“George Soros is secretly funding the spread of this virus”). Others used the pandemic as an opportunity to try and score political points (e.g., attempting to blame China for the global pandemic, even calling it a “Wuhan Flu” at one point) and stoke political polarization about health experts’ guidance on wearing masks during a pandemic (“don’t be a sheep”). One public Facebook group called “REFUSE CORONA V@X AND SCREW BILL GATES” (referring to the billionaire whose foundation is helping to fund the development of vaccines) was started in April 2020, by a city contractor in Waukesha, Wisconsin. The group grew to 14,000 members in under four months, one of more than a dozen groups created throughout that year that were dedicated to opposing the COVID-19 vaccine and the idea that it might be mandated by governments.7


Throughout this tragic event, as thousands of Americans were killed by the virus each day, social media platforms began to respond by establishing new policies specifically about COVID-19 misinformation and enforcing those policies by flagging and removing content that violated those policies. In early August 2020, Facebook announced it had removed more than seven million pieces of content with false claims about the coronavirus since January of that year.8 Such claims as “social distancing does not work” were banned—as was Trump’s video post on August 5 claiming that children are “almost immune” to COVID-19. YouTube also deleted the video for violating its COVID-19 misinformation policies, and a tweet containing the video that was posted by the Trump campaign’s @TeamTrump account and shared by the president was also later hidden by Twitter for breaking its COVID-19 misinformation rules.9 By politicizing a tragedy like this and spreading false information, individuals effectively did what Russia, China, and others have been doing: chip away at America’s potential to be a unified, strong, and resilient nation. Worse, as Singer and Brooking point out, “a significant part of the American political culture is willfully denying the new threats to its cohesion. In some cases, it’s colluding with them.” The authors describe how Americans remain “vulnerable to the foreign manipulation of U.S. voters’ political dialogue and beliefs. . . . Until this is reframed as a nonpartisan issue—akin to something as basic as health education—the United States will remain at grave risk.”10 This is what digital influence warfare is all about. And unfortunately, all signs point to a future in which a relatively small number of influencers, facilitated by an even smaller number of private companies, will have extraordinary leverage and power to shape social, political, and economic agendas. And according to researchers, social media users in the United States are the most prominent victims, consumers, and facilitators of malicious digital influence efforts. For example, a research team led by Philip Howard at Oxford University found that during the 2016 U.S. election, “there was a one-to-one ratio of junk news to professional news. In other words, for every link to a story produced by a professional news organization, there was another link to content that was extremist, sensationalist, or conspiratorial or to other forms of junk news. This is the highest level of junk news circulation in any of the countries we have studied.”11 Further, as described in chapter 5 of this book, politically conservative members of American society have shown particular attraction to disinformation that confirms what they want to believe about the world, a view that has already been reinforced for decades by radio talk shows and a certain cable news network. They are not being tricked or misled—it is a matter of individual preference. When I first began to research and write this book, I was under the misperception that only dumb people fall prey to fake news and
disinformation. Boy, was I wrong. Smart people are just as susceptible to the tactics and tools of digital influence, for a variety of different reasons. It’s not just ignorance or uncertainty that makes us vulnerable; it’s our beliefs, hopes, desires, biases, prejudices, likes and dislikes, friends and family members, and so much more. Psychological research indicates that we generally dislike being manipulated or deceived.12 But as explained in previous chapters, many people are convinced that they are acting of their own free will, yet behaving in ways that achieve the goals of the manipulator or the deceiver without even realizing it. This is because the form and content of the digital influence effort is something that the target actually wants; the manipulator is confirming what they want to believe, reinforcing their sense of purpose and group identity, or providing some cognitive comfort that relieves them of the uncertainty and fear they long to be rid of. As Nazi propagandist Joseph Goebbels explained, “This is the secret of propaganda: Those who are to be persuaded by it should be completely immersed in the ideas of the propaganda, without ever noticing they are being immersed in it.”13 And the modern onslaught of digital influence tactics is resulting in ever-increasing harm to societies and individuals. Today, disinformation is being spread by both well-established media outlets and fake ones that are designed to look authentic, often driven by a specific political agenda that aligns with the biases of the target, or to merely attract enough visitors so that the website can generate revenue through online advertising. At the same time, a host of social media, search, and e-commerce companies accountable to nobody have amassed immense power worldwide, using algorithms to drive the value of their shares by increasing traffic, consumption, and addiction to their technology.14 Our interactions on social media are making our societies more fractured and divided, while digital influence mercenaries profit at our expense.15 As Claire Wardle explains, “Sock puppet accounts post outrage memes to Instagram and click farms manipulate the trending sections of social media platforms and their recommendation systems. Elsewhere, foreign agents pose as Americans to coordinate real-life protests between different communities while the mass collection of personal data is used to microtarget voters with bespoke messages and advertisements. Over and above this, conspiracy communities on 4chan and Reddit are busy trying to fool reporters into covering rumors or hoaxes.”16 Older strategies, tactics, and tools of influence warfare have evolved to encompass a new and very powerful digital dimension. By using massive amounts of Internet user data, including profiles and patterns of online behavior, microtargeting strategies have become a very effective means of influencing people from many backgrounds. Now, it’s obvious that I am not fond of the digital influence efforts described in these chapters. Personally, I find it offensive when someone tries to take advantage of someone else, and that is what I see among the great majority of examples
encountered during my research on this topic. In my opinion, many of these digital influence efforts seem like attempts to spread a contagious virus among a group of victims. Each of us can become a vector for retransmission of the insidious disease of disinformation, keeping it alive to infect others. As more of us participate in the spread of it, the origins of viruses and disinformation become more difficult to trace, and both have harmful symptoms and effects on individuals and society. In this analogy, the goal of a digital influence attack is to locate and then infect a suitable host for the virus-like disinformation, manipulate the host into sharing it with others in order to facilitate its spread, and then watch as the organism slowly destroys its host. A fake video, for example, can be believed by an initial dozen people who see it first; then at least a few of them share it with some of their family members and friends. In virtually no time at all, hundreds if not thousands of people watch the video, believe it to be authentic, and share it with others. Disinformation and contagious viruses unleashed in a population spread pretty much the same way with one exception—you can choose whether to believe and share a fake video, but you have no choice in whether or not a contagious virus infects you once you have been exposed to it. Further, unlike the prominent remedies for slowing or preventing the spread of a virus—such as having everyone remain in their homes and stay away from public spaces where the virus can be transmitted from host to host—there is really no way to keep people off social media, with the exception of shutting down all access to the Internet for a period of time (which some countries have done, but usually in response to political protests, not disinformation). And yet some publications have described things like disinformation in a surprisingly benign way. For example, a 2018 report published by the Hewlett Foundation describes disinformation as “a broad category describing the types of information that one could encounter online that could possibly lead to misperceptions about the actual state of the world”17 [emphasis added here]. However, I disagree with this characterization. The producers and disseminators of disinformation and falsehoods (from terrorist groups to Russian trolls, right-wing extremists, and social activists) don’t just put stuff out online and hope that gullible passers-by may “encounter” it. That would leave too much to chance. Instead, they purposefully seek out specific individuals and groups to target, looking for ways that (based on available information about those targets) they can maximize the impact of their influence efforts in order to manipulate misperceptions about the actual state of the world. They want to provoke and deceive you, the target, in ways that will benefit themselves at your expense. There are coherent strategies driving this digital influence phenomenon. Some strategies are meant to promote fear and uncertainty about threats to the security of the nation and its people, in order to ensure compliance with the government’s proposed remedy for the perceived
threats to security. By creating chaos and promoting a constant sense of conflict and danger, a regime can use the heightened levels of uncertainty and fear among its citizens to justify new draconian laws that restrict freedom of movement, expression, and assembly (allegedly for their own protection). Other influence warfare strategies focus on sowing divisions within a society (using labels such as “real Americans and patriots” vs. “traitors, a corrupt elite, enemies of the people, globalists”) and then capturing the loyalty of specific segments through a bias-confirming narrative within influence silos, while simultaneously blaming those “others” outside the silo for all the problems faced by those within. Chapter 3 explored a variety of tactics that are used to achieve the goals of digital influence warfare, including emotional provocation—especially inciting fear, hate, and outrage among members of the target audience. Disempowered segments of society—like ethnic and racial minorities, foreigners, and immigrants—can be portrayed as an existential threat to those within the silo, and the more “othering” you can get away with, the more unified the silo’s in-group identity can become. Each of us is influenced by our emotions (some more strongly than others), and emotions can be manipulated in various ways. One research study has described how a “weaponized narrative” can be used “to undermine an opponent’s civilization, identity, and will. By generating confusion, complexity, and political and social schisms, it confounds response on the part of the defender. . . . A firehose of narrative attacks gives the targeted populace little time to process and evaluate. It is cognitively disorienting and confusing—especially if the opponents barely realize what’s occurring. Opportunities abound for emotional manipulation undermining the opponent’s will to resist.”18 Another useful tactic involves provoking a political opponent or activists to publicly denounce and fight against seemingly extreme laws or policies, and then when the regime gives the appearance of moderation, the broader population will not notice that the laws or policies that are substituted (which seem more reasonable in comparison) are still erosions of civil rights and freedoms. The act of spreading disinformation serves multiple purposes. Using a variety of easy digital tools, one can deploy fake images and videos to allege superficial “scandals” and smear political opposition with unfounded accusations. Digital influence silos offer a uniquely powerful arena in which someone can distort the truth, deny facts, lie often and repeatedly, spread “post-truth” narratives and “alternative facts,” and replace knowledge and logic with emotions and fiction. These deepfakes can also be combined with modern forms of doxing—stealing and then posting online private information about a particular target and encouraging others to harass that target further. Mixing fact and fiction has already been proven to be quite effective as an element of digital influence campaigns.


Meanwhile, authoritarian leaders have traditionally threatened overt and covert means of punishment in their efforts to control the mainstream press and undermine or limit the independence of the media. Journalists who are critical of the authoritarian regime’s goals and policies will be banned from press briefings and called liars, purveyors of “fake news,” while media deemed unsupportive of the authoritarian agenda will be labeled “unpatriotic” or even “enemy of the state.” In extreme cases, physical abuse and violence will be directed toward those journalists (whether Nazi “Brownshirts” in the streets or MAGA hat-wearing thugs at a Trump campaign rally). All of this is now fueled by social media platforms and other online information sources that have established a form of attention dominance within digital influence silos. A powerful democratic society like America is not in danger of being defeated by bombs and bullets, but it is being weakened by those who exploit the inherent vulnerabilities within an open democratic society that values free speech, diversity, mutual understanding, and trust in each other. Trust is a critical asset that can be damaged and destroyed using the right kinds of weapons. Those seeking to destroy this trust will first identify the major fissures within the target population and then exacerbate them in ways that encourage distrust about the integrity and intentions of others in the society. The strategy also calls for undermining perceptions of transaction integrity; if the target can be convinced that they may no longer trust the integrity of their democratic elections, or believe what they see and hear from their political leaders (particularly online), they will begin to question the legitimacy of democracy itself. Some members of that society may even begin to favor more autocratic leaders, as long as the leader appears at least somewhat aligned with their personal and political values. The strategies and tactics of digital influence warfare are being used to make people doubt whether their votes matter or will be accurately counted. Many already are questioning whether the things they once believed about science or the world itself are true. Others are becoming increasingly worried about being targeted by trolls and hackers. Increasing uncertainty in one part of our lives has an impact on how we view other parts of our lives, and as described throughout this book, the emotional and psychological effects of uncertainty are transferrable. Each of us can be manipulated, especially when seeking a form of certainty within our social mediated digital influence silos. Our enemies are trying to sow the seeds of distrust, dismay, and animosity toward one another, because by influencing the body politic in this way the viral effects of digital disinformation will weaken its ability to fight back. Over time, if left untreated, the body grows increasingly sick and dies. And unfortunately, we are increasingly vulnerable targets for these kinds of digital influence attacks. The amount of information available
about most of us online is more than enough for digital influencers to form a basic profile that they can use to determine whether or not our expressed opinions, ideological and professional affiliations, sociodemographic status, and other attributes would make us a good target for them to try and manipulate. For all the reasons—technological, social, and psychological—described in the previous chapters, we can expect a grim future of new and increasingly effective forms of digital influence warfare.

THE FUTURE OF DIGITAL INFLUENCE WARFARE

Digital influence attacks will continue for the foreseeable future and will become more innovative and sophisticated. As a report by the Rand Corporation notes, “Increasingly, hostile social manipulation will be able to target the information foundations of digitized societies: the databases, algorithms, networked devices, and artificial intelligence programs that will dominate the day-to-day operation of the society.”19 The future will bring darker influence silos that no light of truth can penetrate, resulting in heightened uncertainty and distrust, deeper animosity, more extremism and violence, and widespread belief in things that simply are not true. The future evolution of digital influence tools—including augmented reality, virtual reality, and artificial intelligence (AI)—promises to bring further confusion and challenges to an already chaotic situation, offering a new frontier for disinformation and perception manipulation.20 For example, in the not-too-distant future, we will see a flood of fake audio, images, messages, and videos created through AI that will appear so real that it will be increasingly difficult to convince people they are fakes.21 Technology already exists that can be used to manipulate an audio recording to delete words from a speech and stitch the rest together seamlessly or add new words using software that replicates the voice of the speaker with uncanny accuracy.22 Imagine the harm that can be done when, in the future, digital influencers have the ability to clone any voice, use it to say anything the influencer wants, and then use that audio recording to persuade others.23 Creating deepfake images and video is also becoming easier, with results that are increasingly realistic and convincing. One particularly sophisticated AI-related approach involves a tool known as generative adversarial networks (GANs). These involve integrating a competitive function into software, with one network seeking to generate an item, such as an image or video, while the other network judges the item to determine whether it looks real. As the first network continues to adapt to fool the adversarial network, the software learns how to better create more realistic images or videos.24
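As a rough illustration of that competitive structure, here is a minimal sketch of a GAN training loop, written in Python with PyTorch. It assumes tiny fully connected networks and one-dimensional numbers standing in for “images,” and the network sizes, learning rates, and data distribution are all invented for the example; real deepfake systems are vastly larger and more elaborate.

```python
# Minimal generative adversarial network (GAN) sketch using PyTorch.
# The generator tries to produce samples that look like the "real" data
# (here, numbers drawn from a Gaussian centered at 3.0), while the
# discriminator judges whether each sample is real or generated.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data the generator must imitate
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into outputting 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The same adversarial loop, scaled up to convolutional networks and image or video data, is what drives the increasingly realistic synthetic media described here.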
Over time, according to Michael Mazarr and his colleagues at Rand, “as technology improves the quality of this production, it will likely become more difficult to discern real events from doctored or artificial ones, particularly if combined with the advancements in audio software.” If the target of such deepfake disinformation holds true to the old adage of “hearing and seeing is believing,” the long-term harmful effects of this technology are quite obvious. Technological advances will make it increasingly difficult to distinguish real people from computer-generated ones, and it will be even more difficult to convince people that they are being deceived by someone they believe is real. Further, according to the Rand report, AI will be used in the future to process large data sets and simulate neural networks for recognizing objects and activity in images and videos; for sensory perception in audio, speech, and natural language processing; and for real-time translation of mainstream languages. “When integrated with behavioral psychology, AI will begin to enable robots, artificial agents, and programs to learn by ‘experience-driven sequential decision-making’ . . . [contributing to] contextual understanding, emotional understanding, and mimicry capabilities.”25 The implications of this technology for manipulation of perceptions and digital influence warfare are clear and potentially frightening. Lifelike interactive platforms—often called chatbots or “conversational AI”26—will be used to engage people in rhetorical debates online, and they will provide convincing arguments that will persuade them to believe (or do) whatever the AI machine is programmed to convince them about.27 New and more powerful forms of digital influence silos, enhanced through virtual reality and augmented reality, will further remove the possibility of a shared objective reality among a society. Expert warnings about fake videos or images will be increasingly ignored and rejected within these silos, even when they come from experts whom a silo’s members would consider “one of our experts” (meaning they are part of the in-group and acceptable within the influence silo). In some cases, deepfakes will be generated by people with the express intent of influencing people’s perceptions about historical or current events. Global warming skeptics can create and circulate realistic video “proving” that the ice caps are not melting. Sophisticated videos of political scandals can be fabricated with increasing ease, and they will undoubtedly be harmful, even after they have been proven false. Convincing videos are already being developed and posted online, supporting all manner of conspiracy theories. By inventing realities based on personal preferences that are shared by others within your influence silo, an influencer has enormous power to shape the perceptions of people in new and increasingly powerful ways. Overall, the more we trust technology—and especially AI—to help us make decisions, the more vulnerable we will become to hackers who want to manipulate these decisions. Imagine a future in which you are contacted by (or you reach out to) a voice-activated AI assistant that provides information necessary for you to make important decisions. Now imagine an influence mercenary has hacked into that AI machine and programmed it to provide inaccurate information, much like the misinformation or
disinformation being propagated today through social media channels. In our future, AI will be used to predict and manipulate behavior in many ways, and it will be used for defensive and offensive forms of digital influence warfare. As Singer and Brooking note, “Within a decade, Facebook, Google, Twitter and every other Internet company of scale will use neural networks to police their platforms. Dirty pictures, state-sponsored botnets, terrorist propaganda, and sophisticated disinformation campaigns will be hunted by machine intelligences that dwarf any now in existence. But they will be battled by other machine intelligences that seek to obfuscate and evade, disorient and mislead. And caught in the middle will be us—all of us—part of a conflict that we definitely started but whose dynamics we will soon scarcely understand.”28 As a U.S. Congressman recently observed, we are in a race between those who use artificial intelligence to detect deepfakes and other kinds of disinformation and those using artificial intelligence to perfect it.29 Beyond artificial intelligence, we must also learn to be more cautious about adopting new Internet-connected devices in our homes—the so-called “Internet of Things.” Remember the discussion in chapters 3 and 4 about “gaslighting,” in which the goal is to raise a person’s uncertainty about their perception of reality? Imagine being able to take control of someone’s Internet-connected devices like a coffee maker, refrigerator, heat, lights, television, and other items in their home. You could turn these devices on or off randomly, or change their settings, making the target believe that they have no control over their surroundings, warping their sense of reality, and raising uncertainty about what they can and cannot trust. We have already seen attacks like this involving the hacking of Internet-connected baby monitoring cameras, among other devices.30 In the end, the more things we connect to the Internet, the more vulnerable to an external attacker we may become.31 Online sources of information that we depend upon for making purchases or other decisions are also a source of vulnerability. For example, hacking into real estate websites in order to change addresses, prices, photos, and other information may not just be intended to harm the real estate company—the more we begin to doubt the accuracy of what we see on our screens, the more we are likely to believe nothing or anything at all. Hacking into a web server in order to disrupt its functions or take it offline is less impactful than altering the information provided on that server. More impactful still would be the ability to diminish trust in e-commerce. Imagine how you would feel about online shopping if you purchased something on a website for $29, but then your bank or credit card account showed a charge of $129 for that item. Now imagine the uproar (and economic impact) if one day this scenario were repeated millions of times all around the world, raising uncertainty and decreasing our willingness to trust in the security of e-commerce services. If suddenly we

are all uncertain whether our credit card and banking information is safe when conducting online transactions (e.g., when buying holiday gifts on Amazon, paying our bills via online banking, or simply downloading an inexpensive app onto our cell phone), we will become less likely to engage in online commercial behavior. This, in turn, has far-reaching implications for the long-term health of our economy, which has already moved heavily online. And of course, we can fully expect that the kinds of digital influence warfare attacks against democratic elections will continue and will likely involve new innovative tactics. For example, there are concerns that in the future malicious hackers could use ransomware to snatch and hold hostage databases of local voter registrations or cause power disruptions at polling centers on Election Day. Further, as one expert noted, “with Americans so mistrustful of one another, and of the political process, the fear of hacking could be as dangerous as an actual cyberattack—especially if the election is close, as expected.”32 “You don’t actually have to breach an election system in order to create the public impression that you have,” said Laura Rosenberger, director of the Alliance for Securing Democracy, which tracks Russian disinformation efforts.33 There is much more that could be said about digital influence warfare and its increasingly terrible impact on our daily lives today and into the future. And unfortunately, tomorrow’s uses of digital influence warfare will be worse than what we see today. But in the remaining pages of this concluding chapter, let’s turn our attention to how governments and the private sector are responding to these challenges and how their decisions will have a critical impact on the future health of our democracy. How each individual citizen responds to digital influence efforts will have an even more critical and lasting impact, which will also be addressed in the latter part of this chapter. RESPONDING TO DIGITAL INFLUENCE ATTACKS According to a January 2020 report in the New York Times, the good news is that “American defenses have vastly improved in the four years since Russian hackers and trolls mounted a broad campaign to sway the 2016 presidential election. Facebook is looking for threats it barely knew existed in 2016, such as fake ads paid for in rubles and self-proclaimed Texas secessionists logging in from St. Petersburg. Voting officials are learning about bots, ransomware and other vectors of digital mischief.”34 A variety of government investigations and reports have generated greater public awareness about the threat of digital influence attacks. Examples include the five-volume Senate Intelligence Committee report produced after a 36-month investigation of Russian influence in the 2016 election,35 portions of which have been cited in several chapters of this book. The United

States has also imposed sanctions against specific individuals in response to their roles in Russia’s efforts to influence the 2016 presidential election. New government initiatives have been launched, like the Media Forensics program at the Defense Advanced Research Projects Agency (DARPA).36 In 2017, the FBI established a Foreign Influence Task Force (FITF) to identify and counteract malign foreign influence operations targeting the United States. According to testimony in 2019 by FBI Director Christopher Wray, this task force has been involved in numerous investigations and shares intelligence with local, state, federal, and foreign government agencies. Perhaps more importantly, the FBI has also “met with top social media and technology companies several times, provided them with classified briefings, and shared specific threat indicators and account information, so they can better monitor their own platforms.”37 In fact, the social media companies play the most critical role in countering the digital influence efforts described in this book, both foreign and domestic, in at least four ways: establishing new policies to prohibit disinformation, creating new tools and information resources for use in detecting abuse of their social media platforms, removing content that violates their policies, and suspending or deleting accounts that violate their policies. Regarding policy changes, Facebook has banned “deepfake” videos from its platform, and Google publicly released a “Deep Fake Detection Dataset” (containing more than 3,000 manipulated videos captured in various scenes with 28 actors) for use as a tool for research and education.38 In January 2019, YouTube announced it would change its algorithms to stop recommending conspiracies.39 In September 2019, Twitter announced a new “platform manipulation and spam policy,” which states, “You may not use Twitter’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on Twitter.”40 On June 5, 2019, YouTube announced that it would take down videos advocating Nazi or other hateful ideologies or denying major events such as the Holocaust or the Sandy Hook Elementary School massacre.41 And in June 2020, Nick Clegg, the head of global affairs and communications at Facebook, announced that the company would be increasing its efforts to reduce fake news on its platform and would begin “blocking all ads in the U.S. during the election period from state-controlled media organizations from other countries. All political ads must have a ‘paid for by’ disclaimer attached to them—a label which will remain on the ad even if it is shared—and information on which voters were reached, and how many saw them, is logged in an ads library for everyone to see.”42 In August 2019, Instagram released a new tool that allows users to report misinformation, and the flagged content is then reviewed by the company’s fact-checking program.43 A year earlier, Instagram announced a new effort to use machine learning tools for “removing inauthentic likes,

follows and comments from accounts that use third-party apps to boost their popularity.”44 In October 2019, Facebook unveiled new features to label whether posts were coming from state-sponsored media outlets.45 Google Jigsaw launched Assembler, an experimental platform that automates a range of media forensics tasks.46 In 2018, Twitter announced a new Ads Transparency Center.47 Twitter also hosts a massive archive of data on state-backed information operations that have been discovered on Twitter since 2016 and declared their intent “to empower academic and public understanding of information operations around the world, and to empower independent, third-party scrutiny of these tactics on our platform.”48 The data include account details and related content associated with information operations by Russia, China, Iran, Indonesia, Nigeria, Turkey, Egypt, and many other countries. Over the past few years, millions of accounts on Twitter, Facebook, Instagram, YouTube, and other platforms have been suspended or deleted. Several high-profile “takedowns” have already been described in previous chapters of this book, like Facebook’s April 2017 suspension of over 30,000 accounts in France that they suspected were automated and linked to Russia.49 In September 2020, Facebook announced that since 2017 they had “removed over 100 networks worldwide for engaging in coordinated inauthentic behavior . . . [and] with each takedown, threat actors lose their infrastructure across many platforms, forcing them to adjust their techniques, and further reducing their ability to reconstitute and gain traction.”50 And while Facebook continues to develop and implement new ways of identifying and disrupting these kinds of activities, other private organizations are also providing assistance—for example, in October 2019, the Stanford Internet Observatory collaborated with Facebook to identify Russian influence operations in Africa.51 Many other “takedown” efforts have been described in previous chapters of this book, and it seems like new ones are announced almost every week. And in addition to removing accounts for “coordinated inauthentic behavior,” social media platforms are also taking action against specific individuals for repeated violations of policies. For example, in May 2019, Facebook closed the accounts of several controversial figures, including Alex Jones, the conspiracy theorist and founder of InfoWars.52 Twitter had already banned Jones in 2018, along with Louis Farrakhan and others who had been using the social media platform to disseminate hateful and false information.53 For its part, Twitter began cracking down in July 2020 on fans of the right-wing QAnon conspiracy, removing thousands of accounts that engaged in targeted harassment and announcing a plan to keep QAnon information from appearing in trending topics or search results.54 On July 29, 2020, the president’s own son—Donald Trump Jr.—was suspended from Twitter for 24 hours after posting a video clip claiming

that the anti-malaria drug hydroxychloroquine works as a preventative measure against COVID-19. Because this claim had already been repeatedly refuted by medical and scientific studies, Twitter determined that the post had violated its rules against the spread of misinformation. A spokesman for Trump Jr. later responded that this was proof that social media companies were committed to “killing free expression online” and “committing election interference to stifle Republican voices.”55 Facebook and other social media platforms have been increasingly aggressive in shutting down terrorists and hate groups that promote violence online.56 And following the January 6, 2021 attack against the U.S. Capitol Building in Washington, DC, both Twitter and Facebook permanently shut down Trump’s accounts, noting that the violence incited by his consistent lying about voting fraud violated their platforms’ terms of service. Twitter also removed tens of thousands of QAnon-affiliated accounts, while Facebook removed thousands of pages, groups, events, profiles, and Instagram accounts linked to QAnon and many other militarized social movements. Social media companies are also partnering with academic researchers and other organizations to develop even more tools that would help identify and block the spread of disinformation. For example, Facebook is funding the efforts of several academic researchers who are trying to develop a computer algorithm that may detect deepfakes.57 Similarly, a collaboration between researchers at the University of California and the software company Adobe produced a tool that correctly identifies altered images 99 percent of the time.58 Some tools and information resources have been created even without direct involvement of social media companies. For example, the German Marshall Fund launched a Hamilton 68 dashboard, which tracks Russian, Chinese, and Iranian influence operations on Twitter, YouTube, state-sponsored news websites, and via official diplomatic statements at the United Nations.59 Indiana University’s Observatory on Social Media launched a “BotSlayer” tool that instantly detects the use of fake accounts to manipulate public opinion.60 Other examples include Deeptrace, an Amsterdam-based start-up using “deep learning and computer vision for detecting and monitoring AI-generated synthetic videos.”61 Researchers at the Oxford Internet Institute launched The ComProp Navigator,62 an online resource guide that aims to help civil society groups better understand and respond to the problem of disinformation.63 The Atlantic Council-Eurasia Center supports an online information portal called the Disinformation Portal,64 which contains a variety of reports for educating people about the tactics used by Russia to spread disinformation. Another new initiative was launched in 2019 by the Carnegie Endowment for International Peace, called the Partnership to Counter Influence Operations.65 Altogether, these government and private sector initiatives illustrate a much higher awareness of the digital influence efforts described

in this book and a greater commitment to confronting them and mitigating their impacts on society. And yet, there is still more that needs to be done. MEETING THE CHALLENGES AHEAD Social media platforms must continue to get better at preventing disinformation from fueling belief in falsehoods within digital influence silos. Often the blatant lies being spread on social media are wholly offensive and should be confronted and removed quickly. For example, the Committee to Defend the President, a major GOP SuperPAC, ran a series of advertisements on Facebook in early January 2020 claiming Joe Biden is a criminal who used his position as vice president to make himself rich. This claim has been debunked by Facebook’s own fact-checkers as well as by every credible major news service. And yet for over a week, these ads were seen by millions. Is there anything that can be done when a social media platform allows—even encourages—the broad dissemination of blatant disinformation such as this?66 On October 29, 2019, the CEO of Twitter, Jack Dorsey, announced that this social media platform would no longer run political ads. His statement also said Twitter would ban all advertisements about political candidates, elections, and hot-button policy issues such as abortion and immigration.67 This was a decision made on principle, despite the loss of revenue that will result. The Trump campaign manager was quick to condemn Twitter for this decision, calling it stupid to turn down millions in profits and characterizing it as a left-wing conspiracy to silence pro-Trump conservatives. Russian state media also condemned the decision, for reasons that should now be quite obvious. But in stark contrast, Facebook resisted following Twitter’s example. In October 2019, Facebook CEO Mark Zuckerberg defended allowing politicians to lie on Facebook by saying that “in a democracy, I believe people should decide what is credible, not tech companies.”68 His underlying argument here was that asking social media platforms to ban false implications and inferences (as opposed to demonstrable lies) is in some ways asking the platforms to do voters’ work for them.69 “We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying.”70 Mark Zuckerberg noted that social media platforms work hard to stop users from spreading terrorist propaganda, bullying young people, or disseminating pornography. However, he asked, “The question is: Where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger.”71 For example, he notes, Facebook policy focuses “on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm,

like misleading health advice saying if you’re having a stroke, no need to go to the hospital.”72 Newer policies also address coordinated efforts to spread disinformation posted by fake accounts, individuals pretending to be someone else. Facebook AI systems, he claimed, “have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year—most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.”73 Unfortunately, as admirable as his defense of free speech may be, Zuckerberg missed the point about all the digital influence warfare efforts that have emerged over the past decade. The perpetrators of these influence operations have all had one thing in mind: using our freedoms of speech against us to their advantage. Whether they are state-sponsored teams of influence mercenaries or groups of extremists and terrorists, their ability to use Facebook and other social media platforms to manufacture outrage, “othering,” uncertainty, and conspiracies among a target audience is unprecedented. Influence attackers intend to identify and reinforce behavior within digital influence silos not for the good of the people, but to achieve specific political, social, economic, and even violent goals at the expense of the people. Allowing politicians—or anyone—to pay for the dissemination of demonstrably false information is not a free speech issue; it is an issue of right and wrong. Just as it is wrong in any society to lie, it is wrong to reinforce an audience’s belief in the lies they are being told. And it is even more wrong for Facebook to be generating profits by allowing politicians to reinforce their audience’s beliefs in those lies. Perhaps this is why Zuckerberg announced in September 2020 that Facebook would limit political advertising during the week before the early November election, stating that the decision was meant to halt misinformation and prevent civil unrest.74


The approach reflected in these and other statements by Facebook highlights the fundamental difference between social media (which an increasing number of Americans turn to for their “news”) and the world of scholarly journals, traditional mainstream press, and other information sources that have editorial policies, maintain strong constraints against publishing material that is not verifiable, and publish retractions when they are shown to have made a mistake. The chapters in this book have examined the implications of this “make your own informed choice” approach in a society where “informed” is increasingly based on what you believe and who or what you allow yourself to be influenced by, regardless of factual evidence. The problem we face today is that the entire landscape of Internet companies needs to address how their products and services are being used for malicious purposes. The Russian-based “Operation Secondary Infektion” described in chapter 1 of this book involved 326 different social media platforms.77 As powerful as Facebook, Twitter, Instagram, and Google have become, digital influence warfare is not a problem that can be solved by those companies alone. It is also clear that governments imposing punitive sanctions against foreign individuals and countries engaged in digital influence attacks are having little to no impact. While billions have been spent on developing the capability to identify cyberattacks and respond quickly to mitigate and limit the damage wrought by them, the United States does not currently have an effective deterrent against foreign countries responsible for digital influence attacks against us. Governments should be able to hold malicious digital influencers accountable in a court of law. In Singapore, a law that took effect in October 2019 allows for criminal penalties of up to 10 years in prison and a fine of up to S$1 million ($720,000) for anyone convicted of spreading online inaccuracies. Malaysia enacted a similar law, and France has a new law that allows judges to determine what is fake news and to order its removal during election campaigns.78 Germany also has a range of laws specifically for governing content on social media. In the United States, arguments for the protection of free speech have been used to claim that laws like these are not possible. But at least with their new policies, the social media platforms and Internet companies can punish individuals for violating their terms of service. You may have a right to “say” whatever you want, but you do not have a right to use their microphone to say it. Another challenge that must be taken more seriously is the need for nationwide education programs to improve digital information literacy, in order to make sure all citizens are fully aware of how vulnerable they are to disinformation and to show them how to recognize fake news. Authorities should demonstrate how image manipulation software is being used to mislead them and should showcase recent examples of realistic deepfakes. In the United Kingdom, the government published a 69-page toolkit to help public sector communication professionals prevent the spread of disinformation.79

Public education initiatives in countries like Ukraine and Finland, which emphasize critical thinking and responsible information consumption, could be adapted for the American public.80 In fact, this is where the most important fight against digital influence efforts will take place: at the individual level, where you and I see something in our social media feed and decide whether to amplify it or delete it. Further, because Russia’s influence operations focus on undermining the perceived legitimacy of democratic elections and institutions, the United States would benefit from the kind of civics education efforts envisioned by the Education for Democracy Act passed in 2020 by the House of Representatives.

WHAT YOU AND I CAN DO

The most important thing for any of us is to first decide that we want to fight back against malicious digital influence efforts. Some of us do, but others do not. There are already a variety of government and private sector efforts to educate us about the harms of disinformation, which help us to become more informed consumers of information.81 But these will only succeed if we choose to do our part. When looking at a video or photo online, stop and ask yourself some important questions: Is this the original version? Do I know who created this, when, where, and why? Is the source of this credible? The more concretely you can answer these questions, the more likely it is you can trust the information you are seeing. On the other hand, if you have no answers for any of these questions, it’s likely you are making a lot of assumptions about this information that may not be entirely true. As noted in previous chapters, there are few barriers to posting a website or a profile on a social media service that is completely fabricated. This has been done for years, with increasing sophistication and ease, with the result that millions of Internet users have at one time or another been duped by something online they thought was real, but then turned out to be entirely fake. It is increasingly necessary for all of us to take the extra effort to verify the veracity of the websites we visit and the details of the user profiles we view. We can use online cues to reduce uncertainty about falsehoods. For example, you may believe you are seeing something peculiar on the website of the New York Times, but then notice that the website domain ends in “nytimes.yyz.info” and quickly realize that this is a fake, parody website crafted to look as real and authentic as possible in order to deceive viewers. Other cues include a generic greeting and poor grammar in an email we receive (e.g., “Dear sir/madam, my mane is Jack and I went to gave you $1million…”), clearly marking the message as a phishing attempt that can rightly be flagged as spam.
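To make the domain cue concrete, here is a minimal sketch, written for this discussion rather than taken from any tool described in this book, of how a reader (or a simple browser script) might compare a link against a short allow-list of trusted news domains. The allow-list and the example URLs are assumptions chosen purely for illustration.

from urllib.parse import urlparse

# Hypothetical allow-list of registered domains the reader already trusts.
TRUSTED_DOMAINS = {"nytimes.com", "abcnews.go.com", "bbc.co.uk"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

for link in [
    "https://www.nytimes.com/2020/06/01/us/example.html",  # genuine subdomain
    "https://nytimes.yyz.info/shocking-story",             # lookalike domain
    "http://abcnews.com.co/obama-pledge",                  # spoof of abcnews.go.com
]:
    print(link, "->", "trusted" if is_trusted(link) else "NOT trusted")

A real verification workflow would go much further (checking certificates, site age, and known-impostor lists), but even this crude comparison flags the lookalike domains described above and in the paragraph that follows.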


Of course, as described in the previous chapter, fake news sources are intended to look as realistic as possible, in order to achieve the goals of the digital influence campaign. For example, a fake news story on abcnews.com.co (a site designed to deceive people into thinking they were viewing the ABC News website) was titled “Obama Signs Executive Order Banning The Pledge Of Allegiance In Schools Nationwide.” The story quoted as one of its sources “Fappy the Anti-Masturbation Dolphin.” The story was allegedly written by “Jimmy Rustling,” whose bio claimed he is a “doctor” who won “fourteen Peabody awards and a handful of Pulitzer Prizes” and whose photo appears on other websites attached to different names like “Darius Rubics.” But despite all these obvious indicators of being fake news, the title of the story evoked an emotional response among social media users who disliked President Obama, so they fell for the trick and forwarded or retweeted the story without ever bothering to read it or assess the credibility of its author.82 Another story that went viral on social media included a graphic purporting to show crime statistics on the percentage of white people killed by non-white people and other murder statistics by race. Then-presidential candidate Donald Trump retweeted it, telling Fox News commentator Bill O’Reilly that it came “from sources that are very credible.” But almost every figure in the image was wrong—anyone could simply compare it to real FBI crime data, which is publicly available. And the supposed source given for the false data, “Crime Statistics Bureau–San Francisco,” doesn’t even exist.83 Fake news like this can have real-world implications, fueling and deepening people’s convictions in things that simply are not true. It doesn’t have to be this way, but it’s up to each of us to commit ourselves to recognizing the problem and to avoid contributing to it. According to Stephan Lewandowsky, Professor of Cognitive Science at the University of Bristol, “If people are made aware that they might be misled before the disinformation is presented, they demonstrably become more resilient to the misinformation. This process is variously known as ‘inoculation’ or ‘prebunking’ and it comes in a number of different forms. At the most general level, an up-front warning may be sufficient to reduce—but not eliminate—subsequent reliance on disinformation.”84 These efforts may eventually help in some instances, but inoculation will only work if and when individuals choose to participate, because you cannot force them. Just like we have seen with the anti-vaccination movement in the United States, many will choose to ignore the threat of digital influence warfare, or resist any efforts to curb its effects, until it is too late. Even more tragic, it is already too late for some people, whose sense of self is reinforced by the information they are spoon-fed, loaded with confirmation bias and empty calories. To turn away from the bias-confirming feast of disinformation would be too gut-wrenching for them to even consider. They have already committed far too much emotional presence and cognitive energy, so asking them to abandon it in return for uncertainty would be a waste of time.

However, for those who do want to fight back, there are weapons—digital, educational, and psychological weapons—that we can arm ourselves with and learn to use effectively. On the technological front, a program like PhotoDNA can be used to analyze the details of an online image. The Internet research firm First Draft has assembled a very useful online collection of tools85 for verification of photos and videos, among many other ways to identify fakes. Researchers at the University of Cambridge have launched an online game that reduces a person’s susceptibility to disinformation,86 and the media outlet Buzzfeed developed an entertaining “Fake News Quiz”87 to showcase how much disinformation we are exposed to each week, while Facebook’s “News Hero” game app is meant to develop a user’s “ability to evaluate and separate fake news from real stories.”88 Facebook has posted guidelines for how its users can spot fake news,89 and Wikipedia has compiled a number of lists showcasing fake news media outlets.90 Eugene Kiely and Lori Robertson of FactCheck.org published a brief summary for how to spot fake news,91 and the Atlantic Council Digital Forensic Research Lab has partnered with Google Jigsaw to provide a compelling visualization that explains the threat of disinformation around the world.92 The research organization EU vs Disinfo has compiled an online database of disinformation that anyone can use to search for particular fake news narratives.93 Other information resources of value include books like Brooke Borel’s The Chicago Guide to Fact-Checking; Daniel Levitin’s Weaponized Lies: How to Think Critically in the Post-Truth Era; and Sarah Harrison Smith’s The Fact Checker’s Bible.94 The recently published book True or False, by former CIA analyst Cindy Otis, takes readers through the history and impact of fake news over the centuries and then provides a wealth of useful tips (and illustrations) for how to spot fake news.95 The Stanford History Education Group has published a video tutorial on “lateral reading” to demonstrate how fact-checkers can efficiently find accurate information online.96 Carl Miller—Research Director at the Center for the Analysis of Social Media and author of a book about power in the digital age—has recommended the following “Seven Rules to Keep Yourself Safe Online.”97

1. Actively look for the information you want, don’t let it find you. “The information that wants to find you isn’t the information you want to find. Instead, reach out, actively find good sources. Proactively learn about the world using your own, conscious sense of what to trust.”
2. Beware the passive scroll. This is when you are prey to processes that can be gamed and virals that can be shaped. “Sitting there scrolling through your feed makes you prey to all the gaming and manipulation that targets algorithmic curation. This is one of the ways that illicit/manipulative content makes itself extremely visible.”

3. Guard against outrage. Outrage is easy to hijack and makes you particularly vulnerable to being manipulated online. What’s more, your outrage can induce outrage in others, making it a particularly potent tool. “Activating outrage is the easiest way to manipulate you. It is present in literally every info warfare campaign I’ve ever analyzed. When you become angry, you make others angry as well—both your friends and opponents. Guard against it.”
4. Slow down online. Pause before sharing. Give time for your rational thought processes to engage with what you are reading. “Manipulation very commonly activates your emotion, not reasoning. The hope (I’ve heard from viral crafters) is to get you to share stuff literally before you’ve thought about it. Always pause before sharing. Consider it as well as feel it.”
5. Lean away from the metrics that can be spoofed. Don’t trust something because it is popular, trending or visible. “Even Jack Dorsey says not to trust Twitter followers. All those easily visible, countable metrics have been taken to be a proxy for authority, and they really, really shouldn’t be. They’re unbelievably easy to manipulate, across [social media].”
6. Never rely only on information sourced from social media. This is particularly the case for key pieces of information, such as where polling booths are or whether you can vote. “For really key stuff, don’t trust the Internet I’m afraid. Speak to actual people too.”
7. Spend your attention wisely: it is both your most precious and coveted asset. “Your attention changes you. Where you spend it is a proactive choice that change who you are, what you think, whom you know. Spending attention should be made with the same discernment and care as, say, deciding what food to put into our bodies.”

There are also a number of online tools that will help you determine whether a social media account is a real person or the kind of automated “bot” described in chapter 3.98 One of the easiest of all is Bot Sentinel: simply type someone’s Twitter account into the form at http://botsentinel.com. You can also use the Information Operations Archive99 to see if an account has interacted with other accounts involved in known influence operations. A more advanced online tool called Botometer,100 developed at Indiana University, uses around 1,200 different features of an account’s profile to deem whether or not it is a bot. And another approach called the IMPED model101—developed in partnership with academics at City University London and Arizona State University—looks at linguistic and temporal patterns in tweets to deem whether the content is low quality. Oxford researcher Philip Howard has also identified some ways to identify a highly automated account (a simple illustrative sketch of how such indicators might be combined follows the list):102

• One or only a few posts are made, all about a single issue;
• The posts are all strongly negative, over-the-top positive, or obviously irrelevant;

• It is difficult to find links to and photos of real people or organizations behind the post;
• No answers, or evasive answers, are given in response to questions about the post or the source;
• The exact wording comes from several accounts, all of which appear to have many of the same followers.
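As promised above, here is a minimal sketch, written for this discussion rather than drawn from Botometer or any other named tool, of how a reader or researcher might score an account against a few of Howard’s indicators. The Account fields, the example data, and the scoring logic are all assumptions chosen for clarity.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Account:
    # Hypothetical, simplified view of a social media account.
    posts: List[str] = field(default_factory=list)
    topics: List[str] = field(default_factory=list)   # one topic label per post
    has_real_profile_links: bool = False               # links/photos to a real person or org
    answers_questions: bool = False                     # responds substantively when questioned
    shares_exact_wording_with: int = 0                  # other accounts posting identical text

def automation_score(acct: Account) -> int:
    """Count how many of the indicators above the account triggers (0-4)."""
    score = 0
    if acct.posts and len(set(acct.topics)) == 1:       # single-issue posting
        score += 1
    if not acct.has_real_profile_links:                 # no trace of a real person or org
        score += 1
    if not acct.answers_questions:                      # evasive or silent when questioned
        score += 1
    if acct.shares_exact_wording_with >= 3:             # identical wording across accounts
        score += 1
    return score

suspect = Account(posts=["Vote NO!"] * 50, topics=["referendum"] * 50,
                  shares_exact_wording_with=12)
print("indicators triggered:", automation_score(suspect))   # prints 4: likely automated

A high score here is only a prompt for closer inspection, not proof of automation; production systems such as Botometer weigh hundreds of additional features.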

With all these tools and information guides at our disposal, there is much that you and I can do to combat the spread of online disinformation. In a way, we are the most important warriors to combat digital influence warfare in the future. As you know by now (to quote Adi Robertson of The Verge), “the Internet is full of grifters, tricksters, and outright liars who rely on people’s basic trust to amplify their message. It’s worth slowing down and carefully navigating their traps, to avoid spreading an alarming false rumor, getting angry at a group of people for something they didn’t do, or perpetuating an honest misunderstanding.”103 There are a number of strategies you can use to identify, evaluate, and reject the disinformation that’s creeping into your social media feed and email inbox. An essential component of all of these strategies is making a personal commitment to slowing down and thinking about information—whether that information is true, false, or something in between—before you click on that “share,” “like,” or “retweet” button. As Robertson recommends, a key first step is to become acutely aware of why some stories grab your attention, particularly when a given piece of content is too good (or bad) to be true. “Once you start looking, you’ll notice specific subtypes of this content—like ragebait designed to get traffic from people’s anger, hyperpartisan appeals that twist the facts, or outright scams. The techniques are relatively common across different types of story, and they’re not hard to recognize.”104 If you have a strong emotional reaction to the information you are seeing, pause and consider why that is and how the information source might benefit from your reaction. We need an ability to detach ourselves emotionally from the situation, to find what the SCUBA diving industry calls “neutral buoyancy,” when we are neither being pulled downward against our will nor are we exerting significant energy (or valuable oxygen) fighting against the forces of gravity. As discussed in earlier chapters of this book, influencers want to provoke emotional reactions—from outrage to humor—that they hope will lead us to forward their information to others in our social network, thereby expanding the reach and potential impact of their information (or disinformation). If you find yourself instinctively wanting to share a story or social media post because it makes you angry, pause for a moment and ask yourself if there is something you want your friends and followers to do with that information. Do you want them to share in your anger and outrage? If so, who would this benefit most—you, them, or the original source of the information?


When encouraging digital literacy, we should advise people to stop for 30 seconds and think about why they agree with something, particularly if it reinforces a negative perception of something or someone specific. Remember, there are thousands (if not millions) of digital influence mercenaries online today whose posts—whether satire or just plain “fake news”—are designed to encourage clicks and generate money for the creator through ad revenue. They are trying to make a living off this kind of engagement, and if deceiving you is the easiest and most lucrative means of doing so, this profit incentive means you will be seeing more and more disinformation attempts in your future—particularly if nobody is bothering to fact-check anything. And when you see a friend, family member, or colleague sharing information that you suspect is false (or that has already been proven false), send them a private message pointing it out in a nice way and letting them know that you’re trying to help them avoid the embarrassment of spreading disinformation and aiding those who want to harm us via the tactics of digital influence warfare. In addition to developing awareness and a commitment to resisting digital influence efforts, we must also commit to stepping outside our comfort zones, our influence silos, at least once in a while. As described earlier in this book, we now have the capability to surround ourselves exclusively with sources of information that confirm what we want to believe, and we can ignore, block, and denigrate competing sources of information that may challenge or contradict those beliefs. Republicans can now ensure they see and hear only information favorable to the Republican point of view, while Democrats can ensure they see and hear only information that supports the Democratic point of view. But unfortunately, as Lee McIntyre notes, “Those in the grip of partisan bias are strongly motivated to reject evidence that is dissonant with their beliefs, sometimes even leading to a ‘backfire effect.’”105 How can we ever hope to work together as a society toward solving some of our most dangerous problems—like terrorism and extremist ideologies—when people can simply ignore information sources that have been so conveniently walled off? As Kakutani notes, “Without commonly agreed-upon facts—not Republican facts and Democratic facts; not the alternative facts of today’s silo-world—there can be no rational debate over policies, no substantive means of evaluating candidates for political office, and no way to hold elected officials accountable to the people. Without truth, democracy is hobbled.”106 Our influence silos provide uniquely powerful and easy opportunities for digital influence mercenaries to exacerbate divisions and increase the hatred and “othering” that make social unity ever more impossible. If we can’t destroy these silos, we must do all we can to transcend them, step out and above them, and take control over them instead of letting them control us. As Bruce Bartlett explains, a strong defense against fake news involves critical thinking, “taking in news from

a variety of sources, including those that don’t conform to your own biases, and being skeptical about information that sounds too good (or bad) to be true.”107 This is not to say that mitigating our vulnerabilities to influence requires us to replace uncertainty with stronger convictions. But there is a huge difference between healthy skepticism and embracing industry-promoted efforts to increase uncertainty. This is why media literacy, digital literacy, and education are so critical now for any democracy to regain some sense of balanced objectivity. Efforts to disseminate fake news would be powerless in an information environment where everyone places supreme emphasis on facts and logically defensible interpretations of those facts to determine an unchallengeable truth. Finally, each of us should reflect honestly on our relationship with the Internet. Are we overly dependent on our Facebook, Twitter, and Instagram accounts to inform us about the rest of the world? Do we feel a constant need to “check-in” and see what others are saying or to post new photos and information about ourselves? How much time do we spend watching YouTube videos, scrolling through our social media feeds, surfing websites for something that provokes our emotions, stimulates our intellect, or confirms what we want to believe about the world and our place within it? I know some individuals who literally spend hours upon hours every single day engaged in the digital information ecosystem, and I’m betting many of you do as well. This brings us to the most central truth about digital influence warfare: the more we rely on digital information sources for making decisions and forming opinions, the more vulnerable we are to the strategies and tactics described throughout this book. Influential monsters have been created via the Internet that cannot be easily tamed, but as an optimist at heart, I feel certain that if we want to tame them, we have the power to do so. For the sake of our children, I hope I’m not wrong.

Notes

CHAPTER 1 1. Alice Marwick and Rebecca Lewis, “Media Manipulation and Disinformation Online,” Data & Society (May 15, 2017). Online at: ­https://​­datasociety​.­net​/­pubs​ /­oh​/­DataAndSociety​_MediaManipulationAndDisinformationOnline​.­pdf 2. Michael V. Hayden, The Assault on Intelligence: American National Security in an Age of Lies (New York: Penguin Press, 2018), p. 191. 3. Michael Erbschloe, Social Media Warfare: Equal Weapons for All (Boca Raton, FL: CRC Press, 2017), p. xix. 4. Gregory Rattray and Laurence Rothenberg, “A Framework for Discussing War in the Information Age,” in War in the Information Age, edited by Robert Pfalzgraff and Richard Shultz (New York: Brassy’s, 1997), pp. 331–354, 340. 5. Laura Galante and Shaun Ee, “Defining Russian Election Interference: An Analysis of Select 2014 to 2018 Cyber Enabled Incidents,” Atlantic Council’s Digital Forensics Lab (September 11, 2018). Online at: h ­ ttps://​­www​.­atlanticcouncil​.­org​ /­in​-­depth​-­research​-­reports​/­issue​-­brief​/­defining​-­russian​-­election​-­interference​-­an​ -­analysis​-­of​-­select​-­2014​-­to​-­2018​-­cyber​-­enabled​-­incidents​/ 6. Soufan Center, “The Social Media Weapons of Authoritarian States,” IntelBrief (September 13, 2019). Online at: ­https://​­thesoufancenter​.­org​/­intelbrief​-­the​-­social​ -­media​-­weapons​-­of​-­authoritarian​-­states​/ 7. Oxford Internet Institute’s Computational Propaganda Project, online at: ­http://​­comprop​.­oii​.­ox​.­ac​.­uk​/ 8. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment (Santa Monica, CA: Rand Corporation, 2019), p. 122. 9. Braden R. Allenby, “The Age of Weaponized Narrative, or, Where Have You Gone, Walter Cronkite?” Issues in Science and Technology 33, no. 4 (Summer 2017), p. 68.

246Notes 10. Marwick and Lewis, “Media Manipulation and Disinformation Online.” 11. Claire Wardle and Hossein Derakhshan, “Information Disorder: Toward and Interdisciplinary Framework for Research and Policy Making,” Council of Europe Report (September 27, 2017); Caroline Jack, “Lexicon of Lies: Terms for Problematic Information,” Data & Society (August 9, 2017). 12. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 122. 13. Richard Stengel, Information Wars: How We Lost the Global Battle against Disinformation and What We Can Do about It (Washington, DC: Atlantic Monthly Press, 2019). 14. Danah Boyd, “The Information War Has Begun,” Apophenia (January 27, 2017). Online at: ­http://​­www​.­zephoria​.­org​/­thoughts​/­archives​/­2017​/­01​/­27​/­the​ -­information​-­war​-­has​-­begun​.­html 15. Jakub Kalenský, “Defending Democracy through Media Literacy,” Disinfo Portal (September 17, 2019). Online at: ­https://​­disinfoportal​.­org​/­defending​ -democracy-through-media-literacy/ 16. Adrian Chen, “The Agency,” New York Times Magazine (June 2, 2015), citing a Forbes Russia analysis of the Prism computer system for monitoring public sentiment. Online at: ­https://​­www​.­nytimes​.­com​/­2015​/­06​/­07​/­magazine​/­the​-­agency​.­html 17. Kalenský, “Defending Democracy through Media Literacy.” 18. Committee on Armed Services, United States Senate, “Russian Influence and Unconventional Warfare Operations,” Hearing before the Subcommittee on Emerging Threats and Capabilities (March 29, 2017). 19. Cherilyn Ireton and Julie Posetti, Journalism, Fake News and Disinformation (Paris: UNESCO, 2018), p. 15. 20. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 154. 21. Ibid., p. 155. 22. Carl von Clausewitz, On War, trans. Col. J.J. Graham. New and Revised edition with Introduction and Notes by Col. F.N. Maude, in Three Volumes (London: Kegan Paul, Trench, Trubner & C., 1918), Vol. 1. 23. Nina Jankowicz, How to Lose the Information War: Russia, Fake News and the Future of Conflict (New York: IB Taurus, 2020). 24. David Patrikarakos, War in 140 Characters: How Social Media Is Reshaping Conflict in the Twenty-First Century (New York: Basic Books, 2017); Erbschloe, Social Media Warfare; P.W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018). 25. Singer and Brooking, LikeWar, p. 261 26. Samantha Bradshaw and Philip N. Howard, Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation. Computational Propaganda Research Project, Oxford Internet Institute (2018), p. 3. Specifically, the authors define “Cyber troops” as government or political party actors tasked with manipulating public opinion online, drawing on their previous work: Bradshaw and Howard, “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation,” Computational Propaganda Research Project, Oxford Internet Institute, Working Paper No. 2017.12 (July 17, 2017). Online at: ­https://​­comprop​.­oii​.­ox​.­ac​.­uk​/­research​/­troops​-­trolls​-­and​-­trouble​-­makers​-­a​ -­global​-­inventory​-­of​-­organized​-­social​-­media​-­manipulation​/.

Notes247 27. Bret Schafer, A View from the Digital Trenches: Lessons from Year One of Hamilton 68. Alliance for Securing Democracy, Report No. 33. The German Marshall Fund of the United States (2018). 28. Max Boot and Michael Doran, “Political Warfare,” Council on Foreign Relations (June 28, 2013). Online at: ­https://​­www​.­cfr​.­org​/­report​/­political​-­warfare 29. Paul A. Smith, On Political War (Washington, DC: National Defense University Press, 1989), p. 3; cited in Linda Robinson et al., Modern Political Warfare (Santa Monica, CA: Rand, 2018), p. 4. 30. Carnes Lord, “The Psychological Dimension in National Strategy,” in Political Warfare and Psychological Operations: Rethinking the US Approach, edited by Carnes Lord and Frank R. Barnett (Washington, DC: National Defense University Press, 1989), p. 16. 31. Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Strauss and Giroux, 2020). 32. Brad M. Ward, Strategic Influence Operations: the Information Connection (Carlisle, PA: U.S. Army War College, April 7, 2003), pp. 1–2. Online at: ­http://​­fas​.­org​ /­irp​/­eprint​/­ward​.­pdf 33. Robinson et al., Modern Political Warfare, pp. xix–xx. 34. Walter L. Sharp, Information Operations (Joint Publication 3-13, February 13, 2006), p. I-2. Online at: ­http://​­www​.­dtic​.­mil​/­doctrine​/­jel​/­new​_pubs​/­jp3​_13​.­pdf; see also: Air University. Strategic Communication(s) (May 1, 2007). Online at: h ­ ttp://​ ­www​.­au​.­af​.­mil​/­info​-­ops​/­strategic​.­htm​#­top 35. Vincent Vitto, Report of the Defense Science Board Task Force on Strategic Communication. Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (September 2004), p. 2. 36. James J.F. Forest and Frank Honkus, “Introduction,” in Influence Warfare, edited by James J.F. Forest (Westport, CT: Praeger Security International, 2009). 37. Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Henry Holt and Company, 1992), p. 89. 38. Massimo Flore et al., Understanding Citizens’ Vulnerabilities to Disinformation and Data-Driven Propaganda, Technical Report, Joint Research Center, European Commission Science and Knowledge Services (2019). Online at: h ­ ttps://​­ec​.­europa​ .­eu​/­jrc​/­en​/­publication​/­understanding​-­citizens​-­vulnerabilities​-­disinformation​ -­and​-­data​-­driven​-­propaganda 39. Brian M. Jenkins, “America’s Great Challenge: Russia’s Weapons of Mass Deception,” Workshop Summary Report (September 2019). Online at: h ­ ttps://​ ­weaponsofmassdeception​.­net​/ 40. Richard Stengel, “We’re In the Middle of a Global Information War. Here’s What We Need to Do to Win,” Time (September 26, 2019). Online at: h ­ ttps://​­time​ .­com​/­5686843​/­global​-­information​-­war​/ 41. Sharp, Information Operations, p. I-2. 42. U.S. Army, Manual. Online at: ­http://​­fas​.­org​/­irp​/­doddir​/­army​/­fm100​-­6​ /­intro​.­htm 43. Sharp, Information Operations, p. II-1. 44. Edward Rouse, Psychological Operations (June 5, 2007). Online at: h ­ ttp://​ ­www​.­psywarrior​.­com​/­psyhist​.­html

248Notes 45. Ivan Goldberg, “Information Warfare,” Institute for the Advanced Study of Information Warfare (December 23, 2006). Online at: h ­ ttp://​­www​.­psycom​.­net​ /­iwar​.­1​.­html 46. Alex Krasodomski-Jones, Josh Smith, Elliot Jones, Ellen Judson, and Carl Miller, “Warring Songs: Information Operations in the Digital Age,” Demos (May 2019), p. 12. Online at: ­http://​­www​.­demos​.­co​.­uk 47. Jason Stanley, “How Propaganda Works in the Age of Fake News,” ­WBUR​ .­org (February 12, 2017). Online at: ­https://​­www​.­wbur​.­org​/­hereandnow​/­2017​/­02​ /­15​/­how​-­propaganda​-­works​-­fake​-­news 48. Philip G. Zimbardo, Ebbe B. Ebbesen, and Christina Maslach, Influencing Attitudes and Changing Behavior, Second edition (New York: Random House, 1977), p. 157. 49. Richard H. Shultz and Roy Godson, Dezinformatsia: Active Measures in Soviet Strategy (New York: Pergamon Brassey’s, 1984). 50. Thomas X. Hammes, “Fourth Generation Continues to Evolve; Fifth Emerges,” Military Review (May–June 2007), p. 2. 51. Quoted in John Cloake, Templer: Tiger of Malaya—The Life of Field Marshal Sir Gerald Templer (London: Harrap, 1985), p. 262. 52. Kurt Braddock, Weaponized Words: The Strategic Role of Persuasion in Violet Radicalization and Counter-Radicalization (London: Cambridge University Press, 2020). 53. Many descriptions of these online efforts by terrorist groups are provided in various issues of the peer-reviewed scholarly journal I co-edit, Perspectives on Terrorism, available online at: ­https://​­www​.­universiteitleiden​.­nl​/­perspectives​-­on​ -­terrorism, and in the CTC Sentinel, the monthly publication of the Combating Terrorism Center (where I used to work), also available online at: h ­ ttps://​­ctc​.­usma​ .­edu​/­ctc​-­sentinel​/; and for a description of how Hamas and Israel used the Internet in 2012 to convince audiences of their versions of events, see Eric Schmidt and Jared Cohen, The New Digital Age (New York: Alfred A. Knopf, 2013), pp. 189–190. 54. Singer and Brooking, LikeWar, p. 133; citing Joseph Bernstein, “Alt-White: How the Breitbart Machine Laundered Racist Hate,” Buzzfeed (October 5, 2017). 55. Hannah Murphy, “Inside Facebook’s Information Warfare Team,” Financial Times (July 5, 2019). Online at: ­https://​­www​.­ft​.­com​/­content​/­70b86214​ -9e77-11e9-9c06-a4640c9feebb 56. Mark Galeotti, Russian Political War: Moving beyond the Hybrid (London: Routledge, 2019), p. 11. 57. Jane Mayer, “How Russia Helped Swing the Election for Trump,” The New Yorker (September 24, 2018). Online at: ­https://​­www​.­newyorker​.­com​/­magazine​ /­2018​/­10​/­01​/­how​-­russia​-­helped​-­to​-­swing​-­the​-­election​-­for​-­trump 58. Zack Beauchamp, “Trump’s Allies in the National Security Council Are Being Taken Out,” Vox (August 2, 2017). Online at: ­https://​­www​.­vox​.­com​/­world​ /­2017​/­8​/­2​/­16087434​/­ezra​-­cohen​-­watnick​-­fired 59. Freedom House, “2019 Internet Freedom Election Monitor,” Report. Online at: ­https://​­freedomhouse​.­org​/­sites​/­default​/­files​/­2019​-­11​/­11042019​_Report​_FH​ _FOTN​_2019​_final​_Public​_Download​.­pdf 60. Nike Aleksajeva et al., “Operation Secondary Infektion,” Atlantic Council Digital Forensic Research Lab (June 22, 2019). Online at: ­https://​­www​.­atlanticcouncil​ .­org​/­in​-­depth​-­research​-­reports​/­report​/­operation​-­secondary​-­infektion​/

Notes249 61. Ibid. 62. Robert S. Mueller, III, “Report on the Investigation Into Russian Interference in the 2016 Presidential Election,” U.S. Department of Justice (March 2019). Online at: ­https://​­www​.­justice​.­gov​/­storage​/­report​.­pdf. We will explore Russia’s digital influence efforts in greater detail later in this book. 63. Craig Timberg and Tony Romm, “Bipartisan Senate Report Calls for Sweeping Effort to Prevent Russian Interference in 2020 Election,” Washington Post (October 8, 2019). Online at: ­https://​­www​.­washingtonpost​.­com​/­technology​/­2019​/­10​ /­08​/­bipartisan​-­senate​-­report​-­calls​-­sweeping​-­effort​-­prevent​-­russian​-­interference​ -­election​/; United States Senate, 116th Congress, “Report of the Senate Committee on Intelligence on Russian Active Measures Campaigns and Interference in the 2016 Election,” Volume 1, Report 116-XX (2019). Online at: h ­ ttps://​­www​ .­intelligence​.­senate​.­gov​/­sites​/­default​/­files​/­documents​/­Report​_Volume1​.­pdf 64. Tripti Lahiri, “China’s Disinformation on Hong Kong Protests Is on Twitter and Facebook,” Quartz (August 19, 2019). Online at: ­https://​­qz​.­com​/­1690935​ /­twitter​-­facebook​-­tackle​-­china​-­disinformation​-­on​-­hong​-­kong​-­protest​/ 65. Shelly Banjo, “Facebook, Twitter and the Digital Disinformation Mess,” Washington Post (October 31, 2019). Online at: ­https://​­www​.­washingtonpost​.­com​ /­business​/­facebook​-­twitter​-­and​-­the​-­digital​-­disinformation​-­mess​/­2019​/­10​/­31​ /­3f81647c​-­fbd1​-­11e9​-­9e02​-­1d45cb3dfa8f​_story​.­html 66. “Hong Kong Protests: YouTube Shuts Accounts over Disinformation,” BBC News (August 22, 2019). Online at: ­https://​­www​.­bbc​.­co​.­uk​/­news​/­technology​ -49443489 67. Paul Mozur and Alexandra Stevenson, “Chinese Cyberattack Hits Telegram, App Used by Hong Kong Protesters,” New York Times (June 13, 2019). Online at: ­https://​­www​.­nytimes​.­com​/­2019​/­06​/­13​/­world​/­asia​/­hong​-­kong​-­telegram​ -­protests​.­html 68. Esther Chan and Rachel Blundy, “‘Bulletproof’ China-Backed Site Attacks HK Democracy Activists,” AFP (Agence France Presse) (November 1, 2019). Online at: ­https://​­news​.­yahoo​.­com​/­bulletproof​-­china​-­backed​-­attacks​-­hk​-­democracy​ -­activists​-­070013463​.­html 69. Tom Uren, Elise Thomas, and Jacob Wallis, “Tweeting through the Great Firewall: Preliminary Analysis of PRC-linked Information Operations on the Hong Kong Protests,” Australian Strategic Policy Institute (September 3, 2019). Online at: ­https://​­www​.­aspi​.­org​.­au​/­report​/­tweeting​-­through​-­great​-­firewall 70. J. Michael Cole, “China Intensifies Disinformation Campaign against Taiwan,” Taiwan Sentinel (January 19, 2017). Online at: ­https://​­sentinel​.­tw​/­china​ -­disinformation​-­tw​/; and Russell Hsiao, “CCP Propaganda against Taiwan Enters the Social Age,” China Brief 18, no. 7 (April 24, 2018). Online at: h ­ ttps://​­jamestown​ .­org​/­program​/­ccp​-­propaganda​-­against​-­taiwan​-­enters​-­the​-­social​-­age​/;  cited  in Uren et al., “Tweeting through the Great Firewall.” 71. Hsiao, “CCP Propaganda against Taiwan Enters the Social Age”; cited in Uren et al., “Tweeting through the Great Firewall.” 72. Ibid. 73. Carl Miller, “China and Taiwan Clash over Wikipedia Edits,” BBC Technology (October 5, 2019). Online at: ­https://​­www​.­bbc​.­com​/­news​/­technology​-­49921173 74. Ibid.

250Notes 75. FireEye Intelligence, “Suspected Iranian Influence Operation Leverages Network of Inauthentic News Sites & Social Media Targeting Audiences in U.S., UK, Latin America, Middle East” (August 21, 2018). Online at: h ­ ttps://​­www​.­fireeye​ .­com​/­blog​/­threat​-­research​/­2018​/­08​/­suspected​-­iranian​-­influence​-­operation​.­html 76. Nathaniel Gleicher, “Taking Down More Coordinated Inauthentic Behavior” and “What We’ve Found So Far,” Facebook Newsroom (August 21, 2018). Online at: ­https://​­newsroom​.­fb​.­com​/­news​/­2018​/­08​/­more​-­coordinated​-­inauthentic​ -behavior/ 77. Nathaniel Gleicher, “Removing More Coordinated Inauthentic Behavior From Iran and Russia,” Facebook Newsroom (October 21, 2019). Online at: ­https://​ ­n ewsroom​ .­f b​ .­c om​ /­n ews​ /­2 019​ /­1 0​ /­removing​ -­m ore​ -­c oordinated​ -­i nauthentic​ -behavior-from-iran-and-russia/ 78. Ibid. 79. Ibid. 80. “Hacking Group Linked to Iran Targeted a U.S. Presidential Campaign, Microsoft says,” Los Angeles Times (October 4, 2019). Online at: h ­ ttps://​­www​ .­latimes​.­com​/­politics​/­story​/­2019​-­10​-­04​/­2020​-­iran​-­hacking​-­microsoft​-­dnc 81. For more on this, please see James J.F. Forest, Digital Influence Mercenaries: Profits and Power through Information Warfare (Annapolis, MD: Naval Institute Press, 2021). 82. Nathaniel Gleicher, “Removing Coordinated, Inauthentic Behavior in UAE, Egypt and Saudi Arabia,” Facebook Newsroom (August 1, 2019). Online at: h ­ ttps://​ ­newsroom​.­fb​.­com​/­news​/­2019​/­08​/­cib​-­uae​-­egypt​-­saudi​-­arabia​/ 83. Declan Walsh and Nada Rashwan, “‘We’re at War’: A Covert Social Media Campaign Boosts Military Rulers,” New York Times (September 6, 2019). Online at: h ­ ttps://​ ­www​.­nytimes​.­com​/­2019​/­09​/­06​/­world​/­middleeast​/­sudan​-­social​-­media​.­html 84. Timberg and Romm, “Bipartisan Senate Report”; Davey Alba and Adam Satariano, “At Least 70 Countries Have Had Disinformation Campaigns, Study Finds,” New York Times (September 26, 2019). Online at: ­https://​­www​.­nytimes​ .­com​/­2019​/­09​/­26​/­technology​/­government​-­disinformation​-­cyber​-­troops​.­html 85. Banjo, “Facebook, Twitter and the Digital Disinformation Mess.” 86. Bradshaw and Howard, “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation”; Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), p. 132. 87. Kakutani, The Death of Truth, p. 141. 88. Christopher Paul and Miriam Matthews, The Russian ‘Firehose of Falsehood’ Propaganda Model (Rand Corporation, 2016), pp. 1–5. Online at: ­https://​­www​.­rand​ .­org​/­pubs​/­perspectives​/­PE198​.­html 89. Kakutani, The Death of Truth, p. 141; and Paul and Matthews, The Russian ‘Firehose of Falsehood’ Propaganda Model. 90. Kakutani, The Death of Truth, pp. 142–143. 91. “Putin Signs Law Making Russian Apps Mandatory on Smartphones, Computers,” Reuters (December 2, 2019). Online at: ­https://​­www​.­reuters​.­com​/­article​ /­us​-­russia​-­internet​-­software​/­putin​-­signs​-­law​-­making​-­russian​-­apps​-­mandatory​ -­on​-­smartphones​-­computers​-­idUSKBN1Y61Z4 92. Nina Masih, Shams Irfan, and Joanna Slater, “India’s Internet Shutdown in Kashmir Is the Longest Ever in a Democracy,” New York Times (December 16,

2019). Online at: https://www.washingtonpost.com/world/asia_pacific/indias-internet-shutdown-in-kashmir-is-now-the-longest-ever-in-a-democracy/2019/12/15/bb0693ea-1dfc-11ea-977a-15a6710ed6da_story.html
93. Jane Lytvynenko, “‘I Found Election Interference and No One Cared’: One US Veteran’s Fight to Protect His Compatriots Online,” Buzzfeed News (December 20, 2019). Online at: https://www.buzzfeednews.com/article/janelytvynenko/kristofer-goldsmith-veteran-disinformation
94. Kristofer Goldsmith, “VVA Investigative Report,” Vietnam Veterans of America (September 17, 2019). Online at: http://vva.org/trollreport/
95. Drew Harwell, “Faked Pelosi Videos, Slowed to Make Her Appear Drunk, Spread Across Social Media,” Washington Post (May 23, 2019). Online at: https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-make-her-appear-drunk-spread-across-social-media/
96. Ibid.
97. Samantha Cole, “Americans Don’t Need Deepfakes to Believe Lies about Nancy Pelosi,” Motherboard/Tech by Vice (May 24, 2019). Online at: https://www.vice.com/en_us/article/qv7zmx/deepfakes-nancy-pelosi-fake-video-trump-tweet
98. Saranac Hale Spencer, “Biden Video Deceptively Edited to Make Him Appear ‘Lost,’” FactCheck.org: A Project of The Annenberg Public Policy Center (August 7, 2020). Online at: https://www.factcheck.org/2020/08/biden-video-deceptively-edited-to-make-him-appear-lost/
99. Cole, “Americans Don’t Need Deepfakes to Believe Lies about Nancy Pelosi.”
100. Stefan Halper, “China: The Three Warfares,” Report for Andy Marshall, Director, Office of Net Assessment, Office of the Director of the Secretary of Defense (May 2013), p. 25. Online at: https://cryptome.org/2014/06/prc-three-wars.pdf; cited in Peter Pomerantsev, This Is Not Propaganda (New York: Public Affairs, 2018), p. 190.
101. Information Security Doctrine of the Russian Federation (Approved by President of the Russian Federation Vladimir Putin on September 9, 2000). Online at: https://www.itu.int/en/ITU-D/Cybersecurity/Documents/National_Strategies_Repository/Russia_2000.pdf
102. Renee Diresta et al., “Telling China’s Story: The Chinese Communist Party’s Campaign to Shape Global Narratives,” Stanford Internet Observatory and Hoover Institution, Stanford University (July 20, 2020). Online at: https://www.hoover.org/research/telling-chinas-story-chinese-communist-partys-campaign-shape-global-narratives
103. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 156.
104. Wardle and Derakhshan, “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making”; Jack, “Lexicon of Lies: Terms for Problematic Information.”
105. Diego A. Martin and Jacob N. Shapiro, “Trends in Online Foreign Influence Efforts,” Woodrow Wilson School of Public and International Affairs, Princeton University (July 8, 2019), p. 4. Online at: https://scholar.princeton.edu/jns/research-reports
106. Tim Hwang, Deepfakes: Primer and Forecast, NATO Strategic Communications Center of Excellence (June 2020), p. 12. Online at: https://www.stratcomcoe.org/deepfakes-primer-and-forecast
107. Ibid., p. 156.

252Notes 108. Regina Joseph, “A Peek into the Future: A Stealth Revolution by Influence’s New Masters,” in White Paper on Influence in an Age of Rising Connectedness, edited by Weston Aviles and Sarah Canna (Washington, DC: U.S. Department of Defense, August 2017), p. 11. 109. Seth Flaxman, Sharad Goel, and Justin M. Rao, “Filter Bubbles, Echo Chambers and Online News Consumption,” Public Opinion Quarterly 80, no. S1 (2016), pp. 298–320. Online at: ­https://​­5harad​.­com​/­papers​/­bubbles​.­pdf 110. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 81. 111. Richard Fletcher, “The Truth Behind Filter Bubbles: Bursting Some Myths,” Reuters Institute, University of Oxford (January 22, 2020). Online at: ­https://​ ­reutersinstitute​.­politics​.­ox​.­ac​.­uk​/­risj​-­review​/­truth​-­behind​-­filter​-­bubbles​-­bursting​ -some-myths 112. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 115. 113. Ibid., p. 159. 114. Ibid., p. 124. 115. Ibid., pp. 127–128. Many of these examples are drawn from the case described in Brooke Jarvis, “Me Living Was How I Was Going to Beat Him,” Wired (December 2017). It cites one statistic that by 2016 over 10 million Americans reported that they had been threatened with, or had experienced, the unauthorized sharing of explicit images online. 116. Carl Miller, “In the Age of Fake News and Manipulation, You Are the New Battlefield,” New Scientist, No. 3252 (October 19, 2019). Online at: ­https://​ ­www​.­newscientist​.­com​/­article​/­mg24432520​-­800​-­in​-­the​-­age​-­of​-­fake​-­news​-­and​ -­manipulation​-­you​-­are​-­the​-­new​-­battlefield​/ 117. See Aidan Kirby and Vera Zakem, “­Jihad​.­com 2.0: The New Social Media and the Changing Dynamics of Mass Persuasion,” in Influence Warfare, pp. 27–48. 118. Rid, Active Measures, p. 8. 119. Ireton and Posetti, Journalism, Fake News and Disinformation, p. 18.

CHAPTER 2
1. John Arquilla and David Ronfeldt, “The Advent of Netwar (Revisited),” Networks and Netwars: The Future of Terror, Crime and Militancy (Santa Monica, CA: Rand Corporation, 2001), p. 1.
2. Carl Miller, The Death of the Gods: The New Global Power Grab (London: Windmill Books, 2018), p. xvi.
3. Susan Ratcliffe, ed., Oxford Essential Quotations (London: Oxford University Press, 2017), citing Sun Tzu, The Art of War: The Oldest Military Treatise in the World, translated by Lionel Giles (London: British Museum, 1910), chapter 3.
4. This section of the discussion significantly amplifies and paraphrases a report by the Rand Corporation, Understanding Commanders’ Information Needs for Influence Operations, Appendix B: Task List Analysis, pp. 71–73, which cites several Department of the Army documents and 1st (Land), Field Support Division, “Terminology for IO Effects,” in Tactics, Techniques and Procedures for Operational and Tactical Information Operations Planning (March 2004), p. 23.

Notes253 5. Ibid. 6. Jarol B. Manheim, Strategy in Information and Influence Campaigns (New York: Routledge, 2011), pp. 68–69. 7. Kathleen Taylor, Brain Washing: The Science of Thought Control (London: Oxford University Press, 2004), p. 146. 8. Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Henry Holt and Company, 1992), p. 87. 9. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 110. 10. Claire Atkinson, “Fake News Can Cause ‘Irreversible Damage’ to Companies, and Sink their Stock Price,” NBC News (April 25, 2019). Online at: https://​ ­­ ­www​.­nbcnews​.­com​/­business​/­business​-­news​/­fake​-­news​-­can​-­cause​-­irreversible​ -­damage​-­companies​-­sink​-­their​-­stock​-­n995436 11. Ibid. 12. Ibid. 13. Izabella Kaminska, “A Lesson in Fake News from the Info-Wars of Ancient Rome,” Financial Times (January 17, 2017). Online at: ­­https://​­www​.­ft​.­com​/­content​ /­aaf2bb08​-­dca2​-­11e6​-­86ac​-­f253db7791c6 14. Mark Galeotti, Russian Political War: Moving Beyond the Hybrid (London: Rout­ ledge, 2019), p. 10. 15. O.J. Hale, The Captive Press in the Third Reich (Princeton, NJ: Princeton University Press, 1964); cited in Pratkanis and Aronson, Age of Propaganda, p. 269. 16. W. Phillips Davison, “Some Trends in International Propaganda,” Annals of the American Academy of Political Science and Social Science 398 (November 1971), pp. 1–13; cited in Daniel Baracskay, “U.S. Strategic Communication Efforts, During the Cold War,” in Influence Warfare, edited by James J.F. Forest (Westport, CT: Praeger Security International, 2009), p. 256. 17. Russell Hsiao, “CCP Propaganda against Taiwan Enters the Social Age,” China Brief 18, no. 7 (April 24, 2018). Online at: ­­https://​­jamestown​.­org​/­program​ /­ccp​-­propaganda​-­against​-­taiwan​-­enters​-­the​-­social​-­age​/ 18. Much of the following discussion about Cold War influence warfare paraphrases material from Baracskay, “U.S. Strategic Communication Efforts, During the Cold War,” pp. 253–274. 19. Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Strauss and Giroux, 2020), p. 4. 20. Ibid., p. 7. 21. James J.F. Forest, Influence Warfare: How Terrorists and Governments Fight to Shape Perceptions in a War of Ideas (Westport, CT: Praeger Security International, 2009), p. 12. 22. Ibid. 23. James Woods, History of International Broadcasting (London: IET, 1992), p. 110. 24. Ibid., pp. 110–111. 25. Ibid., p. 110. 26. David L. Stebenne, Modern Republican: Arthur Larson and the Eisenhower Years. (Bloomington: Indiana University Press, 2006), p. 194. 27. Richard H. Shultz and Roy Godson, Dezinformatsia: Active Measures in Soviet Strategy (New York: Pergamon Brassey’s, 1984), p. 133.

28. Ibid., p. 133.
29. Ibid., p. 149.
30. Ibid., pp. 150–151.
31. Ibid., pp. 152–153.
32. Ibid., p. 155.
33. Ibid., p. 157.
34. Rid, Active Measures, p. 13.
35. Paraphrased from Philip G. Zimbardo, Ebbe B. Ebbesen, and Christina Maslach, Influencing Attitudes and Changing Behavior, Second edition (New York: Random House, 1977), pp. 57–60.
36. Several of these questions are paraphrased from Manheim, Strategy in Information and Influence Campaigns, p. 193.
37. For more on this, please see James J.F. Forest, Digital Influence Mercenaries: Profits and Power through Information Warfare (Annapolis, MD: Naval Institute Press, 2021).
38. Howard, Lie Machines, pp. 99–100.
39. Elisa Shearer and Jeffrey Gottfried, “News Use Across Social Media Platforms 2017,” Pew Research Center (September 7, 2017). Online at: https://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/
40. Ben Nimmo, Graham Brookie, and Kanishk Karan, “#TrollTracker: Twitter Troll Farm Archives, Part One—Seven Key Take Aways from a Comprehensive Archive of Known Russian and Iranian Troll Operations,” Atlantic Council Digital Forensic Research Lab (October 17, 2018). Online at: https://medium.com/dfrlab/trolltracker-twitter-troll-farm-archives-8d5dd61c486b
41. For more on this, please see Forest, Digital Influence Mercenaries.
42. Paraphrased from Zimbardo et al., Influencing Attitudes and Changing Behavior, pp. 57–60.
43. Ibid., pp. 94–98.
44. Ibid.
45. Ibid.
46. Claire Wardle, “Fake News: It’s Complicated,” First Draft (February 16, 2017). Online at: https://firstdraftnews.org/latest/fake-news-complicated/
47. Rid, Active Measures, p. 5, with a direct quote from famous Soviet defector Ladislav Bittman, author of the 1972 book The Deception Game.
48. Tzu, The Art of War, p. 15.
49. For more on this, please see Forest, Digital Influence Mercenaries.
50. Oxford Internet Institute’s Computational Propaganda Project: http://comprop.oii.ox.ac.uk/ [accessed July 20, 2018].
51. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment (Santa Monica, CA: Rand Corporation, 2019), p. 154.
52. Alice Marwick and Rebecca Lewis, “Media Manipulation and Disinformation Online,” Data & Society (May 15, 2017). Online at: https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf
53. Ibid.
54. For a detailed examination of this event, see David E. Sanger, The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age (New York: Crown Publishing, 2018), pp. 124–143.

55. Ibid., p. 143.
56. Renee Diresta et al., “Telling China’s Story: The Chinese Communist Party’s Campaign to Shape Global Narratives,” Stanford Internet Observatory and Hoover Institution, Stanford University (July 21, 2020), p. 3. Online at: https://www.hoover.org/research/telling-chinas-story-chinese-communist-partys-campaign-shape-global-narratives
57. Diego A. Martin and Jacob N. Shapiro, “Trends in Online Foreign Influence Efforts,” Woodrow Wilson School of Public and International Affairs, Princeton University (July 8, 2019), p. 3. Online at: https://scholar.princeton.edu/jns/research-reports
58. Ibid.
59. Ibid.
60. Howard, Lie Machines, p. 75.
61. Howard, Lie Machines, p. 77; Jonathan Kaiman, “Free Tibet Exposes Fake Twitter Accounts by China Propagandists,” The Guardian (July 22, 2014). Online at: https://www.theguardian.com/world/2014/jul/22/free-tibet-fake-twitter-accounts-china-propagandists; and Nicholas Monaco, “Taiwan: Digital Democracy Meets Automated Autocracy,” in Computational Propaganda: Political Parties, Politicians and Political Manipulation on Social Media, edited by Samuel C. Woolley and Philip N. Howard (New York: Oxford University Press, 2018), pp. 104–127.
62. Peter Pomerantsev, This Is Not Propaganda (New York: Public Affairs, 2018), p. 53.
63. J.M. Porup, “How Mexican Twitter Bots Shut Down Dissent,” Vice (August 24, 2015). Online at: https://www.vice.com/en_us/article/z4maww/how-mexican-twitter-bots-shut-down-dissent
64. Pomerantsev, This Is Not Propaganda, pp. 4–7.
65. Samantha Bradshaw and Philip N. Howard, “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation,” Computational Propaganda Research Project, Oxford Internet Institute (July 2019). Online at: https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf; Craig Timberg and Tony Romm, “Bipartisan Senate Report,” Washington Post (October 8, 2019); Davey Alba and Adam Satariano, “At Least 70 Countries Have Had Disinformation Campaigns, Study Finds,” New York Times (September 26, 2019). Online at: https://www.nytimes.com/2019/09/26/technology/government-disinformation-cyber-troops.html
66. Bradshaw and Howard, “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.”
67. Alba and Satariano, “At Least 70 Countries Have Had Disinformation Campaigns, Study Finds.”
68. Ibid.
69. Pomerantsev, This Is Not Propaganda, p. 53.
70. Rebecca Ratcliffe, “Journalist Maria Ressa Found Guilty of ‘Cyberlibel’ in Philippines,” The Guardian (June 15, 2020). Online at: https://www.theguardian.com/world/2020/jun/15/maria-ressa-rappler-editor-found-guilty-of-cyber-libel-charges-in-philippines
71. Pomerantsev, This Is Not Propaganda, p. 12.
72. Ibid., p. 16.

73. “Maria Ressa Accepts the 2018 Knight International Journalism Award,” International Center for Journalists, online at: https://www.icfj.org/maria-ressa-accepts-2018-knight-international-journalism-award
74. Ratcliffe, “Journalist Maria Ressa Found Guilty of ‘Cyberlibel’ in Philippines.”
75. Ellen Nakashima and Greg Bensinger, “Former Twitter Employees Charged with Spying for Saudi Arabia by Digging into the Accounts of Kingdom Critics,” Washington Post (November 6, 2019). Online at: https://www.washingtonpost.com/national-security/former-twitter-employees-charged-with-spying-for-saudi-arabia-by-digging-into-the-accounts-of-kingdom-critics/2019/11/06/2e9593da-00a0-11ea-8bab-0fc209e065a8_story.html
76. “New Disclosures to our Archive of State-Backed Information Operations,” Twitter Safety (December 20, 2019). Online at: https://blog.twitter.com/en_us/topics/company/2019/new-disclosures-to-our-archive-of-state-backed-information-operations.html
77. Howard, Lie Machines, p. 77.
78. Ibid., p. 76.
79. Diresta et al., “Telling China’s Story,” p. 9.
80. Ibid., p. 13.
81. Larry M. Wortzel, “The Chinese People’s Liberation Army and Information Warfare,” Strategic Studies Institute and US Army War College Press (March 5, 2014), p. 29. Online at: https://publications.armywarcollege.edu/pubs/2263.pdf
82. Ibid.
83. Stefan Halper, China: The Three Warfares. Report for Andy Marshall, Director, Office of Net Assessment, Office of the Director of the Secretary of Defense (May 2013), p. 25. Online at: https://cryptome.org/2014/06/prc-three-wars.pdf; cited in Pomerantsev, This Is Not Propaganda, p. 190.
84. Ibid.
85. Wortzel, “The Chinese People’s Liberation Army and Information Warfare,” pp. 29–30. Note, according to Wortzel, a direct translation of yulun is “public opinion”; thus, in many English translations, the term “public opinion warfare” is used. In some PLA translations of book titles and articles, however, it is called “media warfare.”
86. Orde Kittrie, Lawfare: Law as a Weapon of War (London: Oxford University Press, 2016), p. 162; cited in Doug Livermore, “China’s ‘Three Warfares’ in Theory and Practice in the South China Sea,” Georgetown Security Studies Review (March 25, 2018). Online at: https://georgetownsecuritystudiesreview.org/2018/03/25/chinas-three-warfares-in-theory-and-practice-in-the-south-china-sea/
87. Halper, China: The Three Warfares, p. 13.
88. Wortzel, “The Chinese People’s Liberation Army and Information Warfare.”
89. Steven Collins, “Mind Games,” NATO Review (Summer 2003). Online at: https://www.nato.int/docu/review/2003/issue2/english/art4.html
90. Halper, China: The Three Warfares, p. 13, citing Dean Cheng, “Winning Without Fighting: Chinese Public Opinion Warfare and the Need for a Robust American Response,” The Heritage Foundation: Backgrounder Number 2745 (November 26, 2012), pp. 3–4.
91. Laura Jackson, “Revisions of Reality: The Three Warfares—China’s New Way of War,” in the Beyond Propaganda report, Information at War, Legatum

Notes257 Institute, Transitions Forum (September 2015), pp. 5–6. Online at: https://​­ ­­ li​.­com​ /­wp​-­content​/­uploads​/­2015​/­09​/­information​-­at​-­war​-­from​-­china​-­s​-­three​-­warfares​ -­to​-­nato​-­s​-­narratives​-­pdf​.­pdf 92. Ibid. 93. Michael Raska, “Hybrid Warfare with Chinese Characteristics,” S. Rajarat­ nam School of International Studies (December 2, 2015). Online at: ­­https://​­www​ .­rsis​.­edu​.­sg​/­wp​-­content​/­uploads​/­2015​/­12​/­CO15262​.­pdf; cited in Livermore, “China’s ‘Three Warfares’ in Theory and Practice in the South China Sea.” 94. Diresta et al., “Telling China’s Story.” 95. Ibid. 96. Ibid. 97. Ibid. 98. Halper, China: The Three Warfares, p. 13, citing Cheng, “Winning Without Fighting: Chinese Public Opinion Warfare and the Need for a Robust American Response,” pp. 3–4. 99. Tzu, The Art of War, p. 41. 100. Livermore, “China’s ‘Three Warfares’ in Theory and Practice in the South China Sea.” 101. Halper, China: The Three Warfares, p. 12. 102. Jackson, “Revisions of Reality.” 103. Ibid. 104. Wortzel, “The Chinese People’s Liberation Army and Information Warfare.” 105. Jackson, “Revisions of Reality.” 106. Diresta et al., “Telling China’s Story.” 107. P.W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018), p. 101; citing: “Planning Outline for the Construction of a Social Credit System (2014–2020),” China Copyright and Media (April 25, 2015). Online at: ­­https://​­www​.­wired​.­com​/­beyond​-­the​-­beyond​ /­2015​/­06​/­chinese​-­planning​-­outline​-­social​-­credit​-­system​/; and Jacob Silverman, “China’s Troubling New Social Credit System—and Ours,” New Republic (October 29, 2015). Online at: ­­https://​­newrepublic​.­com​/­article​/­123285​/­chinas​-­troubling​ -­new​-­social​-­credit​-­system​-­and​-­ours 108. Shultz and Godson, Dezinformatsia, particularly chapter 7. 109. Diresta et al., “Telling China’s Story.” 110. Ibid. 111. Ibid. 112. Information Security Doctrine of the Russian Federation (Approved by President of the Russian Federation Vladimir Putin on September 9, 2000). Online at: ­­https://​­www​.­itu​.­int​/­en​/­ITU​-­D​/­Cybersecurity​/­Documents​/­National​_Strategies​ _Repository​/­Russia​_2000​.­pdf 113. “Disinformation Review: Twenty Years of Distorting the Media,” EU vs. Dis­ info (January 10, 2020). Online at: ­­https://​­www​.­stopfake​.­org​/­en​/­disinformation​ -­review​-­twenty​-­years​-­of​-­distorting​-­the​-­media​/ 114. Information Security Doctrine of the Russian Federation, p. 16. 115. Clint Watts, “Disinformation: A Primer in Russian Active Measures And Influence Campaigns,” Statement Prepared for the U.S. Senate Select Committee on Intelligence (March 30, 2017). Online at: ­­https://​­bit​.­ly​/­2oJ0hZV

258Notes 116. Ibid. 117. For example, see Rid, Active Measures; Clint Watts, Messing with the Enemy (New York: Harper Collins, 2018); Pomerantsev, This Is Not Propaganda; Galeotti, Russian Political War; David Patrikarakos, War in 140 Characters (New York: Basic Books, 2017); Singer and Brooking, LikeWar. 118. “Question That: RT’s Military Mission,” Atlantic Council’s Digital Forensic Research Lab (January 8, 2018). Online at: ­­https://​­medium​.­com​/­dfrlab​/­question​ -­that​-­rts​-­military​-­mission​-­4c4bd9f72c88 119. Galeotti, Russian Political War, p. 11. 120. For a brief overview, see Pomerantsev, This Is Not Propaganda, pp. 60–63. 121. Singer and Brooking, LikeWar, p. 110. 122. Background to “Assessing Russian Activities and Intentions in Recent U.S. Elections”: The Analytic Process and Cyber Incident Attribution. Office of the Director of National Intelligence (January 2017). Online at: ­­https://​­www​.­dni​.­gov​/­files​ /­documents​/­ICA​_2017​_01​.­pdf 123. Rid, Active Measures, p. 409. 124. For a detailed account of the origins and early years of the Internet Research Agency, see Rid, Active Measures, pp. 399–409. 125. Pomerantsev, This Is Not Propaganda, p. 23. 126. Philip N. Howard, Bharath Ganesh, Dimitra Liotsiou, John Kelly, and Camille François, The IRA, Social Media and Political Polarization in the United States, 2012–2018, Working Paper 2018.2. Oxford: Project on Computational Propaganda. Online at: ­­https://​­comprop​.­oii​.­ox​.­ac​.­uk​/­research​/­ira​-­political​-­polarization​/ 127. Jon Swaine, “Twitter Admits Far more Russian Bots Posted on Election Than It Had Disclosed,” The Guardian (January 19, 2018). Online at: ­­https://​­www​ .­theguardian​.­com​/­technology​/­2018​/­jan​/­19​/­twitter​-­admits​-­far​-­more​-­russian​ -­bots​-­posted​-­on​-­election​-­than​-­it​-­had​-­disclosed; Philip N. Howard et al., “Social Media, News, and Political Information during the U.S. Election: Was Polarizing Content Concentrated in Swing States?” Computational Propaganda Research Project, Oxford Internet Institute (September 28, 2017). Online at: ­­ https://​ ­comprop​.­oii​.­ox​.­ac​.­uk​/­research/ ​ ­working-​ ­papers​/­social​-­media​-­news​-­and​-­political​ -­i nformation​-­d uring​-­t he​-­u s​-­election​-­w as​-­p olarizing​-­c ontent​-­c oncentrated​-­in​ -­swing​-­states​/; Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), pp. 129–130. 128. Rid, Active Measures, p. 6. 129. Background to “Assessing Russian Activities and Intentions in Recent U.S. Elections.” 130. Ellen Nakashima, “Senate Committee Unanimously Endorses Spy Agencies’ Finding that Russia Interfered in 2016 Presidential Race in Bid to Help Trump,” Washington Post (April 21, 2020). Online at: ­­https://​­www​.­washingtonpost​ .­com​/­national​-­security​/­senate​-­committee​-­unanimously​-­endorses​-­spy​-­agencies​ -­finding​-­that​-­russia​-­interfered​-­in​-­2016​-­presidential​-­race​-­in​-­bid​-­to​-­help​-­trump​ /­2020​/­04​/­21​/­975ca51a​-­83d2​-­11ea​-­ae26​-­989cfce1c7c7​_story​.­html 131. Steve Peoples, “Foreign Threats Loom Ahead of US Presidential Election,” Associated Press (August 1, 2020). Online at: ­­https://​­apnews​.­com​/­13461c385696bc 5fb77ef631130f813c 132. Jean-Baptiste Jeangène Vilmer, “The “Macron Leaks’ Operation: A PostMortem,” The Atlantic Council’s Digital Forensics Lab, in collaboration with the

Notes259 French Ministry’s Institute for Strategic Research (June 2019). Online at: ­­https://​ ­www​.­atlanticcouncil​.o ­ rg​/w ­ p​-c­ ontent​/u ­ ploads​/2­ 019​/0­ 6​/T ­ he​_Macron​_Leaks​ _Operation​-­A​_Post​-­Mortem​.­pdf 133. Renaud Lecadre, Dominique Albertini, and Amaelle Guiton, “ ‘Compte aux Bahamas’: Macron ciblé par le poison de la rumeur,” Libération (May 4, 2017). Online at: ­­https://​­www​.­liberation​.­fr​/­politiques​/­2017​/­05​/­04​/­compte​-­aux​ -­bahamas​-­macron​-­cible​-­par​-­le​-­poison​-­de​-­la​-­rumeur​_1567384; cited in Vilmer, “The “Macron Leaks’ Operation: A Post-Mortem,” p. 10. 134. Eric Auchard and Joseph Menn, “Facebook Cracks Down on 30,000 Fake Accounts in France,” Reuters (April 13, 2017). Online at: ­­https://​­www​.­reuters​ .­com​/­article​/­us​-­france​-­security​-­facebook​/­facebook​-­cracks​-­down​-­on​-­30000​-­fake​ -­accounts​-­in​-­france​-­idUSKBN17F25G 135. Kakutani, The Death of Truth, p. 133. 136. These incidents are described in greater detail in pages 29–47 of Martin and Shapiro, “Trends in Online Foreign Influence Efforts.” 137. “Russia Disinformation Campaigns in Africa: An Interview with Dr. Shelby Gross,” Africa Center for Strategic Studies, National Defesne University (February 18, 2020). Online at: ­­https://​­africacenter​.­org​/­spotlight​/­russian​ -­disinformation​-­campaigns​-­target​-­africa​-­interview​-­shelby​-­grossman​/;  and  for information on Facebook’s transparency effort, see: ­­https://​­www​.­facebook​.­com​ /­help​/­323314944866264 138. “Russians Are Meddling in the Democratic Primary,” Washington Post (October 29, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​/­opinions​/­2019​ /­10​/­29​/­russians​-­are​-­meddling​-­democratic​-­primary​-­is​-­anyone​-­paying​-­attention​/ 139. Eric Tucker, “U.S. Officials: Russia Behind Spread of Virus Disinformation,” Associated Press (July 29, 2020). Online at: ­­https://​­apnews​.­com​/­3acb089e 6a333e051dbc4a465cb68ee1; and Julian E. Barnes and David E. Sanger, “Russian Intelligence Agencies Push Disinformation on Pandemic,” New York Times (July 29, 2020). Online at: ­­https://​­www​.­nytimes​.­com​/­2020​/­07​/­28​/­us​/­politics​/­russia​ -­disinformation​-­coronavirus​.­html 140. Tucker, “U.S. Officials: Russia Behind Spread of Virus Disinformation.” 141. Barnes and Sanger, “Russian Intelligence Agencies Push Disinformation on Pandemic.” 142. Jane Mayer, “How Russia Helped Swing the Election for Trump,” The New Yorker (September 24, 2018). Online at: ­­https://​­www​.­newyorker​.­com​/­magazine​ /­2018​/­10​/­01​/­how​-­russia​-­helped​-­to​-­swing​-­the​-­election​-­for​-­trump 143. Caroline Orr, “I Watched over 100 Covert Russian Propaganda Videos on YouTube—Here’s What I Saw,” ARC Digital (October 13, 2017). Online at: https://​ ­­ ­a rcdigital​ .­m edia​ /­i​ -­w atched​ -­o ver​ -­1 00​ -­c overt​ -­russian​ -­p ropaganda​ -­v ideos​ -­o n​ -­youtube​-­heres​-­what​-­i​-­saw​-­b854b69762f2 144. Ben Nimmo et al., “Secondary Infektion,” Graphica (June 2020). Online at: ­­https://​­secondaryinfektion​.­org​/­downloads​/­secondary​-­infektion​-­report​.­pdf; and Ellen Nakashima and Craig Timberg, “Russian Disinformation Operation Relied on Forgeries, Fake Posts on 300 Platforms, New Report Says,” Washington Post (June 16, 2020). 
Online at: ­­https://​­www​.­washingtonpost​.­com​/­national​-­security​/­russian​ -­disinformation​-­operation​-­relied​-­on​-­forgeries​-­fake​-­posts​-­on​-­300​-­platforms​-­new​ -­report​-­says​/­2020​/­06​/­16​/­679f5b5c​-­ae8d​-­11ea​-­8f56​-­63f38c990077​_story​.­html

145. Pomerantsev, This Is Not Propaganda, p. 64.
146. Digital, Culture, Media and Sport Committee, “Disinformation and ‘Fake News’: Final Report,” UK House of Commons, Eighth Report of Session 2017–19 (February 14, 2019), p. 5. Online at: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf
147. Sheera Frenkel, “Made and Distributed in the U.S.A.: Online Disinformation,” New York Times (October 11, 2018). Online at: https://www.nytimes.com/2018/10/11/technology/fake-news-online-disinformation.html
148. Howard, Lie Machines, p. 97.
149. Sarah Frier, “Trump’s Campaign Said It Was Better at Facebook. Facebook Agrees,” Bloomberg News (April 3, 2018). Online at: https://www.bloomberg.com/news/articles/2018-04-03/trump-s-campaign-said-it-was-better-at-facebook-facebook-agrees
150. Howard, Lie Machines, p. 76.
151. Nandita Bose and Jeff Mason, “Trump Move Could Scrap or Weaken Law That Protects Social Media Companies,” Reuters (May 28, 2020). Online at: https://www.reuters.com/article/us-twitter-trump-executive-order-social/trumps-executive-order-targets-political-bias-at-twitter-and-facebook-draft-idUSKBN2340MW. Of course, it must be noted here that according to the Washington Post Fact Checker, by the end of his term Donald Trump had told “30,573 untruths” during his presidency. Glenn Kessler et al., “A Term of Untruths,” The Washington Post (January 23, 2021). Online at: https://www.washingtonpost.com/politics/interactive/2021/timeline-trump-claims-as-president/
152. Ibid.
153. Fox News, “Transcript: ‘Fox News Sunday’ Interview with President Trump” (July 19, 2020). Online at: https://www.foxnews.com/politics/transcript-fox-news-sunday-interview-with-president-trump
154. For a detailed discussion of this, see James J.F. Forest, The Terrorism Lectures (Los Angeles, CA: Nortia Press, 2019), pp. 117–138.
155. James J.F. Forest and Frank Honkus, III, “Introduction,” in Influence Warfare, p. 6.
156. Steven Kull, Testimony to the House Committee on Foreign Affairs, Subcommittee on International Organizations, Human Rights and Oversight (May 17, 2007).
157. Gabriel Weimann, Terrorism in Cyberspace: The Next Generation (New York: Columbia University Press, 2015).
158. Katherine E. Brown and Elizabeth Pearson, “Social Media, the Online Environment, and Terrorism,” in Routledge Handbook of Terrorism and Counterterrorism, edited by Andrew Silke (London: Routledge, 2019), p. 149 (paraphrased for comparison).
159. Hundreds of quality articles have been published in the major academic research journals in this field, including Perspectives on Terrorism, Studies in Conflict and Terrorism, and Terrorism and Political Violence.
160. For research on this topic, see Forest, The Terrorism Lectures; Influence Warfare; James J.F. Forest, ed., Teaching Terror: Strategic and Tactical Learning in the Terrorist World (Boulder, CO: Rowman & Littlefield, 2006); and James J.F. Forest, ed., The Making of a Terrorist: Recruitment, Training and Root Causes (Westport, CT: Praeger Security International, 2005).

Notes261 161. Ali Fisher, “Swarmcast: How Jihadist Networks Maintain a Persistent Online Presence,” Perspectives on Terrorism 9, no. 3 (June 2015), pp. 3–20. 162. Brown and Pearson, “Social Media, the Online Environment, and Terrorism,” p. 150 163. Ibid., pp. 150–151. 164. For a detailed analysis, see Madeleine Gruen, “Innovative Recruitment and Indoctrination Tactics by Extremists: Video Games, Hip-Hop, and the World Wide Web,” in The Making of a Terrorist, vol. 1, pp. 11–22; and Forest, Influence Warfare. 165. For analysis of the Turner Diaries and its impact on right-wing extremist movements, see J.M. Berger, “The Turner Legacy: The Storied Origins and Enduring Impact of White Nationalism’s Deadly Bible,” The International Centre for Counter-Terrorism—The Hague 7, no. 8 (2016). ­­https://​­doi​.­org​/­10​.­19165​/­2016​.­1​.­11; Kurt Braddock, Weaponized Words (London: Cambridge University Press, 2020), pp. 71–83; and J.M. Berger, “Alt History,” The Atlantic (September 16, 2016). Online at: ­­https://​­www​.­theatlantic​.­com​/­politics​/­archive​/­2016​/­09​/­how​-­the​-­turner​ -­diaries​-­changed​-­white​-­nationalism​/­500039​/ 166. For a detailed account of Anders Breivik’s manifesto, and the terror attack, see Braddock, Weaponized Words, pp. 110–114. 167. Rebecca Lewis, “Alternative Influences: Broadcasting the Reactionary Right on YouTube,” Data & Society Research Institute (September 2018). Online at: ­­https://​­datasociety​.­net​/­library​/­alternative​-­influence​/ 168. Alex Hern, “Stormfront: ‘Murder Capital of Internet’ Pulled Offline after Civil Rights Action,” The Guardian (August 29, 2017). Online at: ­­https://​­www​ .­t heguardian​ .­c om​ /­t echnology​ /­2 017​ /­a ug​ /­2 9​ /­s tormfront​ -­n eo​ -­n azi​ -­h ate​ -­s ite​ -­murder​-­internet​-­pulled​-­offline​-­web​-­com​-­civil​-­rights​-­action 169. Tess Owen, “How Telegram Became White Nationalists’ Go-To Messag­ ing Platform,” Vice (October 7, 2019). Online at: ­­https://​­www​.­vice​.­com​/­en​ _us​/­article​/­59nk3a​/­how​-­telegram​-­became​-­white​-­nationalists​-­go​-­to​-­messaging​ -­platform 170. Pomerantsev, This Is Not Propaganda, pp. 67–68. 171. Amarnath Amarasingam and Marc-Andre Argentino, “The QAnon Conspiracy Theory: A Security Threat in the Making?” CTC Sentinel (July 2020), p. 39. 172. Eric Schmitt and Thom Shanker, “U.S. Adapts Cold War Idea to Fight Terrorists,” New York Times (March 18, 2008). 173. “ISIS Video Shows Jordanian Pilot Being Burned,” CBS News (February 3, 2015). Online at: ­­https://​­www​.­cbsnews​.­com​/­video​/­isis​-­video​-­shows​-­jordanian​ -­pilot​-­being​-­burned​-­to​-­death​/ 174. “Facebook: New Zealand Attack Video Viewed 4,000 Times,” BBC News (March 19, 2019). Online at: ­­https://​­www​.­bbc​.­com​/­news​/­business​-­47620519 175. Amy Gunia, “Facebook Tightens Live-Stream Rules in Response to the Christchurch Massacre,” Time (May 15, 2019). Online at: ­­http://​­time​.­com​/­5589478​ /­facebook​-­tightens​-­live​-­stream​-­rules​-­in​-­response​-­to​-­the​-­christchurch​-­massacre​/ 176. Aja Romano, “How the Christchurch Shooter Used Memes to Spread Hate,” Vox (March 16, 2019). Online at: ­­https://​­www​.­vox​.­com​/­culture​/­2019​/­3​ /­16​/­18266930​/­christchurch​-­shooter​-­manifesto​-­memes​-­subscribe​-­to​-­pewdiepie 177. Frenkel, “Made and Distributed in the U.S.A.: Online Disinformation.”

178. Nick Martin, “Use of Force: Boogaloo Website Urges ‘Justifiable Use of Force’ against Members of Law Enforcement,” The Informant (July 31, 2020). Online at: https://www.informant.news/p/use-of-force
179. These and other examples are described in David Gilbert, “Europe’s Far-Right Is Flooding Facebook with Racist, Anti-Migrant Disinformation,” Vice News (May 23, 2019). Online at: https://www.vice.com/en_us/article/gy4jzj/europes-far-right-is-flooding-facebook-with-racist-anti-migrant-misinformation
180. Audrey Alexander and Bennett Clifford, “Doxing and Defacements: Examining the Islamic State’s Hacking Capabilities,” CTC Sentinel (April 2019). Online at: https://ctc.usma.edu/doxing-defacements-examining-islamic-states-hacking-capabilities/; and Daniel Milton, “Truth and Lies in the Caliphate: The Use of Deception in Islamic State Propaganda,” Media, War & Conflict (August 2020). https://doi.org/10.1177/1750635220945734
181. Ibid.
182. Brown and Pearson, “Social Media, the Online Environment, and Terrorism,” p. 151.
183. Pomerantsev, This Is Not Propaganda, p. 124.

CHAPTER 3
1. Jarol B. Manheim, Strategy in Information and Influence Campaigns (New York: Routledge, 2011), p. 185.
2. Susan Barnes, “A Privacy Paradox: Social Networking in the United States,” First Monday 11, no. 9 (2006), p. 5. https://doi.org/10.5210/fm.v11i9.1394; cited in Ciaran McMahon, The Psychology of Social Media (London: Routledge, 2019), p. 17.
3. Geoffrey A. Fowler, “What Is Fingerprinting?” Washington Post (October 31, 2019). Online at: https://www.washingtonpost.com/technology/2019/10/31/think-youre-anonymous-online-third-popular-websites-are-fingerprinting-you/
4. For example, see Temple University’s guide to Webscraping. Online at: https://guides.temple.edu/mining-twitter/scraping; and Allen Zeng, “A Beginner’s Guide to Collecting Twitter Data,” Knightlab Ideas (March 15, 2014). Online at: https://knightlab.northwestern.edu/2014/03/15/a-beginners-guide-to-collecting-twitter-data-and-a-bit-of-web-scraping/
5. McMahon, The Psychology of Social Media, p. 29.
6. Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), p. 127; Matthew Rosenberg and Gabriel J.X. Dance, “‘You Are the Product’: Targeted by Cambridge Analytica on Facebook,” New York Times (April 8, 2018). Online at: https://www.nytimes.com/2018/04/08/us/facebook-users-data-harvested-cambridge-analytica.html; Carole Cadwalladr and Emma Graham-Harrison, “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach,” The Guardian (March 17, 2018). Online at: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election; Olivia Solon, “Facebook Says Cambridge May Have Gained 37m More Users’ Data,” The Guardian (April 4, 2018). Online at: https://www.theguardian.com/technology/2018/apr/04/facebook-cambridge-analytica-user-data-latest-more-than-thought

7. For more on this, see Peter Pomerantsev, This Is Not Propaganda (New York: Public Affairs, 2018), pp. 125–126.
8. Kate Conger and Jack Nicas, “Twitter Bars Alex Jones and Infowars, Citing Harassing Messages,” New York Times (September 6, 2018). Online at: https://www.nytimes.com/2018/09/06/technology/twitter-alex-jones-infowars.html; Mike Isaac and Kevin Roose, “Facebook Bars Alex Jones, Louis Farrakhan and Others from Its Services,” New York Times (May 2, 2019). Online at: https://www.nytimes.com/2019/05/02/technology/facebook-alex-jones-louis-farrakhan-ban.html
9. Kevin Roose and Kate Conger, “YouTube to Remove Thousands of Videos Pushing Extreme Views,” New York Times (June 5, 2019). Online at: https://www.nytimes.com/2019/06/05/business/youtube-remove-extremist-videos.html; “Facebook Suspends Accounts Tied to Putin Ally for Political Meddling,” New York Post (October 30, 2019). Online at: https://nypost.com/2019/10/30/facebook-suspends-accounts-tied-to-putin-ally-for-political-meddling/
10. Tony Romm and Isaac Stanley-Becker, “Twitter to Ban All Political Ads Amid 2020 Election Uproar,” Washington Post (October 30, 2019). Online at: https://www.washingtonpost.com/technology/2019/10/30/twitter-ban-all-political-ads-amid-election-uproar/; Alex Hern, “Facebook Bans ‘Deepfake’ Videos in Run-Up to US Election,” The Guardian (January 7, 2020). https://www.theguardian.com/technology/2020/jan/07/facebook-bans-deepfake-videos-in-run-up-to-us-election
11. Mike Isaac, “Facebook Says It Won’t Back Down from Allowing Lies in Political Ads,” New York Times (January 9, 2020). Online at: https://www.nytimes.com/2020/01/09/technology/facebook-political-ads-lies.html?smtyp=cur&smid=tw-nytimes
12. Gilad Edelman, “Why YouTube Won’t Ban Trump’s Misleading Ads about Biden,” Wired (December 3, 2019). Online at: https://www.wired.com/story/youtube-trump-biden-political-ads/
13. Cherilyn Ireton and Julie Posetti, Journalism, Fake News and Disinformation (Paris: UNESCO, 2018), p. 17.
14. Here I am paraphrasing what the independent analysis group Demos called “strategies” in their report Warring Songs (pp. 8–9). In my view, these are more tactical than strategic, because they each should be employed to serve a larger strategic goal. Source: Alex Krasodomski-Jones et al., “Warring Songs: Information Operations in the Digital Age,” Demos/Open Society, European Policy Institute (May 2019). Online at: https://demos.co.uk/wp-content/uploads/2019/05/Warring-Songs-final-1.pdf
15. Ibid., p. 13.
16. Ireton and Posetti, Journalism, Fake News and Disinformation, p. 17.
17. Drew Harwell, “Faked Pelosi Videos, Slowed to Make Her Appear Drunk, Spread Across Social Media,” Washington Post (May 24, 2019). Online at: https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-make-her-appear-drunk-spread-across-social-media/
18. Joe Litell, “Don’t Believe Your Eyes (or Ears): The Weaponization of Artificial Intelligence, Machine Learning, and Deepfakes,” War on the Rocks (December 7, 2019). Online at: https://warontherocks.com/2019/10/dont-believe-your-eyes-or-ears-the-weaponization-of-artificial-intelligence-machine-learning-and-deepfakes/

264Notes 19. Will Knight, “The World’s Top Deepfake Artist Is Wrestling with The Monster He Created,” Technology Review (August 16, 2019). Online at: h ­ ttps://​­www​ .­technologyreview​.­com​/­s​/­614083​/­the​-­worlds​-­top​-­deepfake​-­artist​-­is​-­wrestling -with-the-monster-he-created/ 20. Egor Zakharav et al., “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models,” Cornell University, Computer Vision and Pattern Recognition, ­arXiv​.­org (May 20, 2019). Online at: ­https://​­arxiv​.­org​/­abs​/­1905​.­08233 (and at ­https://​­arxiv​.­org​/­pdf​/­1905​.­08233​.­pdf) 21. Samantha Cole, “It’s Getting Way Too Easy to Create Fake Videos of People’s Faces,” Motherboard/Tech by Vice (May 23, 2019). Online at: ­https://​­www​.­vice​.­com​ /­en​_us​/­article​/­qv7zkw​/­create​-­fake​-­videos​-­of​-­faces​-­samsung​-­ai​-­labs​-­algorithm 22. P.W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018), p. 255. 23. Robert Chesney and Danielle Citron, “Deepfakes and the New Disinformation War,” Foreign Affairs (January/February 2019), pp. 147–155. Online at: ­https://​­www​.­foreignaffairs​.­com​/­articles​/­world​/­2018​-­12​-­11​/­deepfakes​-­and​-­new -disinformation-war 24. Singer and Brooking, LikeWar, p. 253. 25. Maggie Miller, “Report Highlights Instagram, Deepfake Videos as Key Disinformation Threats in 2020 Elections,” The Hill (September 3, 2019). Online at: ­https:// ​­ t hehill​ .­c om​ /­regulation​ /­c ybersecurity​ /­4 59492​ -­report​ -­h ighlights​ -­instagram​-­deepfake​-­videos​-­as​-­key​-­threats​-­in​-­2020 26. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare (Santa Monica, CA: Rand Corporation, 2019), p. 99. 27. For more on this, please see James J.F. Forest, Digital Influence Mercenaries: Profits and Power through Information Warfare (Annapolis, MD: Naval Institute Press, 2021). 28. Craig Silverman, “This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook,” Buzzfeed News (November 16, 2016). Online at: ­https://​­www​.­buzzfeednews​.­com​/­article​/­craigsilverman​/­viral​ -­fake​-­election​-­news​-­outperformed​-­real​-­news​-­on​-­facebook 29. Caitlin Dewey, “What Was Fake on the Internet This Week,” Washington Post (July 18, 2014). Online at: ­https://​­www​.­washingtonpost​.­com​/­news​/­the​-­intersect​ /­wp​/­2014​/­07​/­18​/­what​-­was​-­fake​-­on​-­the​-­internet​-­this​-­week​-­pregnant​-­tarantulas​ -­fried​-­chicken​-­oreos​-­and​-­anti​-­semitic​-­feminist​-­tweets​/ 30. Paris Martineau, “Internet Deception Is Here to Stay—So What Do We Do Now?” Wired (December 30, 2019). Online at: ­https://​­www​.­wired​.­com​/­story​ /­internet​-­deception​-­stay​-­what​-­do​-­now​/ 31. Hannah Ritchie, “Read All about It: The Biggest Fake News Stories of 2016,” CNBC (December 20, 2016). Online at: ­https://​­www​.­cnbc​.­com​/­2016​/­12​/­30​/­read​ -­all​-­about​-­it​-­the​-­biggest​-­fake​-­news​-­stories​-­of​-­2016​.­html; and Claire Wardle, “6 Types of Misinformation Circulated this Election Season,” Columbia Journalism Review (November 18, 2016). Online at: ­https://​­www​.­cjr​.­org​/­tow​_center​/­6​_types​ _election​_fake​_news​.­php 32. Dan Evon, “Pope Francis Shocks World,” Snopes (July 10, 2016). Online at: ­https://​­www​.­snopes​.­com​/­fact​-­check​/­pope​-­francis​-­donald​-­trump​-­endorsement​/; Also, on October 2, 2016, Pope Francis spoke publicly about the U.S. election for

Notes265 the first time, saying “I never say a word about electoral campaigns.” See Ritchie, “Read All about It.” 33. Shelly Banjo, “Facebook, Twitter and the Digital Disinformation Mess,” Washington Post (October 31, 2019). Online at: ­https://​­www​.­washingtonpost​.­com​ /­business​/­facebook​-­twitter​-­and​-­the​-­digital​-­disinformation​-­mess​/­2019/10/31 /­3f81647c​-­fbd1​-­11e9​-­9e02​-­1d45cb3dfa8f​_story​.­html 34. David Lazer et al., “The Science of Fake News,” Science 359, no. 6380 (March 9, 2018), pp. 1095–1096. 35. Ryan Holiday, Trust Me, I’m Lying: Confessions of a Media Manipulator, Revised & Updated edition (New York: Portfolio/Penguin, 2017), p. 215. 36. Priyanjana Bengani, “Hundreds of ‘Pink Slime’ Local News Outlets Are ­Distributing Algorithmic Stories and Conservative Talking Points,” The Tow Center for Digital Journalism (December 18, 2019). Online at: h ­ ttps://​­www​.­cjr​.­org​ /­tow​_center​_reports​/h ­ undreds​-­of​-­pink​-s­ lime​-l­ ocal​-n ­ ews​-o ­ utlets-​ ­are​-­distributing​ -­algorithmic​-­stories​-­conservative​-­talking​-­points​.­php 37. Ibid. 38. Mihir Zaveri, “Government Website Is Hacked With Pro-Iran Messages,” New York Times (January 6, 2020). Online at: ­https://​­www​.­nytimes​.­com​/­2020​/­01​ /­06​/­us​/­iran​-­hack​-­federal​-­depository​-­library​.­html 39. Brandon Stosh, “Official U.S. Army Website Hacked and Defaced by Syrian Electronic Army,” Freedom Hacker (June 8, 2015). Online at: h ­ ttps://​­freedomhacker​ .­net​/­us​-­army​-­website​-­hacked​-­defaced​-­syrian​-­electronic​-­army​-­4259​/ 40. Ibid. 41. Jakub Kalenský, “A Change of Tactics: Blurring Disinformation’s Source,” Disinfo Portal (June 6, 2019). Online at: ­https://​­disinfoportal​.­org​/­a​-­change​-­of​-­tactics -blurring-disinformations-source/ 42. Martineau, “Internet Deception Is Here to Stay.” 43. Katherine Marsh, “A Gay Girl in Damascus Becomes a Heroine of the Syrian Revolt,” The Guardian (May 6, 2011). Online at: ­https://​­www​.­theguardian​.­com​ /­world​/­2011​/­may​/­06​/­gay​-­girl​-­damascus​-­syria​-­blog 44. Barry Neild, “Fears Grow for Missing Syrian ‘Gay Girl’ Blogger,” CNN (June 13, 2011). Online at: ­http://​­www​.­cnn​.­com​/­2011​/­WORLD​/­meast​/­06​/­07​/­syria​ .­blogger​.­missing​/­index​.­html; “‘A Gay Girl in Damascus’ Blogger Kidnapped at Gunpoint in Syria,” Fox News (June 7, 2011). Online at: ­https://​­www​.­foxnews​.­com​ /­world​/­a​-­gay​-­girl​-­in​-­damascus​-­blogger​-­kidnapped​-­at​-­gunpoint​-­in​-­syria; Robert Mackey and Liam Stack, “After Report of Disappearance, Questions about SyrianAmerican Blogger,” New York Times (June 7, 2011). Online at: h ­ ttps://​­thelede​.­blogs​ .­nytimes​.­com​/­2011​/­06​/­07​/­syrian​-­american​-­blogger​-­detained​/; Nidaa Hassan, “Syrian Blogger Amina Abdallah Kidnapped by Armed Men,” The Guardian (June 6, 2011).  Online  at: ­https://​­www​.­theguardian​.­com​/­world​/­2011​/­jun​/­07​/­syrian​ -­blogger​-­amina​-­abdallah​-­kidnapped; Melissa Bell and Elizabeth Flock, “‘A Gay Girl in Damascus’ Comes Clean,” Washington Post (June 12, 2011). Online at: h ­ ttps://​ ­www​.­washingtonpost​.­com​/­lifestyle​/­style​/­a​-­gay​-­girl​-­in​-­damascus​-­comes​-­clean​ /­2011​/­06​/­12​/­AGkyH0RH​_story​.­html 45. Martineau, “Internet Deception Is Here to Stay.” 46. Daniel Boffey, “EU Disputes Facebook’s Claims of Progress against Fake Accounts,” The Guardian (October 29, 2019). ­https://​­www​.­theguardian​.­com​

/world/2019/oct/29/europe-accuses-facebook-of-being-slow-to-remove-fake-accounts?CMP=share_btn_tw
47. Nathaniel Gleicher, “Removing Coordinated Inauthentic Behavior in UAE, Egypt and Saudi Arabia,” Facebook (August 1, 2019). Online at: https://newsroom.fb.com/news/2019/08/cib-uae-egypt-saudi-arabia/
48. Adam Rawnsley and Blake Montgomery, “How One Researcher Exposed the Saudis’ Master of Disinformation,” The Daily Beast (August 1, 2019). Online at: https://www.thedailybeast.com/how-one-researcher-helped-facebook-bust-saudi-disinfo-campaign
49. Gleicher, “Removing Coordinated Inauthentic Behavior in UAE, Egypt and Saudi Arabia.”
50. Iyad El-Baghdadi, “How the Saudis Made Jeff Bezos Public Enemy No. 1,” The Daily Beast (February 25, 2019). Online at: https://www.thedailybeast.com/how-the-saudis-made-jeff-bezos-public-enemy-1
51. Lauren Feiner, “Twitter Bans Bots That Spread Pro-Saudi Messages about Missing Journalist,” CNBC (October 19, 2018). Online at: https://www.cnbc.com/2018/10/19/twitter-bans-bots-spreading-pro-saudi-messages.html
52. Jane Lytvynenko, “‘I Found Election Interference and No One Cared’: One US Veteran’s Fight to Protect His Compatriots Online,” Buzzfeed News (December 20, 2019). Online at: https://www.buzzfeednews.com/article/janelytvynenko/kristofer-goldsmith-veteran-disinformation
53. Kristofer Goldsmith, “VVA Investigative Report,” Vietnam Veterans of America (September 17, 2019). Online at: http://vva.org/trollreport/
54. Robert Walker, “Combating Weapons of Influence on Social Media” (Final Thesis), Naval Postgraduate School (June 2019). Online at: https://apps.dtic.mil/sti/pdfs/AD1080481.pdf
55. Paraphrasing Singer and Brooking, LikeWar, pp. 112–113.
56. Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Strauss and Giroux, 2020), p. 403.
57. Camille François, The IRACopyPasta Campaign, Graphika Report (October 21, 2019). Online at: https://graphika.com/reports/copypasta/
58. Ibid.
59. Judith S. Donath, “Identity and Deception in the Virtual Community,” in Communities in Cyberspace, edited by P. Kollock and M. Smith (London: Routledge, 1998). Online at: http://vivatropolis.org/papers/Donath/IdentityDeception/IdentityDeception.pdf
60. Rosanna E. Guadagno, “Compliance: A Classic and Contemporary Review,” in The Oxford Handbook of Social Influence, edited by Stephen G. Harkins et al. (New York: Oxford University Press, 2017), p. 123.
61. Judd Legum, “Facebook Allows Prominent Right-Wing Website to Break the Rules,” Popular Information (October 28, 2019). Online at: https://popular.info/p/facebook-allows-prominent-right-wing
62. Michael Newberg, “As Many as 48 Million Twitter Accounts Aren’t People, Says Study,” CNBC (March 10, 2017). Online at: https://www.cnbc.com/2017/03/10/nearly-48-million-twitter-accounts-could-be-bots-says-study.html; Scott Shane and Mike Isaac, “Facebook Says It’s Policing Fake Accounts, But They’re Still Easy to Spot,” New York Times (November 3, 2017). Online at: https://www.nytimes.com/2017/11/03/technology/facebook-fake-accounts.html

Notes267 63. Sara Fischer, “How Bots and Fake Accounts Work,” Axios (October 31, 2017). Online at: ­https://​­www​.­axios​.­com​/­how​-­bots​-­and​-­fake​-­accounts​-­work​ -­1513306547​-­4b0214b2​-­3277​-­422a​-­b492​-­06a1c0e2c61e​.­html 64. For example, see Shelly Palmer, “How to Build Your Own Bot Army” (March 18, 2018). Online at: ­https://​­www​.­shellypalmer​.­com​/­2018​/­03​/­build​-­troll​-­farm​/; and “Build the Best Free Instagram Automation Bot of 2019 in 15 Minutes.” Online at: https://medium.com/@rohanarun/how-to-build-an-instagram-bot-farm-in -15-minutes-for-free-14468c844f7a 65. For example, see Michael G. Hughes et al., “Discrediting in a Message Board Forum: The Effects of Social Support and Attacks on Expertise and Trustworthiness,” Journal of Computer-Mediated Communication 19, no. 3 (April 2014), pp. 325–341. 66. For more on the effects of multiple sources and endorsements, see Andrew J. Flanagin and Miriam J. Metzger, “Trusting Expert- Versus User-Generated Ratings Online: The Role of Information Volume, Valence, and Consumer Characteristics,” Computers in Human Behavior 29, no. 4 (July 2013), pp. 1626–1634; and Joseph W. Alba and Howard Marmorstein, “The Effects of Frequency Knowledge on Consumer Decision Making,” Journal of Consumer Research 14, no. 1 (June 1987), pp. 14–25. 67. Holiday, Trust Me, I’m Lying. 68. Singer and Brooking, LikeWar, p. 142, citing Marion R. Just et al., “It’s Trending on Twitter—An Analysis of the Twitter Manipulations in the Massachusetts 2010 Special Senate Election” (2012). Online at: ­https://​­www​.­academia​.­edu​/­24640252​ /­It​_s​_Trending​_on​_Twitter​_-­_An​_Analysis​_of​_the​_Twitter​_Manipulations​_in​ _the​_Massachusetts​_2010​_Special​_Senate​_Election 69. Sara Fischer, “Misinformation Bots Are Smarter Than We Thought,” Axios (November 27, 2018). Online at: ­https://​­www​.­axios​.­com​/­smart​-­misinformation​ -­bots​-­game​-­web​-­platforms​-­studies​-­3eed22e9​-­ffde​-­490e​-­98a6​-­890e5ad39f8a​.­html 70. Samuel C. Woolley and Douglas R. Guilbeault, “Computational Propaganda in the United States of America: Manufacturing Consensus Online,” Working Paper No. 2017.5. Online at: ­http://​­comprop​.­oii​.­ox​.­ac​.­uk​/­wp​-­content​/­uploads​ /­sites​/­89​/­2017​/­06​/­Comprop​-­USA​.­pdf 71. Atlantic Council Digital Forensic Research Lab, “Russian Diplomatic Twitter Accounts Rewrite History of World War II,” DFR Lab (September 16, 2019). Online at: ­https://​­medium​.­com​/­dfrlab​/­russian​-­diplomatic​-­twitter​-­accounts​-­rewrite​ -­history​-­of​-­world​-­war​-­ii​-­3d86c441d10d 72. Caroline Orr, “How Russian & Alt-Right Twitter Accounts Worked Together to Skew the Narrative About Berkeley,” Arc Digital (September 1, 2017). Online at: ­https://​­arcdigital​.­media​/­how​-­russian​-­alt​-­right​-­twitter​-­accounts​-­worked​-­together​ -­to​-­skew​-­the​-­narrative​-­about​-­berkeley​-­f03a3d04ac5d 73. Alice Marwick and Rebecca Lewis, “Media Manipulation and Disinformation Online,” Data & Society (May 2017). Online at: ­https://​­datasociety​.­net​/­output​ /­media​-­manipulation​-­and​-­disinfo​-­online​/ 74. Elizabeth Culliford, “Facebook, YouTube Remove ‘Plandemic’ Video with ‘Unsubstantiated’ Coronavirus Claims,” Reuters (May 7, 2020). Online at: h ­ ttps://​ ­www​.­reuters​.­com​/­article​/­us​-­health​-­coronavirus​-­tech​-­video​-­idUSKBN22K077 75. Atlantic Council Digital Forensic Research Lab, “Confronting the Threat of Disinformation: The Problem,” Google Jigsaw Data Visualizer (February 2020). 
Online at: https://jigsaw.google.com/the-current/disinformation/dataviz/

268Notes 76. Boffey, “EU Disputes Facebook’s Claims of Progress against Fake Accounts.” 77. Adrienne Arsenault, “Partisan Twitter Bots Distorting U.S. Presidential Candidates’ Popularity,” CBC News (October 20, 2016). Online at: ­http://​­www​.­cbc​ .­ca​/­news​/­world​/­twitter​-­bots​-­trump​-­clinton​-­1​.­3814386; Amanda Hess, “On Twitter, a Battle among Political Bots,” New York Times (December 14, 2016). Online at: ­https://​­www​.­nytimes​.­com​/­2016​/­12​/­14​/­arts​/­on​-­twitter​-­a​-­battle​-­among​-­political​ -­bots​.­html; Dan Misener, “Political Bots Spread Misinformation during U.S. Campaign—and They’re Expected in Canada,” CBC News (November 7, 2016). Online at: ­http://​­www​.­cbc​.­ca​/­news​/­technology​/­political​-­bots​-­misinformation​-­1​.­3840300 78. Marwick and Lewis, “Media Manipulation and Disinformation Online,” p. 38; citing Bence Kollanyi, Philip N. Howard, and Samuel C. Woolley, “Bots and Automation over Twitter during the First US Presidential Debate” (COMPROP Data Memo, 2016), ­https://​­assets​.­documentcloud​.­org​/­documents​/­3144967​ /­Trump​-­Clinton​-­Bots​-­Data​.­pdf; Arsenault, “Partisan Twitter Bots Distorting U.S. Presidential Candidates’ Popularity”; and Misener, “Political Bots Spread Misin­ formation during U.S. Campaign.” 79. For more on this, see Alicia Wanless and Michael Berk, “The Audience Is the Amplifier: Participatory Propaganda,” in The Sage Handbook of Propaganda, edited by Paul Baines, Nicholas O’Shaughnessy, and Nancy Snow (London: Routledge, 2019); and Aaron Delwiche, “Computational Propaganda and the Rise of the Fake Audience” in The Sage Handbook of Propaganda. 80. Ali Fisher, “Swarmcast: How Jihadist Networks Maintain a Persistent Online Presence,” Perspectives on Terrorism 9, no. 3 (June 2015), pp. 3–20. 81. Singer and Brooking, LikeWar, p. 208. 82. Atlantic Council Digital Forensic Research Lab, “Confronting the Threat of Disinformation.” 83. Singer and Brooking, LikeWar, p. 113. 84. Mike McIntire, Karen Yourish, and Larry Buchanan, “In Trump’s Twitter Feed: Conspiracy-Mongers, Racists and Spies,” New York Times (November 2, 2019). Online at: ­https://​­www​.­nytimes​.­com​/­interactive​/­2019​/­11​/­02​/­us​/­politics​ /­trump​-­twitter​-­disinformation​.­html 85. Ibid. 86. Ibid; citing Joel Ebert, “Twitter Suspends Fake Tennessee GOP Account Later Linked to Russian ‘troll farm’,” USA Today (October 18, 2017). Online at: ­https://​ ­www​.­tennessean​.­com​/­story​/­news​/­politics​/­2017​/­10​/­18​/­twitter​-­suspends​-­fake​ -­tennessee​-­gop​-­account​-­later​-­linked​-­russian​-­troll​-­farm​/­776937001​/ 87. Singer and Brooking, LikeWar, p. 123. 88. Marwick and Lewis, “Media Manipulation and Disinformation Online,” p. 39. 89. Paraphrasing Marwick and Lewis, “Media Manipulation and Disinformation Online,” p. 50, who cite Kelly Weill, “Racist Trolls Are Behind NYU’s ‘White Student Union’ Hoax,” The Daily Beast (November 23, 2015). Online at: h ­ ttp://​ ­www​.­thedailybeast​ .­com​ /­articles​ /­2015​ /­11​ /­23​ /­racist​ -­trolls​ -­are​ -­behind​ -­nyu​ -­s​ -­white​-­student​-­union​-­hoax​.­html; Andrew Anglin, “White Student Unions Rise across America,” The Daily Stormer (November 24, 2015). Online at: ­http://​­www​ .­d ailystormer​.­c om​ /­w hite​ -­s tudent​ -­u nions​ -­r ise​ -­a cross​ -­a merica​ / ;  “‘White  Student Union’ Pages Appearing On Facebook,” CBS St. Louis (November 23, 2015). Online at: ­http://​­stlouis​.­cbslocal​.­com​/­2015​/­11​/­23​/­white​-­student​-­union​-­facebook;

Notes269 Walbert Castillo, “‘Illini White Student Union’ Challenges ‘Black Lives Matter,’” USA Today (November 21, 2015). Online at: ­http://​­www​.­usatoday​.­com​/­story​/­news​ /­nation​-­now​/­2015​/­11​/­21​/­illini​-­white​-­student​-­union​-­challenges​-­black​-­lives​-­matter​ /­76165878​/; Bears for Equality, “Racists Probably Started a White Student Union at Your School. They’re All Fake.,” ­Medium​.­com (November 23, 2015). Online at: https://medium.com/@b4e2015/racists-probably-started-a-white-student-union -at-your-school-they-re-all-fake-5d1983a0b229#.hv09kobey; Brendan O’Connor, “Who’s Behind the Fake ‘Union of White NYU Students’?,” Gawker (November 23, 2015). Online at: ­http://​­gawker​.­com​/­who​-­s​-­behind​-­the​-­fake​-­union​-­of​-­white​-­nyu​ -­students​-­1744300282; and Yanan Wang, “More than 30 Purported ‘White Student Unions’ Pop up across the Country,” Washington Post (November 24, 2015). Online at: ­https://​­www​.­washingtonpost​.­com​/­news​/­morning​-­mix​/­wp​/­2015​/­11​/­24​/­more​ -­than​-­30​-­questionably​-­real​-­white​-­students​-­unions​-­pop​-­up​-­across​-­the​-­country​/ 90. Holiday, Trust Me, I’m Lying. 91. Ibid., pp. 27–30. 92. Ibid., p. 142. 93. Ibid., p. 166. 94. Marwick and Lewis, “Media Manipulation and Disinformation Online.” 95. For an early analysis of trolling, see Donath, “Identity and Deception in the Virtual Community.” 96. For a detailed analysis of this, see James J.F. Forest, Digital Influence Mercenaries: Exploiting Fear and Bias for Profit (forthcoming, 2021). 97. Christian Davies, “Undercover Reporter Reveals Life in a Polish Troll Farm,” The Guardian (November 1, 2019). Online at: ­https://​­www​.­theguardian​.­com​ /­world​/­2019​/­nov​/­01​/­undercover​-­reporter​-­reveals​-­life​-­in​-­a​-­polish​-­troll​-­farm 98. Katarzyna Pruszkiewicz, Wojciech Ciesla, and Konrad Szczygiel, “Undercover at a Troll Farm,” Investigate Europe (November 1, 2019). Online at: ­https://​ ­www​.­investigate​-­europe​.­eu​/­undercover​-­at​-­a​-­troll​-­farm​/ 99. Limor Shifman, Memes in Digital Culture (Cambridge, MA: MIT Press, 2014), p. 2. 100. Ibid., p. 8. 101. Ibid., p. 12. 102. Ibid., pp. 19–20. 103. Ibid., pp. 67–68. 104. Ibid. 105. Ibid., p. 20. 106. Faiz Siddiqui and Susan Svrluga, “N.C. Man Told Police He Went to D.C. Pizzeria With Gun to Investigate Conspiracy Theory,” Washington Post (December 5, 2016). Online at: ­https://​­www​.­washingtonpost​.­com​/­news​/­local​/­wp​/­2016​ /­12​/­04​/­d​-­c​-­police​-­respond​-­to​-­report​-­of​-­a​-­man​-­with​-­a​-­gun​-­at​-­comet​-­ping​-­pong​ -­restaurant​/ 107. Dorothy Denning, “Activism, Hacktivism and Cyberterrorism: The Internet as a Tool for Influencing Foreign Policy,” in Networks and Netwars: The Future of Terror, Crime and Militancy, edited by John Arquilla and David Ronfeldt (Santa Monica, CA: Rand Corportation, 2001), pp. 239–288. Online at: ­https://​­www​.­rand ​.­org​/­pubs​/­monograph​_reports​/­MR1382​.­html 108. David Nikel, “Norwegian Newspaper Website Taken Offline after Content Hack,” Forbes (October 19, 2019). Online at: ­https://​­www​.­forbes​.­com​/­sites​

/davidnikel/2019/10/19/norwegian-newspaper-website-taken-offline-after-content-hack/amp/
109. Samantha Bradshaw and Philip N. Howard, Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation, Working Paper 2018.1. Oxford: Project on Computational Propaganda (July 20, 2018), p. 12.
110. Eric Lach, “Why You Should Read the Latest Mueller Indictment Yourself,” The New Yorker (July 13, 2018). Online at: https://www.newyorker.com/current/guccifer-indictment-robert-mueller
111. Mark Mazzetti and Katie Benner, “12 Russian Agents Indicted in Mueller Investigation,” New York Times (July 13, 2018). Online at: https://www.nytimes.com/2018/07/13/us/politics/mueller-indictment-russian-intelligence-hacking.html
112. Brooke Jarvis, “How One Woman’s Digital Life Was Weaponized against Her,” Wired (November 14, 2017). Online at: https://www.wired.com/story/how-one-womans-digital-life-was-weaponized-against-her/amp
113. Kakutani, The Death of Truth, p. 121.
114. McMahon, The Psychology of Social Media, pp. 36–37, citing research by John Suler.
115. Ibid.
116. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 112.
117. Bret Schafer, View from the Digital Trenches: Lessons from Year One of Hamilton 68, German Marshall Fund, Report No. 33 (2018), p. 11. Online at: http://www.gmfus.org/publications/a-view-from-the-digital-trenches-lessons-from-year-one-of-hamilton-68

CHAPTER 4
1. Aristotle, “Rhetoric,” in Aristotle, Rhetoric and Poetics, translated by W. Roberts (New York: Modern Library, 1954); cited in Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Henry Holt and Company, 1992), pp. 51–52.
2. See, for example, Michael R. Leippe, Andrew P. Manion, and Ann Romanczyk, “Eyewitness Persuasion: How and How Well Do Fact Finders Judge the Accuracy of Adults’ and Children’s Memory Reports,” Journal of Personality and Social Psychology 63 (1992), pp. 191–197; cited in Pratkanis and Aronson, Age of Propaganda, p. 162.
3. John B. Nezlek and C. Veronica Smith, “Social Influence and Personality,” in The Oxford Handbook of Social Influence, edited by Stephen G. Harkins, Kipling D. Williams, and Jerry M. Burger (New York: Oxford University Press, 2017), p. 61.
4. Ibid., p. 60.
5. Philip G. Zimbardo, Ebbe B. Ebbesen, and Christina Maslach, Influencing Attitudes and Changing Behavior, Second edition (New York: Random House, 1977), pp. 94–98.
6. Jarol B. Manheim, Strategy and Information in Influence Campaigns (New York: Routledge, 2011), p. 23.

7. Robert B. Cialdini, Influence: The Psychology of Persuasion, Revised edition (New York: Harper Collins, 2007), pp. 167–207.
8. Ibid.; and Kathleen Taylor, Brain Washing: The Science of Thought Control (London: Oxford University Press, 2004), pp. 74–76.
9. Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), p. 93.
10. Albert Bandura, Social Foundations of Thought and Action: A Social Cognitive Theory (Upper Saddle River, NJ: Prentice-Hall, 1986).
11. Rosanna E. Guadagno, “Compliance: A Classic and Contemporary Review,” in The Oxford Handbook of Social Influence, pp. 117–118.
12. Manheim, Strategy and Information in Influence Campaigns, p. 78.
13. Zimbardo et al., Influencing Attitudes and Changing Behavior, pp. 94–98.
14. Denise Winn, The Manipulated Mind: Brainwashing, Conditioning and Indoctrination (Los Altos, CA: Malor Books, 2000), p. 208.
15. Richard S. Crutchfield, “Conformity and Character,” American Psychologist 10 (1955), pp. 191–198.
16. Nezlek and Smith, “Social Influence and Personality,” p. 55; also citing Crutchfield, “Conformity and Character,” op. cit.
17. Nezlek and Smith, “Social Influence and Personality,” p. 53.
18. Zimbardo et al., Influencing Attitudes and Changing Behavior, pp. 94–98.
19. Nezlek and Smith, “Social Influence and Personality,” p. 53.
20. Jerry M. Burger, “Obedience,” in The Oxford Handbook of Social Influence, p. 129.
21. T.W. Adorno, E. Frenkel-Brunswick, D.J. Levinson, and R.N. Sanford, The Authoritarian Personality (New York: Harper-Row, 1950).
22. Nezlek and Smith, “Social Influence and Personality,” pp. 53, 60–61.
23. Ibid., p. 59.
24. Taylor, Brain Washing, p. 113.
25. Stanley Milgram, Obedience to Authority: An Experimental View (New York: Harper & Row, 1974), p. 24. Also, for a synthesis and evaluation of Milgram’s work, see Jerry M. Burger, “Obedience,” in The Oxford Handbook of Social Influence, pp. 129–145; and Taylor, Brain Washing, pp. 110–112.
26. Taylor, Brain Washing, pp. 110–112.
27. Ibid., pp. 110–112.
28. Ibid.
29. Ibid.
30. Guadagno, “Compliance,” p. 118.
31. Taylor, Brain Washing, p. 113.
32. Jerry M. Burger, “Obedience,” in The Oxford Handbook of Social Influence, p. 135.
33. Zimbardo et al., Influencing Attitudes and Changing Behavior, pp. 98–99.
34. Winn, The Manipulated Mind, p. 208.
35. Richard E. Petty and John T. Cacioppo, Communication and Persuasion: Central and Peripheral Routes to Attitude Change (New York: Springer-Verlag, 1986); Manheim, Strategy and Information in Influence Campaigns, pp. 75–76.
36. Manheim, Strategy and Information in Influence Campaigns, pp. 75–76.
37. Ibid.

272Notes 38. P.W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018), p. 158. 39. Gordon Pennycook and David Rand, “Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytic Thinking,” Journal of Personality (March 2019). Online at: ­­https://​­doi​.­org​/­10​.­1111​/­jopy​.­12476; and Tommy Shane, “The Psychology of Misinformation: Why We’re Vulnerable,” First Draft (June 30, 2020). Online at: ­­https://​­firstdraftnews​.­org​/­latest​/­the​-­psychology​ -­of​-­misinformation​-­why​-­were​-­vulnerable​/ 40. Manheim, Strategy and Information in Influence Campaigns, pp. 75–76. 41. Alice H. Eagly and Shelly Chaiken, The Psychology of Attitudes (San Diego, CA: Harcourt Brace Jovanovich College Publishers, 1993); and Miriam J. Metzger and Andrew J. Flanagin, “Credibility and Trust of Information in Online Environments: The Use of Cognitive Heuristics,” Journal of Pragmatics 59 (B) (2013). Online at: ­­https://​­www​.­sciencedirect​.­com​/­science​/­article​/­abs​/­pii​/­S0378216613001768 42. Shane, “The Psychology of Misinformation.” 43. Taylor, Brain Washing, p. 320. 44. Lee McIntyre, Post-Truth (Cambridge, MA: MIT Press, 2018), p. 51. 45. Ibid., p. 52; citing Justin Kruger and David Dunning, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology 77, no. 6 (1999), p. 1121. Online at: ­­https://​­www​.­ncbi​.­nlm​.­nih​.­gov​/­pubmed​/­10626367 46. Stephan Lewandowsky, “The Loud Fringe: Pluralistic Ignorance and Democracy,” Shaping Tomorrow’s World (October 18, 2011). Online at: ­­http://​­www​ .­shapingtomorrowsworld​.­org​/­lewandowskypluraligno​.­html 47. Ibid. 48. Shane, “The Psychology of Misinformation.” 49. Oana Ștefanita, Nicoleta Corbu, and Raluca Buturoiu, “Fake News and the Third-Person Effect: They Are More Influenced than Me and You,” Journal of Media Research 11, no. 3 (2018), pp. 5–23. Online at: ­­https://​­pdfs​.­semanticscholar​.­org​/­c942​ /­ b8c0aba96883c26ec23df3ba60ee98bba6a4​ .­ pdf; and Shane, “The Psychology of Misinformation.” 50. Paraphrasing Maxwell McCombs, Setting the Agenda, Second edition (Malden, MA: Policy Press, 2014), pp. 64–65. 51. Ibid., p. 55, citing W. Russell Neuman, Marion Jones, and Ann Crigler, Common Knowledge: News and the Construction of Political Meaning (Chicago: University of Chicago Press, 1992). 52. McCombs, Setting the Agenda, pp. 82–83. 53. Singer and Brooking, LikeWar, p. 160. 54. Paraphrasing McCombs, Setting the Agenda, p. 69, citing Dixie Evatt and Salma Ghanem, “Building a Scale to Measure Salience,” Paper presented to the World Association for Public Opinion Research, Rome, Italy, 2001. 55. Kate Starbird, “The Surprising Nuance Behind the Russian Troll Strategy,” Medium (October 20, 2018). Online at: ­­https://​­medium​.­com​/­s​/­story​/­the​-­trolls​ -­w ithin​ -­h ow​ -­russian​ -­i nformation​ -­o perations​ -­i nfiltrated​ -­o nline​ -­c ommunities​ -­691fb969b9e4 56. Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion, rev. ed. (New York: Henry Holt and Co., 2001), p. 51.

Notes273 57. Ibid., p. 100. 58. The original on-the-street interviews can be viewed on YouTube at: https://​ ­­ ­www​.­youtube​.­com​/­watch​?­v​=​­sx2scvIFGjE; you can also see their 2016 follow-up test, in which the same question was asked (with the same results). Online at: ­­https://​­www​.­youtube​.­com​/­watch​?­v​=​­N6m7pWEMPlA 59. This Tweet can be accessed online at: ­­https://​­twitter​.­com​/­realDonaldTrump​ /­status​/­1052883467430694912 60. Starbird, “The Surprising Nuance Behind the Russian Troll Strategy.” 61. Joseph G. Lehman, “An Introduction to the Overton Window of Political Possibility,” Mackinac Center for Public Policy (April 8, 2010). Online at: https://​ ­­ ­www​.­mackinac​.­org​/­12481 62. Rachel Anne Barr, “Galaxy Brain: The Neuroscience of How Fake News Grabs Our Attention, Produces False Memories, and Appeals to our Emotions,” NiemanLab (November 21, 2019). Online at: ­­https://​­www​.­niemanlab​.­org​/­2019​ /­11​ /­g alaxy​ -­b rain​ -­t he​ -­n euroscience​ -­o f​ -­h ow​ -­f ake​ -­n ews​ -­g rabs​ -­o ur​ -­a ttention​ -­produces​-­false​-­memories​-­and​-­appeals​-­to​-­our​-­emotions​/ 63. Roger McNamee, “How to Fix Facebook—Before It Fixes Us,” Washington Monthly (January–March, 2018). Online at: ­­https://​­washingtonmonthly​.­com​ /­magazine​/­january​-­february​-­march​-­2018​/­how​-­to​-­fix​-­facebook​-­before​-­it​-­fixes​-­us​ / (cited in Michael V. Hayden, The Assault on Intelligence: American National Security in an Age of Lies [New York: Penguin Press, 2018], p. 223). 64. Rohit Bhargava, Likeonomics: The Unexpected Truth Behind Earning Trust, Influencing Behavior and Inspiring Action (Hoboken, NJ: John Wiley & Sons, 2012), p. xxix. 65. Ibid. 66. Roy Eidelson and Judy Eidelson, “Dangerous Ideas: Five Beliefs That Propel Groups Toward Conflict,” American Psychologist 58 (2003), pp. 182–192. 67. Nezlek and Smith, “Social Influence and Personality,” pp. 59–60. 68. Pratkanis and Aronson, Age of Propaganda, p. 51. 69. Ibid., pp. 171–172. 70. Singer and Brooking, LikeWar, p. 160. 71. Manheim, Strategy and Information in Influence Campaigns, pp. 14–15. 72. Alice Marwick and Rebecca Lewis, “Media Manipulation and Disinformation Online,” Data & Society (May 2017), p. 36. Online at: ­­https://​­datasociety​.­net​ /­output​/­media​-­manipulation​-­and​-­disinfo​-­online​/ 73. Limor Shifman, Memes in Digital Cultures (Cambridge, MA: MIT Press, 2014), p. 12. 74. Ibid., p. 8. 75. Pratkanis and Aronson, Age of Propaganda, p. 73. 76. Ibid., p. 30. 77. Robert Walker, “Combating Strategic Weapons of Influence on Social Media” (Final Thesis) Naval Postgraduate School (June 2019). Online at: https://​­ ­­ www​ .­chds​.­us​/­ed​/­items​/­20165 78. Naomi Oreskes and Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (New York: Bloomsbury Press, 2010); see also Allan Brandt, The Cigarette Century: The Rise, Fall and Deadly Persistence of the Product that Defined America (New York: Basic Books, 2007), pp. 220, 228–230.

274Notes 79. Kakutani, The Death of Truth, p. 74. 80. Ibid., p. 55. 81. Oreskes and Conway, Merchants of Doubt, pp. 20–21, citing M. Parascandola, “Cigarettes and the U.S. Public Health Service in the 1950s,” American Journal of Public Health 91, no. 2 (February 2001) pp. 196–205. 82. Oreskes and Conway, Merchants of Doubt, pp. 20–21, citing Mark Parascandola, “Two Approaches to Etiology: The Debate over Smoking and Lung Cancer in the 1950s,” Endeavor 28, no. 2 (June 2008), pp. 81–86. 83. Oreskes and Conway, Merchants of Doubt, pp. 20–21, Citing Dean F. Davies, “A Statement on Lung Cancer,” CA: A Cancer Journal for Clinicians 9, no. 6 (1959), pp. 207–208. 84. Oreskes and Conway, Merchants of Doubt, p. 23. 85. McIntyre, Post-Truth, p. 23; citing Ari Rabin-Havt, Lies Incorporated: The World of Post-Truth Politics (New York: Anchor Books, 2016), pp. 26–27; and Oreskes and Conway, Merchants of Doubt, p. 16. 86. McIntyre, Post-Truth, p. 23. 87. Oreskes and Conway, Merchants of Doubt, p. 23. 88. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare (Santa Monica, CA: Rand Corporation, 2019), p. 100; see also Rabin-Havt, Lies, Incorporated. 89. Oreskes and Conway, Merchants of Doubt, p. 34. 90. Ibid., p. 24. 91. Ibid., p. 32. 92. Judith Warner, “Fact-Free Science,” New York Times Magazine (February 25, 2011). Online at: ­­https://​­www​.­nytimes​.­com​/­2011​/­02​/­27​/­magazine​/­27FOB​ -­WWLN​-­t​.­html 93. For a brief description of this, see Kakutani, The Death of Truth, pp. 74–75. 94. Ibid., pp. 112–113. 95. Ibid., Pratkanis and Aronson, Age of Propaganda, p. 113. 96. David Matthews, “Rush Limbaugh Denied Health Risks of Smoking Years Before Lung Cancer Diagnosis,” New York Daily News (February 3, 2020). Online at: ­­https://​­www​.­nydailynews​.­com​/­news​/­national​/­ny​-­rush​-­limbaugh​ -­smoking​-­effects​-­cancer​-­diagnosis​-­20200203​-­4ma66mowazektovzh7hg2aynhq​ -­story​.­html 97. Ibid. 98. Tom Nichols, The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters (London: Oxford University Press, 2017), p. 58. 99. Brian Resnick, “The Dark Allure of Conspiracy Theories, Explained by a Psychologist,” Vox (May 25, 2017). Online at: ­­https://​­www​.­vox​.­com​/­science​-­and​ -­health​/­2017​/­4​/­25​/­15408610​/­conspiracy​-­theories​-­psychologist​-­explained 100. Adrienne LaFrance, “The Prophecies of Q,” The Atlantic (June 2020). Online at: ­­https://​­www​.­theatlantic​.­com​/­magazine​/­archive​/­2020​/­06​/­qanon​-­nothing​ -­can​-­stop​-­what​-­is​-­coming​/­610567​/ 101. Cailin O’Connor and James Owen Weatherall, The Misinformation Age: How False Beliefs Spread (New Haven, CT: Yale University Press, 2019), p. 43. 102. Winn, The Manipulated Mind, p. 45. 103. Aronson, Elliot, The Social Animal (San Francisco: W.H. Freeman, 1976); cited in Winn, The Manipulated Mind, p. 36.

Notes275 104. Susan T. Fiske and Shelley E. Taylor, Social Cognition (New York: McGrawHill, 1991); cited in Pratkanis and Aronson, Age of Propaganda, p. 38. 105. Martie G. Haselton, Daniel Nettle, and Paul W. Andrews, “The Evolution of Cognitive Bias,” in The Handbook of Evolutionary Psychology, edited by David M. Buss (Hoboken, NJ: John Wiley & Sons Inc., 2005), pp. 724–746. Also, see Marcus Lu, “50 Cognitive Biases in the Modern World,” Visual Capitalist (February 1, 2020). Online at: ­­https://​­www​.­visualcapitalist​.­com​/­50​-­cognitive​-­biases​-­in​-­the​-­modern​-­world​/; and for an excellent graphic illustration of 188 biases, see the “Cognitive Bias Codex” developed by John Manoogian III and Buster Benson, at: h ­­ ttps://​­www​.­visualcapitalist​ .­com​/­wp​-­content​/­uploads​/­2017​/­09​/­cognitive​-­bias​-­infographic​.­html 106. Elizabeth Kolbert, “Why Facts Don’t Change our Minds,” The New Yorker (February 27, 2017). Online at: ­­https://​­www​.­newyorker​.­com​/­magazine​/­2017​/­02​ /­27​/­why​-­facts​-­dont​-­change​-­our​-­minds; cited in Kakutani, The Death of Truth, p. 113. 107. Raymond Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (1998), pp. 175–220. Online at: ­­https://​­www​.r­ esearchgate​.n ­ et​/p ­ ublication​/2­ 80685490​_Confirmation​_Bias​_A​ _Ubiquitous​_Phenomenon​_in​_Many​_Guises 108. Bert H. Hodges, “Conformity and Divergence in Interactions, Groups and Cultures,” in The Oxford Handbook of Social Influence, p. 100. 109. Donald Trump, on Morning Joe, MSNBC (March 16, 2016). See Eliza Collins, “Trump: I Consult Myself on Foreign Policy,” Politico (March 16, 2016). Online at: ­­https://​­www​.­politico​.­com​/­blogs​/­2016​-­gop​-­primary​-­live​-­updates​-­and​-­results​ /­2016​/­03​/­trump​-­foreign​-­policy​-­adviser​-­220853; Jim Swift, “Donald Trump Talks to Himself for Foreign Policy Advice,” Weekly Standard (March 16, 2016). Online at: ­­https://​­www​.­weeklystandard​.­com​/­donald​-­trump​-­talks​-­to​-­himself​-­for​-­foreign​ -­policy​-­advice​/­article​/­2001601​/ 110. Marc Fisher, “Donald Trump Doesn’t Read Much. Being President Probably Wouldn’t Change That,” Washington Post (July 17, 2016). Online at: https://​ ­­ ­www​.­washingtonpost​ .­com​ /­p olitics​ /­d onald​ -­trump​ -­doesnt​ -­read​ -­much​ -­b eing​ -­president​-­probably​-­wouldnt​-­change​-­that​/­2016​/­07​/­17​/­d2ddf2bc​-­4932​-­11e6​-­90a8​ -­fb84201e0645​_story​.­html 111. McIntyre, Post-Truth, p. 62. 112. Pratkanis and Aronson, Age of Propaganda, p. 281, citing Lance Canon, “SelfConfidence and Selective Exposure to Information,” in Conflict, Decision and Dissonance, edited by Leon Festinger (Palo Alto, CA: Stanford University Press, 1964), pp. 83–96. 113. Kakutani, The Death of Truth, p. xi. 114. C.S. Lewis, The Four Loves (San Diego, CA: Harcourt Brace, 1960), p. 61. 115. Amber M. Gaffney and Michael A. Hogg, “Self-Presentation and Social Influence: Evidence for an Automatic Process,” in The Oxford Handbook of Social Influence, pp. 117–118. 116. Cialdini, Influence, pp. 114–166; Guadagno, “Compliance,” p. 118. 117. Ibid. 118. Nezlek and Smith, “Social Influence and Personality,” p. 53. 119. Gaffney and Hogg, “Self-Presentation and Social Influence: Evidence for an Automatic Process,” p. 260. 120. Bhargava, Likeonomics, p. xxxi.

121. Jack Schafer, The Like Switch (New York: Simon & Schuster, 2015), p. xiii.
122. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 79.
123. For example, see Neal Romanek, “Fake Followers and Rotten Reviews,” Feed (October 28, 2019). Online at: https://feedmagazine.tv/content-focus/advertising/fake-followers-and-rotten-reviews/
124. Ciarán McMahon, The Psychology of Social Media (London: Routledge, 2019), p. 33.
125. Cialdini, Influence, pp. 167–207; and Taylor, Brain Washing, pp. 74–76.
126. Manheim, Strategy and Information in Influence Campaigns, p. 27.
127. Pratkanis and Aronson, Age of Propaganda, p. 237.
128. Ibid.
129. Cialdini, Influence, pp. 167–207; and Taylor, Brain Washing, pp. 74–76.
130. Pratkanis and Aronson, Age of Propaganda, p. 239.
131. Manheim, Strategy and Information in Influence Campaigns, p. 27.
132. Pratkanis and Aronson, Age of Propaganda, p. 182, citing the Goebbels quote from Robert E. Herzstein, The War that Hitler Won (New York: Paragon House, 1987), p. 31.
133. Rolf Reber and Christian Unkelbach, “The Epistemic Status of Processing Fluency as Source for Judgments of Truth,” Review of Philosophy and Psychology 1, no. 4 (December 2010), pp. 563–581. Online at: https://link.springer.com/article/10.1007%2Fs13164-010-0039-7; and Shane, “The Psychology of Misinformation.”
134. Ibid.
135. McIntyre, Post-Truth, p. 42.
136. McNamee, “How to Fix Facebook” (cited in Hayden, The Assault on Intelligence, p. 223).
137. McIntyre, Post-Truth, p. 95.
138. Gaffney and Hogg, “Self-Presentation and Social Influence: Evidence for an Automatic Process,” p. 260; also citing Angus Campbell et al., The American Voter (New York: John Wiley & Sons, Inc., 1960).
139. For example, see Manheim, Strategy and Information in Influence Campaigns, pp. 14–15.

CHAPTER 5
1. Richard Fletcher, “The Truth Behind Filter Bubbles: Bursting Some Myths,” Reuters Institute, University of Oxford (January 22, 2020). Online at: https://reutersinstitute.politics.ox.ac.uk/risj-review/truth-behind-filter-bubbles-bursting-some-myths. In this piece, Fletcher explains how “this distinction is important because echo chambers could be a result of filtering or they could be the result of other processes, but filter bubbles have to be the result of algorithmic filtering.”
2. Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Henry Holt and Company, 1992), p. 49.
3. Katherine E. Brown and Elizabeth Pearson, “Social Media, the Online Environment, and Terrorism,” in Routledge Handbook of Terrorism and Counterterrorism, edited by Andrew Silke (London: Routledge, 2019), p. 151.

Notes277 4. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 105. 5. Jon Keegan, “Blue Feed, Red Feed: See Liberal Facebook and Conservative Facebook, Side by Side,” The Wall Street Journal (May 18, 2016). Online at: ­­https://​ ­graphics​.­wsj​.­com​/­blue​-­feed​-­red​-­feed​/ 6. David Patrikarakos, War in 140 Characters: How Social Media Is Reshaping Conflict in the Twenty-First Century (New York: Basic Books, 2017), pp. 12–13. 7. Ibid. 8. Jason Gainous and Kevin M. Wagner, Tweeting to Power: The Social Media Revolution in American Politics (London: Oxford University Press, 2014), p. 14; citing Cass Sunstein, ­­Republic​.­com (Princeton, NJ: Princeton University Press, 2002). 9. Lee McIntyre, Post-Truth (Cambridge, MA: MIT Press, 2018), p. 94. 10. Cailin O’Connor and James Owen Weatherall, The Misinformation Age: How False Beliefs Spread (New Haven, CT: Yale University Press, 2019), p. 16. 11. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment (Santa Monica, CA: Rand Corporation, 2019), p. 111. 12. McKay Coppins, “The Billion-Dollar Disinformation Campaign to Reelect the President,” The Atlantic (February 10, 2020). Online at: ­­https://​­www​.­theatlantic​ .­com​/­magazine​/­archive​/­2020​/­03​/­the​-­2020​-­disinformation​-­war​/­605530​/ 13. Ibid. 14. Elizabeth Culliford, “Domestic Online Interference Mars Global Elections: Report,” Reuters (November 5, 2019). Online at: ­­https://​­es​.­reuters​.­com​/­article​ /­worldNews​/­idUSKBN1XF0K8 15. Ibid. 16. Samantha Bradshaw and Philip N. Howard, “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation,” Computational Propaganda Research Project, Oxford Internet Institute, Working Paper No. 2017.12 (July 17, 2017). Online at: ­­https://​­comprop​.­oii​.­ox​.­ac​.­uk​/­research​ /­troops​-­trolls​-­and​-­trouble​-­makers​-­a​-­global​-­inventory​-­of​-­organized​-­social​-­media​ -­manipulation​/; Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), p. 132. 17. Shelly Banjo, “Facebook, Twitter and the Digital Disinformation Mess,” Washington Post (October 31, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​ /­business​/­facebook​-­twitter​-­and​-­the​-­digital​-­disinformation​-­mess​/­2019​/­10​/­31​ /­3f81647c​-­fbd1​-­11e9​-­9e02​-­1d45cb3dfa8f​_story​.­html 18. Ibid. 19. Freedom House “2019 Internet Freedom Election Monitor” report, https://​ ­­ ­freedomhouse​.­org​/­report​/­special​-­reports​/­internet​-­freedom​-­election​-­monitor 20. Ibid. 21. Ibid. 22. Banjo, “Facebook, Twitter and the Digital Disinformation Mess.” 23. Ibid. 24. Ibid. 25. Gainous and Wagner, Tweeting to Power, p. 109 [paraphrased]. 26. Larry Sanger, “Internet Silos,” in What Should We Be Worried About? edited by John Brockman (New York: Harper, 2014), p. 401.

278Notes 27. Gainous and Wagner, Tweeting to Power, p. 109. 28. Eli Pariser, The Filter Bubble: What the Internet Is Hiding From You (New York: Penguin Press, 2011), p. 4; Kakutani, The Death of Truth, pp. 116–117. 29. P. W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018), p. 121. 30. Jennifer Kavanagh et al., News in a Digital Age: Comparing the Presentation of News Information over Time and Across Media Platforms (Santa Monica, CA: Rand Corporation, 2019). Online at: ­­http://​­www​.­rand​.­org​/­t​/­RR2960, p. 3. 31. Ibid., p. 26. 32. Ibid., p. 28. 33. Ibid., pp. xvii–xix. 34. Elisa Shearer and Jeffrey Gottfried, “News Use Across Social Media Platforms 2017,” Pew Research Center (September 7, 2017). Online at: ­­https://​­www​ .­journalism​.­org​/­2017​/­09​/­07​/­news​-­use​-­across​-­social​-­media​-­platforms​-­2017​/ 35. McIntyre, Post-Truth, p. 93. 36. Ryan Holiday, Trust Me, I’m Lying: Confessions of a Media Manipulator, Revised & Updated edition (New York: Portfolio/Penguin, 2017). 37. Samantha Bradshaw and Philip N. Howard, “Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation,” Working Paper 2018.1. Oxford, UK: Project on Computational Propaganda (July 20, 2018), p. 7. 38. Singer and Brooking, LikeWar, p. 58. 39. Jarol B. Manheim, Strategy in Information and Influence Campaigns (New York: Routledge, 2011), p. 52. 40. Singer and Brooking, LikeWar, p. 57. 41. Manheim, Strategy in Information and Influence Campaigns, p. 51. 42. Pariser, The Filter Bubble, p. 3; Kakutani, The Death of Truth, pp. 116–117. 43. For more on this, please see James J.F. Forest, Digital Influence Mercenaries: Profits and Power through Information Warfare (Annapolis, MD: Naval Institute Press, 2021). 44. O’Connor and Weatherall, The Misinformation Age, p. 16. 45. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 118. 46. Philip G. Zimbardo, Ebbe B. Ebbesen, and Christina Maslach, Influencing Attitudes and Changing Behavior, Second edition (New York: Random House, 1977), pp. 104–105. 47. Ibid., pp. 104–105. 48. Naomi Oreskes and Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (New York: Bloomsbury Press, 2010), p. 57. 49. Tom Nichols, The Death of Expertise: The Campaign against Established Know­ ledge and Why It Matters (London: Oxford University Press, 2017), p. 146. 50. Pratkanis and Aronson, Age of Propaganda, pp. 58–65; citing L.M. Janes and J.M. Olson, “Jeer Pressures: The Behavioral Effects of Observing Ridicule of Oth­ ers,” Personality and Social Psychology Bulletin 26(2000), pp. 474–485. 51. Pratkanis and Aronson, Age of Propaganda, p. 66. 52. Pew Research Center, “Cable TV at a Crossroads” (August 14, 2006). Online at: ­­https://​­www​.­journalism​.­org​/­2006​/­08​/­14​/­cable​-­tv​-­at​-­a​-­crossroads​/ 53. Amy Mitchell et al., “Political Polarization and Media Habits,” Pew Research Center (October 21, 2014). Online at: ­­https://​­www​.­journalism​.­org​/­2014​/­10​/­21​ /­political​-­polarization​-­media​-­habits​/

Notes279 54. “Fox News Viewed as Most Ideological Network,” Pew Research Center, News Interest Index (October 29, 2009). Online at: ­­https://​­www​.­people​-­press​.­org​ /­2009​/­10​/­29​/­fox​-­news​-­viewed​-­as​-­most​-­ideological​-­network​/ 55. Susan Jacoby, The Age of American Unreason (New York: Vintage Books, 2009), p. xviii. 56. Pratkanis and Aronson, Age of Propaganda, p. 273. 57. Ibid., p. 87. 58. Pew Research Center, “American News Pathways Project” (October 2020). Online at: ­­https://​­www​.­pewresearch​.­org​/­pathways​-­2020​/­COVIDTRUMPMSSG​ /­main​_source​_of​_election​_news​/­us​_adults 59. Paul Farhi, “Whatever Happened to Breitbart?” Washington Post (July 2, 2019).  Online  at: ­­https://​­www​.­washingtonpost​.­com​/­lifestyle​/­style​/­whatever​ -­happened​-­to​-­breitbart​-­the​-­insurgent​-­star​-­of​-­the​-­right​-­is​-­in​-­a​-­long​-­slow​-­fade​ /­2019​/­07​/­02​/­c8f501a2​-­9cde​-­11e9​-­85d6​-­5211733f92c7​_story​.­html 60. Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman, “Study: Breitbart-led Right-Wing Media Ecosystem Altered Broader Media Agenda,” Columbia Journalism Review (March 3, 2017). Online at: ­­https://​­www​.­cjr​.­org​/­analysis​ /­breitbart​-­media​-­trump​-­harvard​-­study​.­php 61. Kakutani, The Death of Truth, p. 108, citing Pew Research Center, “Sharp Partisan Divisions in Views of National Institutions” (July 10, 2017). Online at: ­­https://​­www​.­people​-­press​.­org​/­2017​/­07​/­10​/­sharp​-­partisan​-­divisions​-­in​-­views​ -­of​-­national​-­institutions​/ 62. Alice Marwick and Rebecca Lewis, “Media Manipulation and Disinformation Online,” Data & Society (May 2017), p. 38. Online at: ­­https://​­datasociety​.­net​ /­output​/­media​-­manipulation​-­and​-­disinfo​-­online​/; citing Art Swift, “Americans’ Trust in Mass Media Sinks to New Low,” Gallup, September 14, 2016, http://​­ ­­ www​ .­gallup​.­com​/­poll​/­195542​/­americans​-­trust​-­mass​-­media​-­sinks​-­new​-­low​.­aspx; and Benkler et al., “Study: Breitbart-Led Right-Wing Media Ecosystem Altered Broader Media Agenda.” 63. McIntyre, Post-Truth, p. 70, citing Shauna Theel et al., “Study: Media Sowed Doubt in Coverage of UN Climate Report,” Media Matters (October 10, 2013). Online at ­­https://​­www​.­mediamatters​.­org​/­washington​-­post​/­study​-­media​-­sowed​ -­doubt​-­coverage​-­un​-­climate​-­report 64. Ted Koppel, “Olbermann, O’Reilly and the Death of Real News,” Washington Post (November 14, 2010). Online at: ­­http://​­www​.­washingtonpost​.­com​/­wp​-­dyn​ /­content​/­article​/­2010​/­11​/­12​/­AR2010111202857​.­html 65. Manheim, Strategy in Information and Influence Campaigns, pp. 75–76. 66. Pratkanis and Aronson, Age of Propaganda, p. 282. 67. Kakutani, The Death of Truth, pp. 111–112. 68. Antonia Noori Farzan, “A Library Wanted a New York Times Subscription. Officials Refused, Citing Trump and ‘Fake News,’” Washington Post (November 5, 2019).  Online  at: ­­https://​­www​.­washingtonpost​.­com​/­nation​/­2019​/­11​/­05​/­new​ -­york​-­times​-­citrus​-­county​-­florida​-­library​-­subscription​-­rejected​-­fake​-­news​/ 69. “Trump Supporter Attacks BBC Cameraman at El Paso Rally,” BBC News (February 12, 2019). Online at: ­­https://​­www​.­bbc​.­com​/­news​/­world​-­us​-­canada​ -­47208909; Asher Stockler, “Trump Supporter Charged with Assault on Orlando Sentinel Journalist Covering President’s 2020 Rally,” Newsweek (June 19, 2019). Online at:

280Notes ­­https://​­www​.­newsweek​.­com​/­trump​-­supporter​-­arrested​-­assault​-­journalist​-­rally ​-­1444834 70. Charles Sykes, “How the Right Lost Its Mind and Embraced Donald Trump,” Newsweek (September 21, 2017). Online at: ­­https://​­www​.­newsweek​.­com​/­2017​/­09​ /­29​/­right​-­lost​-­mind​-­embraced​-­donald​-­trump​-­668180​.­html; and Charles Sykes, “Charlie Sykes on Where the Right Went Wrong,” New York Times (December 15, 2016). Online at: ­­https://​­www​.­nytimes​.­com​/­2016​/­12​/­15​/­opinion​/­sunday​ /­charlie​-­sykes​-­on​-­where​-­the​-­right​-­went​-­wrong​.­html; cited in Kakutani, The Death of Truth, p. 112. 71. Sykes, How the Right Lost Its Mind, p. 180; Kakutani, The Death of Truth, p. 115. 72. Pratkanis and Aronson, Age of Propaganda, p. 201. 73. Kakutani, The Death of Truth, p. 112. Citing Benkler et al., “Study: BreitbartLed Right-Wing Media Ecosystem Altered Broader Media Agenda.” 74. Gainous and Wagner, Tweeting to Power, p. 14. 75. Kakutani, The Death of Truth, p. 117. 76. Alexander Nazaryan, “John McCain Cancer Is ‘Godly Justice’ for Challenging Trump, Alt-Right Claims,” Newsweek (July 22, 2017). Online at: ­­https://​­www​ .­newsweek​.­com​/­mccain​-­cancer​-­trump​-­supporters​-­terrible​-­639834; cited in Kakutani, The Death of Truth, p. 113. 77. Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, Radicalization in American Politics (London: Oxford University Press, 2018), p. 20. 78. Mitchell et al., “Political Polarization and Media Habits.” 79. McIntyre, Post-Truth, p. 57; citing Daniel Fessler, Anne Pisor, and Colin Holbrood, “Political Orientation Predicts Credulity Regarding Putative Hazards,” Psychological Science 28, no. 5 (March 2017). Online at: ­­https://​­www​.­researchgate​ .­net​/­publication​/­313243277​_Political​_Orientation​_Predicts​_Credulity​_Regarding​ _Putative​_Hazards 80. Gordon Pennycook and David Rand, “Lazy Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning,” Cognition 188 (June 2018), p. 47. https:/­­doi​.­org​/­10​.­1016​/­j​.­cognition​ .­2018​.­06​.­011 81. Kate Starbird, “The Surprising Nuance Behind the Russian Troll Strategy,” Medium (October 20, 2018). Online at: ­­https://​­medium​.­com​/­s​/­story​/­the​-­trolls​ -­w ithin​ -­h ow​ -­russian​ -­i nformation​ -­o perations​ -­i nfiltrated​ -­o nline​ -­c ommunities​ -­691fb969b9e4 82. Sean Illing, “‘A Giant Fog Machine’: How Right-Wing Media Obscures Mueller and Other Inconvenient Stories,” Vox (October 31, 2017). Online at: https://​­ ­­ www​.­vox​ .­com​/­2017​/­10​/­31​/­16579820​/­mueller​-­clinton​-­russia​-­uranium​-­manafort​-­charlie​-­sykes 83. Ibid. 84. For more on this, see John Sides, Michael Tesler, and Lynn Vavreck, Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America (Princeton, NJ: Princeton University Press, 2019); and Kathleen Hall Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President—What We Don’t, Can’t, and Do Know (London: Oxford University Press, 2018). 85. Amy Mitchell et al., “Distinguishing between Factual and Opinion Statements in the News,” Pew Research Center (June 18, 2018). Online at: https://​ ­­

www.journalism.org/2018/06/18/distinguishing-between-factual-and-opinion-statements-in-the-news/
86. Thomas L. Friedman, “Social Media: Destroyer or Creator,” New York Times (February 3, 2016), citing Wael Ghonim’s TED Talk. Online at: https://www.ted.com/talks/wael_ghonim_let_s_design_social_media_that_drives_real_change
87. Sanger, “Internet Silos,” p. 401.
88. Arlie Russell Hochschild, Strangers in Their Own Land: Anger and Mourning on the American Right (New York: The New Press, 2016).
89. McIntyre, Post-Truth, pp. 53–54.

CHAPTER 6 1. Stefan Simanowitz, “Turkey Ratchets up Crackdown at Home as Tanks Roll into Syria,” Amnesty International (October 31, 2019). Online at: https://​­ ­­ www​ .­amnesty​.­org​/­en​/­latest​/­news​/­2019​/­10​/­turkey​-­ratchets​-­up​-­crackdown​-­at​-­home​ -­as​-­tanks​-­roll​-­into​-­syria​/ 2. Rod Nordland, “Turkey’s Free Press Withers as Erdogan Jails 120 Journalists,” New York Times (November 17, 2016). Online at: ­­https://​­www​.­nytimes​.­com​/­2016​ /­11​/­18​/­world​/­europe​/­turkey​-­press​-­erdogan​-­coup​.­html; cited in P.W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018), p. 93. 3. Simanowitz, “Turkey Ratchets Up Crackdown at Home as Tanks Roll into Syria.” 4. Ibid. 5. “Jamal Khashoggi: All You Need to Know about Saudi Journalist’s Death,” BBC News (July 20, 2020). Online at: ­­https://​­www​.­bbc​.­com​/­news​/­world​-­europe​ -­45812399 6. For details of these and others, see David Filipov, “Here Are 10 Critics of Vladimir Putin Who Died Violently or in Suspicious Ways,” Washington Post (March 23, 2017). Online at: ­­https://​­www​.­washingtonpost​.­com​/­news​/­worldviews​/­wp​ /­2017​/­03​/­23​/­here​-­are​-­ten​-­critics​-­of​-­vladimir​-­putin​-­who​-­died​-­violently​-­or​-­in​ -­suspicious​-­ways​/ 7. Scott Simon, “Why Do Russian Journalists Keep Falling?” NPR (April 21, 2018). Online at: ­­https://​­www​.­npr​.­org​/­2018​/­04​/­21​/­604497554​/­why​-­do​-­russian​ -­journalists​-­keep​-­falling 8. Ibid. 9. Andrew Kuchins, “Dead Journalists in Putin’s Russia,” Center for Strategic and International Studies (March 19, 2007). Online at: https://​­ ­­ www​.­csis​.­org​ /­analysis​/­dead​-­journalists​-­putins​-­russia 10. For details of these and others, see Filipov, “Here Are 10 Critics of Vladimir Putin Who Died Violently or in Suspicious Ways.” 11. Corin Faife, “Iran’s ‘National Internet’ Offers Connectivity at the Cost of Censorship,” Vice (March 29, 2016). Online at: ­­https://​­www​.­vice​.­com​/­en​_us​/­article​ /­yp3pxg​/­irans​-­national​-­internet​-­offers​-­connectivity​-­at​-­the​-­cost​-­of​-­censorship; cited in Singer and Brooking, LikeWar, p. 93. 12. “Bahrain: Key Developments,” Freedom House (May 2015). Online at: ­­https://​ ­freedomhouse​.­org​/­report​/­freedom​-­net​/­2015​/­bahrain

13. Yasmeen Serhan, “A Death Penalty for Alleged Blasphemy on Social Media,” The Atlantic (June 12, 2017). Online at: https://www.theatlantic.com/news/archive/2017/06/pakistan-facebook-death-penalty/529968/; cited in Singer and Brooking, LikeWar, p. 93.
14. Singer and Brooking, LikeWar, p. 88.
15. Niha Masih, Shams Irfan, and Joanna Slater, “India’s Internet Shutdown in Kashmir Is the Longest Ever in a Democracy,” Washington Post (December 16, 2019). Online at: https://www.washingtonpost.com/world/asia_pacific/indias-internet-shutdown-in-kashmir-is-now-the-longest-ever-in-a-democracy/2019/12/15/bb0693ea-1dfc-11ea-977a-15a6710ed6da_story.html
16. Singer and Brooking, LikeWar, p. 89.
17. “Russia: New Law Expands Government Control Online,” Human Rights Watch (October 31, 2019). Online at: https://www.hrw.org/news/2019/10/31/russia-new-law-expands-government-control-online
18. Justin Sherman and Samuel Bendett, “Russia’s ‘Data Localization’ Efforts May Guide Other Governments,” Defense One (January 13, 2020). Online at: https://www.defenseone.com/ideas/2020/01/russias-data-localization-push-may-guide-other-governments/162380/
19. Soufan Center, “Saudi Arabia’s Relentless Campaign to Silence Its Critics,” IntelBrief (November 14, 2019). Online at: https://thesoufancenter.org/intelbrief-saudi-arabias-relentless-campaign-to-silence-its-critics/
20. Ben Elgin and Peter Robison, “How Despots Use Twitter to Hunt Dissidents,” Bloomberg (October 27, 2016). Online at: https://www.bloomberg.com/news/articles/2016-10-27/twitter-s-firehose-of-tweets-is-incredibly-valuable-and-just-as-dangerous; cited in Singer and Brooking, LikeWar, p. 93.
21. Soufan Center, “Saudi Arabia’s Relentless Campaign to Silence Its Critics”; Ellen Nakashima and Greg Bensinger, “Former Twitter Employees Charged with Spying for Saudi Arabia by Digging into the Accounts of Kingdom Critics,” Washington Post (November 6, 2019). Online at: https://www.washingtonpost.com/national-security/former-twitter-employees-charged-with-spying-for-saudi-arabia-by-digging-into-the-accounts-of-kingdom-critics/2019/11/06/2e9593da-00a0-11ea-8bab-0fc209e065a8_story.html
22. Soufan Center, “The Social Media Weapons of Authoritarian States,” IntelBrief (September 13, 2019). Online at: https://thesoufancenter.org/intelbrief-the-social-media-weapons-of-authoritarian-states/
23. Tim Wu, “Is the First Amendment Obsolete?” Knight First Amendment Institute, Columbia University, September 2017. Online at: https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete; cited in Peter Pomerantsev, This Is Not Propaganda (New York: Public Affairs, 2018), p. 27.
24. Peter Pomerantsev, “The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money,” The Interpreter (November 22, 2014). Online at: http://www.interpretermag.com/the-menace-of-unreality-how-the-kremlin-weaponizes-information-culture-and-money
25. Soufan Center, “The Social Media Weapons of Authoritarian States.”
26. Hannah Arendt, The Origins of Totalitarianism (New York: Harcourt Brace, 1951), p. 474.
27. Timothy Snyder, On Tyranny: Twenty Lessons from the Twentieth Century (New York: Tim Duggan Books, 2017), p. 65.

Notes283 28. Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), p. 96, citing Masha Gessen, “The Putin Paradigm,” The New York Review (December 13, 2016). Online at: ­­https://​­www​ .­nybooks​.­com​/­daily​/­2016​/­12​/­13​/­putin​-­paradigm​-­how​-­trump​-­will​-­rule​/ 29. Singer and Brooking, LikeWar, p. 263. 30. Kakutani, The Death of Truth, p. 131. 31. Larry M. Wortzel, “The Chinese People’s Liberation Army and Information Warfare,” Strategic Studies Institute and US Army War College Press (March 5, 2014), pp. 29–30. Online at: ­­https://​­publications​.­armywarcollege​.­edu​/­pubs​/­2263​ .­pdf. Note, according to Wortzel, a direct translation of yulun is “public opinion”; thus, in many English translations, the term “public opinion warfare” is used. In some PLA translations of book titles and articles, however, it is called “media warfare.” 32. Renee Diresta et al., Telling China’s Story: The Chinese Communist Party’s Campaign to Shape Global Narratives. Stanford Internet Observatory and Hoover Institution, Stanford University (July 20, 2020), p. 7. Online at: ­­https://​­cyber​.­fsi​.­stanford​ .­edu​/­io​/­news​/­new​-­whitepaper​-­telling​-­chinas​-­story 33. Laura Jackson, “Revisions of Reality: The Three Warfares—China’s New Way of War,” Beyond Propaganda report, Information at War, Legatum Institute, Transitions Forum (September 2015), pp. 5–6. Online at: ­­https://​­li​.­com​/­wp​-­content​ /­uploads​/­2015​/­09​/­information​-­at​-­war​-­from​-­china​-­s​-­three​-­warfares​-­to​-­nato​-­s​ -­narratives​-­pdf​.­pdf 34. Wortzel, “The Chinese People’s Liberation Army and Information Warfare.” 35. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 75; and Ai Weiwei, “China’s Paid Trolls: Meet the 50-Cent Party,” New Statesman (October 17, 2012). Online at: https://​­ ­­ www​ .­newstatesman​.­com​/­politics​/­politics​/­2012​/­10​/­china​%­E2​%­80​%­99s​-­paid​-­trolls​ -­meet​-­50​-­cent​-­party 36. Diresta et al., Telling China’s Story. 37. Congressional—Executive Commission on China, Freedom of Expression, Speech, and the Press (December 17, 2006). Online at: ­­https://​­www​.­cecc​.­gov​/­pages​ /­virtualAcad​/­exp​/ 38. Diresta et al., Telling China’s Story. 39. PEN America, Forbidden Feeds: Government Controls on Social Media in China (Los Angeles, CA: PEN America, 2019), pp. 8–12. Online at: ­­ https://​­ pen​ .­ org​ /­research​-­resources​/­forbidden​-­feeds​/ 40. “China Imprisoned More Journalists Than Any Other Country in 2019: CPJ,” Reuters (December 11, 2019). Online at: ­­https://​­www​.­reuters​.­com​/­article​/­us​ -­global​-­rights​-­journalists​-­graphic​/­china​-­imprisoned​-­more​-­journalists​-­than​-­any​ -­other​-­country​-­in​-­2019​-­cpj​-­idUSKBN1YF0KA 41. Steve Stecklow, “Refinitiv Deployed Filter to Block Reuters Reports as Hong Kong Protests Raged,” A Reuters Special Report (December 12, 2019). Online at: ­­https://​­www​.­reuters​.­com​/­investigates​/­special​-­report​/­hongkong​-­protests​-­media​/ 42. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment (Santa Monica, CA: Rand Corporation, 2019), p. 123. China’s censorship efforts in this area are recounted in Evan Osnos, “Making China Great Again,” The New Yorker, January 8, 2018.

284Notes 43. Congressional—Executive Commission on China, Monitoring Compliance with Human Rights (2006). Online at: ­­http://​­www​.­cecc​.­gov​/­pages​/­annualRpt​ /­annualRpt06​/­Expression​.­php​?­PHPSESSID​=​­b9ccfd027b737f95161bbecb17e36557​ #­govlicb 44. Zixue Tai, “Casting the Ubiquitous Net of Information Control: Internet Surveillance in China from Golden Shield to Green Dam,” International Journal of Advanced Pervasive and Ubiquitous Computing 2, no. 1 (2010), p. 239; and Singer and Brooking, LikeWar, pp. 96–97. 45. Diresta et al., Telling China’s Story. 46. Singer and Brooking, LikeWar, p. 98, citing: David Wertime, “Chinese Websites Deleted One Billion Posts in 2014, State Media Says,” Tea Leaf Nation (blog) Foreign Policy (January 17, 2015). Online at: ­­https://​­foreignpolicy​.­com​/­2015​ /­01​/­17​/­chinese​-­websites​-­deleted​-­one​-­billion​-­posts​-­in​-­2014​-­state​-­media​-­says​/; Nikhail Sonnad, “261 Ways to Refer to the Tiananmen Square Massacre in China,” Quartz (June 3, 2016). Online at: ­­https://​­qz​.­com​/­698990​/­261​-­ways​-­to​-­refer​-­to​ -­the​-­tiananmen​-­square​-­massacre​-­in​-­china​/; and Malcolm Moore, “Tiananmen Massacre 25th Anniversary: The Silencing Campaign,” The Telegraph (May 18, 2014).  Online  at: ­­https://​­www​.­telegraph​.­co​.­uk​/­news​/­worldnews​/­asia​/­china​ /­10837992​/­Tiananmen​-­Massacre​-­25th​-­anniversary​-­the​-­silencing​-­campaign​.­html 47. PEN America, Forbidden Feeds, p. 7. 48. Tai, “Casting the Ubiquitous Net of Information Control: Internet Surveillance in China from Golden Shield to Green Dam,” p. 239; and Singer and Brooking, LikeWar, pp. 96–97. 49. PEN America, Forbidden Feeds, p. 4. 50. Tai, “Casting the Ubiquitous Net of Information Control: Internet Surveillance in China from Golden Shield to Green Dam,” p. 239; and Singer and Brooking, LikeWar, pp. 96–97. 51. Singer and Brooking, LikeWar, p. 99, citing: Gary King, Jennifer Pan, and Margaret E. Roberts, “How Censorship in China Allows Government Criticism but Silences Collective Expression,” American Political Science Review 107, no. 2 (2013), pp. 1–18. Online at: ­­https://​­gking​.­harvard​.­edu​/­files​/­gking​/­files​/­50c​.­pdf 52. Singer and Brooking, LikeWar, p. 99. Citing: “China Threatens Tough Punishment for Online Rumor Spreading,” Reuters (September 9, 2013). Online at: ­­h ttps://​­ w ww​.­reuters​ .­c om​ /­a rticle​ /­u s​ -­c hina​ -­i nternet​ /­c hina​ -­t hreatens​ -­t ough​ -­punishment​-­for​-­online​-­rumor​-­spreading​-­idUSBRE9880CQ20130909 53. King et al., “How Censorship in China Allows Government Criticism but Silences Collective Expression,” pp. 1–18. Online at: ­­https://​­gking​.­harvard​.­edu​ /­files​/­gking​/­files​/­50c​.­pdf; cited in Singer and Brooking, LikeWar, p. 100. 54. Soufan Center, “The Social Media Weapons of Authoritarian States.” 55. Diresta et al., Telling China’s Story. 56. Adam Segal, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age (New York: PublicAffairs, 2016), p. 215. 57. King et al., “How Censorship in China Allows Government Criticism but Silences Collective Expression”; quoted in Pomerantsev, This Is Not Propaganda, p. 189. 58. Kerry Allen, “China Internet: Top Talking Points of 2019 and How They Evaded the Censors,” BBC (December 31, 2019). Online at: ­­https://​­www​.­bbc​.­com​ /­news​/­world​-­asia​-­china​-­50859829 59. Ibid.

Notes285 60. Esther Chan and Rachel Blundy, “‘Bulletproof’ China-Backed Site Attacks HK Democracy Activists,” AFP (Agence France Presse) (November 1, 2019). Online at: ­­https://​­news​.­yahoo​.­com​/­bulletproof​-­china​-­backed​-­attacks​-­hk​-­democracy​-­activists​ -­070013463​.­html 61. Shelly Banjo, “Facebook, Twitter and the Digital Disinformation Mess,” Washington Post (October 31, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​ /­business​/­facebook​-­twitter​-­and​-­the​-­digital​-­disinformation​-­mess​/­2019​/­10​/­31​ /­3f81647c​-­fbd1​-­11e9​-­9e02​-­1d45cb3dfa8f​_story​.­html 62. Shane Huntley, “Maintaining the Integrity of Our Platforms,” Google Threat Analysis Group (August 22, 2019). Online at: ­­https://​­www​.­blog​.­google​/­outreach​ -­initiatives​/­public​-­policy​/­maintaining​-­integrity​-­our​-­platforms​/ 63. Singer and Brooking, LikeWar, p. 101; citing: “Planning Outline for the Construction of a Social Credit System (2014–2020),” China Copyright and Media (April 25, 2015). Online at: ­­https://​­chinacopyrightandmedia​.­wordpress​ .­c om​/­2 014​/­0 6​/­1 4​/­p lanning​-­o utline​-­f or​-­t he​-­c onstruction​-­o f​-­a​-­s ocial​-­c redit​ -­system​-­2014​-­2020​/; see also Bruce Sterling, “Chinese Planning Outline for a Social Credit System,” Wired (June 3, 2015). Online at: ­­https://​­www​.­wired​ .­c om​/­b eyond​-­t he​-­b eyond​/­2 015​/­0 6​/­c hinese​-­p lanning​-­o utline​-­s ocial​-­c redit​ -­system​/ 64. Jacob Silverman, “China’s Troubling New Social Credit System—and Ours,” New Republic (October 29, 2015). Online at: ­­https://​­newrepublic​.­com​/­article​ /­123285​/­chinas​-­troubling​-­new​-­social​-­credit​-­system​-­and​-­ours; cited in Singer and Brooking, Likewar, p. 101; and “Planning Outline for the Construction of a Social Credit System (2014–2020).” 65. Rosie Perper, “Chinese Government Forces People to Scan Their Face Before They Can Use Internet as Surveillance Efforts Mount,” Business Insider (December 2, 2019). Online at: ­­https://​­www​.­businessinsider​.­com​/­china​-­to​-­require​-­facial​-­id​ -­for​-­internet​-­and​-­mobile​-­services​-­2019​-­10 66. Allen, “China Internet.” 67. Perper, “Chinese Government Forces People to Scan Their Face Before They Can Use Internet as Surveillance Efforts Mount.” 68. Singer and Brooking, LikeWar, p. 101; citing: Jonah M. Kessel and Paul Mozur, “How China Is Changing Your Internet,” New York Times (August 9, 2016). Online at: ­­https://​­www​.­nytimes​.­com​/­video​/­technology​/­100000004574648​/­china​ -­internet​-­wechat​.­html 69. Singer and Brooking, LikeWar, p. 102; citing: Celia Hatton, “China’s ‘Social Credit’: Beijing Sets Up Huge System,” BBC News (October 26, 2015). Online at: ­­https://​­www​.­bbc​.­com​/­news​/­world​-­asia​-­china​-­34592186 70. Singer and Brooking, LikeWar, p. 101; citing: Clinton Nguyen, “China Might Use Data to Create a Score for Each Citizen Based on How Trustworthy They Are,” Business Insider (October 26, 2016). Online at: ­­https://​­www​.­businessinsider​.­com​ /­china​-­social​-­credit​-­score​-­like​-­black​-­mirror​-­2016​-­10 71. Singer and Brooking, LikeWar, p. 94; citing: “Planning Outline for the Construction of a Social Credit System (2014–2020).” 72. Ibid. 73. Allen, “China Internet.” 74. Singer and Brooking, LikeWar, p. 101; citing: Oiwan Lam, “China’s Xinjiang Residents Are Being Forced to Install Surveillance Apps on Mobile Phones,”

286Notes Vice (July 19, 2017). Online at: ­­https://​­www​.­vice​.­com​/­en​_us​/­article​/­ne94dg​ /­jingwang​-­app​-­no​-­encryption​-­china​-­force​-­install​-­urumqi​-­xinjiang 75. Perper, “Chinese Government Forces People to Scan Their Face Before They Can Use Internet as Surveillance Efforts Mount”; The announcement referenced is online at: ­­http://​­www​.­miit​.­gov​.­cn​/­n1146285​/­n1146352​/­n3054355​/­n3057724​ /­n3057728​/­c7448683​/­content​.­html 76. PEN America, Forbidden Feeds, p. 4. 77. Pomerantsev, This Is Not Propaganda, p. 85. 78. Information Security Doctrine of the Russian Federation (Approved by President of the Russian Federation Vladimir Putin on September 9, 2000). Online at: ­­https://​­www​.­itu​.­int​/­en​/­ITU​-­D​/­Cybersecurity​/­Documents​/­National​_Strategies​ _Repository​/­Russia​_2000​.­pdf 79. Sherman and Bendett, “Russia’s ‘Data Localization’ Efforts May Guide Other Governments.” 80. “Disinformation Review: Twenty Years of Distorting the Media,” EU vs. Disinfo (January 10, 2020). Online at: ­­https://​­www​.­stopfake​.­org​/­en​/­disinformation​ -­review​-­twenty​-­years​-­of​-­distorting​-­the​-­media​/ 81. Simon Shuster, “Russia Today: Inside Putin’s On-Air Machine,” Time (March 5, 2015). Online at: ­­http://​­time​.­com​/­rt​-­putin​/; cited in Singer and Brooking, LikeWar, p. 107. 82. Matthew Armstrong, “RT as a Foreign Agent: Political Propaganda in a Globalized World,” War on the Rocks (May 4, 2015). Online at: https://​­ ­­ warontherocks​ .­c om​ /­2 015​ /­0 5​ /­r t​ -­a s​ -­a​ -­f oreign​ -­a gent​ -­p olitical​ -­p ropaganda​ -­i n​ -­a​ -­g lobalized​ -­world​/ 83. Singer and Brooking, LikeWar, p. 94; citing Amar Toor, “How Putin’s Cronies Seized Control of Russia’s Facebook,” The Verge (January 31, 2014). Online at: ­­https://​­www​.­theverge​.­com​/­2014​/­1​/­31​/­5363990​/­how​-­putins​-­cronies​-­seized​ -­control​-­over​-­russias​-­facebook​-­pavel​-­durov​-­vk 84. It should be noted that in August 2020, Alexei Navalny was poisoned and hospitalized in critical condition. 85. Pomerantsev, This Is Not Propaganda, p. 24. 86. Source: Adrian Chen, “The Agency,” New York Times Magazine (June 2, 2015). Online at: ­­https://​­www​.­nytimes​.­com​/­2015​/­06​/­07​/­magazine​/­the​-­agency​.­html 87. Tetyana Lokot, “Hard Labor for Woman Who Reposted Online Criticism of Russia’s Actions in Ukraine,” Global Voices (February 22, 2016). Online at: ­­https://​ ­www​.­stopfake​.­org​/­en​/­hard​-­labor​-­for​-­woman​-­who​-­reposted​-­online​-­criticism​-­of​ -­russia​-­s​-­actions​-­in​-­ukraine​/; cited in Singer and Brooking, LikeWar, p. 94. 88. “Russia: New Law Expands Government Control Online.” 89. Ibid. 90. Ibid. 91. Sherman and Bendett, “Russia’s ‘Data Localization’ Efforts May Guide Other Governments.” 92. Kakutani, The Death of Truth, p. 141; citing Christopher Paul and Miriam Matthews, The Russian ‘Firehose of Falsehood’ Propaganda Model (Rand Corporation, 2016), pp. 1–5. Online at: ­­https://​­www​.­rand​.­org​/­pubs​/­perspectives​/­PE198​.­html 93. The Russian ‘Firehose of Falsehood’ Propaganda Model, pp. 1–5. 94. “Disinformation Review.”

Notes287 95. Pomerantsev, This Is Not Propaganda, p. 89. 96. Anton Sobolev, “How Pro-Government ‘Trolls’ Influence Online Conversations in Russia” (January 2019). Online at: ­­http://​­www​.­asobolev​.­com​/­files​/­Anton​ -­Sobolev​-­Trolls​.­pdf 97. Kakutani, The Death of Truth, pp. 142–143. 98. Pomerantsev, This Is Not Propaganda, pp. 22–23. 99. For more on this, please see James J.F. Forest, Digital Influence Mercenaries: Profits and Power through Information Warfare (Annapolis, MD: Naval Institute Press, 2021). 100. Samantha Bradshaw and Philip N. Howard, Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation, Working Paper 2018.1. Oxford: Project on Computational Propaganda (July 20, 2018), p. 3. Online at: ­­https://​­blogs​.­oii​.­ox​.­ac​.­uk​/­comprop​/­research​/­cybertroops2018​/ 101. See the discussion about “Attention Scarcity” in Wu, “Is the First Amendment Obsolete?” 102. Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Henry Holt and Company, 1992), pp. 213–214. 103. Paraphrased from Pratkanis and Aronson, Age of Propaganda. 104. Lee McIntyre, Post-Truth (Cambridge, MA: MIT Press, 2018), p. 148. According to the Washington Post Fact Checher, by the end of his term Donald Trump had told “30,573 untruths” during his presidency. Glenn Kessler et al., “A Term of Untruths,” The Washington Post (January 23, 2021). Online at: https://​­ ­­ www​ .­w ashingtonpost​ .­c om​ /­p olitics​ /­i nteractive​ /­2 021​ /­t imeline​ -­t rump​ -­c laims​ -­a s​ -­president​/ 105. Wu, “Is the First Amendment Obsolete?” 106. Jon Keegan, “Blue Feed, Red Feed: See Liberal Facebook and Conservative Facebook, Side by Side,” The Wall Street Journal (May 18, 2016). Online at: ­­https://​ ­graphics​.­wsj​.­com​/­blue​-­feed​-­red​-­feed​/ 107. For more on this, see John Sides, Michael Tesler, and Lynn Vavreck, Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America (Princeton, NJ: Princeton University Press, 2019); and Kathleen Hall Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President—What We Don’t, Can’t, and Do Know (London: Oxford University Press, 2018). 108. Friedrich Nietzsche, On Truth and Untruth: Selected Writings, translated and edited by Taylor Carman (New York: Harper Perennial, 2010), p. 24, cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 101. 109. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 101. 110. Charles Lord, Lee Ross, and Mark Lepper, “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence,” Journal of Personality and Social Psychology 37 (1979), pp. 2098–2109. ­­https://​ ­doi​.­org​/­10​.­1037​/­0022​-­3514​.­37​.­11​.­2098 111. Pratkanis and Aronson, Age of Propaganda: p. 49. 112. Howard, Lie Machines, p. 105; and Lisa Fazio, “Unbelievable News? Read It again and You Might Think It’s True,” The Conversation (December 5, 2016). Online at: ­­https://​­ t heconversation​ .­c om​ /­u nbelievable​ -­n ews​ -­read​ -­i t​ -­a gain​ -­a nd​ -­y ou​ -­might​-­think​-­its​-­true​-­69602; Lisa Fazio, Nadia M. Brashier, B. Keith Payne, and Elizabeth J. Marsh, “Knowledge Does Not Protect Against Illusory Truth,” Journal

288Notes of Experimental Psychology. General 144, no. 5 (October 2015), pp. 993–1002. Online at: ­­https://​­www​.­apa​.­org​/­pubs​/­journals​/­features​/­xge​-­0000098​.­pdf 113. Daniel A. Effron and Medha Raj, “Misinformation and Morality: Encountering Fake-News Headlines Makes Them Seem Less Unethical to Publish and Share,” Psychological Science (November 21, 2019). Online at: ­­https://​­doi​.­org​/­10​ .­1177​/­0956797619887896 114. Jarol B. Manheim, Strategy and Information in Influence Campaigns (New York: Routledge, 2011), p. 83. 115. Singer and Brooking, LikeWar, p. 103. 116. Gary Kasparov, in a tweet available online at ­­https://​­twitter​.­com​/­Kasparov63​ /­status​/­808750564284702720; cited in Kakutani, The Death of Truth, p. 143. 117. Pomerantsev, This Is Not Propaganda, pp. 48–49. 118. Howard, Lie Machines, p. 76. 119. Pomerantsev, This Is Not Propaganda, p. 58, citing Samuel C. Woolley and Douglas R. Guilbeault, “Computational Propaganda in the United States of America: Manufacturing Consensus Online,” Working Paper No. 2017.5. Project on Computational Propaganda. Online at: ­­http://​­comprop​.­oii​.­ox​.­ac​.­uk​/­wp​-­content​ /­uploads​/­sites​/­89​/­2017​/­06​/­Comprop​-­USA​.­pdf 120. Diego A. Martin and Jacob N. Shapiro, “Trends in Online Foreign Influence Efforts,” Woodrow Wilson School of Public and International Affairs, Princeton University (July 8, 2019), p. 10. Online at: ­­https://​­scholar​.­princeton​.­edu​/­jns​ /­research​-­reports; citing: G. King, J. Pan, and M.E. Roberts, “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument,” American Political Science Review 111, no. 3 (2017), pp. 484–501; D. Stukal, S. Sanovich, J.A. Tucker, and R. Bonneau, “For Whom the Bot Tolls: A Neural Networks Approach to Measuring Political Orientation of Twitter Bots in Russia,” SAGE Open 9, no. 2 (2019); and M. Zhuang, “Intergovernmental Conflict and Censorship: Evidence from China’s Anti-Corruption Campaign” (April 25, 2019). ­­https://​­doi​.­org​/­10​.­2139​/­ssrn​.­3267445 121. Alex Romero, “An Ecosystem of Mistrust and Disinformation,” Disinfo Portal (June 19, 2019). Online at: ­­https://​­disinfoportal​.­org​/­an​-­ecosystem​-­of​-­mistrust​ -­and​-­disinformation​/

CHAPTER 7
1. Craig Timberg et al., “Russian Ads, Now Publicly Released, Show Sophistication of Influence Campaign,” Washington Post (November 1, 2017). Online at: https://www.washingtonpost.com/business/technology/russian-ads-now-publicly-released-show-sophistication-of-influence-campaign/2017/11/01/d26aead2-bf1b-11e7-8444-a0d4f04b89eb_story.html; Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), p. 129.
2. Eric Tucker, “U.S. Officials: Russia Behind Spread of Virus Disinformation,” Associated Press (July 29, 2020). Online at: https://apnews.com/3acb089e6a333e051dbc4a465cb68ee1
3. For more on this, please see James J.F. Forest, Digital Influence Mercenaries: Profits and Power through Information Warfare (Annapolis, MD: Naval Institute Press, 2021).
4. Ibid.

Notes289 5. Matthew Rosenberg, Nicole Perlroth, and David E. Sanger, “‘Chaos Is the Point’: Russian Hackers and Trolls Grow Stealthier in 2020,” New York Times (January 10, 2020). Online at: ­­https://​­www​.­nytimes​.­com​/­2020​/­01​/­10​/­us​/­politics​ /­russia​-­hacking​-­disinformation​-­election​.­html 6. Reuters, “Facebook Suspends Accounts Tied to Putin Ally for Political Meddling,” New York Post (October 30, 3019). Online at: ­­https://​­nypost​.­com​/­2019​/­10​ /­30​/­facebook​-­suspends​-­accounts​-­tied​-­to​-­putin​-­ally​-­for​-­political​-­meddling​/ 7. Elizabeth Culliford and Gabriella Borter, “Facebook’s Dilemma: How to Police Claims about Unproven COVID-19 Vaccines,” Reuters (August 7, 2020). Online at:­ ­h ttps://​­ w ww​.­reuters​ .­c om​ /­a rticle​ /­u s​ -­h ealth​ -­c oronavirus​ -­f acebook​ -­i nsight​ /­facebooks​​ -­dilemma​ -­how​ -­to​ -­police​ -­claims​ -­about​ -­unproven​ -­covid​ -­19​ -­vaccines​ -­idUSKCN2530I8 8. Ibid. 9. Ibid. 10. P.W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston: Houghton Mifflin Harcourt, 2018), pp. 263–264. 11. Philip N. Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (New Haven, CT: Yale University Press, 2020), p. 12. 12. Paraphrasing Brad J. Sagarin and Lynn Miller Henningsen, “Resistance to Influence,” in Oxford Handbook of Social Influence, edited by Stephen G. Harkins, Kipling D. Williams, and Jerry M. Burger (London: Oxford University Press), pp. 437–456. 13. Anthony R. Pratkanis and Elliot Aronson, Age of Propaganda: The Everyday Use and Abuse of Persuasion (New York: Henry Holt and Company, 1992), p. 87. 14. Liesl Yearsley, “We Need to Talk about the Power of AI to Manipulate Humans,” MIT Technology Review (June 5, 2017). Online at: ­­https://​­www​.­techno​ logyreview​.c­ om​/s­ ​/6­ 08036​/w ­ e​-n ­ eed​-t­ o​-t­ alk​-a­ bout​-t­ he​-p ­ ower​-o ­ f​-a­ i​-t­ o​-m ­ anipu​late​ -­humans​/ 15. For more on this, please see Forest, Digital Influence Mercenaries. 16. Clare Wardle, “First Draft’s Essential Guide to Understanding Information Disorder” (October 2019). Online at: ­­https://​­firstdraftnews​.­org​/­latest​/­information​ -­disorder​-­the​-­techniques​-­we​-­saw​-­in​-­2016​-­have​-­evolved​/ 17. Joshua Tucker et al., “Social Media Political Polarization and Political Disinformation: A Review of the Scientific Literature,” Hewlett Foundation (March 2018).  Online  at: ­­https:// ​­ w ww​.­h ewlett​ .­o rg​ /­w p​ -­c ontent​ /­u ploads​ /­2 018​ /­0 3​ /­Social​-­Media​-­Political​-­Polarization​-­and​-­Political​-­Disinformation​-­Literature​ -­Review​.­pdf 18. “Threatcasting Report: What Is Weaponized Narrative?” Arizona State University: Weaponized Narrative Initiative (2019), p. 16. Online at: ­­ https://​ ­weaponizednarrative​.­asu​.­edu​/ 19. Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare (Santa Monica, CA: Rand Corporation, 2019), pp. 65–66. 20. For instance, see Rob Price, “AI and CGI Will Transform Information Warfare, Boost Hoaxes, and Escalate Revenge Porn,” Business Insider (August 12, 2017); cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 87. 21. Will Knight, “Fake America Great Again: Inside the Race to Catch the Worryingly Real Fakes That Can Be Made Using Artificial Intelligence,” Technology Review

290Notes (August 17, 2018). Online at: ­­https://​­www​.­technologyreview​.­com​/­s​/­611810​/­fake​ -­america​-­great​-­again​/; also, for some examples of realistic Instagram memes created by powerful computer graphics equipment combined with AI, see: https://​ ­­ ­www​.­instagram​.­com​/­the​_fakening​/ 22. Avi Selk, “This Audio Clip of a Robot as Trump May Prelude a Future of Fake Human Voices,” Washington Post (May 3, 2017); Bahar Gholipour, “New AI Tech Can Mimic Any Voice,” Scientific American (May 2, 2017); cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, pp. 85–86. 23. “Imitating People’s Speech Patterns Precisely Could Bring Trouble,” The Economist (April 20, 2017); cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 86. 24. “Fake News: You Ain’t Seen Nothing Yet,” The Economist (July 1, 2017); Faizan Shaikh, “Introductory Guide to Generative Adversarial Networks (GANs) and Their Promise!” Analytics Vidhya (June 15, 2017); cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 88. 25. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, pp. 69–70. 26. James Vlahos, “Fighting Words,” Wired 26, no. 3 (March 2018). Online at: ­­https://​­www​.­questia​.­com​/­magazine​/­1P4​-­2006858459​/­fighting​-­words; cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 71. 27. Peter Stone et al., Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence (Stanford, CA: Stanford University, 2016), pp. 14–17; cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, p. 69. 28. Singer and Brooking, LikeWar, p. 257. 29. David Ingram and Jacob Ward, “Digitally Altered ‘Deepfake’ Videos a Growing Threat as 2020 Election Approaches,” NBC News (December 14, 2019). Online at: ­­https://​­www​.­nbcnews​.­com​/­tech​/­tech​-­news​/­little​-­tells​-­why​-­battle​-­against​ -­deepfakes​​-­2020​-­may​-­rely​-­verbal​-­n1102881 30. Adam Segal, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age (New York: Public Affairs, 2016), pp. 99–101; cited in Mazaar et al., The Emerging Risk of Virtual Societal Warfare, p. 80. 31. Mazarr et al., The Emerging Risk of Virtual Societal Warfare, pp. 77–80. 32. Rosenberg et al., “‘Chaos Is the Point’: Russian Hackers and Trolls Grow Stealthier in 2020.” 33. Ibid. 34. Ibid. 35. United States Senate, 116th Congress, “Report of the Senate Committee on Intelligence on Russian Active Measures Campaigns and Interference in the 2016 Election,” Volume 1, Report 116-XX (2019). Online at: ­­https://​­www​.­intelligence​ .­senate​.­gov​/­sites​/­default​/­files​/­documents​/­Report​_Volume1​.­pdf; and Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 5: Counterintelligence Threats and Vulnerabilities. Report of the Senate Committee on Intelligence, United States Senate (August 2020). Online at: ­­https://​­intelligence​.­senate​ .­gov​/­sites​/­default​/­files​/­documents​/­report​_volume5​.­pdf 36. See their website at: ­­https://​­www​.­darpa​.­mil​/­program​/­media​-­forensics 37. Statement of Christopher Wray before the Senate Judicial Committee (July 23, 2019), pp. 4–5. Online at: ­­https://​­www​.­judiciary​.­senate​.­gov​/­imo​/­media​/­doc​ /­Wray​%­20Testimony1​.­pdf

Notes291 38. Alex Hern, “Facebook Bans ‘Deepfake’ Videos in Run-Up to US Election,” The Guardian (January 7, 2020). Online at: ­­https://​­www​.­theguardian​.­com​/­technology​ /­2020​/­jan​/­07​/­facebook​-­bans​-­deepfake​-­videos​-­in​-­run​-­up​-­to​-­us​-­election; The Deep Fake Detection Dataset is available at ­­https://​­github​.­com​/­ondyari​/­FaceForensics​ /­tree​/­master​/d ­ ataset, together with instructions for downloading and using the data. 39. Elizabeth Dwoskin, “YouTube Is Changing Its Algorithms to Stop Recommending Conspiracies,” Washington Post (January 25, 2019). Online at: ­­https://​ ­www​.­washingtonpost​.­com​/­technology​/­2019​/­01​/­25​/­youtube​-­is​-­changing​-­its​ -­algorithms​-­stop​-­recommending​-­conspiracies​/ 40. Twitter, “Platform Manipulation and Spam Policy” (September 2019). Online at: ­­https://​­help​.­twitter​.­com​/­en​/­rules​-­and​-­policies​/­platform​-­manipulation 41. Kevin Roose and Kate Conger, “YouTube to Remove Thousands of Videos Pushing Extreme Views,” New York Times (June 5, 2019). Online at: https://​­ ­­ www​ .­nytimes​.­com​/­2019​/­06​/­05​/­business​/­youtube​-­remove​-­extremist​-­videos​.­html; and Elizabeth Dwoskin, “YouTube Will Remove More White Supremacist and Hoax Videos, a More Aggressive Stance on Hate Speech,” Washington Post (June 5, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​/­technology​/­2019​/­06​/­05​ /­youtube​-­will​-­remove​-­more​-­white​-­supremacist​-­hoax​-­videos​-­greater​-­hate​-­speech​ -­effort​/ 42. Nick Clegg, “Facebook Is Preparing for an Election Like No Other,” The Telegraph (June 17, 2020). Online at: ­­https://​­www​.­telegraph​.­co​.­uk​/­news​/­2020​/­06​/­17​ /­facebook​-­preparing​-­election​-­like​-­no​/ 43. Maggie Miller, “Report Highlights Instagram, Deepfake Videos as Key Dis­ information Threats in 2020 Elections,” The Hill (September 3, 2019). Online at: ­­https://​­thehill​.­com​/­regulation​/­cybersecurity​/­459492​-­report​-­highlights​-­instagram​ -­deepfake​-­videos​-­as​-­key​-­threats​-­in​-­2020 44. Instagram Press Release, “Reducing Inauthentic Activity on Instagram” (November 19, 2018). Online at: ­­https://​­instagram​-­press​.­com​/­blog​/­2018​/­11​/­19​ /­reducing​-­inauthentic​-­activity​-­on​-­instagram​/ 45. Mike Isaac, “Facebook Finds New Disinformation Campaigns and Braces for 2020 Torrent,” New York Times (October 21, 2019). Online at: ­­https://​­www​.­nytimes​ .­com​/­2019​/­10​/­21​/­technology​/­facebook​-­disinformation​-­russia​-­iran​.­html 46. See the Google Jigsaw Assembler online at: ­­https://​­jigsaw​.­google​.­com​ /­assembler​/ 47. Twitter, “Expanding Transparency around Political Ads on Twitter” (February 19, 2019). Online at: ­­https://​­blog​.­twitter​.­com​/­en​_us​/­topics​/­company​/­2019​ /­transparency​-­political​-­ads​.­html 48. Twitter, “Information Operations,” Report. Online at: ­­https://​­transparency​ .­twitter​.­com​/­en​/­information​-­operations​.­html 49. Eric Auchard and Joseph Menn, “Facebook Cracks Down on 30,000 Fake Accounts in France,” Reuters (April 13, 2017). Online at: ­­https://​­www​.­reuters​ .­com​/­article​/­us​-­france​-­security​-­facebook​/­facebook​-­cracks​-­down​-­on​-­30000​-­fake​ -­accounts​-­in​-­france​-­idUSKBN17F25G 50. Facebook, “August 2020 Coordinated Inauthentic Behavior Report” (September 1, 2020). Online at: ­­https://​­about​.­fb​.­com​/­news​/­2020​/­09​/­august​-­2020​-­cib​ -­report​/

292Notes 51. Davey Alba and Sheera Frenkel, “Russia Tests New Disinformation Tactics in Africa to Expand Influence,” New York Times (October 30, 2019). Online at: ­­https://​ ­www​.­nytimes​.­com​/­2019​/­10​/­30​/­technology​/­russia​-­facebook​-­disinformation​ -­africa​.­html 52. Mike Isaac and Kevin Roose, “Facebook Bars Alex Jones, Louis Farrakhan and Others from Its Services,” New York Times (May 2, 2019). Online at: https://​ ­­ ­www​.­nytimes​.­com​/­2019​/­05​/­02​/­technology​/­facebook​-­alex​-­jones​-­louis​-­farrakhan​ -­ban​.­html 53. Kate Conger and Jack Nicas, “Twitter Bars Alex Jones and Infowars, Citing Harassing Messages,” New York Times (September 6, 2018). Online at: ­­https://​ ­www​.­nytimes​.­com​/­2018​/­09​/­06​/­technology​/­twitter​-­alex​-­jones​-­infowars​.­html 54. Ben Collins and Brandy Zadrozny, “Twitter Bans 7,000 QAnon Accounts, Limits 150,000 Others as Part of Broad Crackdown,” NBC News (July 21, 2020). Online at: ­­https://​­www​.­nbcnews​.­com​/­tech​/­tech​-­news​/­twitter​-­bans​-­7​-­000​-­qanon​ -­accounts​-­limits​-­150​-­000​-­others​-­n1234541 55. Jane Wakefield, “Donald Trump Jr. Suspended from Tweeting after Covid Post,” BBC News (July 28, 2020). Online at: ­­https://​­www​.­bbc​.­com​/­news​/­technology​ -­53567681 56. For example, see “Facebook’s AI Wipes Terrorism-Related Posts,” BBC News (November 29, 2017). Online at: ­­http://​­www​.­bbc​.­co​.­uk​/­news​/­technology​ -­42158045; and James F. Peltz, “Twitter Says It Suspended 1.2 Million Accounts for Terrorism-Promotion Violations,” Los Angeles Times (April 5, 2018). Online at: ­­http://​­www​.­latimes​.­com​/­business​/­la​-­fi​-­twitter​-­terrorism​-­accounts​-­20180405​ -­story​.­html. Also for a discussion on the impact of such efforts, see Thomas Holt, Joshua Freilich, and Steven Chermak, “Can Taking Down Websites Really Stop Terrorists and Hate Groups?” VOX-Pol (November 29, 2017). Online at: ­­https://​ ­www​.­voxpol​.­eu​/­can​-­taking​-­websites​-­really​-­stop​-­terrorists​-­hate​-­groups​/ 57. Ingram and Ward, “Digitally Altered ‘Deepfake’ Videos a Growing Threat as 2020 Election Approaches.” 58. James Vincent, “Adobe’s Prototype AI Tool Automatically Spots Photoshopped Faces,” The Verge (June 14, 2019). Online at: ­­https://​­www​.­theverge​.­com​ /­2019​/­6​/­14​/­18678782​/­adobe​-­machine​-­learning​-­ai​-­tool​-­spot​-­fake​-­facial​-­edits​ -­liquify​-­manipulations 59. Online at: ­­https://​­securingdemocracy​.­gmfus​.­org​/­hamilton​-­dashboard​/ 60. See the university’s press release, at: ­­https://​­news​.­iu​.­edu​/­stories​/­2019​/­09​ /­iub​/­releases​/­12​-­botslayer​-­launch​.­html 61. Online at: ­­https://​­deeptracelabs​.­com​/ 62. Online at: ­­https://​­navigator​.­oii​.­ox​.­ac​.­uk​/ 63. Oxford Internet Institute (Press Release, December 10, 2019). Online at: ­­https://​­www​.­oii​.­ox​.­ac​.­uk​/­news​/­releases​/­oxford​-­experts​-­launch​-­new​-­online​ -­tool​-­to​-­help​-­fight​-­disinformation​/ 64. Online at: ­­https://​­www​.­disinfoportal​.­org and ­­https://​­www​.­atlanticcouncil​ .­org​/­issue​/­disinformation​/. The organization also distributes information about disinformation via their Twitter account, at ­­https://​­twitter​.­com​/­disinfoportal 65. For more information about PCIO, see ­­ https://​­ carnegieendowment​ .­ org​ /­specialprojects​/­counteringinfluenceoperations 66. Judd Legum, “Facebook Allows Pro-Trump Super PAC to Lie in Ads,” Popular Information (January 22, 2020). 
Online at: https://popular.info/p/facebook-allows-pro-trump-super-pac-6a9

Notes293 67. Tony Romm and Isaac Stanley-Becker, “Twitter to Ban All Political Ads Amid 2020 Election Uproar,” Washington Post (October 30, 2019). Online at: ­­https://​ ­www​.­washingtonpost​.­com​/­technology​/­2019​/­10​/­30​/­twitter​-­ban​-­all​-­political​-­ads​ -­amid​-­election​-­uproar​/ 68. Tony Romm, “Zuckerberg: Standing for Voice and Free Expression,” Washington Post (October 17, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​ /­technology​/­2019​/­10​/­17​/­zuckerberg​-­standing​-­voice​-­free​-­expression​/ 69. Gilad Edelman, “Why YouTube Won’t Ban Trump’s Misleading Ads about Biden,” Wired (December 3, 2019). Online at: ­­https://​­www​.­wired​.­com​/­story​ /­youtube​-­trump​-­biden​-­political​-­ads​/ 70. Romm, “Zuckerberg: Standing for Voice and Free Expression.” 71. Ibid. 72. Ibid. 73. Ibid. 74. Jeff Horowitz, “Facebook to Limit Political Ads Week Before Election, Label Premature Calls,” The Wall Street Journal (September 3, 2020). Online at: https://​ ­­ ­www​.­wsj​.­com​/­articles​/­facebook​-­to​-­limit​-­political​-­ads​-­week​-­before​-­election​-­label​ -­premature​-­calls​-­11599130800 75. Drew Harwell, “Facebook Acknowledges Pelosi Video Is Faked but Declines to Delete It,” Washington Post (May 24, 2019). Online at: https://​­ ­­ www​ .­washingtonpost​.­com​/­technology​/­2019​/­05​/­24​/­facebook​-­acknowledges​-­pelosi​ -­video​-­is​-­faked​-­declines​-­delete​-­it​/ 76. Alex Horton, “Facebook Defends Decision to Leave Up Fake Pelosi Video and Says Users Should Make Up Their Own Minds,” Washington Post (May 25, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​/­politics​/­2019​/­05​/­25​/­nancy​ -­pelosi​-­fake​-­video​-­facebook​-­defends​-­its​-­decision​-­not​-­delete​/ 77. Ben Nimmo et al., “Secondary Infektion,” Graphika (June 2020), Online at: ­­https://​­secondaryinfektion​.­org; Nike Aleksajeva et al., “Operation Secondary Infektion,” Atlantic Council Digital Forensic Research Lab (June 22, 2019). Online at: ­­https://​­www​.­atlanticcouncil​.­org​/­in​-­depth​-­research​-­reports​/­report​/­operation​ -­secondary​-­infektion​/; Ellen Nakashima and Craig Timberg, “Russian Disinformation Operation Relied on Forgeries, Fake Posts on 300 Platforms, New Report Says,” Washington Post (June 16, 2020). Online at: ­­https://​­www​.­washingtonpost​ .­com​/­national​-­security​/­russian​-­disinformation​-­operation​-­relied​-­on​-­forgeries​ -­fake​-­posts​-­on​-­300​-­platforms​-­new​-­report​-­says​/­2020​/­06​/­16​/­679f5b5c​-­ae8d​-­11ea​ -­8f56​-­63f38c990077​_story​.­html 78. Shelly Banjo, “Facebook, Twitter and the Digital Disinformation Mess,” Washington Post (October 31, 2019). Online at: ­­https://​­www​.­washingtonpost​.­com​ /­business​/­facebook​-­twitter​-­and​-­the​-­digital​-­disinformation​-­mess​/­2019​/­10​/­31​ /­3f81647c​-­fbd1​-­11e9​-­9e02​-­1d45cb3dfa8f​_story​.­html 79. Jonathan Owen, “Exclusive: Government to Train Public Sector Comms Troops for Battle in Escalating Disinformation War,” PR Week (April 10, 2019). Online  at: ­­https://​­www​.­prweek​.­com​/­article​/­1581558​/­exclusive​-­government​ -­train​-­public​-­sector​-­comms​-­troops​-­battle​-­escalating​-­disinformation​-­war 80. Nina Jankowicz, “The Only Way to Defend against Russia’s Information War,” New York Times (September 25, 2017); cited in Mazarr et al., The Emerging Risk of Virtual Societal Warfare, pp. 164–165. 81. 
Several examples are mentioned below, and others include Alistair Reid, “Think ‘Sheep’ Before You Share to Avoid Getting Tricked by Online Misinformation,”

294Notes First Draft (December 9, 2019). Online at: ­­https://​­firstdraftnews​.­org​/­latest​/­think​ -­sheep​-­before​-­you​-­share​-­to​-­avoid​-­getting​-­tricked​-­by​-­online​-­misinformation​/; Scott Bedley, “I Taught My 5th Graders How to Spot Fake News,” Vox (May 29, 2017). Online at: ­­https://​­www​.­vox​.­com​/­first​-­person​/­2017​/­3​/­29​/­15042692​/­fake​ -­news​-­education​-­election 82. Eugene Kiely and Lori Robertson, “How to Spot Fake News,” ­­FactCheck​ .­org (November 18, 2016). Online at: ­­https://​­www​.­factcheck​.­org​/­2016​/­11​/­how​-­to​ -­spot​-­fake​-­news​/ 83. Ibid. 84. Stephan Lewandowsky, “Disinformation and Human Cognition,” Security and Human Rights Monitor (August 13, 2019). Online at: ­­https://​­www​.­shrmonitor​ .­org​/­disinformation​-­and​-­human​-­cognition​/ 85. First Draft Toolbox: ­­https://​­start​.­me​/­p​/­YazB12​/­first​-­draft​-­toolbox 86. “Fake News ‘Vaccine’ Works: ‘Pre-bunking’ Game Reduces Susceptibility to Disinformation,” Science Daily (June 24, 2019). Online at: ­­https://​­www​.­sciencedaily​ .­com​/­releases​/­2019​/­06​/­190624204800​.­htm 87. Buzzfeed “Fake News Quiz.” Online at: ­­https://​­www​.­buzzfeed​.­com​/­tag​ /­fake​-­news​-­quiz 88. “Facebook’s “The News Hero.” Online at: ­­https://​­www​.­facebook​.­com​ /­thenewshero​/ 89. Facebook guidelines for how its users can spot fake news. Online at: ­­https://​ ­www​.­facebook​.­com​/­help​/­188118808357379 90. Wikipedia lists available of fake news media outlets. Online at: https://​­ ­­ en​ .­wikipedia​.­org​/­wiki​/­List​_of​_fake​_news​_websites 91. Kiely and Robertson, “How to Spot Fake News.” 92. Atlantic Council Digital Forensic Research Lab, “Confronting the Threat of Disinformation: The Problem,” Google Jigsaw Data Visualizer (February 2020). Online at: ­­https://​­jigsaw​.­google​.­com​/­the​-­current​/­disinformation​/­dataviz​/ 93. EUvsDisinfo, “Disinformation Database.” Online at: ­­https://​­euvsdisinfo​.­eu​ /­disinformation​-­cases​/ 94. Brooke Borel, The Chicago Guide to Fact-Checking (Chicago: University of Chicago Press, 2016); Bill Kovach and Tom Rosenstiel, Blur: How to Know What’s True in the Age of Information Overload (New York: Bloomsbury, 2010); Daniel J. Levitin, Weaponized Lies: How to Think Critically in the Post-Truth Era (New York: Dutton, 2017); Sarah Harrison Smith, The Fact Checker’s Bible: A Guide to Getting It Right (New York: Anchor Books, 2004). 95. Cindy L. Otis, True or False: A CIA Analyst’s Guide to Spotting Fake News (New York: Feiwel and Friends/Macmillan, 2020). 96. Stanford History Education Group, “Lateral Reading” (January 16, 2020). Online at: ­­https://​­www​.­youtube​.­com​/­watch​?­v​=​­SHNprb2hgzU​&­feature​=​­youtu​.­be 97. Carl Miller, The Death of the Gods: The New Global Power Grab (London: Windmill Books, 2019). A summary of these seven rules can be found online at his Twitter account (@carljackmiller) (October 21, 2019): ­­https://​­twitter​.­com​/­carljackmiller​ /­status​/­1186206735540862976​?­lang​=​­en 98. Carlotta Dotto and Sebatien Cubbon, “How to Spot a Bot (or not): The Main Indicators of Online Automation, Co-ordination and Inauthentic Activity,” FirstDraft (November 28, 2019). Online at: ­­https://​­firstdraftnews​.­org​/­latest​/­how​-­to​

-spot-a-bot-or-not-the-main-indicators-of-online-automation-co-ordination-and-inauthentic-activity
99. Online at: https://www.io-archive.org/#/
100. Online at: https://botometer.iuni.iu.edu/#!/
101. Rory Smith and Carlotta Dotto, “The Not-So-Simple Science of Social Media ‘Bots’,” First Draft (November 28, 2019). Online at: https://firstdraftnews.org/latest/the-not-so-simple-science-of-social-media-bots/
102. Howard, Lie Machines, p. 58.
103. Adi Robertson, “How to Fight Lies, Tricks, and Chaos Online,” The Verge (December 3, 2019). Online at: https://www.theverge.com/2019/12/3/20980741/fake-news-facebook-twitter-misinformation-lies-fact-check-how-to-internet-guide
104. Ibid.
105. Lee McIntyre, Post-Truth (Cambridge, MA: MIT Press, 2018), pp. 158–159.
106. Kakutani, The Death of Truth, p. 172.
107. Bruce Bartlett, The Truth Matters: A Citizen’s Guide to Separating Facts from Lies and Stopping Fake News in Its Tracks (New York: Ten Speed Press, 2017), p. 126.

Index

Note: Page numbers followed by t indicate tables and f indicate figures. Active Measures program (Russia), 10, 11, 35–36, 54–55, 57 Adorno, Theodor, 119 Age of Propaganda (Aronson and Pratkanis), 8, 111, 131, 138, 171, 208 Ailes, Roger, 168, 171–172, 183, 208 Algorithmic filtering, 153 Algorithms: attention dominance and, 25; deepfake detection, 234; digital influence silos and, 153, 155, 156, 162, 163–164; psychological persuasion and, 131, 145, 148, 151; tools and tactics of digital influence warfare, 68, 69, 72, 78–79, 82, 87, 91, 93, 97 Alliance for Securing Democracy, 231 Aphansenko, Victor, 192 Aristotle, 111, 131 Armstrong, Matt, 203 Aronson, Elliot, 8, 111, 131, 138, 140, 171, 208 Arquilla, John, 29 Attention dominance, 25, 66, 189–190, 206–218 Attention economy, 2, 38, 62, 68, 72, 146, 154, 161, 206, 210

Authoritarianism: conformity and, 119–122; control of information, 34; digital influence silos and, 156, 157–158; gaslighting and, 135; information dominance and, 190–196, 206–207, 212, 215–217; journalists and, 227; trolling and troll farms, 52 Bandura, Albert, 115 Bangladesh, 157 Bannon, Steve, 10, 183, 208 Barr, Rachel, 131 Bartlett, Bruce, 243–244 Bickert, Monika, 236 Biden, Joe, 17, 73, 86, 235 Borel, Brooke, 240 Borodin, Maxim, 192 Bot Sentinel, 241 Botometer, 241 Boyd, Danah, 5 Braddock, Kurt, 10 Bradshaw, Samantha, 104, 156 Brazil, 12, 15, 157 Breedlove, Philip, 5

298Index Breitbart, 160, 174–175, 177, 179, 186, 208, 209 Brooking, Emerson T., 6–7, 53, 79, 85, 89, 93, 96–97, 123, 128, 132, 162, 193, 195, 197, 200–201, 203, 217, 223, 230 Brown, Katherine, 61, 66, 154 Brown, Scott, 90 BuzzFeed, 81, 160, 240 Cacioppo, John, 122 Cambridge Analytica, 40, 71, 164 “Censorship by noise” strategy, 194, 202, 206, 207, 218 China: digital influence campaigns, 13–14; digital influence campaigns targeting Hong Kong, 13; digital influence campaigns targeting Taiwan, 13–14; Golden Shield Project, 198; information dominance in, 193, 196–202; Three Warfares Doctrine, 19, 48–50, 196 Cialdini, Robert, 8, 22, 114, 144, 146–147, 148 Clausewitz, Carl von, 6 Clegg, Nick, 232 Clinton, Hillary, 11, 55, 56, 80, 170 Cognitive biases, 133, 141, 158; cherry picking, 141; confirmation bias, 141–143, 145, 150, 154, 161, 164, 173, 175, 177, 182, 183, 187, 207, 214–215, 217, 226, 239; influence silos and, 25, 27, 158, 161, 164, 173, 175, 177, 182, 183, 187; overconfidence bias, 125, 142, 159 Cohen-Watnick, Ezra, 12 Cole, Samantha, 18, 79 Commitment and consistency, 146–149 Committee to Defend the President (GOP SuperPAC), 235 Conformity and perceived authority, 117–122 Conspiracy theories, 138–140 Contextual relevance, 22, 116, 126–130, 143–144, 158, 164–165, 185, 211 Conway, Erik, 137 Coppins, McKay, 156 Corvin, Roger, 144

COVID-19 pandemic, 50, 56, 91, 127, 221–223, 234 Crimea, 5, 11, 53, 106, 188 Crutchfield, Richard S., 119 Daily Caller, 160, 177, 187, 209, 214 Deception, digital tools and tactics for, 75–94; engagement deception, 90–94; identity deception, 84–90; information deception, 76–83 DeepFaceLab, 21 Deepfake videos, 21, 42, 62, 65, 78–80, 93, 226, 228–230, 232, 234, 237 Defense Advanced Research Projects Agency (DARPA), 232 Demir, Hakan, 191–192 Denial-of-service (DOS) attacks, 104 Dezinformatziya, 10, 16, 51, 54, 205 Digital, definition of, 1 Digital ecosystem, 159–168 Digital influence mercenaries, 15, 38, 43, 58, 80, 206, 210, 218, 221–222, 224, 243 Digital influence silos, 153–159; in authoritarian regimes, 156, 157–158; in Bangladesh, 157; cognitive biases and, 25, 27, 158, 161, 164, 173, 175, 177, 182, 183, 187; creation of influence silos, 153–154; definition of digital influence silo, 166; definition of influence silo, 153; in democracies, 156–158, 168; digital ecosystem and, 159–168; ethnic breakout of U.S. House of Representatives, 183, 184t; homophily and, 93, 154–155; in India, 156–157; influence aggressors and, 158–159, 168; “Internet Silos” (Sanger), 159; in Myanmar, 157; news silos, 155–156; “othering” and, 165, 166, 169, 171, 175, 178, 181–184; in the Philippines, 156; political polarization and, 154–155, 164–165, 176, 180, 185–188; politically conservative silos, 168–180; politically liberal silos, 180–185; power of influence silos, 154; rise in influence silos, 187–188;

Index299 in Sri Lanka, 157; strategies for disinformation and provocation, 156 Digital influence warfare: central goal of, 2; definition of, 2; examples of, 11–19; examples of goals in, 37t; future of, 228–231; information operations compared with, 3–5; meeting future challenges, 235–238; responding to attacks, 231–235; strategies of, 225–226; strategic goals of non-state actors involved in, 58–65; strategic goals of states involved in, 45–48; template for campaigns, 36–45; terminology, 6–11, 20f. See also Tactics and tools, digital influence Digital literacy, 237, 242–243, 244 Disinformation fatigue, 195, 213–214 Distributed denial-of-service (DOS) attacks, 104 Dorsey, Jack, 235, 241 Dunning-Kruger effect, 125 Duterte, Rodrigo, 15, 46, 47, 156, 192 Egypt, 15, 187, 192 Emotional relevance, 128 Engagement, tools and tactics to provoke, 94–103; attracting media coverage, 97; breadcrumbing, 98; memes, 100–103; repetition, 95–96; trolling and troll farms, 99–100; use of prominent voices, 96 Erdoğan, Recep Tayyip, 192 Estemirova, Natalia, 192 Faceswap, 21 FactCheck.org, 240 Fairness Doctrine, 168–169, 177 FakeApp, 21 Fatigue, disinformation, 195, 213–214 Fatigue, outrage, 16, 205, 213 Fear of missing out (FOMO), 145–146 Fingerprinting (browser information technique), 68 Firehose of falsehoods strategy, 15–16, 204–206, 213, 218, 226 Fletcher, Richard, 153

Flooding tactics, 57, 64, 71, 74, 81, 91, 146, 195, 199, 210, 212–213 Foot in the door technique, 147 Foreign Influence Task Force (FITF), 232 Fowler, Geoffrey, 69 Fox Broadcasting Corp, 172 Fox News, 59, 63, 84, 96, 160, 172–181, 186, 203, 209 Freedom House, 156 Gaba, Charles, 183, 183t Gaffney, Amber M., 143, 151 Gainous, Jason, 155, 159–160, 180 Galeotti, Mark, 11, 33–34 Gaslighting, 134–135, 138, 213, 230 Gessen, Masha, 195 Ghonim, Wael, 187 Gleicher, Nathaniel, 14, 58, 84–85 Goebbels, Joseph, 31, 34, 149, 174, 224 Gorka, Sebastian, 183 Group identity, 143–146 Hammes, Thomas, 10 Hashtag flooding, 71, 91, 146 Herring, Robert, 174 “Hijacking Our Heroes: Exploiting Veterans Through Disinformation on Social Media” (House Committee on Veterans’ Affairs hearing), 17, 85 Hodges, Bert H., 142 Hogg, Michael A., 143, 151 Holiday, Ryan, 89, 97–98, 99, 161 Homophily, 93, 154–155 House Committee on Veterans’ Affairs hearing, 17, 85 Howard, Philip, 23–24, 32, 46, 48, 58, 104, 106–107, 145, 154, 156, 218, 223, 241–242 Huffington Post, 98, 160, 186 IMPED model (for spotting automated accounts), 241–242 India, 156–157 Influence, definition of, 1–2 Influence industry, 162 Influence silos. See Digital influence silos Influence warfare: goals of, 30–33; history of, 33–36

300Index Influencer, attributes of, 112–116; informal authority, 115; legitimacybuilding, 113–114; reciprocity and, 114–115; research on, 112–113; risk-takers, 115 Information dominance, 189–190; in authoritarian countries, 190–196; in China, 193, 196–202; in Russia, 202–205 Information operations, 9; advantages of digital platforms, 108; definition of, 9; Soviet-era, 35–36; terminology of, 3, 5, 9–10, 45 Information Operations Archive, 241 Information processing, central and peripheral routes of, 122–124 Information warfare, definition of, 9 “Information wars,” 5 Infotainment, 16, 173, 204 Intelligence (attribute of a target), 124–126 Internet of Things, 230 Iran, 14–15, 83, 96, 193, 222 Iran nuclear deal (JCPOA), 14 Jacoby, Susan, 173 Jankowicz, Nina, 6 Jeer pressure, 170 Jenkins, Brian, 8 Joint Comprehensive Plan of Action (JCPOA), 14 Kakutani, Michiko, 15–16, 136, 177, 180, 195, 204, 205, 243 Kasparov, Gary, 217 Kennan, George, 7 Khartoum massacre (2019), 15 Khashoggi, Jamal, 85, 192 Kiely, Eugene, 240 King, Angus, 221 Kolbert, Elizabeth, 141 Kompromat, 55, 105 Koppel, Ted, 176 Ktovskaya, Olga, 192 Leaking official documents, 104–105 Legitimacy-building, 113–114 Lehman, Joseph, 130 Lesin, Mikhail, 192

Lewandowsky, Stephan, 125, 239 Lewis, Rebecca, 91, 92, 97, 132, 176 Limbaugh, Rush, 98, 139, 169–174, 177, 180, 183, 207–208 Literacy: digital, 237, 242–243, 244; Internet, 75 Lord, Carnes, 7 Manheim, Jarol, 30, 67, 116–117, 122, 123, 149, 163, 216 Martin, Diego A., 20, 46, 218 Marwick, Alice, 91, 92, 97, 132, 176 Matthews, Miriam, 204 Mazarr, Michael J., 80, 167, 212 McCain, John, 180 McIntyre, Lee, 125, 142, 150, 151, 155, 161, 188, 209, 243 McMaster, H. R., 12 McNamee, Roger, 131, 151 Memes, 100–103, 132, 156, 224 Mercer, Rebekah, 174 Mercer, Robert, 174 Micro-influencers, 156 Miller, Carl, 26, 29, 240–241 MSNBC, 173, 176, 181 Mubarak, Hosni, 187 Mueller, Robert S., III, 12 Mueller Report (U.S. Special Counsel Investigation), 12, 53–54 Murdoch, Rupert, 171, 172, 173, 174 Myanmar, 157 Narratives, weaponized, 5, 226 Nezlek, John B., 119–120 Nichols, Tom, 139, 170 Nimmo, Ben, 12, 57 Nixon, Richard, 168 Non-state actors, 58–65 North Korea, 45, 96, 193–194 O’Connor, Cailin, 140, 155 Olbermann, Keith, 181 Orban, Viktor, 192 O’Reilly, Bill, 239 Oreskes, Naomi, 137 “Othering,” 23, 60, 99, 165, 166, 169, 171, 175, 178, 181–184, 206, 208, 226, 236, 243

Index301 Outrage fatigue, 16, 205, 213 Overton Window, 41, 98, 130 Pariser, Eli, 160, 163–164 Parscale, Brad, 156 Patrikarakos, David, 154 Paul, Christopher, 204 Pearson, Elizabeth, 61, 66, 154 Pelosi, Nancy, 17–18, 77 Peña Nieto, Enrique, 46 Pennycook, Gordon, 182 Personal relevance, 128 Petty, Richard, 122 Philippines, 15, 46–47, 156, 192 PhotoDNA, 240 Politically conservative silos, 168–180 Politically liberal silos, 180–185 Politico, 160 Politics WatchDog, 17 Politkovskaya, Anna, 192 Pomerantsev, Peter, 47, 57, 66, 195, 202, 203, 204–205, 217 Pratkanis, Anthony, 8, 111, 131, 138, 171, 208 Prigozhin, Yevgeniy, 222 Privacy paradox, 68 Psychological operations, definition of, 9 Psychological persuasion, 111–112; attributes of influencers, 112–116; attributes of targets, 116–126; central and peripheral routes of information processing, 122–124; cherry picking, 141; conformity and perceived authority, 117–122; contextual relevance and, 126–130, 143–144; fear of missing out (FOMO), 145–146; foot in the door technique, 147; gaslighting, 134–135, 138, 213, 230; intelligence and, 124–126; questions for influencers, 116–117; “tobacco strategy,” 136–138; vivid appeals, 131–132. See also Cognitive biases Putin, Vladimir: administration and staff, 5; authoritarianism and, 57; Cold War experience, 51; dezinformatziya, 51; “firehose of

falsehoods” strategy, 15–16, 204–206, 213, 218; Information Security Doctrine of 2000, 52, 202; invasion of Ukraine, 53; “Sovereign Internet” law, 206 QAnon, 61–62, 139–140, 174, 233–234 Rand, David, 182 Rappler, 47 Reagan, Ronald, 115, 168–169, 177 Relevance: contextual, 22, 116, 126–130, 143–144, 158, 164–165, 185, 211; emotional, 128; personal, 128; social, 128 Repetition, 149–150 Research Director at the Center for the Analysis of Social Media, 240 Ressa, Maria, 47 Rid, Thomas, 7, 27, 35, 36, 42, 53, 55 Robertson, Adi, 242 Robertson, Lori, 240 Ronfeld, David, 29 Rosenberger, Laura, 231 RT (Russian news channel), 16, 53, 202–203 Russia: Active Measures program, 10, 11, 35–36, 54–55, 57; DCLeaks .com (fake news website), 11–12; digital influence campaigns, 11–14; “firehose of falsehoods” strategy, 15–16, 204–206, 213, 218; information dominance in, 202–206; Internet Research Agency (IRA), 12, 53–55, 86, 205; “Operation Secondary Infektion,” 12, 237; “Redick” fake Facebook user, 11–12; Sovereign Internet law, 203–204, 206; support for Trump, 53–54, 55; “warfare” perspective, 5 Safronov, Ivan, 192 Sanger, Larry, 45, 159, 187–188 Schafer, Jack, 144 Search engine optimization, 20, 61, 93, 153, 213 “Seven Rules to Keep Yourself Safe Online,” 240–241 Shane, Tommy, 125–126

302Index Shapiro, Jacob N., 20, 46, 218 Silos. See Digital influence silos Simon, Scott, 192 Sinclair Broadcast Group, 174, 177 Singer, P. W., 6–7, 53, 79, 85, 89, 93, 96–97, 123, 128, 132, 162, 193, 195, 197, 200–201, 203, 217, 223, 230 Smith, C. Veronica, 119–120 Smith, Paul, 7 Snyder, Timothy, 195 Social proof, 143–146 Social relevance, 128 Sock puppets, 10, 47, 85, 224 Soros, George, 11, 222 Soviet Union: dezinformatziya, 10, 16, 51, 54, 205 Sputnik News (Russian news channel), 16, 204 Sri Lanka, 157 Stanley, Jason, 9 Stengel, Richard, 5, 8 Suchman, Mark, 113 Sun Tzu, 29, 33, 43, 50 Sykes, Charlie, 179, 182–183 Syria, 62, 84, 132, 191 Tactics and tools, digital influence, 19–20, 20f; categories of, 19; deception, 19; for deception, 75–94; direct attacks, 19; for direct attacks, 104–106; for engagement deception, 90–94; flooding tactics, 57, 64, 71, 91, 146, 195, 199, 210, 212–213; for identity deception, 76–83, 84–90; for information deception, 76–83; provocation, 19; to provoke engagement, 94–103; sampling of tactics, 74t Target, attributes of, 116–126; central and peripheral routes of information processing, 122–124; conformity and perceived authority, 117–122; intelligence, 124–126; questions for influencers, 116–117 Target, strategies for influencing behavior of, 126–134; commitment and consistency, 146–149; conspiracy theories, 138–140; exploiting reliance

on group identity and social proof, 143–146; manipulating uncertainty, 134–138, 140–143; repetition, 149–150 Taylor, Kathleen, 30, 120, 124 Template for digital influence warfare campaigns, 36–45; collecting and analyzing data on targets, 39–40; crafting a plan to achieve goals, 38–39; identifying goals, 37–38; implementing tactics and tools, 40–43; monitoring, evaluating, and refining the campaign, 44–45 “Tobacco strategy,” 136–138 Tools. See Tactics and tools, digital influence Trolling, 27, 105, 194–196, 213, 227; China and, 13, 199; concern trolling, 99, 103; definition and goals of, 84–88, 99–100; keyboard trolls, 156; patriotic trolling, 15, 156; Philippines and, 15, 47, 156; Russia and, 27, 52–56, 70, 86, 103, 203–204, 225, 231; Saudi Arabia and, 194; troll farms, 10, 40, 52, 81, 86–87, 100, 107, 199 Trump, Donald: altered video of Jim Acosta and, 77; altered video of Nancy Pelosi, 17–18, 18f; attention dominance and, 209; bots and, 92; campaign ad attacking Biden, 73; campaign rallies, 58, 115, 148, 175, 227; “censorship by noise” strategy, 218; Cohen-Watnick, Ezra, and, 12; conspiracy theories and, 211; COVID-19 misinformation and, 223; digital influence silos and, 156, 174, 175, 177–180, 182–183, 185, 208, 211; disinformation and, 59; Facebook campaign ads, 58; “fake news” and, 177; fake Pope Francis endorsement of, 81; impeachment of, 148–149; legitimacy-building and, 113–115; lessons learned from, 212–213; “Make America Great Again” slogan, 59, 133, 208; QAnon and, 140; Russian support for, 53–54, 55; use of data on Facebook users, 71; use of repetition and name-calling, 149 Trump, Donald, Jr., 233–234

Index303 Ukraine, 51, 53, 83, 106, 203, 238 Uncertainty, manipulating, 134–138, 140–143 U.S. Capitol siege (January 6, 2021), 13, 234 Vivid appeals, 131–132 Wagner, Kevin, 155, 159–160, 180 Walker, Robert, 85, 135 Ward, Brad, 7 Wardle, Claire, 42, 224

Warfare, definition of, 2 Warner, Judith, 138 Weatherall, James Owen, 140, 155 WhatsApp, 79, 157 Winn, Denise, 122, 140 Wray, Christopher, 232 Wu, Tim, 195, 209 Zimbardo, Philip, 9, 41, 113, 118, 119, 167 Zuckerberg, Mark, 73, 235–236

About the Author

JAMES J. F. FOREST, PhD, is a professor in the School of Criminology and Justice Studies at the University of Massachusetts Lowell. He is also a visiting professor at the Fletcher School of Law and Diplomacy, Tufts University, and coeditor of the internationally distributed journal Perspectives on Terrorism. He has taught courses and seminars on terrorism, counterterrorism, and security studies for a broad range of civilian, law enforcement, and military audiences for two decades. Dr. Forest has previously served as a senior fellow for the U.S. Joint Special Operations University (2010–2019) and as a faculty member of the United States Military Academy (2001–2010), six of those years as director of terrorism studies (in the Combating Terrorism Center at West Point) and three years as assistant dean for academic assessment. He has served as an expert witness for terrorism-related court cases and has provided testimony and briefings to the intelligence community and committee hearings of the U.S. Senate.

Dr. Forest has published 20 books, including The Terrorism Lectures, 3rd Edition (Nortia Press, 2019; 2nd Ed. 2015; 1st Ed. 2012); Essentials of Counterterrorism (Praeger, 2015); Homeland Security and Terrorism (McGraw-Hill, 2013, with R. Howard and J. Moore); Intersections of Crime and Terror (Routledge, 2013); Weapons of Mass Destruction and Terrorism (McGraw-Hill, 2012, with R. Howard); Confronting the Terrorist Threat of Boko Haram in Nigeria (JSOU Press, 2012); Influence Warfare (Praeger, 2009); Handbook of Defence Politics (Routledge, 2008, with I. Wilson); Countering Terrorism and Insurgency in the 21st Century (Praeger, 2007); Teaching Terror: Strategic and Tactical Learning in the Terrorist World (Rowman & Littlefield, 2006); and The Making of a Terrorist: Recruitment, Training, and Root Causes (Praeger, 2005).

Dr. Forest is a member of the editorial board for several scholarly journals. He has also published dozens of articles in journals such as Terrorism and Political Violence, Contemporary Security Policy, Crime and Delinquency, Perspectives on Terrorism, the Journal of Strategic Studies, the Cambridge Review of International Affairs, Democracy and Security, the Georgetown Journal of International Affairs, and the Journal of Political Science Education. Dr. Forest has been interviewed by many newspaper, radio, and television journalists, and he is regularly invited to give speeches and lectures in the United States and other countries. He received his graduate degrees from Stanford University and Boston College and undergraduate degrees from Georgetown University and De Anza College.